Updates from: 04/07/2023 01:10:25
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory On Premises Scim Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/on-premises-scim-provisioning.md
Once the agent is installed, no further configuration is necessary on-prem, and a
4. Select the agent that you installed from the dropdown list and select **Assign Agent(s)**. 5. Now either wait 10 minutes or restart the **Microsoft Azure AD Connect Provisioning Agent** before proceeding to the next step and testing the connection. 6. In the **Tenant URL** field, provide the SCIM endpoint URL for your application. The URL is typically unique to each target application and must be resolvable by DNS. An example for a scenario where the agent is installed on the same host as the application is https://localhost:8585/scim ![Screenshot that shows assigning an agent.](./media/on-premises-scim-provisioning/scim-2.png)
->[!NOTE]
->The Azure AD provisioning service currently drops everything in the URL after the hostname.
- 7. Select **Test Connection**, and save the credentials. The application SCIM endpoint must be actively listening for inbound provisioning requests, otherwise the test will fail. Use the steps [here](on-premises-ecma-troubleshoot.md#troubleshoot-test-connection-issues) if you run into connectivity issues.
+ >[!NOTE]
+> If the test connection fails, you will see the request made. While the URL in the test connection error message is truncated, the actual request sent to the application contains the entire URL provided above.
+ 8. Configure any [attribute mappings](customize-application-attributes.md) or [scoping](define-conditional-rules-for-provisioning-user-accounts.md) rules required for your application. 9. Add users to scope by [assigning users and groups](../../active-directory/manage-apps/add-application-portal-assign-users.md) to the application. 10. Test provisioning a few users [on demand](provision-on-demand.md).
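Before running **Test Connection** in the steps above, it can help to confirm that the application's SCIM endpoint is actually listening. A minimal PowerShell sketch, assuming the example `https://localhost:8585/scim` endpoint from step 6 (substitute your own host and port):

```powershell
# Quick reachability check for the SCIM endpoint's host and port.
# 'localhost' and 8585 come from the example URL above and are assumptions;
# replace them with the values for your application.
Test-NetConnection -ComputerName 'localhost' -Port 8585 -InformationLevel Detailed
```

If the port isn't open, the Test Connection step fails regardless of how the tenant URL is configured.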
active-directory Concept Certificate Based Authentication Technical Deep Dive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-certificate-based-authentication-technical-deep-dive.md
If CBA enabled user cannot use MF cert (such as on mobile device without smart c
## MFA with Single-factor certificate-based authentication Azure AD CBA can be used as a second factor to meet MFA requirements with single-factor certificates.
-Some of the supported combintaions are
+Some of the supported combinations are
1. CBA (first factor) + passwordless phone sign-in (PSI as second factor) 1. CBA (first factor) + FIDO2 security keys (second factor)
active-directory How To Mfa Number Match https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-number-match.md
description: Learn how to use number matching in MFA notifications
Previously updated : 03/28/2023 Last updated : 04/05/2023
No, number matching isn't enforced because it's not a supported feature for MFA
### What happens if a user runs an older version of Microsoft Authenticator?
-If a user is running an older version of Microsoft Authenticator that doesn't support number matching, authentication won't work if number matching is enabled. Users need to upgrade to the latest version of Microsoft Authenticator to use it for sign-in if they use Android versions prior to 6.2006.4198, or iOS versions prior to 6.4.12.
+If a user is running an older version of Microsoft Authenticator that doesn't support number matching, authentication won't work if number matching is enabled. Users need to upgrade to the latest version of Microsoft Authenticator to use it for sign-in.
### Why is my user prompted to tap on one of three numbers rather than enter the number in their Microsoft Authenticator app?
-Older versions of Microsoft Authenticator prompt users to tap and select a number rather than enter the number in Microsoft Authenticator. These authentications won't fail, but Microsoft highly recommends that users upgrade to the latest version of Microsoft Authenticator if they use Android versions prior to 6.2108.5654, or iOS versions prior to 6.5.82, so they can use number match.
-
-Minimum Microsoft Authenticator version supporting number matching:
-- Android: 6.2006.4198
-- iOS: 6.4.12
-Minimum Microsoft Authenticator version for number matching which prompts to enter a number:
-- Android 6.2111.7701
-- iOS 6.5.85
+Older versions of Microsoft Authenticator prompt users to tap and select a number rather than enter the number in Microsoft Authenticator. These authentications won't fail, but Microsoft highly recommends that users upgrade to the latest version of Microsoft Authenticator.
### How can users recheck the number on mobile iOS devices after the match request appears?
active-directory How To Mfa Server Migration Utility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-server-migration-utility.md
Previously updated : 03/27/2023 Last updated : 04/05/2023
Once complete, navigate to the Multi-factor Authentication Server folder, and op
You've successfully installed the Migration Utility. >[!NOTE]
-> To ensure no changes in behavior during migration, if your MFA Server is associated with an MFA Provider with no tenant reference, you'll need to update the default MFA settings (e.g. custom greetings) for the tenant you're migrating to match the settings in your MFA Provider. We recommend doing this before migrating any users.
+> To ensure no changes in behavior during migration, if your MFA Server is associated with an MFA Provider with no tenant reference, you'll need to update the default MFA settings (such as custom greetings) for the tenant you're migrating to match the settings in your MFA Provider. We recommend doing this before migrating any users.
+
+### Run a secondary MFA Server (optional)
+
+If your MFA Server implementation has a large number of users or a busy primary MFA Server, consider deploying a dedicated secondary MFA Server for running the MFA Server Migration Utility and Migration Sync services. After upgrading your primary MFA Server, either upgrade an existing secondary server or deploy a new secondary server. The secondary server you choose shouldn't be handling other MFA traffic.
+
+The Configure-MultiFactorAuthMigrationUtility.ps1 script should be run on the secondary server to register a certificate with the MFA Server Migration Utility app registration. The certificate is used to authenticate to Microsoft Graph. Running the Migration Utility and Sync services on a secondary MFA Server should improve performance of both manual and automated user migrations.
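As a hedged sketch, running the script on the secondary server might look like the following; the installation path is the usual MFA Server default and is an assumption, so adjust it to your environment:

```powershell
# Run from an elevated PowerShell session on the secondary MFA Server.
# The folder below is the typical default installation path (an assumption);
# change it if MFA Server was installed elsewhere.
Set-Location 'C:\Program Files\Multi-Factor Authentication Server'
.\Configure-MultiFactorAuthMigrationUtility.ps1
```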
+ ### Migrate user data Migrating user data doesn't remove or alter any data in the Multi-Factor Authentication Server database. Likewise, this process won't change where a user performs MFA. This process is a one-way copy of data from the on-premises server to the corresponding user object in Azure AD.
active-directory Howto Sspr Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-sspr-deployment.md
For more information about pricing, see [Azure Active Directory pricing](https:/
### Guided walkthrough
-For a guided walkthrough of many of the recommendations in this article, see the [Plan your self-service password reset deployment](https://go.microsoft.com/fwlink/?linkid=2221600) guide.
+For a guided walkthrough of many of the recommendations in this article, see the [Plan your self-service password reset deployment](https://go.microsoft.com/fwlink/?linkid=2221501) guide when signed in to the Microsoft 365 Admin Center. To review best practices without signing in and activating automated setup features, go to the [M365 Setup portal](https://go.microsoft.com/fwlink/?linkid=2221600).
### Training resources
active-directory Howto Sspr Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-sspr-windows.md
Previously updated : 03/22/2023 Last updated : 04/05/2023
The following limitations apply to using SSPR from the Windows sign-in screen:
- This feature doesn't work for networks with 802.1x network authentication deployed and the option "Perform immediately before user logon". For networks with 802.1x network authentication deployed, it's recommended to use machine authentication to enable this feature. - Hybrid Azure AD joined machines must have network connectivity line of sight to a domain controller to use the new password and update cached credentials. This means that devices must either be on the organization's internal network or on a VPN with network access to an on-premises domain controller. - If using an image, prior to running sysprep ensure that the web cache is cleared for the built-in Administrator prior to performing the CopyProfile step. More information about this step can be found in the support article [Performance poor when using custom default user profile](https://support.microsoft.com/help/4056823/performance-issue-with-custom-default-user-profile).-- The following settings are known to interfere with the ability to use and reset passwords on Windows devices:
+- The following settings are known to interfere with the ability to use and reset passwords on Windows 10 devices:
- If lock screen notifications are turned off, **Reset password** won't work. - *HideFastUserSwitching* is set to enabled or 1 - *DontDisplayLastUserName* is set to enabled or 1
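A hedged way to check the policy values called out above on an affected device; the registry path is the standard Windows policy location for these settings and is stated here as an assumption to verify against your build:

```powershell
# Inspect the sign-in policy values that can block Reset password.
# A value of 1 means the policy is enabled and may interfere with SSPR.
$policyKey = 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System'

Get-ItemProperty -Path $policyKey |
    Select-Object HideFastUserSwitching, DontDisplayLastUserName
```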
active-directory Active Directory Jwt Claims Customization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-jwt-claims-customization.md
Previously updated : 12/19/2022 Last updated : 04/04/2023 -+ # Customize claims issued in the JSON web token (JWT) for enterprise applications (Preview)
-The Microsoft identity platform supports single sign-on (SSO) with most enterprise applications, including both applications pre-integrated in the Azure AD app gallery and custom applications. When a user authenticates to an application
- through the Microsoft identity platform using the OIDC protocol, the Microsoft identity platform sends a token to the application. And then, the application validates and uses the token to log the user in instead of prompting for a username and password.
+The Microsoft identity platform supports [single sign-on (SSO)](../manage-apps/what-is-single-sign-on.md) with most enterprise applications, including both applications preintegrated in the Azure AD app gallery and custom applications. When SSO is configured and a user authenticates to an application through the Microsoft identity platform using the OIDC protocol, the Microsoft identity platform sends a token to the application. The application then validates and uses the token to log the user in instead of prompting for a username and password.
These JSON Web tokens (JWT) used by OIDC & OAuth applications (preview) contain pieces of information about the user known as *claims*. A *claim* is information that an identity provider states about a user inside the token they issue for that user.
You can view, create or edit the attributes and claims issued in the JWT token t
:::image type="content" source="./media/active-directory-jwt-claims-customization/attributes-claims.png" alt-text="Screenshot of opening the Attributes & Claims section in the Azure portal.":::
-Claims customization may be required for various reasons by an application. A good example is when an application has been written to require a different set of claim URIs or claim values. Using the **Attributes & Claims** section you can add or remove a claim for your application. You can also create a custom claim that is specific for an application based on the use case.
+An application may need claims customization for various reasons. For example, an application might require a different set of claim URIs or claim values. Using the **Attributes & Claims** section, you can add or remove a claim for your application. You can also create a custom claim that is specific for an application based on the use case.
You can also assign any constant (static) value to any claims, which you define in Azure AD. The following steps outline how to assign a constant value:
You can also assign any constant (static) value to any claims, which you define
:::image type="content" source="./media/active-directory-jwt-claims-customization/customize-claim.png" alt-text="Screenshot of customizing a claim in the Azure portal.":::
-The constant value is displayed on the Attributes overview.
+The Attributes overview displays the constant value.
:::image type="content" source="./media/active-directory-jwt-claims-customization/claims-overview.png" alt-text="Screenshot of displaying claims in the Azure portal.":::
You can use the following special claims transformations functions.
| Function | Description | |-|-|
-| **ExtractMailPrefix()** | Removes the domain suffix from either the email address or the user principal name. This function extracts only the first part of the user name being passed through (for example, "joe_smith" instead of joe_smith@contoso.com). |
+| **ExtractMailPrefix()** | Removes the domain suffix from either the email address or the user principal name. This function extracts only the first part of the user name. For example, `joe_smith` instead of `joe_smith@contoso.com`. |
| **ToLower()** | Converts the characters of the selected attribute into lowercase characters. | | **ToUpper()** | Converts the characters of the selected attribute into uppercase characters. |
To add application-specific claims:
To apply a transformation to a user attribute: 1. In **Manage claim**, select *Transformation* as the claim source to open the **Manage transformation** page.
-1. Select the function from the transformation dropdown. Depending on the function selected, you'll have to provide parameters and a constant value to evaluate in the transformation. Refer to the following table for more information about the available functions.
-1. **Treat source as multivalued** is a checkbox indicating whether the transform should be applied to all values or just the first. By default, transformations are only applied to the first element in a multi-value claim, by checking this box it ensures it's applied to all. This checkbox is only be enabled for multi-valued attributes, for example `user.proxyaddresses`.
+1. Select the function from the transformation dropdown. Depending on the function selected, provide parameters and a constant value to evaluate in the transformation.
+1. **Treat source as multivalued** indicates whether the transform is applied to all values or just the first. By default, transformations are applied only to the first element in a multi-value claim; checking this box ensures they're applied to all values. This checkbox is only enabled for multi-valued attributes. For example, `user.proxyaddresses`.
1. To apply multiple transformations, select **Add transformation**. You can apply a maximum of two transformations to a claim. For example, you could first extract the email prefix of the `user.mail`. Then, make the string upper case. :::image type="content" source="./media/active-directory-jwt-claims-customization/sso-saml-multiple-claims-transformation.png" alt-text="Screenshot of claims transformation.":::
You can use the following functions to transform claims.
| Function | Description | |-|-|
-| **ExtractMailPrefix()** | Removes the domain suffix from either the email address or the user principal name. This function extracts only the first part of the user name being passed through (for example, "joe_smith" instead of joe_smith@contoso.com). |
-| **Join()** | Creates a new value by joining two attributes. Optionally, you can use a separator between the two attributes. For NameID claim transformation, the Join() function has specific behavior when the transformation input has a domain part. It removes the domain part from input before joining it with the separator and the selected parameter. For example, if the input of the transformation is 'joe_smith@contoso.com' and the separator is '@' and the parameter is 'fabrikam.com', this input combination results in 'joe_smith@fabrikam.com'. |
+| **ExtractMailPrefix()** | Removes the domain suffix from either the email address or the user principal name. This function extracts only the first part of the user name. For example, `joe_smith` instead of `joe_smith@contoso.com`. |
+| **Join()** | Creates a new value by joining two attributes. Optionally, you can use a separator between the two attributes. For NameID claim transformation, the Join() function has specific behavior when the transformation input has a domain part. It removes the domain part from input before joining it with the separator and the selected parameter. For example, if the input of the transformation is `joe_smith@contoso.com` and the separator is `@` and the parameter is `fabrikam.com`, this input combination results in `joe_smith@fabrikam.com`. |
| **ToLowercase()** | Converts the characters of the selected attribute into lowercase characters. | | **ToUppercase()** | Converts the characters of the selected attribute into uppercase characters. |
-| **Contains()** | Outputs an attribute or constant if the input matches the specified value. Otherwise, you can specify another output if there's no match. <br/>For example, if you want to emit a claim where the value is the user's email address if it contains the domain "@contoso.com", otherwise you want to output the user principal name. To perform this function, you configure the following values:<br/>*Parameter 1(input)*: user.email<br/>*Value*: "@contoso.com"<br/>Parameter 2 (output): user.email<br/>Parameter 3 (output if there's no match): user.userprincipalname |
+| **Contains()** | Outputs an attribute or constant if the input matches the specified value. Otherwise, you can specify another output if there's no match. <br/>For example, if you want to emit a claim where the value is the user's email address if it contains the domain `@contoso.com`, otherwise you want to output the user principal name. To perform this function, you configure the following values:<br/>*Parameter 1(input)*: user.email<br/>*Value*: "@contoso.com"<br/>Parameter 2 (output): user.email<br/>Parameter 3 (output if there's no match): user.userprincipalname |
| **EndWith()** | Outputs an attribute or constant if the input ends with the specified value. Otherwise, you can specify another output if there's no match.<br/>For example, if you want to emit a claim where the value is the user's employee ID if the employee ID ends with "000", otherwise you want to output an extension attribute. To perform this function, you configure the following values:<br/>*Parameter 1(input)*: user.employeeid<br/>*Value*: "000"<br/>Parameter 2 (output): user.employeeid<br/>Parameter 3 (output if there's no match): user.extensionattribute1 | | **StartWith()** | Outputs an attribute or constant if the input starts with the specified value. Otherwise, you can specify another output if there's no match.<br/>For example, if you want to emit a claim where the value is the user's employee ID if the country/region starts with "US", otherwise you want to output an extension attribute. To perform this function, you configure the following values:<br/>*Parameter 1(input)*: user.country<br/>*Value*: "US"<br/>Parameter 2 (output): user.employeeid<br/>Parameter 3 (output if there's no match): user.extensionattribute1 |
-| **Extract() - After matching** | Returns the substring after it matches the specified value.<br/>For example, if the input's value is "Finance_BSimon", the matching value is "Finance_", then the claim's output is "BSimon". |
-| **Extract() - Before matching** | Returns the substring until it matches the specified value.<br/>For example, if the input's value is "BSimon_US", the matching value is "_US", then the claim's output is "BSimon". |
-| **Extract() - Between matching** | Returns the substring until it matches the specified value.<br/>For example, if the input's value is "Finance_BSimon_US", the first matching value is "Finance\_", the second matching value is "\_US", then the claim's output is "BSimon". |
-| **ExtractAlpha() - Prefix** | Returns the prefix alphabetical part of the string.<br/>For example, if the input's value is "BSimon_123", then it returns "BSimon". |
-| **ExtractAlpha() - Suffix** | Returns the suffix alphabetical part of the string.<br/>For example, if the input's value is "123_Simon", then it returns "Simon". |
-| **ExtractNumeric() - Prefix** | Returns the prefix numerical part of the string.<br/>For example, if the input's value is "123_BSimon", then it returns "123". |
-| **ExtractNumeric() - Suffix** | Returns the suffix numerical part of the string.<br/>For example, if the input's value is "BSimon_123", then it returns "123". |
-| **IfEmpty()** | Outputs an attribute or constant if the input is null or empty.<br/>For example, if you want to output an attribute stored in an extensionattribute if the employee ID for a given user is empty. To perform this function, you configure the following values:<br/>Parameter 1(input): user.employeeid<br/>Parameter 2 (output): user.extensionattribute1<br/>Parameter 3 (output if there's no match): user.employeeid |
-| **IfNotEmpty()** | Outputs an attribute or constant if the input isn't null or empty.<br/>For example, if you want to output an attribute stored in an extensionattribute if the employee ID for a given user isn't empty. To perform this function, you configure the following values:<br/>Parameter 1(input): user.employeeid<br/>Parameter 2 (output): user.extensionattribute1 |
+| **Extract() - After matching** | Returns the substring after it matches the specified value.<br/>For example, if the input's value is `Finance_BSimon`, the matching value is `Finance_`, then the claim's output is `BSimon`. |
+| **Extract() - Before matching** | Returns the substring until it matches the specified value.<br/>For example, if the input's value is `BSimon_US`, the matching value is `_US`, then the claim's output is `BSimon`. |
+| **Extract() - Between matching** | Returns the substring until it matches the specified value.<br/>For example, if the input's value is `Finance_BSimon_US`, the first matching value is `Finance_`, the second matching value is `_US`, then the claim's output is `BSimon`. |
+| **ExtractAlpha() - Prefix** | Returns the prefix alphabetical part of the string.<br/>For example, if the input's value is `BSimon_123`, then it returns `BSimon`. |
+| **ExtractAlpha() - Suffix** | Returns the suffix alphabetical part of the string.<br/>For example, if the input's value is `123_Simon`, then it returns `Simon`. |
+| **ExtractNumeric() - Prefix** | Returns the prefix numerical part of the string.<br/>For example, if the input's value is `123_BSimon`, then it returns `123`. |
+| **ExtractNumeric() - Suffix** | Returns the suffix numerical part of the string.<br/>For example, if the input's value is `BSimon_123`, then it returns `123`. |
+| **IfEmpty()** | Outputs an attribute or constant if the input is null or empty.<br/>For example, if you want to output an attribute stored in an extension attribute if the employee ID for a given user is empty. To perform this function, configure the following values:<br/>Parameter 1(input): user.employeeid<br/>Parameter 2 (output): user.extensionattribute1<br/>Parameter 3 (output if there's no match): user.employeeid |
+| **IfNotEmpty()** | Outputs an attribute or constant if the input isn't null or empty.<br/>For example, if you want to output an attribute stored in an extension attribute if the employee ID for a given user isn't empty. To perform this function, you configure the following values:<br/>Parameter 1(input): user.employeeid<br/>Parameter 2 (output): user.extensionattribute1 |
| **Substring() - Fixed Length** (Preview)| Extracts parts of a string claim type, beginning at the character at the specified position, and returns the specified number of characters.<br/>SourceClaim - The claim source of the transform that should be executed.<br/>StartIndex - The zero-based starting character position of a substring in this instance.<br/>Length - The length in characters of the substring.<br/>For example:<br/>sourceClaim - PleaseExtractThisNow<br/>StartIndex - 6<br/>Length - 11<br/>Output: ExtractThis |
-| **Substring() - EndOfString** (Preview) | Extracts parts of a string claim type, beginning at the character at the specified position, and returns the rest of the claim from the specified start index. <br/>SourceClaim - The claim source of the transform that should be executed.<br/>StartIndex - The zero-based starting character position of a substring in this instance.<br/>For example:<br/>sourceClaim - PleaseExtractThisNow<br/>StartIndex - 6<br/>Output: ExtractThisNow |
-| **RegexReplace()** (Preview) | RegexReplace() transformation accepts as input parameters:<br/>- Parameter 1: a user attribute as regex input<br/>- An option to trust the source as multivalued<br/>- Regex pattern<br/>- Replacement pattern. The replacement pattern may contain static text format along with a reference that points to regex output groups and more input parameters.<br/><br/>More instructions about how to use the RegexReplace() transformation are described later in this article. |
+| **Substring() - EndOfString** (Preview) | Extracts parts of a string claim type, beginning at the character at the specified position, and returns the rest of the claim from the specified start index. <br/>SourceClaim - The claim source of the transform.<br/>StartIndex - The zero-based starting character position of a substring in this instance.<br/>For example:<br/>sourceClaim - PleaseExtractThisNow<br/>StartIndex - 6<br/>Output: ExtractThisNow |
+| **RegexReplace()** (Preview) | RegexReplace() transformation accepts as input parameters:<br/>- Parameter 1: a user attribute as regex input<br/>- An option to trust the source as multivalued<br/>- Regex pattern<br/>- Replacement pattern. The replacement pattern may contain static text format along with a reference that points to regex output groups and more input parameters. |
If you need other transformations, submit your idea in the [feedback forum in Azure AD](https://feedback.azure.com/d365community/forum/22920db1-ad25-ec11-b6e6-000d3a4f0789) under the *SaaS application* category.
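The Substring() rows above use zero-based indexing. As an illustration only (the transformation itself runs during token issuance, not locally), the same arithmetic in PowerShell:

```powershell
# Illustrates the zero-based indexing behind the Substring() examples above.
$sourceClaim = 'PleaseExtractThisNow'

$sourceClaim.Substring(6, 11)   # Fixed Length  -> ExtractThis
$sourceClaim.Substring(6)       # EndOfString   -> ExtractThisNow
```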
The following table provides information about the first level of transformation
| Action | Field | Description | | :-- | :- | :- |
-| 1 | Transformation | Select the **RegexReplace()** option from the **Transformation** options to use the regex-based claims transformation method for claims transformation. |
-| 2 | Parameter 1 | The input for the regular expression transformation. For example, user.mail that has a user email address such as `admin@fabrikam.com`. |
-| 3 | Treat source as multivalued | Some input user attributes can be multi-value user attributes. If the selected user attribute supports multiple values and the user wants to use multiple values for the transformation, they need to select **Treat source as multivalued**. If selected, all values are used for the regex match, otherwise only the first value is used. |
-| 4 | Regex pattern | A regular expression that is evaluated against the value of user attribute selected as *Parameter 1*. For example a regular expression to extract the user alias from the user's email address would be represented as `(?'domain'^.*?)(?i)(\@fabrikam\.com)$`. |
-| 5 | Add additional parameter | More than one user attribute can be used for the transformation. The values of the attributes would then be merged with regex transformation output. Up to five additional parameters are supported. |
-| 6 | Replacement pattern | The replacement pattern is the text template, which contains placeholders for regex outcome. All group names must be wrapped inside the curly braces such as {group-name}. Let's say the administration wants to use user alias with some other domain name, for example `xyz.com` and merge country name with it. In this case, the replacement pattern would be `{country}.{domain}@xyz.com`, where `{country}` is the value of input parameter and `{domain}` is the group output from the regular expression evaluation. In such a case, the expected outcome is `US.swmal@xyz.com`. |
+| 1 | **Transformation** | Select the **RegexReplace()** option from the **Transformation** options to use the regex-based claims transformation method for claims transformation. |
+| 2 | **Parameter 1** | The input for the regular expression transformation. For example, user.mail that contains a user email address such as `admin@fabrikam.com`. |
+| 3 | **Treat source as multivalued** | Some input user attributes can be multi-value user attributes. If the selected user attribute supports multiple values and multiple values are needed for the transformation, select **Treat source as multivalued**. If selected, the regex match uses all values; otherwise, it uses only the first value. |
+| 4 | **Regex pattern** | A regular expression that's evaluated against the value of the user attribute selected as *Parameter 1*. For example, a regular expression to extract the user alias from the user's email address is represented as `(?'domain'^.*?)(?i)(\@fabrikam\.com)$`. |
+| 5 | **Add additional parameter** | Use more than one user attribute for the transformation. The values of the attributes are merged with the regex transformation output. Supports up to five more parameters. |
+| 6 | **Replacement pattern** | The replacement pattern is the text template that contains placeholders for the regex outcome. Wrap all group names inside curly braces, such as `{group-name}`. For example, if the administrator wants to use the user alias with another domain name, such as `xyz.com`, and merge the country name with it, the replacement pattern is `{country}.{domain}@xyz.com`, where `{country}` is the value of the input parameter and `{domain}` is the group output from the regular expression evaluation. The expected outcome is `US.swmal@xyz.com`. |
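To make the example in the table concrete, the following is a rough local approximation of what the RegexReplace() configuration above produces. It's illustrative only; it relies on the .NET regex engine exposed by PowerShell, which also accepts the `(?'name'...)` named-group syntax used in the sample pattern:

```powershell
# Local approximation of the RegexReplace() example above (illustrative only).
$mail    = 'swmal@fabrikam.com'   # Parameter 1 (user.mail)
$country = 'US'                   # additional input parameter, referenced as {country}

if ($mail -match "(?'domain'^.*?)(?i)(\@fabrikam\.com)$") {
    # Replacement pattern: {country}.{domain}@xyz.com
    "$country.$($Matches['domain'])@xyz.com"   # -> US.swmal@xyz.com
}
```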
-The following image shows an example of the second level of transformation:
+The following image shows an example of the second level of transformation:
:::image type="content" source="./media/active-directory-jwt-claims-customization/regexreplace-transform2.png" alt-text="Screenshot of second level of claims transformation.":::
The following table provides information about the second level of transformatio
| Action | Field | Description | | :-- | :- | :- |
-| 1 | Transformation | Regex-based claims transformations aren't limited to the first transformation and can be used as the second level transformation as well. Any other transformation method can be used as the first transformation. |
-| 2 | Parameter 1 | If **RegexReplace()** is selected as a second level transformation, output of first level transformation is used as an input for the second level transformation. The second level regex expression should match the output of the first transformation or the transformation won't be applied. |
-| 3 | Regex pattern | **Regex pattern** is the regular expression for the second level transformation. |
-| 4 | Parameter input | User attribute inputs for the second level transformations. |
-| 5 | Parameter input | Administrators can delete the selected input parameter if they don't need it anymore. |
-| 6 | Replacement pattern | The replacement pattern is the text template, which contains placeholders for regex outcome group name, input parameter group name, and static text value. All group names must be wrapped inside the curly braces such as `{group-name}`. Let's say the administration wants to use user alias with some other domain name, for example `xyz.com` and merge country name with it. In this case, the replacement pattern would be `{country}.{domain}@xyz.com`, where `{country}` is the value of input parameter and `{domain}` is the group output from the regular expression evaluation. In such a case, the expected outcome is `US.swmal@xyz.com`. |
-| 7 | Test transformation | The RegexReplace() transformation is evaluated only if the value of the selected user attribute for *Parameter 1* matches with the regular expression provided in the **Regex pattern** textbox. If they don't match, the default claim value is added to the token. To validate regular expression against the input parameter value, a test experience is available within the transform blade. This test experience operates on dummy values only. When additional input parameters are used, the name of the parameter is added to the test result instead of the actual value. To access the test section, select **Test transformation**. |
+| 1 | **Transformation** | Use regex-based claims transformations as the second level transformation as well. Use any other transformation method as the first transformation. |
+| 2 | **Parameter 1** | When **RegexReplace()** is the second level transformation, use the output of the first level transformation as input for the second level transformation. To apply the transformation, the second level regex expression should match the output of the first transformation. |
+| 3 | **Regex pattern** | **Regex pattern** is the regular expression for the second level transformation. |
+| 4 | **Parameter input** | User attribute inputs for the second level transformations. |
+| 5 | **Parameter input** | Administrators can delete the selected input parameter if they don't need it anymore. |
+| 6 | **Replacement pattern** | The replacement pattern is the text template that contains placeholders for the regex outcome group name, input parameter group name, and static text value. Wrap all group names inside curly braces, such as `{group-name}`. For example, if the administrator wants to use the user alias with another domain name, such as `xyz.com`, and merge the country name with it, the replacement pattern is `{country}.{domain}@xyz.com`, where `{country}` is the value of the input parameter and `{domain}` is the group output from the regular expression evaluation. The expected outcome is `US.swmal@xyz.com`. |
+| 7 | **Test transformation** | Evaluates the RegexReplace() transformation only when the value of the selected user attribute for *Parameter 1* matches the regular expression provided in the **Regex pattern** textbox. If they don't match, the default claim value is added to the token. To validate the regular expression against the input parameter value, a test experience is available within the transform blade. This test experience operates on test values only. When more input parameters are used, the name of the parameter is added to the test result instead of the actual value. To access the test section, select **Test transformation**. |
The following image shows an example of testing the transformations:
The following table provides information about testing the transformations. The
| Action | Field | Description | | :-- | :- | :- |
-| 1 | Test transformation | Select the close or (X) button to hide the test section and re-render the **Test transformation** button again on the blade. |
-| 2 | Test regex input | Accepts input that is used for the regular expression test evaluation. In case regex-based claims transformation is configured as a second level transformation, a value is provided that would be the expected output of the first transformation. |
-| 3 | Run test | After the test regex input is provided and the **Regex pattern**, **Replacement pattern** and **Input parameters** are configured, the expression can be evaluated by selecting **Run test**. |
-| 4 | Test transformation result | If evaluation succeeds, an output of test transformation will be rendered against the **Test transformation result** label. |
-| 5 | Remove transformation | The second level transformation can be removed by selecting **Remove transformation**. |
-| 6 | Specify output if no match | When a regex input value is configured against the *Parameter 1* that doesn't match the **Regular expression**, the transformation is skipped. In such cases, the alternate user attribute can be configured, which is added to the token for the claim by checking **Specify output if no match**. |
-| 7 | Parameter 3 | If an alternate user attribute needs to be returned when there's no match and **Specify output if no match** is checked, an alternate user attribute can be selected using the dropdown. This dropdown is available against **Parameter 3 (output if no match)**. |
-| 8 | Summary | At the bottom of the blade, a full summary of the format is displayed that explains the meaning of the transformation in simple text. |
-| 9 | Add | After the configuration settings for the transformation are verified, it can be saved to a claims policy by selecting **Add**. Changes won't be saved unless **Save** is selected on the **Manage Claim** blade. |
+| 1 | **Test transformation** | Select the close or (X) button to hide the test section and render the **Test transformation** button on the blade again. |
+| 2 | **Test regex input** | Accepts input for the regular expression test evaluation. When a regex-based claims transformation is configured as a second level transformation, provide a value that's the expected output of the first transformation. |
+| 3 | **Run test** | After providing the test regex input and configuring the **Regex pattern**, **Replacement pattern** and **Input parameters**, select **Run test** to evaluate the expression. |
+| 4 | **Test transformation result** | If evaluation succeeds, an output of the test transformation is rendered against the **Test transformation result** label. |
+| 5 | **Remove transformation** | Removes the second level transformation. |
+| 6 | **Specify output if no match** | When a regex input value is configured against *Parameter 1* that doesn't match the **Regular expression**, the transformation is skipped. In such cases, you can configure an alternate user attribute, which is added to the token for the claim by checking **Specify output if no match**. |
+| 7 | **Parameter 3** | When **Specify output if no match** is checked and an alternate user attribute should be returned when there's no match, select the alternate user attribute from the dropdown. This dropdown is available against **Parameter 3 (output if no match)**. |
+| 8 | **Summary** | A full summary of the format is displayed that explains the meaning of the transformation in simple text. |
+| 9 | **Add** | After verifying the configuration settings for the transformation, select **Add** to save it to a claims policy. Select **Save** on the **Manage Claim** blade to save the changes. |
RegexReplace() transformation is also available for the group claims transformations. ### Transformation validations
-When the following conditions occur after **Add** or **Run test** is selected, a message is displayed that provides more information about the issue:
+A message provides more information when the following conditions occur after selecting **Add** or **Run test**:
-* Input parameters with duplicate user attributes aren't allowed.
+* Input parameters with duplicate user attributes were used.
* Unused input parameters found. Defined input parameters should have respective usage into the Replacement pattern text. * The provided test regex input doesn't match with the provided regular expression.
-* The source for the groups into the replacement pattern isn't found.
+* No sources for the groups in the replacement pattern are found.
## Emit claims based on conditions
The user type can be:
* **Any** - All users are allowed to access the application. * **Members**: Native member of the tenant
-* **All guests**: User is brought over from an external organization with or without Azure AD.
+* **All guests**: User comes from an external organization, with or without Azure AD.
* **AAD guests**: Guest user belongs to another organization using Azure AD. * **External guests**: Guest user belongs to an external organization that doesn't have Azure AD.
-One scenario where the user type is helpful is when the source of a claim is different for a guest and an employee accessing an application. You can specify that if the user is an employee, the NameID is sourced from user.email. If the user is a guest, then the NameID is sourced from user.extensionattribute1.
+One scenario where the user type is helpful is when the source of a claim is different for a guest and an employee accessing an application. You can specify that if the user is an employee, the NameID is sourced from user.email. If the user is a guest, the NameID is sourced from user.extensionattribute1.
To add a claim condition:
To add a claim condition:
1. Select the group(s) to which the user should belong. You can select up to 50 unique groups across all claims for a given application. 1. Select the **Source** where the claim is going to retrieve its value. You can select a user attribute from the source attribute dropdown or apply a transformation to the user attribute before emitting it as a claim.
-The order in which you add the conditions are important. Azure AD first evaluates all conditions with source `Attribute` and then evaluates all conditions with source `Transformation` to decide which value to emit in the claim. Conditions with the same source are evaluated from top to bottom. The last value, which matches the expression is emitted in the claim. Transformations such as `IsNotEmpty` and `Contains` act like restrictions.
+The order in which you add the conditions is important. Azure AD first evaluates all conditions with source `Attribute` and then evaluates all conditions with source `Transformation` to decide which value to emit in the claim. Azure AD evaluates conditions with the same source from top to bottom. The last value that matches the expression is emitted in the claim. Transformations such as `IsNotEmpty` and `Contains` act like restrictions.
For example, Britta Simon is a guest user in the Contoso tenant. Britta belongs to another organization that also uses Azure AD. Given the following configuration for the Fabrikam application, when Britta tries to sign in to Fabrikam, the Microsoft identity platform evaluates the conditions.
-First, the Microsoft identity platform verifies whether Britta's user type is **All guests**. Because this is true, the Microsoft identity platform assigns the source for the claim to `user.extensionattribute1`. Second, the Microsoft identity platform verifies whether Britta's user type is **AAD guests**, because this is also true, the Microsoft identity platform assigns the source for the claim to `user.mail`. Finally, the claim is emitted with a value of `user.mail` for Britta.
+First, the Microsoft identity platform verifies whether Britta's user type is **All guests**. Because the user type is **All guests**, the Microsoft identity platform assigns the source for the claim to `user.extensionattribute1`. Second, the Microsoft identity platform verifies whether Britta's user type is **AAD guests**. Because this value is also true, the Microsoft identity platform assigns the source for the claim to `user.mail`. Finally, the claim is emitted with a value of `user.mail` for Britta.
:::image type="content" source="./media/active-directory-jwt-claims-customization/sso-saml-user-conditional-claims.png" alt-text="Screenshot of claims conditional configuration.":::
-As another example, consider when Britta Simon tries to sign in and the following configuration is used. Azure AD first evaluates all conditions with source `Attribute`. Because Britta's user type is **AAD guests**, `user.mail` is assigned as the source for the claim. Next, Azure AD evaluates the transformations. Because Britta is a guest, `user.extensionattribute1` is now the new source for the claim. Because Britta is in **AAD guests**, `user.othermail` is now the source for this claim. Finally, the claim is emitted with a value of `user.othermail` for Britta.
+As another example, consider when Britta Simon tries to sign in using the following configuration. Azure AD first evaluates all conditions with source `Attribute`. The source for the claim is `user.mail` when Britta's user type is **AAD guests**. Next, Azure AD evaluates the transformations. Because Britta is a guest, `user.extensionattribute1` is the new source for the claim. Because Britta is in **AAD guests**, `user.othermail` is the new source for this claim. Finally, the claim is emitted with a value of `user.othermail` for Britta.
:::image type="content" source="./media/active-directory-jwt-claims-customization/sso-saml-user-conditional-claims-2.png" alt-text="Screenshot of more claims conditional configuration.":::
-As a final example, consider what happens if Britta has no `user.othermail` configured or it's empty. In both cases the condition entry is ignored, and the claim falls back to `user.extensionattribute1` instead.
+As a final example, consider what happens if Britta has no `user.othermail` configured or it's empty. In both cases, the condition entry is ignored and the claim falls back to `user.extensionattribute1` instead.
## Advanced claims options
-Advanced claims options can be configured for OIDC applications to expose the same claim as SAML tokens and vice versa for applications that intend to use the same claim for both SAML2.0 and OIDC response tokens.
+Configure advanced claims options for OIDC applications to expose the same claim as SAML tokens, and for applications that intend to use the same claim for both SAML2.0 and OIDC response tokens.
-Advanced claim options can be configured by checking the box under **Advanced Claims Options** in the **Manage claims** blade.
+Configure advanced claim options by checking the box under **Advanced Claims Options** in the **Manage claims** blade.
## Next steps
-* [Configure single sign-on on applications that aren't in the Azure AD application gallery](../manage-apps/configure-saml-single-sign-on.md)
+* Learn more about the [claims and tokens used in Azure AD](security-tokens.md).
active-directory Multi Service Web App Access Microsoft Graph As App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/multi-service-web-app-access-microsoft-graph-as-app.md
Previously updated : 08/19/2022 Last updated : 04/05/2023 ms.devlang: csharp, javascript
When accessing the Microsoft Graph, the managed identity needs to have proper pe
# [PowerShell](#tab/azure-powershell) ```powershell
-# Install the module. (You need admin on the machine.)
-# Install-Module AzureAD.
+# Install the module.
+# Install-Module Microsoft.Graph -Scope CurrentUser
-# Your tenant ID (in the Azure portal, under Azure Active Directory > Overview).
-$TenantID="<tenant-id>"
-$resourceGroup = "securewebappresourcegroup"
-$webAppName="SecureWebApp-20201102125811"
+# The tenant ID
+$TenantId = "11111111-1111-1111-1111-111111111111"
-# Get the ID of the managed identity for the web app.
-$spID = (Get-AzWebApp -ResourceGroupName $resourceGroup -Name $webAppName).identity.principalid
+# The name of your web app, which has a managed identity.
+$webAppName = "SecureWebApp-20201106120003"
+$resourceGroupName = "SecureWebApp-20201106120003ResourceGroup"
-# Check the Microsoft Graph documentation for the permission you need for the operation.
-$PermissionName = "User.Read.All"
+# The name of the app role that the managed identity should be assigned to.
+$appRoleName = "User.Read.All"
-Connect-AzureAD -TenantId $TenantID
+# Get the web app's managed identity's object ID.
+Connect-AzAccount -Tenant $TenantId
+$managedIdentityObjectId = (Get-AzWebApp -ResourceGroupName $resourceGroupName -Name $webAppName).identity.principalid
-# Get the service principal for Microsoft Graph.
-# First result should be AppId 00000003-0000-0000-c000-000000000000
-$GraphServicePrincipal = Get-AzureADServicePrincipal -SearchString "Microsoft Graph" | Select-Object -first 1
+Connect-MgGraph -TenantId $TenantId -Scopes 'Application.Read.All','AppRoleAssignment.ReadWrite.All'
-# Assign permissions to the managed identity service principal.
-$AppRole = $GraphServicePrincipal.AppRoles | `
-Where-Object {$_.Value -eq $PermissionName -and $_.AllowedMemberTypes -contains "Application"}
+# Get Microsoft Graph app's service principal and app role.
+$serverApplicationName = "Microsoft Graph"
+$serverServicePrincipal = (Get-MgServicePrincipal -Filter "DisplayName eq '$serverApplicationName'")
+$serverServicePrincipalObjectId = $serverServicePrincipal.Id
-New-AzureAdServiceAppRoleAssignment -ObjectId $spID -PrincipalId $spID `
--ResourceId $GraphServicePrincipal.ObjectId -Id $AppRole.Id
+$appRoleId = ($serverServicePrincipal.AppRoles | Where-Object {$_.Value -eq $appRoleName }).Id
+
+# Assign the managed identity access to the app role.
+New-MgServicePrincipalAppRoleAssignment `
+ -ServicePrincipalId $managedIdentityObjectId `
+ -PrincipalId $managedIdentityObjectId `
+ -ResourceId $serverServicePrincipalObjectId `
+ -AppRoleId $appRoleId
``` # [Azure CLI](#tab/azure-cli)
Run the install commands.
```dotnetcli dotnet add package Microsoft.Identity.Web.MicrosoftGraph
+dotnet add package Microsoft.Graph
``` #### Package Manager Console
Open the project/solution in Visual Studio, and open the console by using the **
Run the install commands. ```powershell Install-Package Microsoft.Identity.Web.MicrosoftGraph
+Install-Package Microsoft.Graph
``` ### Example
using System;
using System.Collections.Generic; using System.Threading.Tasks; using Microsoft.AspNetCore.Mvc.RazorPages;
-using Azure.Identity;
-using Microsoft.Graph.Core;
-using System.Net.Http.Headers;
+using Microsoft.Extensions.Logging;
+using Microsoft.Graph;
+using Azure.Identity;
...
public async Task OnGetAsync()
var credential = new ChainedTokenCredential( new ManagedIdentityCredential(), new EnvironmentCredential());
- var token = credential.GetToken(
- new Azure.Core.TokenRequestContext(
- new[] { "https://graph.microsoft.com/.default" }));
- var accessToken = token.Token;
- var graphServiceClient = new GraphServiceClient(
- new DelegateAuthenticationProvider((requestMessage) =>
- {
- requestMessage
- .Headers
- .Authorization = new AuthenticationHeaderValue("bearer", accessToken);
+ string[] scopes = new[] { "https://graph.microsoft.com/.default" };
- return Task.CompletedTask;
- }));
+ var graphServiceClient = new GraphServiceClient(
+ credential, scopes);
- // MSGraphUser is a DTO class being used to hold User information from the graph service client call
List<MSGraphUser> msGraphUsers = new List<MSGraphUser>(); try {
- var users =await graphServiceClient.Users.Request().GetAsync();
- foreach(var u in users)
+ //var users = await graphServiceClient.Users.Request().GetAsync();
+ var users = await graphServiceClient.Users.GetAsync();
+ foreach (var u in users.Value)
{ MSGraphUser user = new MSGraphUser(); user.userPrincipalName = u.UserPrincipalName;
public async Task OnGetAsync()
msGraphUsers.Add(user); } }
- catch(Exception ex)
+ catch (Exception ex)
{ string msg = ex.Message; }
active-directory Licensing Groups Migrate Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-groups-migrate-users.md
The most important thing to keep in mind is that you should avoid a situation wh
1. Verify that no license assignments failed by checking each group for users in error state. For more information, see [Identifying and resolving license problems for a group](licensing-groups-resolve-problems.md).
-Consider removing the original direct assignments. We recommend that you do it gradually, and monitor the outcome on a subset of users first. If you could leave the original direct assignments on users, but when the users leave their licensed groups they retain the directly assigned licenses, which might not be what you want.
+Consider removing the original direct assignments. We recommend that you do it gradually, and monitor the outcome on a subset of users first. You could leave the original direct assignments on users, but when the users leave their licensed groups they retain the directly assigned licenses, which might not be what you want.
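As a hedged sketch of removing a direct assignment for a single pilot user with the Microsoft Graph PowerShell SDK; the user and SKU ID below are placeholders, and this assumes you've already confirmed the user inherits the same license from a group:

```powershell
# Remove one directly assigned license after confirming the group-based
# assignment is in place. The user ID and SKU ID are placeholders.
Connect-MgGraph -Scopes 'User.ReadWrite.All','Organization.Read.All'

$userId = 'user@contoso.com'
$skuId  = '00000000-0000-0000-0000-000000000000'   # placeholder license SKU ID

Set-MgUserLicense -UserId $userId -AddLicenses @() -RemoveLicenses @($skuId)
```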
## An example
active-directory Azure Ad Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/azure-ad-account.md
Azure Active Directory is available as an identity provider option for B2B colla
## Guest sign-in using Azure Active Directory accounts
-Azure Active Directory is available in the list of External Identities identity providers by default. No further configuration is needed to allow guest users to sign in with their Azure AD account using either the invitation flow or a self-service sign-up user flow.
+If you want to enable guest users to sign in with their Azure AD account, you can use either the invitation flow or a self-service sign-up user flow. No additional configuration is required.
:::image type="content" source="media/azure-ad-account/azure-ad-account-identity-provider.png" alt-text="Screenshot of Azure AD account in the identity provider list." lightbox="media/azure-ad-account/azure-ad-account-identity-provider.png":::
active-directory Concept Secure Remote Workers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/concept-secure-remote-workers.md
This guide assumes that your cloud only or hybrid identities have been establish
### Guided walkthrough
-For a guided walkthrough of many of the recommendations in this article, see the [Set up Azure AD](https://go.microsoft.com/fwlink/?linkid=2221308) guide.
+For a guided walkthrough of many of the recommendations in this article, see the [Set up Azure AD](https://go.microsoft.com/fwlink/?linkid=2224193) guide when signed in to the Microsoft 365 Admin Center. To review best practices without signing in and activating automated setup features, go to the [M365 Setup portal](https://go.microsoft.com/fwlink/?linkid=2221308).
## Guidance for Azure AD Free, Office 365, or Microsoft 365 customers.
active-directory Concept Identity Protection Risks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/concept-identity-protection-risks.md
Premium detections are visible only to Azure AD Premium P2 customers. Customers
| Activity from anonymous IP address | Offline | This detection is discovered by [Microsoft Defender for Cloud Apps](/cloud-app-security/anomaly-detection-policy#activity-from-anonymous-ip-addresses). This detection identifies that users were active from an IP address that has been identified as an anonymous proxy IP address. | | Suspicious inbox forwarding | Offline | This detection is discovered by [Microsoft Defender for Cloud Apps](/cloud-app-security/anomaly-detection-policy#suspicious-inbox-forwarding). This detection looks for suspicious email forwarding rules, for example, if a user created an inbox rule that forwards a copy of all emails to an external address. | | Mass Access to Sensitive Files | Offline | This detection is discovered by [Microsoft Defender for Cloud Apps](/defender-cloud-apps/investigate-anomaly-alerts#unusual-file-access-by-user). This detection looks at your environment and triggers alerts when users access multiple files from Microsoft SharePoint or Microsoft OneDrive. An alert is triggered only if the number of accessed files is uncommon for the user and the files might contain sensitive information|
+| Verified threat actor IP | Real-time | This risk detection type indicates sign-in activity that is consistent with known IP addresses associated with nation state actors or cyber crime groups, based on Microsoft Threat Intelligence Center (MSTIC).|
#### Nonpremium sign-in risk detections
active-directory Howto Manage Inactive User Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-manage-inactive-user-accounts.md
Title: How to manage inactive user accounts in Azure AD
-description: Learn about how to detect and handle user accounts in Azure AD that have become obsolete
+ Title: How to manage inactive user accounts
+description: Learn how to detect and resolve user accounts that have become obsolete
Previously updated : 10/31/2022 Last updated : 04/05/2023
-# How To: Manage inactive user accounts in Azure AD
+# How To: Manage inactive user accounts
-In large environments, user accounts are not always deleted when employees leave an organization. As an IT administrator, you want to detect and handle these obsolete user accounts because they represent a security risk.
+In large environments, user accounts aren't always deleted when employees leave an organization. As an IT administrator, you want to detect and resolve these obsolete user accounts because they represent a security risk.
-This article explains a method to handle obsolete user accounts in Azure AD.
+This article explains a method to handle obsolete user accounts in Azure Active Directory (Azure AD).
## What are inactive user accounts?
-Inactive accounts are user accounts that are not required anymore by members of your organization to gain access to your resources. One key identifier for inactive accounts is that they haven't been used *for a while* to sign-in to your environment. Because inactive accounts are tied to the sign-in activity, you can use the timestamp of the last sign-in that was successful to detect them.
+Inactive accounts are user accounts that aren't required anymore by members of your organization to gain access to your resources. One key identifier for inactive accounts is that they haven't been used *for a while* to sign in to your environment. Because inactive accounts are tied to the sign-in activity, you can use the timestamp of the last sign-in that was successful to detect them.
-The challenge of this method is to define what *for a while* means in the case of your environment. For example, users might not sign-in to an environment *for a while*, because they are on vacation. When defining what your delta for inactive user accounts is, you need to factor in all legitimate reasons for not signing in to your environment. In many organizations, the delta for inactive user accounts is between 90 and 180 days.
+The challenge of this method is to define what *for a while* means for your environment. For example, users might not sign in to an environment *for a while*, because they are on vacation. When defining what your delta for inactive user accounts is, you need to factor in all legitimate reasons for not signing in to your environment. In many organizations, the delta for inactive user accounts is between 90 and 180 days.
The last successful sign-in provides potential insights into a user's continued need for access to resources. It can help with determining if group membership or app access is still needed or could be removed. For external user management, you can understand if an external user is still active within the tenant or should be cleaned up.
-
-## How to detect inactive user accounts
+## Detect inactive user accounts with Microsoft Graph
+<a name="how-to-detect-inactive-user-accounts"></a>
-You detect inactive accounts by evaluating the **lastSignInDateTime** property exposed by the **signInActivity** resource type of the **Microsoft Graph** API. The **lastSignInDateTime** property shows the last time a user made a successful interactive sign-in to Azure AD. Using this property, you can implement a solution for the following scenarios:
+You can detect inactive accounts by evaluating the `lastSignInDateTime` property exposed by the `signInActivity` resource type of the **Microsoft Graph API**. The `lastSignInDateTime` property shows the last time a user made a successful interactive sign-in to Azure AD. Using this property, you can implement a solution for the following scenarios:
-- **Users by name**: In this scenario, you search for a specific user by name, which enables you to evaluate the lastSignInDateTime: `https://graph.microsoft.com/v1.0/users?$filter=startswith(displayName,'markvi')&$select=displayName,signInActivity`
+- **Last sign-in date and time for all users**: In this scenario, you need to generate a report of the last sign-in date of all users. You request a list of all users, and the last `lastSignInDateTime` for each respective user:
+ - `https://graph.microsoft.com/v1.0/users?$select=displayName,signInActivity`
-- **Users by date**: In this scenario, you request a list of users with a lastSignInDateTime before a specified date: `https://graph.microsoft.com/v1.0/users?$filter=signInActivity/lastSignInDateTime le 2019-06-01T00:00:00Z`
+- **Users by name**: In this scenario, you search for a specific user by name, which enables you to evaluate the `lastSignInDateTime`:
+ - `https://graph.microsoft.com/v1.0/users?$filter=startswith(displayName,'markvi')&$select=displayName,signInActivity`
-> [!NOTE]
-> When you request the signInActivity property while listing users, the maximum page size is 120 users. Requests with $top set higher than 120 will fail. SignInActivity supports `$filter` (`eq`, `ne`, `not`, `ge`, `le`) *but* not with any other filterable properties.
+- **Users by date**: In this scenario, you request a list of users with a `lastSignInDateTime` before a specified date:
+ - `https://graph.microsoft.com/v1.0/users?$filter=signInActivity/lastSignInDateTime le 2019-06-01T00:00:00Z`
> [!NOTE]
-> There may be the need to generate a report of the last sign in date of all users, if so you can use the following scenario.
-> **Last Sign In Date and Time for All Users**: In this scenario, you request a list of all users, and the last lastSignInDateTime for each respective user: `https://graph.microsoft.com/v1.0/users?$select=displayName,signInActivity`
-
-## What you need to know
-
-This section lists what you need to know about the lastSignInDateTime property.
-
-### How can I access this property?
-
-The **lastSignInDateTime** property is exposed by the [signInActivity resource type](/graph/api/resources/signinactivity) of the [Microsoft Graph API](/graph/overview#whats-in-microsoft-graph).
+> When you request the `signInActivity` property while listing users, the maximum page size is 120 users. Requests with $top set higher than 120 will fail. The `signInActivity` property supports `$filter` (`eq`, `ne`, `not`, `ge`, `le`) *but not with any other filterable properties*.
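For a scripted version of these queries, the following is a minimal sketch using the Microsoft Graph PowerShell SDK (an assumption on my part; the article itself only shows raw Graph URLs). The 90-day cutoff and the output shaping are illustrative only, and the scopes match the permissions listed later in this article.

```powershell
# Sketch: list users whose last successful interactive sign-in is older than 90 days.
# Assumes the Microsoft.Graph PowerShell SDK is installed (Install-Module Microsoft.Graph).
Connect-MgGraph -Scopes "AuditLog.Read.All","Directory.Read.All","User.Read.All"

# Illustrative 90-day cutoff, formatted the way the Graph filter expects.
$cutoff = (Get-Date).ToUniversalTime().AddDays(-90).ToString("yyyy-MM-ddTHH:mm:ssZ")

# signInActivity supports $filter with eq, ne, not, ge, le, but not combined with other filterable properties.
Get-MgUser -All -Property "displayName,userPrincipalName,signInActivity" `
           -Filter "signInActivity/lastSignInDateTime le $cutoff" |
    Select-Object DisplayName, UserPrincipalName,
        @{ Name = 'LastSignIn'; Expression = { $_.SignInActivity.LastSignInDateTime } }
```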
-### Is the lastSignInDateTime property available through the Get-AzureAdUser cmdlet?
+### What you need to know
-No.
+The following details relate to the `lastSignInDateTime` property.
-### What edition of Azure AD do I need to access the property?
+- The `lastSignInDateTime` property is exposed by the [signInActivity resource type](/graph/api/resources/signinactivity) of the [Microsoft Graph API](/graph/overview#whats-in-microsoft-graph).
-To access this property, you need an Azure Active Directory Premium edition.
+- The property is *not* available through the Get-AzureAdUser cmdlet.
-### What permission do I need to read the property?
+- To access the property, you need an Azure Active Directory Premium edition license.
-To read this property, you need to grant the app the following Microsoft Graph permissions:
+- To read the property, you need to grant the app the following Microsoft Graph permissions:
+ - AuditLog.Read.All
+ - Directory.Read.All
+ - User.Read.All
-- AuditLog.Read.All-- Directory.Read.All -- User.Read.All--
-### When does Azure AD update the property?
-
-Each interactive sign-in that was successful results in an update of the underlying data store. Typically, successful sign-ins show up in the related sign-in report within 10 minutes.
+- Each interactive sign-in that was successful results in an update of the underlying data store. Typically, successful sign-ins show up in the related sign-in report within 10 minutes.
+- To generate a `lastSignInDateTime` timestamp, you need a successful sign-in. The value of the `lastSignInDateTime` property may be blank if:
+ - The last successful sign-in of a user took place before April 2020.
+ - The affected user account was never used for a successful sign-in.
+
+- The last sign-in date is associated with the user object. The value is retained until the next sign-in of the user.
-### What does a blank property value mean?
+## How to investigate a single user
-To generate a lastSignInDateTime timestamp, you need a successful sign-in. Because the lastSignInDateTime property is a new feature, the value of the lastSignInDateTime property can be blank if:
+If you need to view the latest sign-in activity for a user, you can check the user's sign-in details in Azure AD. You can also use the Microsoft Graph **users by name** scenario described in the [previous section](#detect-inactive-user-accounts-with-microsoft-graph).
-- The last successful sign-in of a user took place before April 2020.-- The affected user account was never used for a successful sign-in.
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Go to **Azure AD** > **Users** > select a user from the list.
+1. In the **My Feed** area of the user's Overview, locate the **Sign-ins** tile.
-### For how long is the last sign-in retained?
+ ![Screenshot of the user overview page with the sign-in activity tile highlighted.](media/howto-manage-inactive-user-accounts/last-sign-activity-tile.png)
-The last sign-in date is associated with the user object. The value is retained until the next sign-in of the user.
+The last sign-in date and time shown on this tile may take up to 24 hours to update, which means the date and time may not be current. If you need to see the activity in near real time, select the **See all sign-ins** link on the **Sign-ins** tile to view all sign-in activity for that user.
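If you'd rather check a single account from a script instead of the portal tile, here's a small sketch along the same lines; the UPN is a made-up placeholder, and it assumes the Microsoft Graph PowerShell SDK and the permissions listed earlier.

```powershell
# Sketch: show the last interactive sign-in for one user. The UPN below is a placeholder.
Connect-MgGraph -Scopes "AuditLog.Read.All","User.Read.All"

$user = Get-MgUser -UserId "markvi@contoso.com" -Property "displayName,signInActivity"
$user.SignInActivity.LastSignInDateTime
```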
## Next steps
active-directory Howto Use Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-use-recommendations.md
Each recommendation provides the same set of details that explain what the recom
- The **Impacted resources** table contains a list of resources identified by the recommendation. The resource's name, ID, date it was first detected, and status are provided. The resource could be an application or resource service principal, for example.
+> [!NOTE]
+> In the Azure portal, the impacted resources are limited to a maximum of 50 resources. To view more resources, you should use the `$expand` query parameter at the end of your API query on Microsoft Graph. For example: `GET https://graph.microsoft.com/beta/directory/recommendations?$expand=impactedResources`
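As a rough illustration of that expanded query (not part of the article itself), a sketch using the Microsoft Graph PowerShell SDK; the `DirectoryRecommendations.Read.All` scope is my assumption for read access to the beta recommendations API.

```powershell
# Sketch: list Azure AD recommendations with their impacted resources expanded.
# Assumes the Microsoft.Graph PowerShell SDK; the scope below is an assumption.
Connect-MgGraph -Scopes "DirectoryRecommendations.Read.All"

$response = Invoke-MgGraphRequest -Method GET `
    -Uri "https://graph.microsoft.com/beta/directory/recommendations?`$expand=impactedResources"

$response.value | ForEach-Object {
    [pscustomobject]@{
        Recommendation    = $_.displayName
        Status            = $_.status
        ImpactedResources = $_.impactedResources.Count
    }
}
```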
+ ## How to update a recommendation To update the status of a recommendation or a related resource, sign in to Azure using a least-privileged role for updating a recommendation.
For more information, see the [Microsoft Graph documentation for recommendations
## Next steps - [Review the Azure AD recommendations overview](overview-recommendations.md)-- [Learn about Service Health notifications](overview-service-health-notifications.md)
+- [Learn about Service Health notifications](overview-service-health-notifications.md)
active-directory Alinto Protect Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/alinto-protect-provisioning-tutorial.md
This section guides you through the steps to configure the Azure AD provisioning
![Provisioning tab automatic](common/provisioning-automatic.png)
-1. In the **Admin Credentials** section, input your Alinto Protect Tenant URL as `https://cloud.cleanmail.{Domain}/api/v3/scim2` and corresponding Secret Token obtained from Step 2. Click **Test Connection** to ensure Azure AD can connect to Alinto Protect. If the connection fails, ensure your Alinto Protect account has Admin permissions and try again.
+1. In the **Admin Credentials** section, input your Alinto Protect Tenant URL as `https://cloud.cleanmail.eu/api/v3/scim2` and corresponding Secret Token obtained from Step 2. Click **Test Connection** to ensure Azure AD can connect to Alinto Protect. If the connection fails, ensure your Alinto Protect account has Admin permissions and try again.
![Token](common/provisioning-testconnection-tenanturltoken.png)-
- >[!NOTE]
- >In the Tenant URL, **{Domain}** will be the country code top-level domain. For example, if the country is US, then the Tenant URL will be `https://cloud.cleanmail.com/api/v3/scim2`
-
+
1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box. ![Notification Email](common/provisioning-notification-email.png)
active-directory Asana Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/asana-provisioning-tutorial.md
Previously updated : 11/21/2022 Last updated : 04/04/2023
The scenario outlined in this tutorial assumes that you already have the followi
### Generate Secret Token in Asana
-* Sign in to [Asana](https://app.asana.com/) by using your admin account.
+* Sign in to [Asana](https://app.asana.com/-/login) by using your admin account.
* Select the profile photo from the top bar, and select your current organization-name settings. * Go to the **Service Accounts** tab. * Select **Add Service Account**.
active-directory Askspoke Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/askspoke-provisioning-tutorial.md
na Previously updated : 11/21/2022 Last updated : 04/04/2023 # Tutorial: Configure askSpoke for automatic user provisioning
-This tutorial describes the steps you need to perform in both askSpoke and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [askSpoke](https://www.askspoke.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
+This tutorial describes the steps you need to perform in both askSpoke and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [askSpoke](https://www.atspoke.com) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
## Capabilities Supported
active-directory Easy Metrics Connector Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/easy-metrics-connector-tutorial.md
+
+ Title: Azure Active Directory SSO integration with Easy Metrics Connector
+description: Learn how to configure single sign-on between Azure Active Directory and Easy Metrics Connector.
++++++++ Last updated : 03/31/2023++++
+# Azure Active Directory SSO integration with Easy Metrics Connector
+
+In this article, you learn how to integrate Easy Metrics Connector with Azure Active Directory (Azure AD). This application is a bridge between Azure AD and Auth0, federating authentication to Microsoft Azure AD for our customers. When you integrate Easy Metrics Connector with Azure AD, you can:
+
+* Control in Azure AD who has access to Easy Metrics Connector.
+* Enable your users to be automatically signed-in to Easy Metrics Connector with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You configure and test Azure AD single sign-on for Easy Metrics Connector in a test environment. Easy Metrics Connector supports only **SP** initiated single sign-on.
+
+> [!NOTE]
+> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
+
+## Prerequisites
+
+To integrate Azure Active Directory with Easy Metrics Connector, you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Easy Metrics Connector single sign-on (SSO) enabled subscription.
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the Easy Metrics Connector application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add Easy Metrics Connector from the Azure AD gallery
+
+Add Easy Metrics Connector from the Azure AD application gallery to configure single sign-on with Easy Metrics Connector. For more information on how to add an application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides).
+
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **Easy Metrics Connector** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type the value provided by [Easy Metrics Connector support team](mailto:support@easymetrics.com).
+
+ b. In the **Reply URL** textbox, type the value provided by [Easy Metrics Connector support team](mailto:support@easymetrics.com).
+
+ c. In the **Sign on URL** textbox, type the value provided by [Easy Metrics Connector support team](mailto:support@easymetrics.com).
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (PEM)** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/certificate-base64-download.png "Certificate")
+
+## Configure Easy Metrics Connector SSO
+
+To configure single sign-on on **Easy Metrics Connector** side, you need to send the **Certificate (PEM)** to [Easy Metrics Connector support team](mailto:support@easymetrics.com). They set this setting to have the SAML SSO connection set properly on both sides.
+
+### Create Easy Metrics Connector test user
+
+In this section, you create a user called Britta Simon in Easy Metrics Connector. Work with [Easy Metrics Connector support team](mailto:support@easymetrics.com) to add the users in the Easy Metrics Connector platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with following options.
+
+* Click on **Test this application** in Azure portal. This will redirect to Easy Metrics Connector Sign-on URL where you can initiate the login flow.
+
+* Go to Easy Metrics Connector Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the Easy Metrics Connector tile in the My Apps, this will redirect to Easy Metrics Connector Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Additional resources
+
+* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md).
+
+## Next steps
+
+Once you configure Easy Metrics Connector, you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Evidence Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/evidence-tutorial.md
Previously updated : 11/21/2022 Last updated : 04/03/2023
Follow these steps to enable Azure AD SSO in the Azure portal.
`https://<your tenant>.evidence.com/?class=UIX&proc=Login` > [!NOTE]
- > These values are not real. Update these values with the actual Sign on URL, Identifier and Reply URL. Contact [Evidence.com Client support team](https://communities.taser.com/support/SupportContactUs?typ=LE) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual Sign on URL, Identifier and Reply URL. Contact [Evidence.com Client support team](https://my.axon.com/s/contactsupport) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
5. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Certificate (Base64)** from the given options as per your requirement and save it on your computer.
active-directory Harness Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/harness-provisioning-tutorial.md
Previously updated : 11/21/2022 Last updated : 04/04/2023
Before you configure and enable automatic user provisioning, decide which users
## Set up Harness for provisioning
-1. Sign in to your [Harness Admin Console](https://app.harness.io/#/login), and then go to **Continuous Security** > **Access Management**.
+1. Sign in to your [Harness Admin Console](https://app.harness.io/auth/#/signin), and then go to **Continuous Security** > **Access Management**.
![Harness Admin Console](media/harness-provisioning-tutorial/admin.png)
active-directory Maptician Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/maptician-provisioning-tutorial.md
Previously updated : 11/21/2022 Last updated : 04/04/2023 # Tutorial: Configure Maptician for automatic user provisioning
-This tutorial describes the steps you need to perform in both Maptician and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Maptician](https://www.maptician.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
+This tutorial describes the steps you need to perform in both Maptician and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Maptician](https://www.maptician.com) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
## Capabilities Supported
The scenario outlined in this tutorial assumes that you already have the followi
* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md) * A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
-* A [Maptician](https://www.maptician.com/) tenant.
+* A [Maptician](https://www.maptician.com) tenant.
* A user account in Maptician with Admin permissions. ## Step 1. Plan your provisioning deployment
active-directory Servicenow Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/servicenow-provisioning-tutorial.md
Previously updated : 3/10/2023 Last updated : 04/04/2023 # Configure ServiceNow for automatic user provisioning
-This article describes the steps that you'll take in both ServiceNow and Azure Active Directory (Azure AD) to configure automatic user provisioning. When Azure AD is configured, it automatically provisions and deprovisions users and groups to [ServiceNow](https://www.servicenow.com/) by using the Azure AD provisioning service.
+This article describes the steps that you'll take in both ServiceNow and Azure Active Directory (Azure AD) to configure automatic user provisioning. When Azure AD is configured, it automatically provisions and deprovisions users and groups to [ServiceNow](https://www.servicenow.com) by using the Azure AD provisioning service.
For more information on the Azure AD automatic user provisioning service, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). ## Capabilities supported > [!div class="checklist"]
-> - Create users in ServiceNow
-> - Remove users in ServiceNow when they don't need access anymore
-> - Keep user attributes synchronized between Azure AD and ServiceNow
-> - Provision groups and group memberships in ServiceNow
-> - Allow [single sign-on](servicenow-tutorial.md) to ServiceNow (recommended)
+> - Create users in ServiceNow.
+> - Remove users in ServiceNow when they don't need access anymore.
+> - Keep user attributes synchronized between Azure AD and ServiceNow.
+> - Provision groups and group memberships in ServiceNow.
+> - Allow [single sign-on](servicenow-tutorial.md) to ServiceNow (recommended).
## Prerequisites - An Azure AD user account with an active subscription. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.-- A [ServiceNow instance](https://www.servicenow.com/) of Calgary or higher-- A [ServiceNow Express instance](https://www.servicenow.com/) of Helsinki or higher-- A user account in ServiceNow with the admin role
+- A [ServiceNow instance](https://www.servicenow.com) of Calgary or higher.
+- A [ServiceNow Express instance](https://www.servicenow.com) of Helsinki or higher.
+- A user account in ServiceNow with the admin role.
## Step 1: Plan your provisioning deployment
active-directory Symantec Web Security Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/symantec-web-security-service.md
Previously updated : 11/21/2022 Last updated : 04/05/2023
The objective of this tutorial is to demonstrate the steps to be performed in Sy
The scenario outlined in this tutorial assumes that you already have the following prerequisites:
-* An Azure AD tenant
-* [A Symantec Web Security Service (WSS) tenant](https://www.websecurity.symantec.com/buy-renew?inid=brmenu_nav_brhome)
+* An Azure AD tenant.
+* [A Symantec Web Security Service (WSS) tenant](https://www.websecurity.digicert.com/buy-renew?inid=brmenu_nav_brhome).
* A user account in Symantec Web Security Service (WSS) with Admin permissions. ## Assigning users to Symantec Web Security Service (WSS)
active-directory Uniflow Online Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/uniflow-online-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure uniFlow Online for automatic user provisioning with Azure Active Directory'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to uniFlow Online.
++
+writer: twimmers
+
+ms.assetid: 5bfc5602-343d-436a-b797-56e8c790e0ba
++++ Last updated : 03/31/2023+++
+# Tutorial: Configure uniFlow Online for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both uniFlow Online and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users to [uniFlow Online](https://www.nt-ware.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
++
+## Supported capabilities
+> [!div class="checklist"]
+> * Create users in uniFlow Online.
+> * Disable users in uniFLOW Online.
+> * Remove users in uniFlow Online when they do not require access anymore.
+> * Keep user attributes synchronized between Azure AD and uniFlow Online.
+> * [Single sign-on](uniflow-online-tutorial.md) to uniFlow Online (recommended).
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md)
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* An administrator account with uniFlow Online.
+
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and uniFlow Online](../app-provisioning/customize-application-attributes.md).
+
+## Step 2. Configure uniFlow Online to support provisioning with Azure AD
+* In a different web browser window, sign in to the uniFLOW Online website as an administrator.
+* Select **Extensions** tab **> Identity Providers > Configure identity providers**.
+* Click on **Add identity provider**. On the **ADD IDENTITY PROVIDER** section, perform the following steps:
+ * Enter the **Display name**.
+ * For **Provider type**, select **WS-Federation** option from the dropdown.
+ * For **WS-Federation type**, select **Azure Active Directory** option from the dropdown.
+ * Click **Save**.
+* Enable the Advanced Administrative View within your user Profile settings by navigating to **Profile settings > Administrator view** and setting it to **Advanced**.
+* The provisioning tab will now be available within the Identity Provider configuration.
+* Click **Enable Provisioning** when you are ready to set up user provisioning in your company's Microsoft Azure Active Directory.
+ * **Provisioning tenant URL** (only displayed once after **Provisioning** is enabled): You need this URL when setting up provisioning in your Microsoft Azure Active Directory application.
+ * **Provisioning secret token** (only displayed once after **Provisioning** is enabled): You need this token when setting up provisioning in your Microsoft Azure Active Directory application.
+
+## Step 3. Add uniFlow Online from the Azure AD application gallery
+
+Add uniFlow Online from the Azure AD application gallery to start managing provisioning to uniFlow Online. If you have previously set up uniFlow Online for SSO, you can use the same application. However, it's recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application or based on attributes of the user. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* Start small. Test with a small set of users before rolling out to everyone. When scope for provisioning is set to assigned users, you can control this by assigning one or two users to the app. When scope is set to all users, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* If you need more roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
++
+## Step 5. Configure automatic user provisioning to uniFlow Online
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users in uniFlow Online based on user and/or group assignments in Azure AD.
+
+### To configure automatic user provisioning for uniFlow Online in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Screenshot of Enterprise applications blade.](common/enterprise-applications.png)
+
+1. In the applications list, select **uniFlow Online**.
+
+ ![Screenshot of the uniFlow Online link in the Applications list.](common/all-applications.png)
+
+1. Select the **Provisioning** tab.
+
+ ![Screenshot of Provisioning tab.](common/provisioning.png)
+
+1. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Screenshot of Provisioning tab automatic.](common/provisioning-automatic.png)
+
+1. Under the **Admin Credentials** section, input your uniFlow Online Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to uniFlow Online. If the connection fails, ensure your uniFlow Online account has Admin permissions and try again.
+
+ ![Screenshot of Token.](common/provisioning-testconnection-tenanturltoken.png)
+
+1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Screenshot of Notification Email.](common/provisioning-notification-email.png)
+
+1. Select **Save**.
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to uniFlow Online**.
+
+1. Review the user attributes that are synchronized from Azure AD to uniFlow Online in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in uniFlow Online for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you'll need to ensure that the uniFlow Online API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by uniFlow Online|
+ |---|---|---|---|
+ |userName|String|&check;|&check;
+ |externalId|String|&check;|&check;
+ |emails[type eq "work"].value|String|&check;|
+ |active|Boolean||&check;
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:department|String||
+ |displayName|String||
+ |title|String||
+ |addresses[type eq "work"].streetAddress|String||
+ |phoneNumbers[type eq "work"].value|String||
+ |urn:ietf:params:scim:schemas:extension:uniFLOWOnline:2.0:User:cardNumber|String||
+ |urn:ietf:params:scim:schemas:extension:uniFLOWOnline:2.0:User:cardRegistrationCode|String||
+ |urn:ietf:params:scim:schemas:extension:uniFLOWOnline:2.0:User:localUsername|String||
+ |urn:ietf:params:scim:schemas:extension:uniFLOWOnline:2.0:User:pin|String||
+
+1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+1. To enable the Azure AD provisioning service for uniFlow Online, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Screenshot of Provisioning Status Toggled On.](common/provisioning-toggle-on.png)
+
+1. Define the users that you would like to provision to uniFlow Online by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Screenshot of Provisioning Scope.](common/provisioning-scope.png)
+
+1. When you're ready to provision, click **Save**.
+
+ ![Screenshot of Saving Provisioning Configuration.](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully.
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion.
+* If the provisioning configuration seems to be in an unhealthy state, the application goes into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+
+## More resources
+
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Whitesource Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/whitesource-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Whitesource'
+ Title: 'Tutorial: Azure Active Directory SSO integration with Whitesource'
description: Learn how to configure single sign-on between Azure Active Directory and Whitesource.
Previously updated : 11/21/2022 Last updated : 04/03/2023
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with Whitesource
+# Tutorial: Azure Active Directory SSO integration with Whitesource
-In this tutorial, you'll learn how to integrate Whitesource with Azure Active Directory (Azure AD). When you integrate Whitesource with Azure AD, you can:
+In this tutorial, you learn how to integrate Whitesource with Azure Active Directory (Azure AD). When you integrate Whitesource with Azure AD, you can:
* Control in Azure AD who has access to Whitesource. * Enable your users to be automatically signed-in to Whitesource with their Azure AD accounts.
To configure the integration of Whitesource into Azure AD, you need to add White
1. In the **Add from the gallery** section, type **Whitesource** in the search box. 1. Select **Whitesource** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
- Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, as well as walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
+ Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
## Configure and test Azure AD SSO for Whitesource
Follow these steps to enable Azure AD SSO in the Azure portal.
`com.whitesource.sp` > [!NOTE]
- > These value is not real. Update these value with the actual Sign on URL. Contact [Whitesource Client support team](https://www.whitesourcesoftware.com/contact-us/) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > This value is not real. Update this value with the actual Sign on URL. Contact [Whitesource Client support team](https://www.mend.io/contact-us/) to get this value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
Follow these steps to enable Azure AD SSO in the Azure portal.
### Create an Azure AD test user
-In this section, you'll create a test user in the Azure portal called B.Simon.
+In this section, you create a test user in the Azure portal called B.Simon.
1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**. 1. Select **New user** at the top of the screen.
In this section, you'll create a test user in the Azure portal called B.Simon.
### Assign the Azure AD test user
-In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Whitesource.
+In this section, you enable B.Simon to use Azure single sign-on by granting access to Whitesource.
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**. 1. In the applications list, select **Whitesource**.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
## Configure Whitesource SSO
-To configure single sign-on on **Whitesource** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [Whitesource support team](https://www.whitesourcesoftware.com/contact-us/). They set this setting to have the SAML SSO connection set properly on both sides.
+To configure single sign-on on **Whitesource** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [Whitesource support team](https://www.mend.io/contact-us/). They set this setting to have the SAML SSO connection set properly on both sides.
### Create Whitesource test user
aks Azure Cni Powered By Cilium https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-cni-powered-by-cilium.md
Azure CNI powered by Cilium currently has the following limitations:
* Available only for Linux and not for Windows. * Cilium L7 policy enforcement is disabled. * Hubble is disabled.
+* Not yet configured for compatibility with Istio ([Istio issue #27619](https://github.com/istio/istio/issues/27619)).
* Kubernetes services with `internalTrafficPolicy=Local` aren't supported ([Cilium issue #17796](https://github.com/cilium/cilium/issues/17796)). * Multiple Kubernetes services can't use the same host port with different protocols (for example, TCP or UDP) ([Cilium issue #14287](https://github.com/cilium/cilium/issues/14287)). * Network policies may be enforced on reply packets when a pod connects to itself via service cluster IP ([Cilium issue #19406](https://github.com/cilium/cilium/issues/19406)).
aks Upgrade Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/upgrade-cluster.md
After receiving the error message, you have two options to mitigate the issue:
To remove usage of deprecated APIs, follow these steps:
-1. Remove the deprecated API, which is listed in the error message. Check the past usage by enabling [container insights][container-insights] and exploring kube audit logs.
+1. Remove usage of the deprecated API listed in the error message. In the Azure portal, navigate to your cluster's overview page and select **Diagnose and solve problems**. Under the **Known Issues, Availability and Performance** category, navigate to **Selected Kubernetes API deprecations** on the left-hand side to see recently detected usage. You can also check past API usage by enabling [container insights][container-insights] and exploring kube audit logs.
-2. Wait 12 hours from the time the last deprecated api usage was seen.
+ :::image type="content" source="./media/upgrade-cluster/applens-api-detection-inline.png" lightbox="./media/upgrade-cluster/applens-api-detection-full.png" alt-text="A screenshot of the Azure portal showing the 'Selected Kubernetes API deprecations' section.":::
+
+2. Wait 12 hours from the time the last deprecated API usage was seen.
3. Retry your cluster upgrade.
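As an extra spot check that the article doesn't mention, the Kubernetes API server also exposes an `apiserver_requested_deprecated_apis` metric you can search for; a minimal sketch, assuming `kubectl` is installed, the current context targets the AKS cluster, and your account is allowed to read the `/metrics` endpoint:

```powershell
# Sketch: grep the API server metrics for deprecated API usage counters.
# Assumes kubectl is installed and the current context points at the AKS cluster.
kubectl get --raw /metrics |
    Select-String -Pattern 'apiserver_requested_deprecated_apis'
```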
app-service App Service Configure Premium Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-configure-premium-tier.md
description: Learn how to better performance for your web, mobile, and API app i
keywords: app service, azure app service, scale, scalable, app service plan, app service cost ms.assetid: ff00902b-9858-4bee-ab95-d3406018c688 Previously updated : 03/21/2022 Last updated : 04/06/2023 # Configure PremiumV3 tier for Azure App Service
-The new **PremiumV3** pricing tier gives you faster processors, SSD storage, and quadruple the memory-to-core ratio of the existing pricing tiers (double the **PremiumV2** tier). With the performance advantage, you could save money by running your apps on fewer instances. In this article, you learn how to create an app in **PremiumV3** tier or scale up an app to **PremiumV3** tier.
+The new **PremiumV3** pricing tier gives you faster processors, SSD storage, memory-optimized options, and quadruple the memory-to-core ratio of the existing pricing tiers (double the **PremiumV2** tier). With the performance and memory advantage, you could save money by running your apps on fewer instances. In this article, you learn how to create an app in **PremiumV3** tier or scale up an app to **PremiumV3** tier.
## Prerequisites
-To scale-up an app to **PremiumV3**, you need to have an Azure App Service app that runs in a pricing tier lower than **PremiumV3**, and the app must be running in an App Service deployment that supports PremiumV3.
+To scale up an app to **PremiumV3**, you need to have an Azure App Service app that runs in a pricing tier lower than **PremiumV3**, and the app must be running in an App Service deployment that supports **PremiumV3**. Additionally, the App Service deployment must support the desired SKU within **PremiumV3**.
<a name="availability"></a>
To scale-up an app to **PremiumV3**, you need to have an Azure App Service app t
The **PremiumV3** tier is available for both native and custom containers, including both Windows containers and Linux containers.
-> [!NOTE]
-> Any Windows containers running in the **Premium Container** tier during the preview period continue to function as is, but the **Premium Container** tier will continue to remain in preview. The **PremiumV3** tier is the official replacement for the **Premium Container** tier.
-
-**PremiumV3** is available in some Azure regions and availability in additional regions is being added continually. To see if it's available in your region, run the following Azure CLI command in the [Azure Cloud Shell](../cloud-shell/overview.md):
+**PremiumV3** and specific **PremiumV3** SKUs are available in some Azure regions, and availability in additional regions is being added continually. To see if a specific **PremiumV3** offering is available in your region, run the following Azure CLI command in the [Azure Cloud Shell](../cloud-shell/overview.md) (substitute _P1V3_ with the desired SKU):
```azurecli-interactive az appservice list-locations --sku P1V3
The pricing tier of an App Service app is defined in the [App Service plan](over
When configuring the App Service plan in the <a href="https://portal.azure.com" target="_blank">Azure portal</a>, select **Pricing tier**.
-Select **Production**, then select **P1V3**, **P2V3**, or **P3V3**, then click **Apply**.
+Select **Production**, then select **P0V3**, **P1V3**, **P2V3**, **P3V3**, **P1mV3**, **P2mV3**, **P3mV3**, **P4mV3**, or **P5mV3**, then click **Apply**.
![Screenshot showing the recommended pricing tiers for your app.](media/app-service-configure-premium-tier/scale-up-tier-select.png) > [!IMPORTANT]
-> If you don't see **P1V3**, **P2V3**, and **P3V3** as options, or if the options are greyed out, then **PremiumV3** likely isn't available in the underlying App Service deployment that contains the App Service plan. See [Scale up from an unsupported resource group and region combination](#unsupported) for more details.
+> If you don't see any of **P0V3**, **P1V3**, **P2V3**, **P3V3**, **P1mV3**, **P2mV3**, **P3mV3**, **P4mV3**, and **P5mV3** as options, or if some options are greyed out, then either **PremiumV3** or an individual SKU within **PremiumV3** isn't available in the underlying App Service deployment that contains the App Service plan. See [Scale up from an unsupported resource group and region combination](#unsupported) for more details.
## Scale up an existing app to PremiumV3 tier
-Before scaling an existing app to **PremiumV3** tier, make sure that **PremiumV3** is available. For information, see [PremiumV3 availability](#availability). If it's not available, see [Scale up from an unsupported resource group and region combination](#unsupported).
+Before scaling an existing app to **PremiumV3** tier, make sure that both **PremiumV3** as well as the specific SKU within **PremiumV3** are available. For information, see [PremiumV3 availability](#availability). If it's not available, see [Scale up from an unsupported resource group and region combination](#unsupported).
Depending on your hosting environment, scaling up may require extra steps.
In the left navigation of your App Service app page, select **Scale up (App Serv
![Screenshot showing how to scale up your app service plan.](media/app-service-configure-premium-tier/scale-up-tier-portal.png)
-Select **Production**, then select **P1V3**, **P2V3**, or **P3V3**, then click **Apply**.
+Select **Production**, then select **P0V3**, **P1V3**, **P2V3**, **P3V3**, **P1mV3**, **P2mV3**, **P3mV3**, **P4mV3**, or **P5mV3**, then click **Apply**.
![Screenshot showing the recommended pricing tiers for your app.](media/app-service-configure-premium-tier/scale-up-tier-select.png)
If your operation finishes successfully, your app's overview page shows that it'
### If you get an error
-Some App Service plans can't scale up to the PremiumV3 tier if the underlying App Service deployment doesnΓÇÖt support PremiumV3. See [Scale up from an unsupported resource group and region combination](#unsupported) for more details.
+Some App Service plans can't scale up to the **PremiumV3** tier, or to a newer SKU within **PremiumV3**, if the underlying App Service deployment doesn't support the requested **PremiumV3** SKU. See [Scale up from an unsupported resource group and region combination](#unsupported) for more details.
<a name="unsupported"></a> ## Scale up from an unsupported resource group and region combination
-If your app runs in an App Service deployment where **PremiumV3** isn't available, or if your app runs in a region that currently does not support **PremiumV3**, you need to re-deploy your app to take advantage of **PremiumV3**. You have two options:
+If your app runs in an App Service deployment where **PremiumV3** isn't available, or if your app runs in a region that currently does not support **PremiumV3**, you need to re-deploy your app to take advantage of **PremiumV3**. Alternatively, newer **PremiumV3** SKUs may not be available, in which case you also need to re-deploy your app to take advantage of newer SKUs within **PremiumV3**. You have two options:
-- Create an app in a new resource group and with a new App Service plan. When creating the App Service plan, select a **PremiumV3** tier. This step ensures that the App Service plan is deployed into a deployment unit that supports **PremiumV3**. Then, redeploy your application code into the newly created app. Even if you scale the App Service plan down to a lower tier to save costs, you can always scale back up to **PremiumV3** because the deployment unit supports it.
+- Create an app in a new resource group and with a new App Service plan. When creating the App Service plan, select the desired **PremiumV3** tier. This step ensures that the App Service plan is deployed into a deployment unit that supports **PremiumV3** as well as the specific SKU within **PremiumV3**. Then, redeploy your application code into the newly created app. Even if you scale the new App Service plan down to a lower tier to save costs, you can always scale back up to **PremiumV3** and the desired SKU within **PremiumV3** because the deployment unit supports it.
- If your app already runs in an existing **Premium** tier, then you can clone your app with all app settings, connection strings, and deployment configuration into a new resource group on a new app service plan that uses **PremiumV3**. ![Screenshot showing how to clone your app.](media/app-service-configure-premium-tier/clone-app.png) In the **Clone app** page, you can create an App Service plan using **PremiumV3** in the region you want, and specify the app settings and configuration that you want to clone. -
-## Moving from Premium Container to Premium V3 SKU
-
-The Premium Container SKU will be retired on **30th June 2022**. You should move your applications to the **Premium V3 SKU** ahead of this date. Use the clone functionality in the Azure App Service CLI experience to [move your application from your Premium Container App Service Plan to a new Premium V3 App Service plan](https://aka.ms/pcsku).
- ## Automate with scripts You can automate app creation in the **PremiumV3** tier with scripts, using the [Azure CLI](/cli/azure/install-azure-cli) or [Azure PowerShell](/powershell/azure/). ### Azure CLI
-The following command creates an App Service plan in _P1V3_. You can run it in the Cloud Shell. The options for `--sku` are P1V3, _P2V3_, and _P3V3_.
+The following command creates an App Service plan in _P1V3_. You can run it in the Cloud Shell. The options for `--sku` are _P0V3_, _P1V3_, _P2V3_, _P3V3_, _P1mV3_, _P2mV3_, _P3mV3_, _P4mV3_, and _P5mV3_.
```azurecli-interactive az appservice plan create \
app-service App Service Undelete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-undelete.md
Title: Restore deleted apps
description: Learn how to restore a deleted app in Azure App Service. Avoid the headache of an accidentally deleted app. Previously updated : 11/4/2022 Last updated : 4/3/2023
If you happened to accidentally delete your app in Azure App Service, you can re
> [!NOTE] > - Deleted apps are purged from the system 30 days after the initial deletion. After an app is purged, it can't be recovered.
-> - Undelete functionality isn't supported for the Consumption plan.
-> - Apps Service apps running in an App Service Environment don't support snapshots. Therefore, undelete functionality and clone functionality aren't supported for App Service apps running in an App Service Environment.
+> - Undelete functionality isn't supported for function apps hosted on the Consumption plan or Elastic Premium plan.
+> - App Service apps running in an App Service Environment don't support snapshots. Therefore, undelete functionality and clone functionality aren't supported for App Service apps running in an App Service Environment.
> ## Re-register App Service resource provider
The detailed information includes:
## Restore deleted app >[!NOTE]
->- `Restore-AzDeletedWebApp` isn't supported for function apps.
+>- `Restore-AzDeletedWebApp` isn't supported for function apps hosted on the Consumption plan or Elastic Premium plan.
>- The Restore-AzDeletedWebApp cmdlet restores a deleted web app. The web app specified by TargetResourceGroupName, TargetName, and TargetSlot will be overwritten with the contents and settings of the deleted web app. If the target parameters are not specified, they will automatically be filled with the deleted web app's resource group, name, and slot. If the target web app does not exist, it will automatically be created in the app service plan specified by TargetAppServicePlanName. >- By default `Restore-AzDeletedWebApp` will restore both your app configuration as well any content. If you want to only restore content, you use the **`-RestoreContentOnly`** flag with this commandlet.
-Once the app you want to restore has been identified, you can restore it using `Restore-AzDeletedWebApp`, please see below examples
+After identifying the app you want to restore, you can restore it using `Restore-AzDeletedWebApp`, as shown in the following examples.
>*You can find the full commandlet reference here: **[Restore-AzDeletedWebApp](/powershell/module/az.websites/restore-azdeletedwebapp)*** . >Restore to the original app name:
Restore-AzDeletedWebApp -ResourceGroupName <original_rg> -Name <original_app> -D
The inputs for command are: -- **Target Resource Group**: Target resource group where the app will be restored
+- **Target Resource Group**: Target resource group where the app is to be restored
- **TargetName**: Target app for the deleted app to be restored to - **TargetAppServicePlanName**: App Service plan linked to the app - **Name**: Name for the app, should be globally unique. - **ResourceGroupName**: Original resource group for the deleted app - **Slot**: Slot for the deleted app -- **RestoreContentOnly**: By default `Restore-AzDeletedWebApp` will restore both your app configuration as well any content. If you want to only restore content, you can use the `-RestoreContentOnly` flag with this commandlet.
+- **RestoreContentOnly**: By default `Restore-AzDeletedWebApp` restores both your app configuration and any content. If you want to restore only the content, you can use the `-RestoreContentOnly` flag with this commandlet, as in the sketch following this list.
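+
+The following is a minimal sketch of a content-only restore into an existing target app; the resource group and app names are placeholders:
+
+```powershell
+# Restore only the content (not the configuration) of a deleted app
+# into an existing target app. All names below are placeholders.
+Restore-AzDeletedWebApp -ResourceGroupName <original_rg> -Name <original_app> `
+    -TargetResourceGroupName <target_rg> -TargetName <target_app> `
+    -RestoreContentOnly
+```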
> [!NOTE] > If the app was hosted on and then deleted from an App Service Environment, it can be restored only if the corresponding App Service Environment still exists.
+## Restore deleted function app
+
+If the function app was hosted on a **Dedicated (App Service) plan**, it can be restored, as long as it was using the default App Service storage.
+
+1. Fetch the DeletedSiteId of the app version you want to restore by using the `Get-AzDeletedWebApp` cmdlet:
+
+```powershell
+Get-AzDeletedWebApp -ResourceGroupName <RGofDeletedApp> -Name <NameofApp>
+```
+2. Create a new function app in a Dedicated plan. Refer to the instructions for [how to create an app in the portal](../azure-functions/functions-create-function-app-portal.md#create-a-function-app).
+3. Restore to the newly created function app using this cmdlet:
+
+```powershell
+Restore-AzDeletedWebApp -ResourceGroupName <RGofnewapp> -Name <newApp> -deletedId "/subscriptions/xxxx/providers/Microsoft.Web/locations/xxxx/deletedSites/xxxx"
+```
+
+Currently, undelete (Restore-AzDeletedWebApp) isn't supported for function apps hosted in a Consumption plan or Elastic Premium plan, because the content resides on Azure Files in a storage account. If that storage account still exists and its file shares haven't been deleted, you can use the following steps as a workaround:
+
+
+1. Create a new function app in a Consumption or Premium plan. Refer to the instructions for [how to create an app in the portal](../azure-functions/functions-create-function-app-portal.md#create-a-function-app).
+2. Set the following [app settings](../azure-functions/functions-how-to-use-azure-function-app-settings.md?tabs=portal#settings) to refer to the old storage account, which contains the content from the previous app. A PowerShell sketch follows the table.
+
+ | App Setting | Suggested value |
+ | | - |
+ | **AzureWebJobsStorage** | Connection String for the storage account used by the deleted app. |
+ | **WEBSITE_CONTENTAZUREFILECONNECTIONSTRING** | Connection String for the storage account used by the deleted app. |
+ | **WEBSITE_CONTENTSHARE** | File share on storage account used by the deleted app. |
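+
+If you prefer to script this step, the following PowerShell sketch applies the three settings with the `Update-AzFunctionAppSetting` cmdlet; the app name, resource group, connection string, and share name are placeholders:
+
+```powershell
+# Point the new function app at the storage account and file share
+# used by the deleted app. All values below are placeholders.
+Update-AzFunctionAppSetting -Name <new_app> -ResourceGroupName <new_rg> -AppSetting @{
+    "AzureWebJobsStorage" = "<old_storage_connection_string>"
+    "WEBSITE_CONTENTAZUREFILECONNECTIONSTRING" = "<old_storage_connection_string>"
+    "WEBSITE_CONTENTSHARE" = "<old_file_share_name>"
+}
+```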
app-service Overview Hosting Plans https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-hosting-plans.md
description: Learn how App Service plans work in Azure App Service, how they're
keywords: app service, azure app service, scale, scalable, scalability, app service plan, app service cost ms.assetid: dea3f41e-cf35-481b-a6bc-33d7fc9d01b1 Previously updated : 10/01/2020 Last updated : 04/06/2023
Each tier also provides a specific subset of App Service features. These feature
<a name="new-pricing-tier-premiumv3"></a> > [!NOTE]
-> The new **PremiumV3** pricing tier guarantees machines with faster processors (minimum 195 [ACU](../virtual-machines/acu.md) per virtual CPU), SSD storage, and quadruple memory-to-core ratio compared to **Standard** tier. **PremiumV3** also supports higher scale via increased instance count while still providing all the advanced capabilities found in **Standard** tier. All features available in the existing **PremiumV2** tier are included in **PremiumV3**.
+> The **PremiumV3** pricing tier guarantees machines with faster processors (minimum 195 [ACU](../virtual-machines/acu.md) per virtual CPU), SSD storage, memory-optimized options and quadruple memory-to-core ratio compared to **Standard** tier. **PremiumV3** also supports higher scale via increased instance count while still providing all the advanced capabilities found in **Standard** tier. All features available in the existing **PremiumV2** tier are included in **PremiumV3**.
>
-> Similar to other dedicated tiers, three VM sizes are available for this tier:
->
-> - Small (2 CPU core, 8 GiB of memory)
-> - Medium (4 CPU cores, 16 GiB of memory)
-> - Large (8 CPU cores, 32 GiB of memory) 
+> Multiple VM sizes are available for this tier, including 4-to-1 and 8-to-1 memory-to-core ratios:
+>
+> - P0v3&nbsp;&nbsp;&nbsp;&nbsp;(1 vCPU, 4 GiB of memory)
+> - P1v3&nbsp;&nbsp;&nbsp;&nbsp;(2 vCPU, 8 GiB of memory)
+> - P1mv3&nbsp;(2 vCPU, 16 GiB of memory)
+> - P2v3&nbsp;&nbsp;&nbsp;&nbsp;(4 vCPU, 16 GiB of memory)
+> - P2mv3&nbsp;(4 vCPU, 32 GiB of memory)
+> - P3v3&nbsp;&nbsp;&nbsp;&nbsp;(8 vCPU, 32 GiB of memory) 
+> - P3mv3&nbsp;(8 vCPU, 64 GiB of memory)
+> - P4mv3&nbsp;(16 vCPU, 128 GiB of memory)
+> - P5mv3&nbsp;(32 vCPU, 256 GiB of memory)
> > For **PremiumV3** pricing information, see [App Service Pricing](https://azure.microsoft.com/pricing/details/app-service/). >
Isolate your app into a new App Service plan when:
| B1, S1, P1v2, I1v1 | 8 | | B2, S2, P2v2, I2v1 | 16 | | B3, S3, P3v2, I3v1 | 32 |
+ | P0v3 | 8 |
| P1v3, I1v2 | 16 |
- | P2v3, I2v2 | 32 |
- | P3v3, I3v2 | 64 |
+ | P2v3, I2v2, P1mv3 | 32 |
+ | P3v3, I3v2, P2mv3 | 64 |
+ | I4v2, I5v2, I6v2 | Max density bounded by vCPU usage |
+ | P3mv3, P4mv3, P5mv3 | Max density bounded by vCPU usage |
- You want to scale the app independently from the other apps in the existing plan. - The app needs resource in a different geographical region.
app-service Scenario Secure App Access Microsoft Graph As App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scenario-secure-app-access-microsoft-graph-as-app.md
Previously updated : 03/14/2023 Last updated : 04/05/2023 ms.devlang: csharp
Run the install commands.
```dotnetcli dotnet add package Microsoft.Identity.Web.MicrosoftGraph
+dotnet add package Microsoft.Graph
``` #### Package Manager Console
Open the project/solution in Visual Studio, and open the console by using the **
Run the install commands. ```powershell Install-Package Microsoft.Identity.Web.MicrosoftGraph
+Install-Package Microsoft.Graph
``` ### .NET Example
using System;
using System.Collections.Generic; using System.Threading.Tasks; using Microsoft.AspNetCore.Mvc.RazorPages;
-using Azure.Identity;ΓÇï
-using Microsoft.Graph.Core;ΓÇïΓÇï
-using System.Net.Http.Headers;
+using Microsoft.Extensions.Logging;
+using Microsoft.Graph;
+using Azure.Identity;
...
public async Task OnGetAsync()
var credential = new ChainedTokenCredential( new ManagedIdentityCredential(), new EnvironmentCredential());
- var token = credential.GetToken(
- new Azure.Core.TokenRequestContext(
- new[] { "https://graph.microsoft.com/.default" }));
- var accessToken = token.Token;
- var graphServiceClient = new GraphServiceClient(
- new DelegateAuthenticationProvider((requestMessage) =>
- {
- requestMessage
- .Headers
- .Authorization = new AuthenticationHeaderValue("bearer", accessToken);
+ string[] scopes = new[] { "https://graph.microsoft.com/.default" };
- return Task.CompletedTask;
- }));
+ var graphServiceClient = new GraphServiceClient(
+ credential, scopes);
- // MSGraphUser is a DTO class being used to hold User information from the graph service client call
List<MSGraphUser> msGraphUsers = new List<MSGraphUser>(); try {
- var users =await graphServiceClient.Users.Request().GetAsync();
- foreach(var u in users)
+ //var users = await graphServiceClient.Users.Request().GetAsync();
+ var users = await graphServiceClient.Users.GetAsync();
+ foreach (var u in users.Value)
{ MSGraphUser user = new MSGraphUser(); user.userPrincipalName = u.UserPrincipalName;
public async Task OnGetAsync()
msGraphUsers.Add(user); } }
- catch(Exception ex)
+ catch (Exception ex)
{ string msg = ex.Message; }
application-gateway Application Gateway Private Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-private-deployment.md
A list of all Azure CLI references for Private Link Configuration on Application
+>[!Note]
+>Feature registration may take up to 30 minutes to transition from Registering to Registered status.
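+
+To check the registration status from the Azure CLI, a query along the following lines works; the feature name is a placeholder for the name used in the registration step:
+
+```azurecli-interactive
+# Check whether the preview feature has moved from Registering to Registered.
+az feature show --namespace Microsoft.Network --name <FeatureName> --query properties.state --output tsv
+```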
+ For more information about preview features, see [Set up preview features in Azure subscription](../azure-resource-manager/management/preview-features.md) ## Unregister from the preview
azure-functions Dotnet Isolated Process Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/dotnet-isolated-process-guide.md
The following service-specific bindings are currently included in the preview:
| Service | Trigger | Input binding | Output binding | |-|-|-|-|
-| [Azure Blobs][blob-sdk-types] | Preview support | Preview support | Not yet supported |
+| [Azure Blobs][blob-sdk-types] | Preview support | Preview support | Not yet supported<sup>1</sup> |
+| [Azure Cosmos DB][cosmos-sdk-types] | SDK types not used<sup>2</sup> | Preview support | Not yet supported<sup>1</sup> |
[blob-sdk-types]: ./functions-bindings-storage-blob.md?tabs=isolated-process%2Cextensionv5&pivots=programming-language-csharp#binding-types
+[cosmos-sdk-types]: ./functions-bindings-cosmosdb-v2.md?tabs=isolated-process%2Cextensionv4&pivots=programming-language-csharp#binding-types
+
+<sup>1</sup> Support for SDK type bindings does not presently extend to output bindings.
+
+<sup>2</sup> The Cosmos DB trigger uses the [Azure Cosmos DB change feed](../cosmos-db/change-feed.md) and exposes change feed items as JSON-serializable types. The absence of SDK types is by design for this scenario.
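+
+As an illustration of what an SDK type binding looks like in the isolated worker model, the following is a minimal sketch of a blob trigger that binds to a `BlobClient`; the container path and connection setting are placeholders, and the sketch assumes the trigger's preview SDK type support covers `BlobClient`:
+
+```csharp
+using System.Threading.Tasks;
+using Azure.Storage.Blobs;
+using Microsoft.Azure.Functions.Worker;
+using Microsoft.Extensions.Logging;
+
+public class BlobSdkTypeExample
+{
+    // Binds the triggering blob to a BlobClient (preview SDK type support).
+    [Function("LogBlobSize")]
+    public static async Task Run(
+        [BlobTrigger("sample-container/{name}", Connection = "AzureWebJobsStorage")] BlobClient blobClient,
+        FunctionContext context)
+    {
+        var logger = context.GetLogger("LogBlobSize");
+        var properties = await blobClient.GetPropertiesAsync();
+        logger.LogInformation("Blob {Name} is {Size} bytes.", blobClient.Name, properties.Value.ContentLength);
+    }
+}
+```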
The [SDK type binding samples](https://github.com/Azure/azure-functions-dotnet-worker/tree/main/samples/WorkerBindingSamples) show examples of working with the various supported types.
azure-functions Functions Bindings Azure Sql Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql-input.md
namespace AzureSQLSamples
public static IActionResult Run( [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "gettodoitem")] HttpRequest req,
- [Sql("select [Id], [order], [title], [url], [completed] from dbo.ToDo where Id = @Id",
- CommandType = System.Data.CommandType.Text,
- Parameters = "@Id={Query.id}",
- ConnectionStringSetting = "SqlConnectionString")]
+ [Sql(commandText: "select [Id], [order], [title], [url], [completed] from dbo.ToDo where Id = @Id",
+ commandType: System.Data.CommandType.Text,
+ parameters: "@Id={Query.id}",
+ connectionStringSetting: "SqlConnectionString")]
IEnumerable<ToDoItem> toDoItem) { return new OkObjectResult(toDoItem.FirstOrDefault());
namespace AzureSQLSamples
public static IActionResult Run( [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "gettodoitems/{priority}")] HttpRequest req,
- [Sql("select [Id], [order], [title], [url], [completed] from dbo.ToDo where [Priority] > @Priority",
- CommandType = System.Data.CommandType.Text,
- Parameters = "@Priority={priority}",
- ConnectionStringSetting = "SqlConnectionString")]
+ [Sql(commandText: "select [Id], [order], [title], [url], [completed] from dbo.ToDo where [Priority] > @Priority",
+ commandType: System.Data.CommandType.Text,
+ parameters: "@Priority={priority}",
+ connectionStringSetting: "SqlConnectionString")]
IEnumerable<ToDoItem> toDoItems) { return new OkObjectResult(toDoItems);
namespace AzureSQLSamples
public static IActionResult Run( [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "gettodoitem")] HttpRequest req,
- [Sql("select [Id], [order], [title], [url], [completed] from dbo.ToDo where Id = @Id",
- CommandType = System.Data.CommandType.Text,
- Parameters = "@Id={Query.id}",
- ConnectionStringSetting = "SqlConnectionString")]
+ [SqlInput(commandText: "select [Id], [order], [title], [url], [completed] from dbo.ToDo where Id = @Id",
+ commandType: System.Data.CommandType.Text,
+ parameters: "@Id={Query.id}",
+ connectionStringSetting: "SqlConnectionString")]
IEnumerable<ToDoItem> toDoItem) { return new OkObjectResult(toDoItem.FirstOrDefault());
namespace AzureSQLSamples
public static IActionResult Run( [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "gettodoitems/{priority}")] HttpRequest req,
- [Sql("select [Id], [order], [title], [url], [completed] from dbo.ToDo where [Priority] > @Priority",
- CommandType = System.Data.CommandType.Text,
- Parameters = "@Priority={priority}",
- ConnectionStringSetting = "SqlConnectionString")]
+ [SqlInput(commandText: "select [Id], [order], [title], [url], [completed] from dbo.ToDo where [Priority] > @Priority",
+ commandType: System.Data.CommandType.Text,
+ parameters: "@Priority={priority}",
+ connectionStringSetting: "SqlConnectionString")]
IEnumerable<ToDoItem> toDoItems) { return new OkObjectResult(toDoItems);
namespace AzureSQL.ToDo
public static IActionResult Run( [HttpTrigger(AuthorizationLevel.Anonymous, "delete", Route = "DeleteFunction")] HttpRequest req, ILogger log,
- [Sql("DeleteToDo", CommandType = System.Data.CommandType.StoredProcedure,
- Parameters = "@Id={Query.id}", ConnectionStringSetting = "SqlConnectionString")]
+ [SqlInput(commandText: "DeleteToDo", commandType: System.Data.CommandType.StoredProcedure,
+ parameters: "@Id={Query.id}", connectionStringSetting: "SqlConnectionString")]
IEnumerable<ToDoItem> toDoItems) { return new OkObjectResult(toDoItems);
namespace AzureSQL.ToDo
} ```
-<!-- Uncomment to support C# script examples.
# [C# Script](#tab/csharp-script) >+
+More samples for the Azure SQL input binding are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/main/samples/samples-csharpscript).
+
+This section contains the following examples:
+
+* [HTTP trigger, get row by ID from query string](#http-trigger-look-up-id-from-query-string-csharpscript)
+* [HTTP trigger, delete rows](#http-trigger-delete-one-or-multiple-rows-csharpscript)
+
+The examples refer to a `ToDoItem` class and a corresponding database table:
+++
+<a id="http-trigger-look-up-id-from-query-string-csharpscript"></a>
+### HTTP trigger, get row by ID from query string
+
+The following example shows an Azure SQL input binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function is triggered by an HTTP request that uses a query string to specify the ID. That ID is used to retrieve a `ToDoItem` record with the specified query.
+
+> [!NOTE]
+> The HTTP query string parameter is case-sensitive.
+>
+
+Here's the binding data in the *function.json* file:
+
+```json
+{
+ "authLevel": "anonymous",
+ "type": "httpTrigger",
+ "direction": "in",
+ "name": "req",
+ "methods": [
+ "get"
+ ]
+},
+{
+ "type": "http",
+ "direction": "out",
+ "name": "res"
+},
+{
+ "name": "todoItem",
+ "type": "sql",
+ "direction": "in",
+ "commandText": "select [Id], [order], [title], [url], [completed] from dbo.ToDo where Id = @Id",
+ "commandType": "Text",
+ "parameters": "@Id = {Query.id}",
+ "connectionStringSetting": "SqlConnectionString"
+}
+```
+
+The [configuration](#configuration) section explains these properties.
+
+Here's the C# script code:
+
+```cs
+#r "Newtonsoft.Json"
+
+using System.Net;
+using Microsoft.AspNetCore.Mvc;
+using Microsoft.Extensions.Primitives;
+using Newtonsoft.Json;
+using System.Collections.Generic;
+
+public static IActionResult Run(HttpRequest req, ILogger log, IEnumerable<ToDoItem> todoItem)
+{
+ return new OkObjectResult(todoItem);
+}
+```
++
+<a id="http-trigger-delete-one-or-multiple-rows-csharpscript"></a>
+### HTTP trigger, delete rows
+
+The following example shows an Azure SQL input binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding to execute a stored procedure with input from the HTTP request query parameter. In this example, the stored procedure deletes a single record or all records depending on the value of the parameter.
+
+The stored procedure `dbo.DeleteToDo` must be created on the SQL database.
++
+Here's the binding data in the *function.json* file:
+
+```json
+{
+ "authLevel": "anonymous",
+ "type": "httpTrigger",
+ "direction": "in",
+ "name": "req",
+ "methods": [
+ "get"
+ ]
+},
+{
+ "type": "http",
+ "direction": "out",
+ "name": "res"
+},
+{
+ "name": "todoItems",
+ "type": "sql",
+ "direction": "in",
+ "commandText": "DeleteToDo",
+ "commandType": "StoredProcedure",
+ "parameters": "@Id = {Query.id}",
+ "connectionStringSetting": "SqlConnectionString"
+}
+```
++
+The [configuration](#configuration) section explains these properties.
+
+Here's the C# script code:
+
+```cs
+#r "Newtonsoft.Json"
+
+using System.Net;
+using Microsoft.AspNetCore.Mvc;
+using Microsoft.Extensions.Primitives;
+using Newtonsoft.Json;
+using System.Collections.Generic;
+
+public static IActionResult Run(HttpRequest req, ILogger log, IEnumerable<ToDoItem> todoItems)
+{
+ return new OkObjectResult(todoItems);
+}
+```
+ ::: zone-end
azure-functions Functions Bindings Azure Sql Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql-output.md
namespace AzureSQL.ToDo
public static async Task<IActionResult> Run( [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "PostFunction")] HttpRequest req, ILogger log,
- [Sql("dbo.ToDo", ConnectionStringSetting = "SqlConnectionString")] IAsyncCollector<ToDoItem> toDoItems,
- [Sql("dbo.RequestLog", ConnectionStringSetting = "SqlConnectionString")] IAsyncCollector<RequestLog> requestLogs)
+ [Sql(commandText: "dbo.ToDo", connectionStringSetting: "SqlConnectionString")] IAsyncCollector<ToDoItem> toDoItems,
+ [Sql(commandText: "dbo.RequestLog", connectionStringSetting: "SqlConnectionString")] IAsyncCollector<RequestLog> requestLogs)
{ string requestBody = await new StreamReader(req.Body).ReadToEndAsync(); ToDoItem toDoItem = JsonConvert.DeserializeObject<ToDoItem>(requestBody);
namespace AzureSQLSamples
public static async Task<IActionResult> Run( [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "addtodo-asynccollector")] HttpRequest req,
- [Sql("dbo.ToDo", ConnectionStringSetting = "SqlConnectionString")] IAsyncCollector<ToDoItem> newItems)
+ [Sql(commandText: "dbo.ToDo", connectionStringSetting: "SqlConnectionString")] IAsyncCollector<ToDoItem> newItems)
{ string requestBody = await new StreamReader(req.Body).ReadToEndAsync(); var incomingItems = JsonConvert.DeserializeObject<ToDoItem[]>(requestBody);
The examples refer to a `ToDoItem` class and a corresponding database table:
The following example shows a [C# function](functions-dotnet-class-library.md) that adds a record to a database, using data provided in an HTTP POST request as a JSON body.
+```cs
+using System;
+using System.Collections.Generic;
+using System.IO;
+using System.Threading.Tasks;
+using Microsoft.Azure.Functions.Worker.Extensions.Sql;
+using Microsoft.Azure.Functions.Worker;
+using Microsoft.Azure.Functions.Worker.Http;
+using Microsoft.Extensions.Logging;
+using Newtonsoft.Json;
+
+namespace AzureSQL.ToDo
+{
+ public static class PostToDo
+ {
+ // create a new ToDoItem from body object
+ // uses output binding to insert new item into ToDo table
+ [FunctionName("PostToDo")]
+ public static async Task<IActionResult> Run(
+ [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "PostFunction")] HttpRequest req,
+ ILogger log,
+ [SqlOutput(commandText: "dbo.ToDo", connectionStringSetting: "SqlConnectionString")] IAsyncCollector<ToDoItem> toDoItems)
+ {
+ string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
+ ToDoItem toDoItem = JsonConvert.DeserializeObject<ToDoItem>(requestBody);
+
+ // generate a new id for the todo item
+ toDoItem.Id = Guid.NewGuid();
+
+ // set Url from env variable ToDoUri
+ toDoItem.url = Environment.GetEnvironmentVariable("ToDoUri")+"?id="+toDoItem.Id.ToString();
+
+ // if completed is not provided, default to false
+ if (toDoItem.completed == null)
+ {
+ toDoItem.completed = false;
+ }
+
+ await toDoItems.AddAsync(toDoItem);
+ await toDoItems.FlushAsync();
+ List<ToDoItem> toDoItemList = new List<ToDoItem> { toDoItem };
+
+ return new OkObjectResult(toDoItemList);
+ }
+ }
+}
+```
<a id="http-trigger-write-to-two-tables-c-oop"></a>
namespace AzureSQL.ToDo
public static async Task<IActionResult> Run( [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "PostFunction")] HttpRequest req, ILogger log,
- [Sql("dbo.ToDo", ConnectionStringSetting = "SqlConnectionString")] IAsyncCollector<ToDoItem> toDoItems,
- [Sql("dbo.RequestLog", ConnectionStringSetting = "SqlConnectionString")] IAsyncCollector<RequestLog> requestLogs)
+ [SqlOutput(commandText: "dbo.ToDo", connectionStringSetting: "SqlConnectionString")] IAsyncCollector<ToDoItem> toDoItems,
+ [SqlOutput(commandText: "dbo.RequestLog", connectionStringSetting: "SqlConnectionString")] IAsyncCollector<RequestLog> requestLogs)
{ string requestBody = await new StreamReader(req.Body).ReadToEndAsync(); ToDoItem toDoItem = JsonConvert.DeserializeObject<ToDoItem>(requestBody);
namespace AzureSQLSamples
public static async Task<IActionResult> Run( [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "addtodo-asynccollector")] HttpRequest req,
- [Sql("dbo.ToDo", ConnectionStringSetting = "SqlConnectionString")] IAsyncCollector<ToDoItem> newItems)
+ [SqlOutput(commandText: "dbo.ToDo", connectionStringSetting: "SqlConnectionString")] IAsyncCollector<ToDoItem> newItems)
{ string requestBody = await new StreamReader(req.Body).ReadToEndAsync(); var incomingItems = JsonConvert.DeserializeObject<ToDoItem[]>(requestBody);
namespace AzureSQLSamples
} ```
-<!-- Uncomment to support C# script examples.
+ # [C# Script](#tab/csharp-script) >+
+More samples for the Azure SQL output binding are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/main/samples/samples-csharpscript).
+
+This section contains the following examples:
+
+* [HTTP trigger, write records to a table](#http-trigger-write-records-to-table-csharpscript)
+* [HTTP trigger, write to two tables](#http-trigger-write-to-two-tables-csharpscript)
+
+The examples refer to a `ToDoItem` class and a corresponding database table:
++++
+<a id="http-trigger-write-records-to-table-csharpscript"></a>
+### HTTP trigger, write records to a table
+
+The following example shows a SQL output binding in a function.json file and a [C# script function](functions-reference-csharp.md) that adds records to a table, using data provided in an HTTP POST request as a JSON body.
+
+The following is binding data in the function.json file:
+
+```json
+{
+ "authLevel": "anonymous",
+ "type": "httpTrigger",
+ "direction": "in",
+ "name": "req",
+ "methods": [
+ "post"
+ ]
+},
+{
+ "type": "http",
+ "direction": "out",
+ "name": "res"
+},
+{
+ "name": "todoItem",
+ "type": "sql",
+ "direction": "out",
+ "commandText": "dbo.ToDo",
+ "connectionStringSetting": "SqlConnectionString"
+}
+```
+
+The [configuration](#configuration) section explains these properties.
+
+The following is sample C# script code:
+
+```cs
+#r "Newtonsoft.Json"
+
+using System.Net;
+using Microsoft.AspNetCore.Mvc;
+using Microsoft.Extensions.Primitives;
+using Newtonsoft.Json;
+
+public static IActionResult Run(HttpRequest req, ILogger log, out ToDoItem todoItem)
+{
+    log.LogInformation("C# HTTP trigger function processed a request.");
+
+    string requestBody = new StreamReader(req.Body).ReadToEnd();
+    todoItem = JsonConvert.DeserializeObject<ToDoItem>(requestBody);
+
+    return new OkObjectResult(todoItem);
+}
+```
+
+<a id="http-trigger-write-to-two-tables-csharpscript"></a>
+### HTTP trigger, write to two tables
+
+The following example shows a SQL output binding in a function.json file and a [C# script function](functions-reference-csharp.md) that adds records to a database in two different tables (`dbo.ToDo` and `dbo.RequestLog`), using data provided in an HTTP POST request as a JSON body and multiple output bindings.
+
+The second table, `dbo.RequestLog`, corresponds to the following definition:
+
+```sql
+CREATE TABLE dbo.RequestLog (
+ Id int identity(1,1) primary key,
+ RequestTimeStamp datetime2 not null,
+ ItemCount int not null
+)
+```
+
+The following is binding data in the function.json file:
+
+```json
+{
+ "authLevel": "anonymous",
+ "type": "httpTrigger",
+ "direction": "in",
+ "name": "req",
+ "methods": [
+ "post"
+ ]
+},
+{
+ "type": "http",
+ "direction": "out",
+ "name": "res"
+},
+{
+ "name": "todoItem",
+ "type": "sql",
+ "direction": "out",
+ "commandText": "dbo.ToDo",
+ "connectionStringSetting": "SqlConnectionString"
+},
+{
+ "name": "requestLog",
+ "type": "sql",
+ "direction": "out",
+ "commandText": "dbo.RequestLog",
+ "connectionStringSetting": "SqlConnectionString"
+}
+```
+
+The [configuration](#configuration) section explains these properties.
+
+The following is sample C# script code:
+
+```cs
+#r "Newtonsoft.Json"
+
+using System.Net;
+using Microsoft.AspNetCore.Mvc;
+using Microsoft.Extensions.Primitives;
+using Newtonsoft.Json;
+
+public static IActionResult Run(HttpRequest req, ILogger log, out ToDoItem todoItem, out RequestLog requestLog)
+{
+    log.LogInformation("C# HTTP trigger function processed a request.");
+
+    string requestBody = new StreamReader(req.Body).ReadToEnd();
+    todoItem = JsonConvert.DeserializeObject<ToDoItem>(requestBody);
+
+    requestLog = new RequestLog();
+    requestLog.RequestTimeStamp = DateTime.Now;
+    requestLog.ItemCount = 1;
+
+    return new OkObjectResult(todoItem);
+}
+
+public class RequestLog {
+    public DateTime RequestTimeStamp { get; set; }
+    public int ItemCount { get; set; }
+}
+```
++ ::: zone-end
azure-functions Functions Bindings Azure Sql Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql-trigger.md
namespace AzureSQL.ToDo
{ [FunctionName("ToDoTrigger")] public static void Run(
- [SqlTrigger("[dbo].[ToDo]", ConnectionStringSetting = "SqlConnectionString")]
+ [SqlTrigger("[dbo].[ToDo]", "SqlConnectionString")]
IReadOnlyList<SqlChange<ToDoItem>> changes, ILogger logger) {
azure-functions Functions Bindings Azure Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql.md
Add the extension to your project by installing this [NuGet package](https://www
dotnet add package Microsoft.Azure.Functions.Worker.Extensions.Sql --prerelease ```
-<!-- awaiting bundle support
# [C# script](#tab/csharp-script)
-Functions run as C# script, which is supported primarily for C# portal editing. To update existing binding extensions for C# script apps running in the portal without having to republish your function app, see [Update your extensions].
+Functions run as C# script, which is supported primarily for C# portal editing. The SQL bindings extension is part of a preview [extension bundle], which is specified in your host.json project file.
++
+You can add the preview extension bundle by adding or replacing the following code in your `host.json` file:
+
+```json
+{
+ "version": "2.0",
+ "extensionBundle": {
+ "id": "Microsoft.Azure.Functions.ExtensionBundle.Preview",
+ "version": "[4.*, 5.0.0)"
+ }
+}
+```
+
+<!-- To update existing binding extensions for C# script apps running in the portal without having to republish your function app, see [Update your extensions]. -->
+
+<!-- You can install this version of the extension in your function app by registering the [extension bundle], version 3.x, or a later version. -->
-You can install this version of the extension in your function app by registering the [extension bundle], version 3.x, or a later version.
>
azure-functions Functions Bindings Cosmosdb V2 Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cosmosdb-v2-input.md
See the [Example section](#example) for complete examples.
## Usage ::: zone pivot="programming-language-csharp"
-The parameter type supported by the Event Grid trigger depends on the Functions runtime version, the extension package version, and the C# modality used.
+
+The parameter type supported by the Cosmos DB input binding depends on the Functions runtime version, the extension package version, and the C# modality used.
# [Functions 2.x+](#tab/functionsv2/in-process)
The parameter type supported by the Event Grid trigger depends on the Functions
# [Extension 4.x+](#tab/extensionv4/isolated-process)
-Only JSON string inputs are currently supported.
# [Functions 2.x+](#tab/functionsv2/csharp-script)
azure-functions Functions Bindings Cosmosdb V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cosmosdb-v2.md
To learn more, see [Update your extensions].
::: zone-end +
+## Binding types
+
+The binding types supported for .NET depend on both the extension version and C# execution mode, which can be one of the following:
+
+# [In-process class library](#tab/in-process)
+
+An in-process class library is a compiled C# function that runs in the same process as the Functions runtime.
+
+# [Isolated process](#tab/isolated-process)
+
+An isolated worker process class library is a compiled C# function that runs in a worker process isolated from the runtime.
+
+# [C# script](#tab/csharp-script)
+
+C# script is used primarily when creating C# functions in the Azure portal.
+++
+Choose a version to see binding type details for the mode and version.
+
+# [Extension 4.x and higher](#tab/extensionv4/in-process)
+
+The Azure Cosmos DB extension supports parameter types according to the table below.
+
+| Binding | Parameter types |
+|-|-|
+| Cosmos DB trigger | JSON serializable types<sup>1</sup><br/>`IEnumerable<T>`<sup>2</sup> |
+| Cosmos DB input | JSON serializable types<sup>1</sup><br/>`IEnumerable<T>`<sup>2</sup><br/>[CosmosClient] |
+| Cosmos DB output | JSON serializable types<sup>1</sup> |
+
+<sup>1</sup> Documents containing JSON data can be deserialized into known plain-old CLR object (POCO) types.
+
+<sup>2</sup> `IEnumerable<T>` provides a collection of documents. Here, `T` is a JSON serializable type. When specified for a trigger, it allows a single invocation to process a batch of documents. When used for an input binding, this allows multiple documents to be returned by the query.
+
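+
+For example, a minimal in-process sketch of an input binding that returns multiple documents as `IEnumerable<T>` might look like the following; the database, container, connection setting, query, and `ToDoItem` type are placeholders, and the attribute property names follow the extension 4.x conventions:
+
+```csharp
+using System.Collections.Generic;
+using Microsoft.AspNetCore.Http;
+using Microsoft.AspNetCore.Mvc;
+using Microsoft.Azure.WebJobs;
+using Microsoft.Azure.WebJobs.Extensions.Http;
+
+public static class GetToDoItems
+{
+    // Returns every document matching the query, deserialized into POCOs.
+    [FunctionName("GetToDoItems")]
+    public static IActionResult Run(
+        [HttpTrigger(AuthorizationLevel.Anonymous, "get")] HttpRequest req,
+        [CosmosDB("ToDoDb", "Items",
+            Connection = "CosmosDBConnection",
+            SqlQuery = "SELECT * FROM c")] IEnumerable<ToDoItem> toDoItems)
+    {
+        return new OkObjectResult(toDoItems);
+    }
+
+    public class ToDoItem
+    {
+        public string id { get; set; }
+        public string Description { get; set; }
+    }
+}
+```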
+# [Functions 2.x and higher](#tab/functionsv2/in-process)
+
+Earlier versions of the extension exposed types from the now deprecated [Microsoft.Azure.Documents] namespace. Newer types from [Microsoft.Azure.Cosmos] are exclusive to **extension 4.x and higher**.
+
+# [Extension 4.x and higher](#tab/extensionv4/isolated-process)
+
+The isolated worker process supports parameter types according to the table below. Binding to JSON serializable types is currently the only option that is generally available. Support for binding to types from [Microsoft.Azure.Cosmos] is in preview.
+
+| Binding | Parameter types | Preview parameter types<sup>1</sup> |
+|-|-|-|
+| Cosmos DB trigger | JSON serializable types<sup>2</sup><br/>`IEnumerable<T>`<sup>3</sup> | *No preview types* |
+| Cosmos DB input | JSON serializable types<sup>2</sup><br/>`IEnumerable<T>`<sup>3</sup> | [CosmosClient]<br/>[Database]<br/>[Container] |
+| Cosmos DB output | JSON serializable types<sup>2</sup> | *No preview types*<sup>4</sup> |
+
+<sup>1</sup> Preview types require use of [Microsoft.Azure.Functions.Worker.Extensions.CosmosDB 4.1.0-preview1 or later][sdk-types-extension-version], [Microsoft.Azure.Functions.Worker 1.12.1-preview1 or later][sdk-types-worker-version], and [Microsoft.Azure.Functions.Worker.Sdk 1.9.0-preview1 or later][sdk-types-worker-sdk-version]. When developing on your local machine, you will need [Azure Functions Core Tools version 4.0.5000 or later](./functions-run-local.md). When using a preview type, [binding expressions](./functions-bindings-expressions-patterns.md) that rely on trigger data are not supported.
+
+[sdk-types-extension-version]: https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.CosmosDB/4.1.0-preview1
+[sdk-types-worker-version]: https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker/1.12.1-preview1
+[sdk-types-worker-sdk-version]: https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Sdk/1.9.0-preview1
+
+<sup>2</sup> Documents containing JSON data can be deserialized into known plain-old CLR object (POCO) types.
+
+<sup>3</sup> `IEnumerable<T>` provides a collection of documents. Here, `T` is a JSON serializable type. When specified for a trigger, it allows a single invocation to process a batch of documents. When used for an input binding, this allows multiple documents to be returned by the query.
+
+<sup>4</sup> Support for SDK type bindings does not presently extend to output bindings.
+
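+
+As a sketch of a preview client-type binding, the following input binding hands the function a `Container` directly; the database, container, and connection setting names are placeholders, and the preview package versions listed above are assumed:
+
+```csharp
+using System.Net;
+using System.Threading.Tasks;
+using Microsoft.Azure.Cosmos;
+using Microsoft.Azure.Functions.Worker;
+using Microsoft.Azure.Functions.Worker.Http;
+
+public class ContainerBindingExample
+{
+    // Binds a Container (preview SDK type) rather than deserialized documents.
+    [Function("ReadContainerInfo")]
+    public static async Task<HttpResponseData> Run(
+        [HttpTrigger(AuthorizationLevel.Anonymous, "get")] HttpRequestData req,
+        [CosmosDBInput("ToDoDb", "Items", Connection = "CosmosDBConnection")] Container container)
+    {
+        // Read container metadata through the bound SDK type.
+        var containerResponse = await container.ReadContainerAsync();
+        var response = req.CreateResponse(HttpStatusCode.OK);
+        await response.WriteStringAsync($"Bound to container '{containerResponse.Resource.Id}'.");
+        return response;
+    }
+}
+```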
+# [Functions 2.x and higher](#tab/functionsv2/isolated-process)
+
+Earlier versions of extensions in the isolated worker process only support binding to JSON serializable types. Additional options are available in **extension 4.x and higher**.
+
+# [Extension 4.x and higher](#tab/extensionv4/csharp-script)
+
+The Azure Cosmos DB extension supports parameter types according to the table below.
+
+| Binding | Parameter types |
+|-|-|
+| Cosmos DB trigger | JSON serializable types<sup>1</sup><br/>`IEnumerable<T>`<sup>2</sup> |
+| Cosmos DB input | JSON serializable types<sup>1</sup><br/>`IEnumerable<T>`<sup>2</sup><br/>[CosmosClient] |
+| Cosmos DB output | JSON serializable types<sup>1</sup> |
+
+<sup>1</sup> Documents containing JSON data can be deserialized into known plain-old CLR object (POCO) types.
+
+<sup>2</sup> `IEnumerable<T>` provides a collection of documents. Here, `T` is a JSON serializable type. When specified for a trigger, it allows a single invocation to process a batch of documents. When used for an input binding, this allows multiple documents to be returned by the query.
+
+# [Functions 2.x and higher](#tab/functionsv2/csharp-script)
+
+Earlier versions of the extension exposed types from the now deprecated [Microsoft.Azure.Documents] namespace. Newer types from [Microsoft.Azure.Cosmos] are exclusive to **extension 4.x and higher**.
+++
+[Microsoft.Azure.Cosmos]: /dotnet/api/microsoft.azure.cosmos
+[CosmosClient]: /dotnet/api/microsoft.azure.cosmos.cosmosclient
+[Database]: /dotnet/api/microsoft.azure.cosmos.database
+[Container]: /dotnet/api/microsoft.azure.cosmos.container
+
+[Microsoft.Azure.Documents]: /dotnet/api/microsoft.azure.documents
+[DocumentClient]: /dotnet/api/microsoft.azure.documents.client.documentclient
++ ## Exceptions and return codes | Binding | Reference |
azure-functions Functions Networking Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-networking-options.md
The following APIs let you programmatically manage regional virtual network inte
+ **Azure CLI**: Use the [`az functionapp vnet-integration`](/cli/azure/functionapp/vnet-integration) commands to add, list, or remove a regional virtual network integration. + **ARM templates**: Regional virtual network integration can be enabled by using an Azure Resource Manager template. For a full example, see [this Functions quickstart template](https://azure.microsoft.com/resources/templates/function-premium-vnet-integration/).
+## Testing
+
+When testing functions in a function app with private endpoints, you must do your testing from within the same virtual network, such as on a virtual machine (VM) in that network. To use the **Code + Test** option in the portal from that VM, you need to add the following [CORS origins](./functions-how-to-use-azure-function-app-settings.md?tabs=portal#cors) to your function app (a CLI sketch follows the list):
+
+* https://functions-next.azure.com
+* https://functions-staging.azure.com
+* https://functions.azure.com
+* https://portal.azure.com
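+
+If you prefer to add these origins from the command line instead of the portal, the following Azure CLI sketch does so; the app and resource group names are placeholders:
+
+```azurecli-interactive
+# Add the portal testing origins to the function app's CORS allow list.
+az functionapp cors add --name <APP_NAME> --resource-group <RESOURCE_GROUP> \
+    --allowed-origins https://functions-next.azure.com https://functions-staging.azure.com \
+    https://functions.azure.com https://portal.azure.com
+```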
+ ## Troubleshooting [!INCLUDE [app-service-web-vnet-troubleshooting](../../includes/app-service-web-vnet-troubleshooting.md)]
azure-maps About Azure Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/about-azure-maps.md
Azure Maps consists of the following services that can provide geographic contex
### Data service
-Data is imperative for maps. Use the Data service to upload and store geospatial data for use with spatial operations or image composition. Bringing customer data closer to the Azure Maps service will reduce latency, increase productivity, and create new scenarios in your applications. For details on this service, see [Data service].
+Data is imperative for maps. Use the Data service to upload and store geospatial data for use with spatial operations or image composition. By bringing customer data closer to the Azure Maps service, you reduce latency and increase productivity. For more information on this service, see [Data service].
### Geolocation service Use the Geolocation service to retrieve the two-letter country/region code for an IP address. This service can help you enhance user experience by providing customized application content based on geographic location.
-For more details, read the [Geolocation service documentation](/rest/api/maps/geolocation).
+For more information, see the [Geolocation service] documentation.
### Render service
-[Render service V2](/rest/api/maps/render-v2) introduces a new version of the [Get Map Tile V2 API](/rest/api/maps/render-v2/get-map-tile) that supports using Azure Maps tiles not only in the Azure Maps SDKs but other map controls as well. It includes raster and vector tile formats, 256x256 or 512x512 (where applicable) tile sizes and numerous map types such as road, weather, contour, or map tiles created using Azure Maps Creator. For a complete list, see [TilesetID] in the REST API documentation. It's recommended that you use Render service V2 instead of Render service V1. You're required to display the appropriate copyright attribution on the map anytime you use the Azure Maps Render service V2, either as basemaps or layers, in any third-party map control. For more information, see [How to use the Get Map Attribution API](how-to-show-attribution.md).
+[Render service V2] introduces a new version of the [Get Map Tile V2 API] that supports using Azure Maps tiles not only in the Azure Maps SDKs but other map controls as well. It includes raster and vector tile formats, 256x256 or 512x512 tile sizes (where applicable) and numerous map types such as road, weather, contour, or map tiles. For a complete list, see [TilesetID] in the REST API documentation. It's recommended that you use Render service V2 instead of Render service V1. You're required to display the appropriate copyright attribution on the map anytime you use the Azure Maps Render service V2, either as basemaps or layers, in any third-party map control. For more information, see [How to use the Get Map Attribution API].
:::image type="content" source="./media/about-azure-maps/intro_map.png" border="false" alt-text="Example of a map from the Render service V2"::: ### Route service
-The route services can be used to calculate the estimated arrival times (ETAs) for each requested route. Route APIs consider factors, such as real-time traffic information and historic traffic data, like the typical road speeds on the requested day of the week and time of day. The APIs return the shortest or fastest routes available to multiple destinations at a time in sequence or in optimized order, based on time or distance. The service allows developers to calculate directions across several travel modes, such as car, truck, bicycle, or walking, and electric vehicle. The service also considers inputs, such as departure time, weight restrictions, or hazardous material transport.
+The Route service is used to calculate the estimated arrival times (ETAs) for each requested route. It considers factors such as real-time traffic information and historic traffic data, like the typical road speeds on the requested day of the week and time of day. The Route service returns the shortest or fastest routes available to multiple destinations at a time, in sequence or in optimized order, based on time or distance. The service allows developers to calculate directions across several travel modes, such as car, truck, bicycle, walking, or electric vehicle. The service also considers inputs such as departure time, weight restrictions, or hazardous material transport.
:::image type="content" source="./media/about-azure-maps/intro_route.png" border="false" alt-text="Example of a map from the Route service":::
The Route service offers advanced set features, such as:
* Matrices of travel time and distance between a set of origins and destinations. * Finding routes or distances that users can travel based on time or fuel requirements.
-For details on the routing capabilities, read the [Route service documentation](/rest/api/maps/route).
+For more information on routing capabilities, see the [Route service] documentation.
### Search service
-The Search service helps developers search for addresses, places, business listings by name or category, and other geographic information. Also, services can [reverse geocode](https://en.wikipedia.org/wiki/Reverse_geocoding) addresses and cross streets based on latitudes and longitudes.
+The Search service helps developers search for addresses, places, business listings by name or category, and other geographic information. Also, services can [reverse geocode] addresses and cross streets based on latitudes and longitudes.
:::image type="content" source="./media/about-azure-maps/intro_search.png" border="false" alt-text="Example of a search on a map":::
The Search service also provides advanced features such as:
* Batch a group of search requests. * Search electric vehicle charging stations and Point of Interest (POI) data by brand name.
-For more details on search capabilities, read the [Search service documentation](/rest/api/maps/search).
+For more information on search capabilities, see the [Search service] documentation.
### Spatial service The Spatial service quickly analyzes location information to help inform customers of ongoing events happening in time and space. It enables near real-time analysis and predictive modeling of events.
-The service enables customers to enhance their location intelligence with a library of common geospatial mathematical calculations. Common calculations include closest point, great circle distance, and buffers. To learn more about the service and the various features, read the [Spatial service documentation](/rest/api/maps/spatial).
+The service enables customers to enhance their location intelligence with a library of common geospatial mathematical calculations. Common calculations include closest point, great circle distance, and buffers. For more information about the Spatial service and its various features, see the [Spatial service] documentation.
### Timezone service
-The Time zone service enables you to query current, historical, and future time zone information. You can use either latitude and longitude pairs or an [IANA ID](https://www.iana.org/) as an input. The Time zone service also allows for:
+The Time zone service enables you to query current, historical, and future time zone information. You can use either latitude and longitude pairs or an [IANA ID] as an input. The Time zone service also allows for:
* Converting Microsoft Windows time-zone IDs to IANA time zones. * Fetching a time-zone offset to UTC.
A typical JSON response for a query to the Time zone service looks like the foll
} ```
-For details on this service, read the [Time zone service documentation](/rest/api/maps/timezone).
+For more information, see the [Time zone service] documentation.
### Traffic service
The Traffic service is a suite of web services that developers can use for web o
![Example of a map with traffic information](media/about-azure-maps/intro_traffic.png)
-For more information, see the [Traffic service documentation](/rest/api/maps/traffic).
+For more information, see the [Traffic service] documentation.
-### Weather services
+### Weather service
-Weather services offer APIs that developers can use to retrieve weather information for a particular location. The information contains details such as observation date and time, brief description of the weather conditions, weather icon, precipitation indicator flags, temperature, and wind speed information. Other details such as RealFeelΓäó Temperature and UV index are also returned.
+The Weather service offers APIs that retrieve weather information for a particular location. This information includes observation date and time, weather conditions, precipitation indicator flags, temperature, and wind speed information. Other details such as RealFeel™ Temperature and UV index are also returned.
-Developers can use the [Get Weather along route API](/rest/api/maps/weather/getweatheralongroute) to retrieve weather information along a particular route. Also, the service supports the generation of weather notifications for waypoints that are affected by weather hazards, such as flooding or heavy rain.
+Developers can use the [Get Weather along route API] to retrieve weather information along a particular route. Also, the service supports the generation of weather notifications for waypoints affected by weather hazards, such as flooding or heavy rain.
-The [Get Map Tile V2 API](/rest/api/maps/render-v2/get-map-tile) allows you to request past, current, and future radar and satellite tiles.
+The [Get Map Tile V2 API] allows you to request past, current, and future radar and satellite tiles.
![Example of map with real-time weather radar tiles](media/about-azure-maps/intro_weather.png)
Maps Creator provides the following
## Programming model
-Azure Maps is built for mobility and can help you develop cross-platform applications. It uses a programming model that's language agnostic and supports JSON output through [REST APIs](/rest/api/maps/).
+Azure Maps is built for mobility and can help you develop cross-platform applications. It uses a programming model that's language agnostic and supports JSON output through [REST APIs].
-Also, Azure Maps offers a convenient [JavaScript map control](/javascript/api/azure-maps-control) with a simple programming model. The development is quick and easy for both web and mobile applications.
+Also, Azure Maps offers a convenient [JavaScript map control] with a simple programming model. The development is quick and easy for both web and mobile applications.
## Power BI visual
The Azure Maps Power BI visual provides a rich set of data visualizations for sp
:::image type="content" source="./media/about-azure-maps/intro-power-bi.png" border="false" alt-text="Power BI desktop with the Azure Maps Power BI visual displaying business data":::
-For more information, see the [Get started with Azure Maps Power BI visual](power-bi-visual-get-started.md) article.
+For more information, see [Get started with Azure Maps Power BI visual].
## Usage
Verify that the location of your current IP address is in a supported country/re
Try a sample app that showcases Azure Maps:
-[Quickstart: Create a web app](quick-demo-map-app.md)
+[Quickstart: Create a web app]
Stay up to date on Azure Maps: [Azure Maps blog]
-[Data service]: /rest/api/maps/data-v2
-[Dataset service]: creator-indoor-maps.md#datasets
+<!-- learn.microsoft.com links -->
[Conversion service]: creator-indoor-maps.md#convert-a-drawing-package
-[Tileset service]: creator-indoor-maps.md#tilesets
[Custom styling service]: creator-indoor-maps.md#custom-styling-preview
-[style service]: /rest/api/maps/v20220901preview/style
-[visual style editor]: https://azure.github.io/Azure-Maps-Style-Editor
+[Dataset service]: creator-indoor-maps.md#datasets
[Feature State service]: creator-indoor-maps.md#feature-statesets
-[WFS service]: creator-indoor-maps.md#web-feature-service-api
+[Get started with Azure Maps Power BI visual]: power-bi-visual-get-started.md
+[How to use the Get Map Attribution API]: how-to-show-attribution.md
+[Quickstart: Create a web app]: quick-demo-map-app.md
+[Tileset service]: creator-indoor-maps.md#tilesets
[Wayfinding service]: creator-indoor-maps.md#wayfinding-preview
-[wayfinding API]: /rest/api/maps/v20220901preview/wayfinding
+[WFS service]: creator-indoor-maps.md#web-feature-service-api
+<!-- REST API Links -->
+[Data service]: /rest/api/maps/data-v2
+[Geolocation service]: /rest/api/maps/geolocation
+[Get Map Tile V2 API]: /rest/api/maps/render-v2/get-map-tile
+[Get Weather along route API]: /rest/api/maps/weather/getweatheralongroute
+[Render service V2]: /rest/api/maps/render-v2
+[REST APIs]: /rest/api/maps/
+[Route service]: /rest/api/maps/route
[routeset API]: /rest/api/maps/v20220901preview/routeset
-[Open Geospatial Consortium API]: https://docs.opengeospatial.org/is/17-069r3/17-069r3.html
-[Azure portal]: https://portal.azure.com
-[Azure Maps account]: https://azure.microsoft.com/services/azure-maps/
+[Search service]: /rest/api/maps/search
+[Spatial service]: /rest/api/maps/spatial
+[style service]: /rest/api/maps/v20220901preview/style
[TilesetID]: /rest/api/maps/render-v2/get-map-tile#tilesetid
+[Time zone service]: /rest/api/maps/timezone
+[Traffic service]: /rest/api/maps/traffic
+[wayfinding API]: /rest/api/maps/v20220901preview/wayfinding
+<!-- JavaScript API Links -->
+[JavaScript map control]: /javascript/api/azure-maps-control
+<!-- External Links -->
+[Azure Maps account]: https://azure.microsoft.com/services/azure-maps/
[Azure Maps blog]: https://azure.microsoft.com/blog/topics/azure-maps/
+[Azure portal]: https://portal.azure.com
+[IANA ID]: https://www.iana.org/
[Microsoft Trust Center]: https://www.microsoft.com/trust-center/privacy
+[Open Geospatial Consortium API]: https://docs.opengeospatial.org/is/17-069r3/17-069r3.html
+[reverse geocode]: https://en.wikipedia.org/wiki/Reverse_geocoding
[Subprocessor List]: https://servicetrust.microsoft.com/DocumentPage/aead9e68-1190-4d90-ad93-36418de5c594
+[visual style editor]: https://azure.github.io/Azure-Maps-Style-Editor
azure-maps Create Data Source Android Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/create-data-source-android-sdk.md
map.sources.add(source)
::: zone-end
-The `importDataFromUrl` method provides an easy way to load a GeoJSON feed into a data source but provides limited control on how the data is loaded and what happens after it's been loaded. The following code is a reusable class for importing data from the web or assets folder and returning it to the UI thread via a callback function. Next, add additional post load logic in the callback to process the data, add it to the map, calculate its bounding box, and update the maps camera.
+The `importDataFromUrl` method provides an easy way to load a GeoJSON feed into a data source but provides limited control over how the data is loaded and what happens after it's been loaded. The following code is a reusable class for importing data from the web or assets folder and returning it to the UI thread via a callback function. Next, add more post-load logic in the callback to process the data, add it to the map, calculate its bounding box, and update the map's camera.
::: zone pivot="programming-language-java-android"
A vector tile source describes how to access a vector tile layer. Use the `Vecto
- Changing the style of the data in the vector maps doesn't require downloading the data again, since the new style can be applied on the client. In contrast, changing the style of a raster tile layer typically requires loading tiles from the server then applying the new style. - Since the data is delivered in vector form, there's less server-side processing required to prepare the data. As a result, the newer data can be made available faster.
-Azure Maps adheres to the [Mapbox Vector Tile Specification](https://github.com/mapbox/vector-tile-spec), an open standard. Azure Maps provides the following vector tiles services as part of the platform:
+Azure Maps adheres to the [Mapbox Vector Tile Specification], an open standard. Azure Maps provides the following vector tiles services as part of the platform:
-- Road tiles [documentation](/rest/api/maps/render-v2/get-map-tile)-- Traffic incidents [documentation](/rest/api/maps/traffic/gettrafficincidenttile)-- Traffic flow [documentation](/rest/api/maps/traffic/gettrafficflowtile)-- Azure Maps Creator also allows custom vector tiles to be created and accessed through the [Render V2-Get Map Tile API](/rest/api/maps/render-v2/get-map-tile)
+- [Road tiles]
+- [Traffic incidents]
+- [Traffic flow]
+- Azure Maps Creator also allows custom vector tiles to be created and accessed through the [Render V2-Get Map Tile API]
> [!TIP] > When using vector or raster image tiles from the Azure Maps render service with the web SDK, you can replace `atlas.microsoft.com` with the placeholder `azmapsdomain.invalid`. This placeholder will be replaced with the same domain used by the map and will automatically append the same authentication details as well. This greatly simplifies authentication with the render service when using Azure Active Directory authentication.
map.layers.add(layer, "labels")
## Connecting a data source to a layer
-Data is rendered on the map using rendering layers. A single data source can be referenced by one or more rendering layers. The following rendering layers require a data source:
+Data is rendered on the map using rendering layers. One or more rendering layers can reference a single data source. The following rendering layers require a data source:
-- [Bubble layer](map-add-bubble-layer-android.md) - renders point data as scaled circles on the map.-- [Symbol layer](how-to-add-symbol-to-android-map.md) - renders point data as icons or text.-- [Heat map layer](map-add-heat-map-layer-android.md) - renders point data as a density heat map.-- [Line layer](android-map-add-line-layer.md) - render a line and or render the outline of polygons.-- [Polygon layer](how-to-add-shapes-to-android-map.md) - fills the area of a polygon with a solid color or image pattern.
+- [Bubble layer] - renders point data as scaled circles on the map.
+- [Symbol layer] - renders point data as icons or text.
+- [Heat map layer] - renders point data as a density heat map.
+- [Line layer] - render a line and or render the outline of polygons.
+- [Polygon layer] - fills the area of a polygon with a solid color or image pattern.
The following code shows how to create a data source, add it to the map, and connect it to a bubble layer. And then, import GeoJSON point data from a remote location into the data source.
map.sources.add(source)
There are more rendering layers that don't connect to these data sources, but they directly load the data for rendering. -- [Tile layer](how-to-add-tile-layer-android-map.md) - superimposes a raster tile layer on top of the map.
+- [Tile layer] - superimposes a raster tile layer on top of the map.
## One data source with multiple layers
See the following articles for more code samples to add to your maps:
> [!div class="nextstepaction"] > [Web SDK Code samples](/samples/browse/?products=azure-maps)+
+<!-- learn.microsoft.com links -->
+[Bubble layer]: map-add-bubble-layer-android.md
+[Symbol layer]: how-to-add-symbol-to-android-map.md
+[Heat map layer]: map-add-heat-map-layer-android.md
+[Line layer]: android-map-add-line-layer.md
+[Polygon layer]: how-to-add-shapes-to-android-map.md
+[Tile layer]: how-to-add-tile-layer-android-map.md
+<!-- REST API Links -->
+[Road tiles]: /rest/api/maps/render-v2/get-map-tile
+[Traffic incidents]: /rest/api/maps/traffic/gettrafficincidenttile
+[Traffic flow]: /rest/api/maps/traffic/gettrafficflowtile
+[Render V2-Get Map Tile API]: /rest/api/maps/render-v2/get-map-tile
+<!-- External Links -->
+[Mapbox Vector Tile Specification]: https://github.com/mapbox/vector-tile-spec
azure-maps Create Data Source Ios Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/create-data-source-ios-sdk.md
feature.addProperty("custom-property", value: "value")
source.add(feature: feature) ```
-Alternatively the properties can be loaded into a dictionary (JSON) first then passed into the feature when creating it, as shown below.
+Alternatively the properties can be loaded into a dictionary (JSON) first then passed into the feature when creating it, as the following code demonstrates:
```swift //Create a dictionary to store properties for the feature.
let featureCollection = FeatureCollection(features)
### Serialize and deserialize GeoJSON
-The feature collection, feature, and geometry classes all have `fromJson(_:)` and `toJson()` static methods, which help with serialization. The formatted valid JSON String passed through the `fromJson()` method will create the geometry object. This `fromJson()` method also means you can use `JSONSerialization` or other serialization/deserialization strategies. The following code shows how to take a stringified GeoJSON feature and deserialize it into the `Feature` class, then serialize it back into a GeoJSON string.
+The feature collection, feature, and geometry classes all have `fromJson(_:)` and `toJson()` static methods, which help with serialization. The formatted valid JSON String passed through the `fromJson()` method creates the geometry object. This `fromJson()` method also means you can use `JSONSerialization` or other serialization/deserialization strategies. The following code shows how to take a stringified GeoJSON feature and deserialize it into the `Feature` class, then serialize it back into a GeoJSON string.
```swift // Take a stringified GeoJSON object.
let featureString = feature.toJson()
Most GeoJSON files contain a `FeatureCollection`. Read GeoJSON files as strings and use the `FeatureCollection.fromJson(_:)` method to deserialize it.
-The `DataSource` class has a built in method called `importData(fromURL:)` that can load in GeoJSON files using a URL to a file on the web or the device.
+The `DataSource` class has a built-in method called `importData(fromURL:)` that can load GeoJSON files using a URL to a file on the web or on the device.
```swift // Create a data source.
source.importData(fromURL: url)
map.sources.add(source) ```
-The `importData(fromURL:)` method provides an easily way to load a GeoJSON feed into a data source but provides limited control on how the data is loaded and what happens after its been loaded. The following code is a reusable class for importing data from the web or assets folder and returning it to the UI thread via a callback function. In the callback you can then add additional post load logic to process the data, add it to the map, calculate its bounding box, and update the maps camera.
+The `importData(fromURL:)` method provides an easy way to load a GeoJSON feed into a data source, but offers limited control over how the data is loaded and what happens after it's been loaded. The following code is a reusable class for importing data from the web or assets folder and returning it to the UI thread via a callback function. In the callback, you can then add post-load logic to process the data, add it to the map, calculate its bounding box, and update the map's camera.
```swift import Foundation
public class Utils: NSObject {
} ```
-The code below shows how to use this utility to import GeoJSON data as a string and return it to the main thread via a callback. In the callback, the string data can be serialized into a GeoJSON Feature collection and added to the data source. Optionally, update the maps camera to focus in on the data.
+The following code shows how to use this utility to import GeoJSON data as a string and return it to the main thread via a callback. In the callback, the string data can be deserialized into a GeoJSON Feature collection and added to the data source. Optionally, update the map's camera to focus on the data.
```swift // Create a data source and add it to the map.
Utils.importData(fromURL: url) { result in
### Update a feature
-The `DataSource` class makes its easy to add and remove features. Updating the geometry or properties of a feature requires replacing the feature in the data source. There are two methods that can be used to update a feature(s):
+The `DataSource` class makes it easy to add and remove features. Updating the geometry or properties of a feature requires replacing the feature in the data source. There are two methods that can be used to update one or more features:
1. Create the new feature(s) with the desired updates and replace all features in the data source using the `set` method. This method works well when you want to update all features in a data source.
A vector tile source describes how to access a vector tile layer. Use the `Vecto
- Changing the style of the data in the vector maps doesn't require downloading the data again, since the new style can be applied on the client. In contrast, changing the style of a raster tile layer typically requires loading tiles from the server then applying the new style. - Since the data is delivered in vector form, there's less server-side processing required to prepare the data. As a result, the newer data can be made available faster.
-Azure Maps adheres to the [Mapbox Vector Tile Specification](https://github.com/mapbox/vector-tile-spec), an open standard. Azure Maps provides the following vector tiles services as part of the platform:
+Azure Maps adheres to the [Mapbox Vector Tile Specification], an open standard. Azure Maps provides the following vector tiles services as part of the platform:
-- Road tiles [documentation](/rest/api/maps/render-v2/get-map-tile)-- Traffic incidents [documentation](/rest/api/maps/traffic/gettrafficincidenttile)-- Traffic flow [documentation](/rest/api/maps/traffic/gettrafficflowtile)-- Azure Maps Creator also allows custom vector tiles to be created and accessed through the [Render V2-Get Map Tile API](/rest/api/maps/render-v2/get-map-tile)
+- [Road tiles]
+- [Traffic incidents]
+- [Traffic flow]
+- Azure Maps Creator also allows custom vector tiles to be created and accessed through the [Render V2-Get Map Tile API]
> [!TIP] > When using vector or raster image tiles from the Azure Maps render service with the iOS SDK, you can replace `atlas.microsoft.com` with the `AzureMap` property `domainPlaceholder`. This placeholder will be replaced with the same domain used by the map and will automatically append the same authentication details as well. This greatly simplifies authentication with the render service when using Azure Active Directory authentication.
map.layers.insertLayer(layer, below: "labels")
## Connect a data source to a layer
-Data is rendered on the map using rendering layers. A single data source can be referenced by one or more rendering layers. The following rendering layers require a data source:
+Data is rendered on the map using rendering layers. One or more rendering layers can reference a single data source. The following rendering layers require a data source:
-- [Bubble layer](add-bubble-layer-map-ios.md) - renders point data as scaled circles on the map.-- [Symbol layer](add-symbol-layer-ios.md) - renders point data as icons or text.-- [Heat map layer](add-heat-map-layer-ios.md) - renders point data as a density heat map.-- [Line layer](add-line-layer-map-ios.md) - render a line and or render the outline of polygons.-- [Polygon layer](add-polygon-layer-map-ios.md) - fills the area of a polygon with a solid color or image pattern.
+- [Bubble layer] - renders point data as scaled circles on the map.
+- [Symbol layer] - renders point data as icons or text.
+- [Heat map layer] - renders point data as a density heat map.
+- [Line layer] - renders a line and/or the outline of polygons.
+- [Polygon layer] - fills the area of a polygon with a solid color or image pattern.
The following code shows how to create a data source, add it to the map, import GeoJSON point data from a remote location into the data source, and then connect it to a bubble layer.
let layer = BubbleLayer(source: source)
map.layers.addLayer(layer) ```
-There are additional rendering layers that don't connect to these data sources, but they directly load the data for rendering.
+There are other rendering layers that don't connect to these data sources, but they directly load the data for rendering.
-- [Tile layer](add-tile-layer-map-ios.md) - superimposes a raster tile layer on top of the map.
+- [Tile layer] - superimposes a raster tile layer on top of the map.
## One data source with multiple layers
Multiple layers can be connected to a single data source. There are many differe
In most mapping platforms, you would need a polygon object, a line object, and a pin for each position in the polygon. As the polygon is modified, you would need to manually update the line and pins, which can quickly become complex.
-With Azure Maps, all you need is a single polygon in a data source as shown in the code below.
+With Azure Maps, all you need is a single polygon in a data source as shown in the following code.
```swift // Create a data source and add it to the map.
map.layers.addLayers([polygonLayer, lineLayer, bubbleLayer])
See the following articles for more code samples to add to your maps: -- [Cluster point data](clustering-point-data-ios-sdk.md)-- [Add a symbol layer](Add-symbol-layer-ios.md)-- [Add a bubble layer](add-bubble-layer-map-ios.md)-- [Add a line layer](add-line-layer-map-ios.md)-- [Add a polygon layer](Add-polygon-layer-map-ios.md)-- [Add a heat map](Add-heat-map-layer-ios.md)-- [Web SDK Code samples](/samples/browse/?products=azure-maps)
+- [Cluster point data]
+- [Add a symbol layer]
+- [Add a bubble layer]
+- [Add a line layer]
+- [Add a polygon layer]
+- [Add a heat map]
+- [Web SDK Code samples]
+
+<!-- learn.microsoft.com links -->
+[Cluster point data]: clustering-point-data-ios-sdk.md
+[Add a symbol layer]: Add-symbol-layer-ios.md
+[Add a bubble layer]: add-bubble-layer-map-ios.md
+[Add a line layer]: add-line-layer-map-ios.md
+[Add a polygon layer]: Add-polygon-layer-map-ios.md
+[Add a heat map]: Add-heat-map-layer-ios.md
+[Web SDK Code samples]: /samples/browse/?products=azure-maps
+<!-- learn.microsoft.com links -->
+[Bubble layer]: add-bubble-layer-map-ios.md
+[Symbol layer]: Add-symbol-layer-ios.md
+[Heat map layer]: Add-heat-map-layer-ios.md
+[Line layer]: add-line-layer-map-ios.md
+[Polygon layer]: Add-polygon-layer-map-ios.md
+[Tile layer]: add-tile-layer-map-ios.md
+<!-- REST API Links -->
+[Road tiles]: /rest/api/maps/render-v2/get-map-tile
+[Traffic incidents]: /rest/api/maps/traffic/gettrafficincidenttile
+[Traffic flow]: /rest/api/maps/traffic/gettrafficflowtile
+[Render V2-Get Map Tile API]: /rest/api/maps/render-v2/get-map-tile
+<!-- External Links -->
+[Mapbox Vector Tile Specification]: https://github.com/mapbox/vector-tile-spec
azure-maps Create Data Source Web Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/create-data-source-web-sdk.md
The Azure Maps Web SDK stores data in data sources. Using data sources optimizes
## GeoJSON data source
-A GeoJSON based data source load and store data locally using the `DataSource` class. GeoJSON data can be manually created or created using the helper classes in the [atlas.data](/javascript/api/azure-maps-control/atlas.data) namespace. The `DataSource` class provides functions to import local or remote GeoJSON files. Remote GeoJSON files must be hosted on a CORs enabled endpoint. The `DataSource` class provides functionality for clustering point data. And, data can easily be added, removed, and updated with the `DataSource` class. The following code shows how GeoJSON data can be created in Azure Maps.
+A GeoJSON-based data source loads and stores data locally using the `DataSource` class. GeoJSON data can be manually created or created using the helper classes in the [atlas.data] namespace. The `DataSource` class provides functions to import local or remote GeoJSON files. Remote GeoJSON files must be hosted on a CORS-enabled endpoint. The `DataSource` class provides functionality for clustering point data. Data can easily be added, removed, and updated with the `DataSource` class. The following code shows how GeoJSON data can be created in Azure Maps.
```javascript //Create raw GeoJSON object.
var geoJsonClass = new atlas.data.Feature(new atlas.data.Point([-100, 45]), {
}); ```
-Once created, data sources can be added to the map through the `map.sources` property, which is a [SourceManager](/javascript/api/azure-maps-control/atlas.sourcemanager). The following code shows how to create a `DataSource` and add it to the map.
+Once created, data sources can be added to the map through the `map.sources` property, which is a [SourceManager]. The following code shows how to create a `DataSource` and add it to the map.
```javascript //Create a data source and add it to the map.
source.setShapes(geoJsonData);
## Vector tile source
-A vector tile source describes how to access a vector tile layer. Use the [VectorTileSource](/javascript/api/azure-maps-control/atlas.source.vectortilesource) class to instantiate a vector tile source. Vector tile layers are similar to tile layers, but they aren't the same. A tile layer is a raster image. Vector tile layers are a compressed file, in **PBF** format. This compressed file contains vector map data, and one or more layers. The file can be rendered and styled on the client, based on the style of each layer. The data in a vector tile contain geographic features in the form of points, lines, and polygons. There are several advantages of using vector tile layers instead of raster tile layers:
+A vector tile source describes how to access a vector tile layer. Use the [VectorTileSource] class to instantiate a vector tile source. Vector tile layers are similar to tile layers, but they aren't the same. A tile layer is a raster image. Vector tile layers are a compressed file, in **PBF** format. This compressed file contains vector map data, and one or more layers. The file can be rendered and styled on the client, based on the style of each layer. The data in a vector tile contain geographic features in the form of points, lines, and polygons. There are several advantages of using vector tile layers instead of raster tile layers:
* A file size of a vector tile is typically much smaller than an equivalent raster tile. As such, less bandwidth is used. It means lower latency, a faster map, and a better user experience. * Since vector tiles are rendered on the client, they adapt to the resolution of the device they're being displayed on. As a result, the rendered maps appear more well defined, with crystal clear labels. * Changing the style of the data in the vector maps doesn't require downloading the data again, since the new style can be applied on the client. In contrast, changing the style of a raster tile layer typically requires loading tiles from the server then applying the new style. * Since the data is delivered in vector form, there's less server-side processing required to prepare the data. As a result, the newer data can be made available faster.
-Azure Maps adheres to the [Mapbox Vector Tile Specification](https://github.com/mapbox/vector-tile-spec), an open standard. Azure Maps provides the following vector tiles services as part of the platform:
+Azure Maps adheres to the [Mapbox Vector Tile Specification], an open standard. Azure Maps provides the following vector tiles services as part of the platform:
-- Road tiles [documentation](/rest/api/maps/render-v2/get-map-tile)-- Traffic incidents [documentation](/rest/api/maps/traffic/gettrafficincidenttile)-- Traffic flow [documentation](/rest/api/maps/traffic/gettrafficflowtile)-- Azure Maps Creator also allows custom vector tiles to be created and accessed through the [Render V2-Get Map Tile API](/rest/api/maps/render-v2/get-map-tile)
+* [Road tiles]
+* [Traffic incidents]
+* [Traffic flow]
+* Azure Maps Creator also allows custom vector tiles to be created and accessed through the [Render V2-Get Map Tile API]
> [!TIP] > When using vector or raster image tiles from the Azure Maps render service with the web SDK, you can replace `atlas.microsoft.com` with the placeholder `{azMapsDomain}`. This placeholder will be replaced with the same domain used by the map and will automatically append the same authentication details as well. This greatly simplifies authentication with the render service when using Azure Active Directory authentication.
map.layers.add(flowLayer, 'labels');
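For reference, here's a minimal end-to-end sketch of this pattern: a vector tile source created from the Traffic flow tiles and styled with a line layer. The tile URL, the `Traffic flow` source layer name, and the `traffic_level` property are assumptions about that tile service and may need adjusting for your tileset:

```javascript
//Create a vector tile source that points at the Azure Maps traffic flow tiles.
var trafficSource = new atlas.source.VectorTileSource(null, {
    tiles: ['https://{azMapsDomain}/traffic/flow/tile/pbf?api-version=1.0&style=relative&zoom={z}&x={x}&y={y}'],
    maxZoom: 22
});
map.sources.add(trafficSource);

//Render the features from the 'Traffic flow' layer of the tiles as colored lines.
var trafficFlowLayer = new atlas.layer.LineLayer(trafficSource, null, {
    //The name of the layer within the vector tiles to render.
    sourceLayer: 'Traffic flow',
    strokeWidth: 3,

    //Color each line based on its traffic_level property, assumed to range
    //from 0 (heavy congestion) to 1 (free flow).
    strokeColor: [
        'interpolate',
        ['linear'],
        ['get', 'traffic_level'],
        0, 'red',
        0.5, 'yellow',
        1, 'green'
    ]
});

//Insert the layer below the map labels.
map.layers.add(trafficFlowLayer, 'labels');
```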
## Connecting a data source to a layer
-Data is rendered on the map using rendering layers. A single data source can be referenced by one or more rendering layers. The following rendering layers require a data source:
+Data is rendered on the map using rendering layers. One or more rendering layers can reference a single data source. The following rendering layers require a data source:
-* [Bubble layer](map-add-bubble-layer.md) - renders point data as scaled circles on the map.
-* [Symbol layer](map-add-pin.md) - renders point data as icons or text.
-* [Heat map layer](map-add-heat-map-layer.md) - renders point data as a density heat map.
-* [Line layer](map-add-shape.md) - render a line and or render the outline of polygons.
-* [Polygon layer](map-add-shape.md) - fills the area of a polygon with a solid color or image pattern.
+* [Bubble layer] - renders point data as scaled circles on the map.
+* [Symbol layer] - renders point data as icons or text.
+* [Heat map layer] - renders point data as a density heat map.
+* [Line layer] - renders a line and/or the outline of polygons.
+* [Polygon layer] - fills the area of a polygon with a solid color or image pattern.
-The following code shows how to create a data source, add it to the map, and connect it to a bubble layer. And then, import GeoJSON point data from a remote location into the data source.
+The following code shows how to create a data source, add it to the map, and connect it to a bubble layer, and then import GeoJSON point data from a remote location into the data source.
```javascript //Create a data source and add it to the map.
map.layers.add(new atlas.layer.BubbleLayer(source));
source.importDataFromUrl('https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/significant_month.geojson'); ```
-There are additional rendering layers that don't connect to these data sources, but they directly load the data for rendering.
+There are other rendering layers that don't connect to these data sources, but they directly load the data for rendering.
-* [Image layer](map-add-image-layer.md) - overlays a single image on top of the map and binds its corners to a set of specified coordinates.
-* [Tile layer](map-add-tile-layer.md) - superimposes a raster tile layer on top of the map.
+* [Image layer] - overlays a single image on top of the map and binds its corners to a set of specified coordinates.
+* [Tile layer] - superimposes a raster tile layer on top of the map.
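As a rough sketch of what that looks like (the URLs and coordinates below are placeholders, not real endpoints):

```javascript
//Overlay a single image by binding its four corners (top-left, top-right,
//bottom-right, bottom-left) to map coordinates.
map.layers.add(new atlas.layer.ImageLayer({
    url: 'https://example.com/overlay.png',
    coordinates: [
        [-122.4, 47.7], //Top-left
        [-122.2, 47.7], //Top-right
        [-122.2, 47.6], //Bottom-right
        [-122.4, 47.6]  //Bottom-left
    ]
}));

//Superimpose a raster tile layer from an XYZ tile endpoint.
map.layers.add(new atlas.layer.TileLayer({
    tileUrl: 'https://example.com/tiles/{z}/{x}/{y}.png',
    opacity: 0.8,
    tileSize: 256
}));
```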
## One data source with multiple layers
Multiple layers can be connected to a single data source. There are many differe
In most mapping platforms, you would need a polygon object, a line object, and a pin for each position in the polygon. As the polygon is modified, you would need to manually update the line and pins, which can quickly become complex.
-With Azure Maps, all you need is a single polygon in a data source as shown in the code below.
+With Azure Maps, all you need is a single polygon in a data source as shown in the following code.
```javascript //Create a data source and add it to the map.
map.layers.add([polygonLayer, lineLayer, bubbleLayer]);
Learn more about the classes and methods used in this article: > [!div class="nextstepaction"]
-> [DataSource](/javascript/api/azure-maps-control/atlas.source.datasource)
+> [DataSource]
> [!div class="nextstepaction"]
-> [DataSourceOptions](/javascript/api/azure-maps-control/atlas.datasourceoptions)
+> [DataSourceOptions]
> [!div class="nextstepaction"]
-> [VectorTileSource](/javascript/api/azure-maps-control/atlas.source.vectortilesource)
+> [VectorTileSource]
> [!div class="nextstepaction"]
-> [VectorTileSourceOptions](/javascript/api/azure-maps-control/atlas.vectortilesourceoptions)
+> [VectorTileSourceOptions]
See the following articles for more code samples to add to your maps:
See the following articles for more code samples to add to your maps:
> [Add a popup](map-add-popup.md) > [!div class="nextstepaction"]
-> [Use data-driven style expressions](data-driven-style-expressions-web-sdk.md)
+> [Use data-driven style expressions]
> [!div class="nextstepaction"]
-> [Add a symbol layer](map-add-pin.md)
+> [Add a symbol layer]
> [!div class="nextstepaction"]
-> [Add a bubble layer](map-add-bubble-layer.md)
+> [Add a bubble layer]
> [!div class="nextstepaction"]
-> [Add a line layer](map-add-line-layer.md)
+> [Add a line layer]
> [!div class="nextstepaction"]
-> [Add a polygon layer](map-add-shape.md)
+> [Add a polygon layer]
> [!div class="nextstepaction"]
-> [Add a heat map](map-add-heat-map-layer.md)
+> [Add a heat map]
> [!div class="nextstepaction"]
-> [Code samples](/samples/browse/?products=azure-maps)
+> [Code samples]
+
+<!-- learn.microsoft.com links -->
+[Bubble layer]: map-add-bubble-layer.md
+[Symbol layer]: map-add-pin.md
+[Heat map layer]: map-add-heat-map-layer.md
+[Line layer]: map-add-line-layer.md
+[Polygon layer]: map-add-shape.md
+[Tile layer]: map-add-tile-layer.md
+[Image layer]: map-add-image-layer.md
+
+[Add a symbol layer]: map-add-pin.md
+[Add a bubble layer]: map-add-bubble-layer.md
+[Add a line layer]: map-add-line-layer.md
+[Add a polygon layer]: map-add-shape.md
+[Add a heat map]: map-add-heat-map-layer.md
+[Use data-driven style expressions]: data-driven-style-expressions-web-sdk.md
+[Code samples]: /samples/browse/?products=azure-maps
+<!-- REST API Links -->
+[Road tiles]: /rest/api/maps/render-v2/get-map-tile
+[Traffic incidents]: /rest/api/maps/traffic/gettrafficincidenttile
+[Traffic flow]: /rest/api/maps/traffic/gettrafficflowtile
+[Render V2-Get Map Tile API]: /rest/api/maps/render-v2/get-map-tile
+<!-- javascript API Links -->
+[atlas.data]: /javascript/api/azure-maps-control/atlas.data
+[SourceManager]: /javascript/api/azure-maps-control/atlas.sourcemanager
+[VectorTileSource]: /javascript/api/azure-maps-control/atlas.source.vectortilesource
+[DataSource]: /javascript/api/azure-maps-control/atlas.source.datasource
+[DataSourceOptions]: /javascript/api/azure-maps-control/atlas.datasourceoptions
+[VectorTileSourceOptions]: /javascript/api/azure-maps-control/atlas.vectortilesourceoptions
+<!-- External Links -->
+[Mapbox Vector Tile Specification]: https://github.com/mapbox/vector-tile-spec
azure-maps Data Driven Style Expressions Web Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/data-driven-style-expressions-web-sdk.md
Expressions are represented as JSON arrays. The first element of an expression i
The Azure Maps Web SDK supports many types of expressions. Expressions can be used on their own or in combination with other expressions.
-| Type of expressions | Description |
-||-|
-| [Aggregate expression](#aggregate-expression) | An expression that defines a calculation that is processed over a set of data and can be used with the `clusterProperties` option of a `DataSource`. |
-| [Boolean expressions](#boolean-expressions) | Boolean expressions provide a set of boolean operators expressions for evaluating boolean comparisons. |
-| [Color expressions](#color-expressions) | Color expressions make it easier to create and manipulate color values. |
-| [Conditional expressions](#conditional-expressions) | Conditional expressions provide logic operations that are like if-statements. |
-| [Data expressions](#data-expressions) | Provides access to the property data in a feature. |
-| [Interpolate and Step expressions](#interpolate-and-step-expressions) | Interpolate and step expressions can be used to calculate values along an interpolated curve or step function. |
-| [Layer specific expressions](#layer-specific-expressions) | Special expressions that are only applicable to a single layer. |
-| [Math expressions](#math-expressions) | Provides mathematical operators to perform data-driven calculations within the expression framework. |
-| [String operator expressions](#string-operator-expressions) | String operator expressions perform conversion operations on strings such as concatenating and converting the case. |
-| [Type expressions](#type-expressions) | Type expressions provide tools for testing and converting different data types like strings, numbers, and boolean values. |
-| [Variable binding expressions](#variable-binding-expressions) | Variable binding expressions store the results of a calculation in a variable and referenced elsewhere in an expression multiple times without having to recalculate the stored value. |
-| [Zoom expression](#zoom-expression) | Retrieves the current zoom level of the map at render time. |
+| Type of expressions | Description |
+||-|
+| [Aggregate expression] | An expression that defines a calculation that is processed over a set of data and can be used with the `clusterProperties` option of a `DataSource`. |
+| [Boolean expressions] | Boolean expressions provide a set of boolean operator expressions for evaluating boolean comparisons. |
+| [Color expressions] | Color expressions make it easier to create and manipulate color values. |
+| [Conditional expressions] | Conditional expressions provide logic operations that are like if-statements. |
+| [Data expressions] | Provides access to the property data in a feature. |
+| [Interpolate and Step expressions] | Interpolate and step expressions can be used to calculate values along an interpolated curve or step function. |
+| [Layer specific expressions] | Special expressions that are only applicable to a single layer. |
+| [Math expressions] | Provides mathematical operators to perform data-driven calculations within the expression framework. |
+| [String operator expressions] | String operator expressions perform conversion operations on strings such as concatenating and converting the case. |
+| [Type expressions] | Type expressions provide tools for testing and converting different data types like strings, numbers, and boolean values. |
+| [Variable binding expressions] | Variable binding expressions store the results of a calculation in a variable so that they can be referenced elsewhere in an expression multiple times without having to recalculate the stored value. |
+| [Zoom expression] | Retrieves the current zoom level of the map at render time. |
All examples in this document use the following feature to demonstrate different ways in which the different types of expressions can be used.
Data expressions provide access to the property data in a feature.
| `['id']` | value | Gets the feature's ID if it has one. | | `['in', boolean | string | number, array]` | boolean | Determines whether an item exists in an array | | `['in', substring, string]` | boolean | Determines whether a substring exists in a string |
-| `['index-of', boolean | string | number, array | string]`<br/><br/>`['index-of', boolean | string | number, array | string, number]` | number | Returns the first position at which an item can be found in an array or a substring can be found in a string, or `-1` if the input cannot be found. Accepts an optional index from where to begin the search. |
+| `['index-of', boolean | string | number, array | string]`<br/><br/>`['index-of', boolean | string | number, array | string, number]` | number | Returns the first position at which an item can be found in an array or a substring can be found in a string, or `-1` if the input can't be found. Accepts an optional index from where to begin the search. |
| `['length', string | array]` | number | Gets the length of a string or an array. | | `['slice', array | string, number]`<br/><br/>`['slice', array | string, number, number]` | string \| array | Returns an item from an array or a substring from a string from a specified start index, or between a start index and an end index if set. The return value is inclusive of the start index but not of the end index. |
var layer = new atlas.layer.BubbleLayer(datasource, null, {
}); ```
-The above example will work fine, if all the point features have the `zoneColor` property. If they don't, the color will likely fall back to "black". To modify the fallback color, use a `case` expression in combination with the `has` expression to check if the property exists. If the property doesn't exist, return a fallback color.
+The above example works fine if all the point features have the `zoneColor` property. If they don't, the color defaults to "black". To modify the fallback color, use a `case` expression in combination with the `has` expression to check if the property exists. If the property doesn't exist, return a fallback color.
```javascript var layer = new atlas.layer.BubbleLayer(datasource, null, {
var layer = new atlas.layer.BubbleLayer(datasource, null, {
}); ```
-Bubble and symbol layers will render the coordinates of all shapes in a data source, by default. This behavior can highlight the vertices of a polygon or a line. The `filter` option of the layer can be used to limit the geometry type of the features it renders, by using a `['geometry-type']` expression within a boolean expression. The following example limits a bubble layer so that only `Point` features are rendered.
+Bubble and symbol layers render the coordinates of all shapes in a data source, by default. This behavior can highlight the vertices of a polygon or a line. The `filter` option of the layer can be used to limit the geometry type of the features it renders, by using a `['geometry-type']` expression within a boolean expression. The following example limits a bubble layer so that only `Point` features are rendered.
```javascript var layer = new atlas.layer.BubbleLayer(datasource, null, {
var layer = new atlas.layer.BubbleLayer(datasource, null, {
}); ```
-Similarly, the outline of Polygons will render in line layers. To disable this behavior in a line layer, add a filter that only allows `LineString` and `MultiLineString` features.
+Similarly, the outlines of polygons render in line layers. To disable this behavior in a line layer, add a filter that only allows `LineString` and `MultiLineString` features.
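A minimal sketch of such a filter, reusing the `datasource` variable from the earlier examples:

```javascript
//Only render LineString and MultiLineString features in the line layer so that
//polygon outlines in the same data source aren't drawn.
var layer = new atlas.layer.LineLayer(datasource, null, {
    filter: ['any',
        ['==', ['geometry-type'], 'LineString'],
        ['==', ['geometry-type'], 'MultiLineString']
    ]
});
```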
-Here are some additional examples of how to use data expressions:
+Here are some more examples of how to use data expressions:
```javascript //Get item [2] from an array "properties.abcArray[1]" = "c"
If all features in a data set have a `revenue` property, which is a number. Then
### Accumulated expression
-The `accumulated` expression gets the value of a cluster property accumulated so far. This can only be used in the `clusterProperties` option of a clustered `DataSource` source.
+The `accumulated` expression gets the value of a cluster property accumulated so far. It can only be used in the `clusterProperties` option of a clustered `DataSource` source.
**Usage**
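A minimal sketch of where this fits, assuming a point data set with a numeric `revenue` property; the `total` property name and the symbol layer shown for context are illustrative:

```javascript
//Create a clustered data source with a custom 'total' cluster property that sums
//the 'revenue' property of the points in each cluster. While a cluster is built up,
//['accumulated'] refers to the value of 'total' aggregated so far.
var datasource = new atlas.source.DataSource(null, {
    cluster: true,
    clusterProperties: {
        total: ['+', ['get', 'revenue']]
    }
});
map.sources.add(datasource);

//Display the aggregated total on each cluster point.
var clusterLayer = new atlas.layer.SymbolLayer(datasource, null, {
    filter: ['has', 'point_count'], //Only render clustered points.
    textOptions: {
        textField: ['to-string', ['get', 'total']]
    }
});
map.layers.add(clusterLayer);
```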
The `accumulated` expression gets the value of a cluster property accumulated s
Boolean expressions provide a set of boolean operator expressions for evaluating boolean comparisons.
-When comparing values, the comparison is strictly typed. Values of different types are always considered unequal. Cases where the types are known to be different at parse time are considered invalid and will produce a parse error.
+When values are compared, the comparison is strictly typed. Values of different types are always considered unequal. Cases where the types are known to be different at parse time are considered invalid and produce a parse error.
| Expression | Return type | Description | ||-|-| | `['!', boolean]` | boolean | Logical negation. Returns `true` if the input is `false`, and `false` if the input is `true`. |
-| `['!=', value, value]` | boolean | Returns `true` if the input values are not equal, `false` otherwise. |
+| `['!=', value, value]` | boolean | Returns `true` if the input values aren't equal, `false` otherwise. |
| `['<', value, value]` | boolean | Returns `true` if the first input is strictly less than the second, `false` otherwise. The arguments are required to be either both strings or both numbers. | | `['<=', value, value]` | boolean | Returns `true` if the first input is less than or equal to the second, `false` otherwise. The arguments are required to be either both strings or both numbers. | | `['==', value, value]` | boolean | Returns `true` if the input values are equal, `false` otherwise. The arguments are required to be either both strings or both numbers. |
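For example, boolean expressions are often combined in a layer's `filter` option. The following is a small sketch that uses the sample feature's `temperature` and `zoneColor` properties:

```javascript
//Only render point features that have a zoneColor property and a temperature
//value of at least 60 and below 80.
var layer = new atlas.layer.BubbleLayer(datasource, null, {
    filter: ['all',
        ['has', 'zoneColor'],
        ['>=', ['get', 'temperature'], 60],
        ['<', ['get', 'temperature'], 80]
    ]
});
```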
The following pseudocode defines the structure of the `case` expression.
**Example**
-The following example steps through different boolean conditions until it finds one that evaluates to `true`, and then returns that associated value. If no boolean condition evaluates to `true`, a fallback value will be returned.
+The following example steps through different boolean conditions until it finds one that evaluates to `true`, and then returns that associated value. If no boolean condition evaluates to `true`, a fallback value is returned.
```javascript var layer = new atlas.layer.BubbleLayer(datasource, null, {
var layer = new atlas.layer.BubbleLayer(datasource, null, {
### Match expression
-A `match` expression is a type of conditional expression that provides switch-statement like logic. The input can be any expression such as `['get', 'entityType']` that returns a string or a number. Each label must be either a single literal value or an array of literal values, whose values must be all strings or all numbers. The input matches if any of the values in the array match. Each label must be unique. If the input type doesn't match the type of the labels, the result will be the fallback value.
+A `match` expression is a type of conditional expression that provides switch-statement like logic. The input can be any expression such as `['get', 'entityType']` that returns a string or a number. Each label must be either a single literal value or an array of literal values, whose values must be all strings or all numbers. The input matches if any of the values in the array match. Each label must be unique. If the input type doesn't match the type of the labels, the result is the fallback value.
The following pseudocode defines the structure of the `match` expression.
var layer = new atlas.layer.BubbleLayer(datasource, null, {
}); ```
-The following example uses an array to list a set of labels that should all return the same value. This approach is much more efficient than listing each label individually. In this case, if the `entityType` property is "restaurant" or "grocery_store", the color "red" will be returned.
+The following example uses an array to list a set of labels that should all return the same value. This approach is much more efficient than listing each label individually. In this case, if the `entityType` property is "restaurant" or "grocery_store", the color "red" is returned.
```javascript var layer = new atlas.layer.BubbleLayer(datasource, null, {
The following pseudocode defines the structure of the `coalesce` expression.
**Example**
-The following example uses a `coalesce` expression to set the `textField` option of a symbol layer. If the `title` property is missing from the feature or set to `null`, the expression will then try looking for the `subTitle` property, if its missing or `null`, it will then fall back to an empty string.
+The following example uses a `coalesce` expression to set the `textField` option of a symbol layer. If the `title` property is missing from the feature or set to `null`, the expression then tries the `subTitle` property; if it's also missing or `null`, it falls back to an empty string.
```javascript var layer = new atlas.layer.SymbolLayer(datasource, null, {
Type expressions provide tools for testing and converting different data types l
||-|-| | `['array', value]` \| `['array', type: "string" | "number" | "boolean", value]` | Object[] | Asserts that the input is an array. | | `['boolean', value]` \| `["boolean", value, fallback: value, fallback: value, ...]` | boolean | Asserts that the input value is a boolean. If multiple values are provided, each one is evaluated in order until a boolean is obtained. If none of the inputs are booleans, the expression is an error. |
-| `['collator', { 'case-sensitive': boolean, 'diacritic-sensitive': boolean, 'locale': string }]` | collator | Returns a collator for use in locale-dependent comparison operations. The case-sensitive and diacritic-sensitive options default to false. The locale argument specifies the IETF language tag of the locale to use. If none is provided, the default locale is used. If the requested locale is not available, the collator will use a system-defined fallback locale. Use resolved-locale to test the results of locale fallback behavior. |
-| `['literal', array]`<br/><br/>`['literal', object]` | array \| object | Returns a literal array or object value. Use this expression to prevent an array or object from being evaluated as an expression. This is necessary when an array or object needs to be returned by an expression. |
+| `['collator', { 'case-sensitive': boolean, 'diacritic-sensitive': boolean, 'locale': string }]` | collator | Returns a collator for use in locale-dependent comparison operations. The case-sensitive and diacritic-sensitive options default to false. The locale argument specifies the IETF language tag of the locale to use. If none is provided, the default locale is used. If the requested locale isn't available, the collator uses a system-defined fallback locale. Use resolved-locale to test the results of locale fallback behavior. |
+| `['literal', array]`<br/><br/>`['literal', object]` | array \| object | Returns a literal array or object value. Use this expression to prevent an array or object from being evaluated as an expression, which is necessary when an array or object needs to be returned by an expression. |
| `['image', string]` | string | Checks to see if a specified image ID is loaded into the maps image sprite. If it is, the ID is returned, otherwise null is returned. | | `['number', value]` \| `["number", value, fallback: value, fallback: value, ...]` | number | Asserts that the input value is a number. If multiple values are provided, each one is evaluated in order until a number is obtained. If none of the inputs are numbers, the expression is an error. | | `['object', value]` \| `["object", value, fallback: value, fallback: value, ...]` | Object | Asserts that the input value is an object. If multiple values are provided, each one is evaluated in order until an object is obtained. If none of the inputs are objects, the expression is an error. | | `['string', value]` \| `["string", value, fallback: value, fallback: value, ...]` | string | Asserts that the input value is a string. If multiple values are provided, each one is evaluated in order until a string is obtained. If none of the inputs are strings, the expression is an error. | | `['to-boolean', value]` | boolean | Converts the input value to a boolean. The result is `false` when the input is an empty string, `0`, `false`, `null`, or `NaN`; otherwise its `true`. | | `['to-color', value]`<br/><br/>`['to-color', value1, value2…]` | color | Converts the input value to a color. If multiple values are provided, each one is evaluated in order until the first successful conversion is obtained. If none of the inputs can be converted, the expression is an error. |
-| `['to-number', value]`<br/><br/>`['to-number', value1, value2, …]` | number | Converts the input value to a number, if possible. If the input is `null` or `false`, the result is 0. If the input is `true`, the result is 1. If the input is a string, it's converted to a number using the [ToNumber](https://tc39.github.io/ecma262/#sec-tonumber-applied-to-the-string-type) string function of the ECMAScript Language Specification. If multiple values are provided, each one is evaluated in order until the first successful conversion is obtained. If none of the inputs can be converted, the expression is an error. |
-| `['to-string', value]` | string | Converts the input value to a string. If the input is `null`, the result is `""`. If the input is a boolean, the result is `"true"` or `"false"`. If the input is a number, it's converted to a string using the [ToString](https://tc39.github.io/ecma262/#sec-tostring-applied-to-the-number-type) number function of the ECMAScript Language Specification. If the input is a color, it's converted to CSS RGBA color string `"rgba(r,g,b,a)"`. Otherwise, the input is converted to a string using the [JSON.stringify](https://tc39.github.io/ecma262/#sec-json.stringify) function of the ECMAScript Language Specification. |
+| `['to-number', value]`<br/><br/>`['to-number', value1, value2, …]` | number | Converts the input value to a number, if possible. If the input is `null` or `false`, the result is 0. If the input is `true`, the result is 1. If the input is a string, it's converted to a number using the [ToNumber] string function of the ECMAScript Language Specification. If multiple values are provided, each one is evaluated in order until the first successful conversion is obtained. If none of the inputs can be converted, the expression is an error. |
+| `['to-string', value]` | string | Converts the input value to a string. If the input is `null`, the result is `""`. If the input is a boolean, the result is `"true"` or `"false"`. If the input is a number, it's converted to a string using the [ToString] number function of the ECMAScript Language Specification. If the input is a color, it's converted to CSS RGBA color string `"rgba(r,g,b,a)"`. Otherwise, the input is converted to a string using the [JSON.stringify] function of the ECMAScript Language Specification. |
| `['typeof', value]` | string | Returns a string describing the type of the given value. | > [!TIP]
Color expressions make it easier to create and manipulate color values.
| Expression | Return type | Description | ||-|-|
-| `['rgb', number, number, number]` | color | Creates a color value from *red*, *green*, and *blue* components that must range between `0` and `255`, and an alpha component of `1`. If any component is out of range, the expression is an error. |
-| `['rgba', number, number, number, number]` | color | Creates a color value from *red*, *green*, *blue* components that must range between `0` and `255`, and an alpha component within a range of `0` and `1`. If any component is out of range, the expression is an error. |
+| `['rgb', number, number, number]` | color | Creates a color value from *red*, *green*, and *blue* components ranging between `0` and `255`, and an alpha component of `1`. If any component is out of range, the expression is an error. |
+| `['rgba', number, number, number, number]` | color | Creates a color value from *red*, *green*, *blue* components ranging between `0` and `255`, and an alpha component within a range of `0` and `1`. If any component is out of range, the expression is an error. |
| `['to-rgba']` | \[number, number, number, number\] | Returns a four-element array containing the input color's *red*, *green*, *blue*, and *alpha* components, in that order. | **Example**
-The following example creates an RGB color value that has a *red* value of `255`, and *green* and *blue* values that are calculated by multiplying `2.5` by the value of the `temperature` property. As the temperature changes, the color will change to different shades of *red*.
+The following example creates an RGB color value that has a *red* value of `255`, and *green* and *blue* values calculated by multiplying `2.5` by the value of the `temperature` property. As the temperature changes, the color changes to different shades of *red*.
```javascript var layer = new atlas.layer.BubbleLayer(datasource, null, {
String operator expressions perform conversion operations on strings such as con
| `['concat', string, string, …]` | string | Concatenates multiple strings together. Each value must be a string. Use the `to-string` type expression to convert other value types to string if needed. | | `['downcase', string]` | string | Converts the specified string to lowercase. | | `['is-supported-script', string]` \| `['is-supported-script', Expression]`| boolean | Determines if the input string uses a character set supported by the current font stack. For example: `['is-supported-script', 'ಗೌರವಾರ್ಥವಾಗಿ']` |
-| `['resolved-locale', string]` | string | Returns the IETF language tag of the locale being used by the provided collator. This can be used to determine the default system locale, or to determine if a requested locale was successfully loaded. |
+| `['resolved-locale', string]` | string | Returns the IETF language tag of the locale being used by the provided collator, which can be used to determine the default system locale or whether a requested locale was successfully loaded. |
| `['upcase', string]` | string | Converts the specified string to uppercase. | **Example**
var layer = new atlas.layer.SymbolLayer(datasource, null, {
}); ```
-The above expression renders a pin on the map with the text "64┬░F" overlaid on top of it as shown in the image below.
+The above expression renders a pin on the map with the text "64°F" overlaid on top of it as shown in the following image.
![String operator expression example](media/how-to-expressions/string-operator-expression.png)
There are three types of interpolation methods that can be used in an `interpola
- `['linear']` - Interpolates linearly between the pair of stops. - `['exponential', base]` - Interpolates exponentially between the stops. The `base` value controls the rate at which the output increases. Higher values make the output increase more towards the high end of the range. A `base` value close to 1 produces an output that increases more linearly.-- `['cubic-bezier', x1, y1, x2, y2]` - Interpolates using a [cubic Bezier curve](https://developer.mozilla.org/docs/Web/CSS/timing-function) defined by the given control points.
+- `['cubic-bezier', x1, y1, x2, y2]` - Interpolates using a [cubic Bezier curve] defined by the given control points.
-Here is an example of what these different types of interpolations look like.
+Here's an example of what these different types of interpolations look like.
| Linear | Exponential | Cubic Bezier | ||-|--|
The following pseudocode defines the structure of the `interpolate` expression.
**Example**
-The following example uses a `linear interpolate` expression to set the `color` property of a bubble layer based on the `temperature` property of the point feature. If the `temperature` value is less than 60, "blue" will be returned. If it's between 60 and less than 70, yellow will be returned. If it's between 70 and less than 80, "orange" will be returned. If it's 80 or greater, "red" will be returned.
+The following example uses a `linear interpolate` expression to set the `color` property of a bubble layer based on the `temperature` property of the point feature. If the `temperature` value is less than 60, "blue" is returned. If it's between 60 and less than 70, "yellow" is returned. If it's between 70 and less than 80, "orange" is returned. If it's 80 or greater, "red" is returned.
```javascript var layer = new atlas.layer.BubbleLayer(datasource, null, {
The following image demonstrates how the colors are chosen for the above express
### Step expression
-A `step` expression can be used to calculate discrete, stepped result values by evaluating a [piecewise-constant function](https://mathworld.wolfram.com/PiecewiseConstantFunction.html) defined by stops.
+A `step` expression can be used to calculate discrete, stepped result values by evaluating a [piecewise-constant function] defined by stops.
The following pseudocode defines the structure of the `step` expression.
Step expressions return the output value of the stop just before the input value
**Example**
-The following example uses a `step` expression to set the `color` property of a bubble layer based on the `temperature` property of the point feature. If the `temperature` value is less than 60, "blue" will be returned. If it's between 60 and less than 70, "yellow" will be returned. If it's between 70 and less than 80, "orange" will be returned. If it's 80 or greater, "red" will be returned.
+The following example uses a `step` expression to set the `color` property of a bubble layer based on the `temperature` property of the point feature. If the `temperature` value is less than 60, "blue" is returned. If it's between 60 and less than 70, "yellow" is returned. If it's between 70 and less than 80, "orange" is returned. If it's 80 or greater, "red" is returned.
```javascript var layer = new atlas.layer.BubbleLayer(datasource, null, {
Special expressions that only apply to specific layers.
### Heat map density expression
-A heat map density expression retrieves the heat map density value for each pixel in a heat map layer and is defined as `['heatmap-density']`. This value is a number between `0` and `1`. It's used in combination with a `interpolation` or `step` expression to define the color gradient used to colorize the heat map. This expression can only be used in the [color option](/javascript/api/azure-maps-control/atlas.heatmaplayeroptions#color) of the heat map layer.
+A heat map density expression retrieves the heat map density value for each pixel in a heat map layer and is defined as `['heatmap-density']`. This value is a number between `0` and `1`. It's used in combination with an `interpolation` or `step` expression to define the color gradient used to colorize the heat map. This expression can only be used in the [color option] of the heat map layer.
> [!TIP] > The color at index 0, in an interpolation expression or the default color of a step color, defines the color of the area where there's no data. The color at index 0 can be used to define a background color. Many prefer to set this value to transparent or a semi-transparent black.
var layer = new atlas.layer.HeatMapLayer(datasource, null, {
}); ```
-For more information, see the [Add a heat map layer](map-add-heat-map-layer.md) documentation.
+For more information, see the [Add a heat map layer] documentation.
### Line progress expression
-A line progress expression retrieves the progress along a gradient line in a line layer and is defined as `['line-progress']`. This value is a number between 0 and 1. It's used in combination with an `interpolation` or `step` expression. This expression can only be used with the [strokeGradient option](/javascript/api/azure-maps-control/atlas.linelayeroptions#strokegradient) of the line layer.
+A line progress expression retrieves the progress along a gradient line in a line layer and is defined as `['line-progress']`. This value is a number between 0 and 1. It's used in combination with an `interpolation` or `step` expression. This expression can only be used with the [strokeGradient option] of the line layer.
> [!NOTE] > The `strokeGradient` option of the line layer requires the `lineMetrics` option of the data source to be set to `true`.
var layer = new atlas.layer.LineLayer(datasource, null, {
}); ```
-[See live example](map-add-line-layer.md#line-stroke-gradient)
+For an interactive working example, see [Add a stroke gradient to a line].
### Text field format expression The text field format expression can be used with the `textField` option of the symbol layers `textOptions` property to provide mixed text formatting. This expression allows a set of input strings and formatting options to be specified. The following options can be specified for each input string in this expression. -- `'font-scale'` - Specifies the scaling factor for the font size. If specified, this value will override the `size` property of the `textOptions` for the individual string.-- `'text-font'` - Specifies one or more font families that should be used for this string. If specified, this value will override the `font` property of the `textOptions` for the individual string.
+- `'font-scale'` - Specifies the scaling factor for the font size. If specified, this value overrides the `size` property of the `textOptions` for the individual string.
+- `'text-font'` - Specifies one or more font families that should be used for this string. If specified, this value overrides the `font` property of the `textOptions` for the individual string.
The following pseudocode defines the structure of the text field format expression.
var layer = new atlas.layer.SymbolLayer(datasource, null, {
}); ```
-This layer will render the point feature as shown in the image below:
+This layer renders the point feature as shown in the following image:
![Image of Point feature with formatted text field](media/how-to-expressions/text-field-format-expression.png) ### Number format expression
-The `number-format` expression can only be used with the `textField` option of a symbol layer. This expression converts the provided number into a formatted string. This expression wraps JavaScript's [Number.toLocalString](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Number/toLocaleString) function and supports the following set of options.
+The `number-format` expression can only be used with the `textField` option of a symbol layer. This expression converts the provided number into a formatted string. This expression wraps JavaScript's [Number.toLocalString] function and supports the following set of options.
-- `locale` - Specify this option for converting numbers to strings in a way that aligns with the specified language. Pass a [BCP 47 language tag](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Intl#Locale_identification_and_negotiation) into this option.-- `currency` - To convert the number into a string representing a currency. Possible values are the [ISO 4217 currency codes](https://en.wikipedia.org/wiki/ISO_4217), such as "USD" for the US dollar, "EUR" for the euro, or "CNY" for the Chinese RMB.
+- `locale` - Specify this option for converting numbers to strings in a way that aligns with the specified language. Pass a [BCP 47 language tag] into this option.
+- `currency` - To convert the number into a string representing a currency. Possible values are the [ISO 4217 currency codes], such as "USD" for the US dollar, "EUR" for the euro, or "CNY" for the Chinese RMB.
- `'min-fraction-digits'` - Specifies the minimum number of decimal places to include in the string version of the number. - `'max-fraction-digits'` - Specifies the maximum number of decimal places to include in the string version of the number.
var layer = new atlas.layer.SymbolLayer(datasource, null, {
}); ```
-This layer will render the point feature as shown in the image below:
+This layer renders the point feature as shown in the following image:
![Number format expression example](media/how-to-expressions/number-format-expression.png) ### Image expression
-An image expression can be used with the `image` and `textField` options of a symbol layer, and the `fillPattern` option of the polygon layer. This expression checks that the requested image exists in the style and will return either the resolved image name or `null`, depending on whether or not the image is currently in the style. This validation process is synchronous and requires the image to have been added to the style before requesting it in the image argument.
+An image expression can be used with the `image` and `textField` options of a symbol layer, and the `fillPattern` option of the polygon layer. This expression checks that the requested image exists in the style and returns either the resolved image name or `null`, depending on whether or not the image is currently in the style. This validation process is synchronous and requires the image to have been added to the style before requesting it in the image argument.
**Example**
map.imageSprite.add('wifi-icon', 'wifi.png').then(function () {
}); ```
-This layer will render the text field in the symbol layer as shown in the image below:
+This layer renders the text field in the symbol layer as shown in the following image:
![Image expression example](media/how-to-expressions/image-expression.png)
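Another common use of the image expression is as a fallback. The following rough sketch tries a custom icon and falls back to the built-in `marker-blue` image when the custom icon isn't in the style; the `my-custom-icon` name is an assumption for the example:

```javascript
var layer = new atlas.layer.SymbolLayer(datasource, null, {
    iconOptions: {
        //'image' returns null when 'my-custom-icon' isn't in the style, so 'coalesce' falls back to the built-in 'marker-blue' image.
        image: ['coalesce', ['image', 'my-custom-icon'], ['image', 'marker-blue']]
    }
});
```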
A `zoom` expression is used to retrieve the current zoom level of the map at render time.
**Example**
-By default, the radii of data points rendered in the heat map layer have a fixed pixel radius for all zoom levels. As the map is zoomed, the data aggregates together and the heat map layer looks different. A `zoom` expression can be used to scale the radius for each zoom level such that each data point covers the same physical area of the map. It will make the heat map layer look more static and consistent. Each zoom level of the map has twice as many pixels vertically and horizontally as the previous zoom level. Scaling the radius, such that it doubles with each zoom level, will create a heat map that looks consistent on all zoom levels. It can be accomplished using the `zoom` expression with a `base 2 exponential interpolation` expression, with the pixel radius set for the minimum zoom level and a scaled radius for the maximum zoom level calculated as `2 * Math.pow(2, minZoom - maxZoom)` as shown below.
+By default, the radii of data points rendered in the heat map layer have a fixed pixel radius for all zoom levels. As the map is zoomed, the data aggregates together and the heat map layer looks different. A `zoom` expression can be used to scale the radius for each zoom level such that each data point covers the same physical area of the map. It makes the heat map layer look more static and consistent. Each zoom level of the map has twice as many pixels vertically and horizontally as the previous zoom level. Scaling the radius, such that it doubles with each zoom level, creates a heat map that looks consistent on all zoom levels. It can be accomplished using the `zoom` expression with a `base 2 exponential interpolation` expression, with the pixel radius set for the minimum zoom level and a scaled radius for the maximum zoom level calculated as `2 * Math.pow(2, maxZoom - minZoom)` as demonstrated in the following example.
-```javascript
+```javascript
var layer = new atlas.layer.HeatMapLayer(datasource, null, { radius: [ 'interpolate',
var layer = new atlas.layer.HeatMapLayer(datasource, null, {
}; ```
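A fuller sketch of this pattern follows; the minimum zoom of 10, maximum zoom of 22, and 2-pixel base radius are illustrative values:

```javascript
var layer = new atlas.layer.HeatMapLayer(datasource, null, {
    radius: [
        'interpolate',
        ['exponential', 2],
        ['zoom'],
        //At zoom level 10 (the assumed minimum), use a 2-pixel radius.
        10, 2,
        //At zoom level 22 (the assumed maximum), use 2 * Math.pow(2, 22 - 10) pixels so that the radius doubles with each zoom level.
        22, 2 * Math.pow(2, 22 - 10)
    ]
});
```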
-[See live example](map-add-heat-map-layer.md#consistent-zoomable-heat-map)
+For an interactive working example, see [Consistent zoomable heat map].
## Variable binding expressions
-Variable binding expressions store the results of a calculation in a variable. So, that the calculation results can be referenced elsewhere in an expression multiple times. It is a useful optimization for expressions that involve many calculations.
+Variable binding expressions store the results of a calculation in a variable so that the calculation results can be referenced elsewhere in an expression multiple times. It's a useful optimization for expressions that involve many calculations.
| Expression | Return type | Description | |--||--|
var layer = new atlas.layer.BubbleLayer(datasource, null, {
color: [ //Divide the point features `revenue` property by the `temperature` property and store it in a variable called `ratio`. 'let', 'ratio', ['/', ['get', 'revenue'], ['get', 'temperature']],
- //Evaluate the child expression in which the stored variable will be used.
+ //Evaluate the child expression in which the stored variable is used.
[ 'case',
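A complete sketch of this pattern is shown below. The `revenue` and `temperature` property names, the threshold of 1, and the colors are assumptions for the example:

```javascript
var layer = new atlas.layer.BubbleLayer(datasource, null, {
    color: [
        //Divide the point feature's 'revenue' property by its 'temperature' property and store it in a variable called 'ratio'.
        'let', 'ratio', ['/', ['get', 'revenue'], ['get', 'temperature']],
        //Evaluate the child expression in which the stored variable is used.
        ['case',
            //If the ratio is greater than 1, render the bubble green; otherwise, render it red.
            ['>', ['var', 'ratio'], 1], 'green',
            'red'
        ]
    ]
});
```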
See the following articles for more code samples that implement expressions:
> [Add a polygon layer](map-add-shape.md) > [!div class="nextstepaction"]
-> [Add a heat map layer](map-add-heat-map-layer.md)
+> [Add a heat map layer]
Learn more about the layer options that support expressions:
> [!div class="nextstepaction"] > [SymbolLayerOptions](/javascript/api/azure-maps-control/atlas.symbollayeroptions)+
+<!-- Internal Links -->
+[Aggregate expression]: #aggregate-expression
+[Boolean expressions]: #boolean-expressions
+[Color expressions]: #color-expressions
+[Conditional expressions]: #conditional-expressions
+[Data expressions]: #data-expressions
+[Interpolate and Step expressions]: #interpolate-and-step-expressions
+[Layer specific expressions]: #layer-specific-expressions
+[Math expressions]: #math-expressions
+[String operator expressions]: #string-operator-expressions
+[Type expressions]: #type-expressions
+[Variable binding expressions]: #variable-binding-expressions
+[Zoom expression]: #zoom-expression
+
+<!-- learn.microsoft.com links -->
+[Add a heat map layer]: map-add-heat-map-layer.md
+[Add a stroke gradient to a line]: map-add-line-layer.md#line-stroke-gradient
+[Consistent zoomable heat map]: map-add-heat-map-layer.md#consistent-zoomable-heat-map
+
+<!-- External Links -->
+[BCP 47 language tag]: https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Intl#Locale_identification_and_negotiation
+[cubic Bezier curve]: https://developer.mozilla.org/docs/Web/CSS/timing-function
+[ISO 4217 currency codes]: https://en.wikipedia.org/wiki/ISO_4217
+[JSON.stringify]: https://tc39.github.io/ecma262/#sec-json.stringify
+[Number.toLocaleString]: https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Number/toLocaleString
+[piecewise-constant function]: https://mathworld.wolfram.com/PiecewiseConstantFunction.html
+[ToNumber]: https://tc39.github.io/ecma262/#sec-tonumber-applied-to-the-string-type
+[ToString]: https://tc39.github.io/ecma262/#sec-tostring-applied-to-the-number-type
+
+<!-- JavaScript API Links -->
+[color option]: /javascript/api/azure-maps-control/atlas.heatmaplayeroptions#color
+[strokeGradient option]: /javascript/api/azure-maps-control/atlas.linelayeroptions#strokegradient
azure-monitor Get Metric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/get-metric.md
Title: Get-Metric in Azure Monitor Application Insights
description: Learn how to effectively use the GetMetric() call to capture locally pre-aggregated metrics for .NET and .NET Core applications with Azure Monitor Application Insights. Previously updated : 04/28/2020 Last updated : 04/05/2023 ms.devlang: csharp
SeverityLevel.Error);
## Next steps
+* [Metrics - Get - REST API](https://learn.microsoft.com/rest/api/application-insights/metrics/get)
+* [Application Insights API for custom events and metrics](api-custom-events-metrics.md)
* [Learn more](./worker-service.md) about monitoring worker service applications. * Use [log-based and pre-aggregated metrics](./pre-aggregated-metrics-log-metrics.md). * Get started with [metrics explorer](../essentials/metrics-getting-started.md).
azure-monitor Ip Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/ip-collection.md
Title: Application Insights IP address collection | Microsoft Docs description: Understand how Application Insights handles IP addresses and geolocation. Previously updated : 11/15/2022 Last updated : 04/06/2023
Content-Length: 54
} ```
+### PowerShell
+
+The PowerShell `Update-AzApplicationInsights` cmdlet can disable IP masking with the `DisableIPMasking` parameter.
+
+```powershell
+Update-AzApplicationInsights -Name "aiName" -ResourceGroupName "rgName" -DisableIPMasking:$true
+```
+
+For more information on the `Update-AzApplicationInsights` cmdlet, see [Update-AzApplicationInsights](https://learn.microsoft.com/powershell/module/az.applicationinsights/update-azapplicationinsights).
+ ## Telemetry initializer If you need a more flexible alternative than `DisableIpMasking`, you can use a [telemetry initializer](./api-filtering-sampling.md#addmodify-properties-itelemetryinitializer) to copy all or part of the IP address to a custom field. The code for this class is the same across .NET versions.
azure-monitor Migrate From Instrumentation Keys To Connection Strings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/migrate-from-instrumentation-keys-to-connection-strings.md
Title: Migrate from Application Insights instrumentation keys to connection strings description: Learn the steps required to upgrade from Azure Monitor Application Insights instrumentation keys to connection strings. Previously updated : 02/14/2022 Last updated : 05/06/2023
azure-monitor Pre Aggregated Metrics Log Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/pre-aggregated-metrics-log-metrics.md
Title: Log-based and pre-aggregated metrics in Application Insights | Microsoft Docs description: This article explains when to use log-based versus pre-aggregated metrics in Application Insights. Previously updated : 01/06/2023 Last updated : 04/05/2023
At the same time, collecting a complete set of events might be impractical or ev
## Pre-aggregated metrics
-In addition to log-based metrics, in late 2018, the Application Insights team shipped a public preview of metrics that are stored in a specialized repository that's optimized for time series. The new metrics are no longer kept as individual events with lots of properties. Instead, they're stored as pre-aggregated time series, and only with key dimensions. This change makes the new metrics superior at query time. Retrieving data happens much faster and requires less compute power. As a result, new scenarios are enabled, such as [near real time alerting on dimensions of metrics](../alerts/alerts-metric-near-real-time.md) and more responsive [dashboards](./overview-dashboard.md).
+In addition to log-based metrics, in late 2018, the Application Insights team shipped a public preview of metrics that are stored in a specialized repository that's optimized for time series. The new metrics are no longer kept as individual events with lots of properties. Instead, they're stored as pre-aggregated time series, and only with key dimensions. This change makes the new metrics superior at query time. Retrieving data happens faster and requires less compute power. As a result, new scenarios are enabled, such as [near real time alerting on dimensions of metrics](../alerts/alerts-metric-near-real-time.md) and more responsive [dashboards](./overview-dashboard.md).
> [!IMPORTANT] > Both log-based and pre-aggregated metrics coexist in Application Insights. To differentiate the two, in the Application Insights user experience the pre-aggregated metrics are now called Standard metrics (preview). The traditional metrics from the events were renamed to Log-based metrics.
Pre-aggregated metrics are stored as time series in Azure Monitor. [Azure Monito
## Why is collection of custom metrics dimensions turned off by default?
-The collection of custom metrics dimensions is turned off by default because in the future storing custom metrics with dimensions will be billed separately from Application Insights. Storing the non-dimensional custom metrics will remain free (up to a quota). You can learn about the upcoming pricing model changes on our official [pricing page](https://azure.microsoft.com/pricing/details/monitor/).
+The collection of custom metrics dimensions is turned off by default because in the future storing custom metrics with dimensions will be billed separately from Application Insights. Storing nondimensional custom metrics remains free (up to a quota). You can learn about the upcoming pricing model changes on our official [pricing page](https://azure.microsoft.com/pricing/details/monitor/).
## Create charts and explore log-based and standard pre-aggregated metrics
Selecting the [Enable alerting on custom metric dimensions](#custom-metrics-dime
## Next steps
+* [Metrics - Get - REST API](https://learn.microsoft.com/rest/api/application-insights/metrics/get)
+* [Application Insights API for custom events and metrics](api-custom-events-metrics.md)
* [Near real time alerting](../alerts/alerts-metric-near-real-time.md) * [GetMetric and TrackValue](./api-custom-events-metrics.md#getmetric)
azure-monitor Standard Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/standard-metrics.md
Title: Azure Application Insights standard metrics | Microsoft Docs
description: This article lists Azure Application Insights metrics with supported aggregations and dimensions. Previously updated : 03/22/2023 Last updated : 04/05/2023
The count of trace statements logged with the TrackTrace() Application Insights
## Next steps
+* [Metrics - Get - REST API](https://learn.microsoft.com/rest/api/application-insights/metrics/get)
+* [Application Insights API for custom events and metrics](api-custom-events-metrics.md)
* Learn about [Log-based and pre-aggregated metrics](./pre-aggregated-metrics-log-metrics.md). * [Log-based metrics queries and definitions](../essentials/app-insights-metrics.md).
azure-monitor Worker Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/worker-service.md
This section provides answers to common questions.
| .NET Core app scenario | Package | |||
-| Without HostedServices | AspNetCore |
+| Without HostedServices | WorkerService |
| With HostedServices | AspNetCore (not WorkerService) | | With HostedServices, monitoring only HostedServices | WorkerService (rare scenario) |
azure-monitor Cost Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/cost-logs.md
Title: Azure Monitor Logs cost calculations and options
description: Cost details for data stored in a Log Analytics workspace in Azure Monitor, including commitment tiers and data size calculation. Previously updated : 03/24/2022 Last updated : 04/06/2023 ms.reviwer: dalek git
When you link workspaces to a cluster, the pricing tier is changed to cluster, a
If your linked workspace is using the legacy Per Node pricing tier, it will be billed based on data ingested against the cluster's commitment tier, and no longer Per Node. Per-node data allocations from Microsoft Defender for Cloud will continue to be applied.
+If a cluster is deleted, billing for the cluster stops even if the cluster is within its 31-day commitment period.
+ For more information on how to create a dedicated cluster and specify its billing type, see [Create a dedicated cluster](logs-dedicated-clusters.md#create-a-dedicated-cluster). ## Basic Logs
azure-relay Ip Firewall Virtual Networks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-relay/ip-firewall-virtual-networks.md
This section shows you how to use the Azure portal to create IP firewall rules f
1. To restrict access to specific networks and IP addresses, select the **Selected networks** option. In the **Firewall** section, follow these steps: 1. Select **Add your client IP address** option to give your current client IP the access to the namespace. 2. For **address range**, enter a specific IPv4 address or a range of IPv4 address in CIDR notation.
- 3. If you want to allow Microsoft services trusted by the Azure Relay service to bypass this firewall, select **Yes** for **Allow trusted Microsoft services to bypass this firewall?**.
+ 3. If you want to allow Microsoft services trusted by the Azure Relay service to bypass this firewall, select **Yes** for **Allow [trusted Microsoft services](#trusted-services) to bypass this firewall?**.
:::image type="content" source="./media/ip-firewall/selected-networks-trusted-access-disabled.png" alt-text="Screenshot showing the Public access tab of the Networking page with the Firewall enabled."::: 1. Select **Save** on the toolbar to save the settings. Wait for a few minutes for the confirmation to show up on the portal notifications.
The template takes one parameter: **ipMask**, which is a single IPv4 address or
To deploy the template, follow the instructions for [Azure Resource Manager](../azure-resource-manager/templates/deploy-powershell.md).
+## Trusted services
+The following services are the trusted services for Azure Relay.
+- Azure Event Grid
+- Azure IoT Hub
+- Azure Stream Analytics
+- Azure Monitor
+- Azure API Management
+- Azure Synapse
+- Azure Data Explorer
+- Azure IoT Central
+- Azure Healthcare Data Services
+- Azure Digital Twins
+- Azure Arc
## Next steps
azure-vmware Azure Vmware Solution Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/azure-vmware-solution-known-issues.md
description: This article provides details about the known issues of Azure VMwar
Previously updated : 4/4/2023 Last updated : 4/6/2023 # Known issues: Azure VMware Solution
Refer to the table below to find details about resolution dates or possible work
| :- | : | :- | :- | | [VMSA-2021-002 ESXiArgs](https://www.vmware.com/security/advisories/VMSA-2021-0002.html) OpenSLP vulnerability publicized in February 2023 | 2021 | [Disable OpenSLP service](https://kb.vmware.com/s/article/76372) | February 2021 - Resolved in [ESXi 7.0 U3c](concepts-private-clouds-clusters.md#vmware-software-versions) | | After my private cloud NSX-T Data Center upgrade to version [3.2.2](https://docs.vmware.com/en/VMware-NSX/3.2.2/rn/vmware-nsxt-data-center-322-release-notes/https://docsupdatetracker.net/index.html), the NSX-T Manager **DNS - Forwarder Upstream Server Timeout** alarm is raised | February 2023 | [Enable private cloud internet Access](concepts-design-public-internet-access.md), alarm is raised because NSX-T Manager cannot access the configured CloudFlare DNS server. Otherwise, [change the default DNS zone to point to a valid and reachable DNS server.](configure-dns-azure-vmware-solution.md) | February 2023 |
-| When first logging into the vSphere Client, the **Cluster-n: vSAN health alarms are suppressed** alert is active | 2021 | This is should be considered an informational message, since Microsoft manages the service. Select the **Reset to Green** link to clear it. | 2021 |
+| When first logging into the vSphere Client, the **Cluster-n: vSAN health alarms are suppressed** alert is active | 2021 | This should be considered an informational message, since Microsoft manages the service. Select the **Reset to Green** link to clear it. | 2021 |
+| When adding a cluster to my private cloud, the **Cluster-n: vSAN physical disk alarm 'Operation'** and **Cluster-n: vSAN cluster alarm 'vSAN Cluster Configuration Consistency'** alerts are active in the vSphere Client | 2021 | This should be considered an informational message, since Microsoft manages the service. Select the **Reset to Green** link to clear it. | 2021 |
In this article, you learned about the current known issues with the Azure VMware Solution.
azure-vmware Concepts Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-identity.md
Title: Concepts - Identity and access
description: Learn about the identity and access concepts of Azure VMware Solution Previously updated : 11/18/2022 Last updated : 4/6/2023
The following permissions are assigned to the **cloudadmin** user in Azure VMwar
> [!NOTE] > **VMware NSX-T Data Center cloudadmin user** on Azure VMware Solution is not the same as the **cloudadmin user** mentioned in the VMware product documentation.
-> Permissions below apply to NSX-T's Policy API. Manager API functionality may be limited.
+> Permissions below apply to NSX-T Data Center's Policy API. Manager API functionality may be limited.
| Category | Type | Operation | Permission | |--|--|-||
You can create custom roles in NSX-T Data Center with permissions lesser than or
4. **Apply** the changes and **Save** the Role. > [!NOTE]
-> The VMware NSX-T Data Center **System** > **Identity Firewall AD** configuration option isn't supported by the NSX custom role. The recommendation is to assign the **Security Operator** role to the user with the custom role to allow managing the Identity Firewall (IDFW) feature for that user.
+> The VMware NSX-T Data Center **System** > **Identity Firewall AD** configuration option isn't supported by the NSX-T Data Center custom role. The recommendation is to assign the **Security Operator** role to the user with the custom role to allow managing the Identity Firewall (IDFW) feature for that user.
> [!NOTE] > The VMware NSX-T Data Center Traceflow feature isn't supported by the VMware NSX-T Data Center custom role. The recommendation is to assign the **Auditor** role to the user along with above custom role to enable Traceflow feature for that user. > [!NOTE]
-> VMware vRealize Automation(vRA) integration with the NSX-T Data Center component of the Azure VMware Solution requires the ΓÇ£auditorΓÇ¥ role to be added to the user with the NSX-T Manager cloudadmin role.
+> VMware vRealize Automation (vRA) integration with the NSX-T Data Center component of the Azure VMware Solution requires the "auditor" role to be added to the user with the NSX-T Manager cloudadmin role.
## Next steps
azure-vmware Concepts Security Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-security-recommendations.md
Title: Concepts - Security recommendations for Azure VMware Solution
description: Learn about tips and best practices to help protect Azure VMware Solution deployments from vulnerabilities and malicious actors. Previously updated : 01/10/2022 Last updated : 4/6/2023
The following are network-related security recommendations for Azure VMware Solu
| Deploy and configure Network Security Groups on VNET | Ensure any VNET deployed has [Network Security Groups](../virtual-network/network-security-groups-overview.md) configured to control ingress and egress to your environment. | | Review and implement recommendations within the Azure security baseline for Azure VMware Solution | [Azure security baseline for Azure VMware Solution](/security/benchmark/azure/baselines/vmware-solution-security-baseline/) |
-## HCX
+## VMware HCX
-See the following information for recommendations to secure your HCX deployment.
+See the following information for recommendations to secure your VMware HCX deployment.
| **Recommendation** | **Comments** | | :-- | :-- |
-| Stay current with HCX service updates | HCX service updates can include new features, software fixes, and security patches. Apply service updates during a maintenance window where no new HCX operations are queued up by following these [steps](https://docs.vmware.com/en/VMware-HCX/4.1/hcx-user-guide/GUID-F4AEAACB-212B-4FB6-AC36-9E5106879222.html). |
+| Stay current with VMware HCX service updates | VMware HCX service updates can include new features, software fixes, and security patches. Apply service updates during a maintenance window where no new VMware HCX operations are queued up by following these [steps](https://docs.vmware.com/en/VMware-HCX/4.1/hcx-user-guide/GUID-F4AEAACB-212B-4FB6-AC36-9E5106879222.html). |
azure-vmware Ecosystem App Monitoring Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/ecosystem-app-monitoring-solutions.md
Our application performance monitoring and troubleshooting partners have industr
You can find more information about these solutions here: - [NETSCOUT](https://www.netscout.com/technology-partners/microsoft-azure)-- [Turbonomic](https://blog.turbonomic.com/turbonomic-announces-partnership-and-support-for-azure-vmware-service)
+- [Turbonomic](https://www.ibm.com/products/turbonomic/integrations/microsoft-azure?mhsrc=ibmsearch_a&mhq=Azure)
azure-vmware Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/introduction.md
Title: Introduction
description: Learn the features and benefits of Azure VMware Solution to deploy and manage VMware-based workloads in Azure. Azure VMware Solution SLA guarantees that Azure VMware management tools (vCenter Server and NSX Manager) will be available at least 99.9% of the time. Previously updated : 1/4/2023 Last updated : 4/6/2023
The following table provides a detailed list of roles and responsibilities betwe
| **Role** | **Task/details** | | -- | - |
-| Microsoft - Azure VMware Solution | Physical infrastructure<ul><li>Azure regions</li><li>Azure availability zones</li><li>Express Route/Global reach</ul></li>Compute/Network/Storage<ul><li>Rack and power Bare Metal hosts</li><li>Rack and power network equipment</ul></li>Software defined Data Center (SDDC) deploy/lifecycle<ul><li>VMware ESXi deploy, patch, and upgrade</li><li>VMware vCenter Servers deploy, patch, and upgrade</li><li>VMware NSX-T Data Centers deploy, patch, and upgrade</li><li>vSAN deploy, patch, and upgrade</ul></li>SDDC Networking - VMware NSX-T Data Center provider config<ul><li>Microsoft Edge node/cluster, VMware NSX-T Data Center host preparation</li><li>Provider Tier-0 and Tenant Tier-1 Gateway</li><li>Connectivity from Tier-0 (using BGP) to Azure Network via Express Route</ul></li>SDDC Compute - VMware vCenter Server provider config<ul><li>Create default cluster</li><li>Configure virtual networking for vMotion, Management, vSAN, and others</ul></li>SDDC backup/restore<ul><li>Backup and restore VMware vCenter Server</li><li>Backup and restore VMware NSX-T Data Center NSX-T Manager</ul></li>SDDC health monitoring and corrective actions, for example: replace failed hosts</br><br>(optional) HCX deploys with fully configured compute profile on cloud side as add-on</br><br>(optional) SRM deploys, upgrade, and scale up/down</br><br>Support - SDDC platforms and HCX |
-| Customer | Request Azure VMware Solution host quote with Microsoft<br>Plan and create a request for SDDCs on Azure portal with:<ul><li>Host count</li><li>Management network range</li><li>Other information</ul></li>Configure SDDC network and security (VMware NSX-T Data Center)<ul><li>Network segments to host applications</li><li>Additional Tier -1 routers</li><li>Firewall</li><li>VMware NSX-T Data Center LB</li><li>IPsec VPN</li><li>NAT</li><li>Public IP addresses</li><li>Distributed firewall/gateway firewall</li><li>Network extension using HCX or VMware NSX-T Data Center</li><li>AD/LDAP config for RBAC</ul></li>Configure SDDC - VMware vCenter Server<ul><li>AD/LDAP config for RBAC</li><li>Deploy and lifecycle management of Virtual Machines (VMs) and application<ul><li>Install operating systems</li><li>Patch operating systems</li><li>Install antivirus software</li><li>Install backup software</li><li>Install configuration management software</li><li>Install application components</li><li>VM networking using VMware NSX-T Data Center segments</ul></li><li>Migrate Virtual Machines (VMs)<ul><li>HCX configuration</li><li>Live vMotion</li><li>Cold migration</li><li>Content library sync</ul></li></ul></li>Configure SDDC - vSAN<ul><li>Define and maintain vSAN VM policies</li><li>Add hosts to maintain adequate 'slack space'</ul></li>Configure HCX<ul><li>Download and deploy HCA connector OVA in on-premises</li><li>Pairing on-premises HCX connector</li><li>Configure the network profile, compute profile, and service mesh</li><li>Configure HCX network extension/MON</li><li>Upgrade/updates</ul></li>Network configuration to connect to on-premises, VNET, or internet</br><br>Add or delete hosts requests to cluster from Portal</br><br>Deploy/lifecycle management of partner (third party) solutions |
+| Microsoft - Azure VMware Solution | Physical infrastructure<ul><li>Azure regions</li><li>Azure availability zones</li><li>Express Route/Global Reach</ul></li>Compute/Network/Storage<ul><li>Rack and power Bare Metal hosts</li><li>Rack and power network equipment</ul></li>Software defined Data Center (SDDC) deploy/lifecycle<ul><li>VMware ESXi deploy, patch, and upgrade</li><li>VMware vCenter Servers deploy, patch, and upgrade</li><li>VMware NSX-T Data Centers deploy, patch, and upgrade</li><li>VMware vSAN deploy, patch, and upgrade</ul></li>SDDC Networking - VMware NSX-T Data Center provider config<ul><li>Microsoft Edge node/cluster, VMware NSX-T Data Center host preparation</li><li>Provider Tier-0 and Tenant Tier-1 Gateway</li><li>Connectivity from Tier-0 (using BGP) to Azure Network via Express Route</ul></li>SDDC Compute - VMware vCenter Server provider config<ul><li>Create default cluster</li><li>Configure virtual networking for vMotion, Management, vSAN, and others</ul></li>SDDC backup/restore<ul><li>Backup and restore VMware vCenter Server</li><li>Backup and restore VMware NSX-T Data Center NSX-T Manager</ul></li>SDDC health monitoring and corrective actions, for example: replace failed hosts</br><br>(optional) VMware HCX deploys with fully configured compute profile on cloud side as add-on</br><br>(optional) SRM deploys, upgrade, and scale up/down</br><br>Support - SDDC platforms and VMware HCX |
+| Customer | Request Azure VMware Solution host quote with Microsoft<br>Plan and create a request for SDDCs on Azure portal with:<ul><li>Host count</li><li>Management network range</li><li>Other information</ul></li>Configure SDDC network and security (VMware NSX-T Data Center)<ul><li>Network segments to host applications</li><li>Additional Tier -1 routers</li><li>Firewall</li><li>VMware NSX-T Data Center LB</li><li>IPsec VPN</li><li>NAT</li><li>Public IP addresses</li><li>Distributed firewall/gateway firewall</li><li>Network extension using VMware HCX or VMware NSX-T Data Center</li><li>AD/LDAP config for RBAC</ul></li>Configure SDDC - VMware vCenter Server<ul><li>AD/LDAP config for RBAC</li><li>Deploy and lifecycle management of Virtual Machines (VMs) and application<ul><li>Install operating systems</li><li>Patch operating systems</li><li>Install antivirus software</li><li>Install backup software</li><li>Install configuration management software</li><li>Install application components</li><li>VM networking using VMware NSX-T Data Center segments</ul></li><li>Migrate Virtual Machines (VMs)<ul><li>VMware HCX configuration</li><li>Live vMotion</li><li>Cold migration</li><li>Content library sync</ul></li></ul></li>Configure SDDC - vSAN<ul><li>Define and maintain vSAN VM policies</li><li>Add hosts to maintain adequate 'slack space'</ul></li>Configure VMware HCX<ul><li>Download and deploy HCA connector OVA in on-premises</li><li>Pairing on-premises VMware HCX connector</li><li>Configure the network profile, compute profile, and service mesh</li><li>Configure VMware HCX network extension/MON</li><li>Upgrade/updates</ul></li>Network configuration to connect to on-premises, VNET, or internet</br><br>Add or delete hosts requests to cluster from Portal</br><br>Deploy/lifecycle management of partner (third party) solutions |
| Partner ecosystem | Support for their product/solution. For reference, the following are some of the supported Azure VMware Solution partner solution/product:<ul><li>BCDR - SRM, JetStream, RiverMeadow, and others</li><li>Backup - Veeam, Commvault, Rubrik, and others</li><li>VDI - Horizon/Citrix</li><li>Security solutions - BitDefender, TrendMicro, Checkpoint</li><li>Other VMware products - vRA, vROps, AVI |
azure-vmware Protect Azure Vmware Solution With Application Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/protect-azure-vmware-solution-with-application-gateway.md
Now you'll configure Application Gateway with Azure VMware Solution VMs as backe
This procedure shows you how to define backend address pools using VMs running on an Azure VMware Solution private cloud on an existing application gateway. >[!NOTE]
->This procedure assumes you have multiple domains, so we'll use examples of www.contoso.com and www.fabrikam.com.
+>This procedure assumes you have multiple domains, so we'll use examples of www.contoso.com and www.contoso2.com.
-1. In your private cloud, create two different pools of VMs. One represents Contoso and the second Fabrikam.
+1. In your private cloud, create two different pools of VMs. One represents Contoso and the second represents Contoso2.
:::image type="content" source="media/application-gateway/app-gateway-multi-backend-pool.png" alt-text="Screenshot showing summary of a web server's details in VMware vSphere Client."lightbox="media/application-gateway/app-gateway-multi-backend-pool.png":::
azure-vmware Tutorial Access Private Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/tutorial-access-private-cloud.md
Title: Tutorial - Access your private cloud
description: Learn how to access an Azure VMware Solution private cloud Previously updated : 10/27/2022 Last updated : 4/6/2023
In this tutorial, you learn how to:
1. Once validation passes, select **Create** to start the virtual machine creation process.
-## Connect to the local vCenter of your private cloud
+## Connect to the vCenter Server of your private cloud
-1. From the jump box, sign in to vSphere Client with VMware vCenter Server SSO using a cloud admin username and verify that the user interface displays successfully.
+1. From the jump box, sign in to vSphere Client with VMware vCenter Server SSO using a cloudadmin username and verify that the user interface displays successfully.
1. In the Azure portal, select your private cloud, and then **Manage** > **Identity**.
In this tutorial, you learn how to:
1. In the second tab of the browser, sign in to NSX-T Manager.
+ :::image type="content" source="media/tutorial-access-private-cloud/ss9-nsx-manager-login.png" alt-text="Screenshot of the NSX-T Manager sign in page."lightbox="media/tutorial-access-private-cloud/ss9-nsx-manager-login.png" border="true":::
+ :::image type="content" source="media/tutorial-access-private-cloud/ss10-nsx-manager-home.png" alt-text="Screenshot of the NSX-T Manager Overview."lightbox="media/tutorial-access-private-cloud/ss10-nsx-manager-home.png" border="true"::: ## Next steps
azure-vmware Tutorial Expressroute Global Reach Private Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/tutorial-expressroute-global-reach-private-cloud.md
description: Learn how to create ExpressRoute Global Reach peering to a private
Previously updated : 10/27/2022 Last updated : 4/6/2023 # Tutorial: Peer on-premises environments to Azure VMware Solution
Now that you've created an authorization key for the private cloud ExpressRoute
## Verify on-premises network connectivity
-In your **on-premises edge router**, you should now see where the ExpressRoute connects the NSX-T network segments and the Azure VMware Solution management segments.
+In your **on-premises edge router**, you should now see where the ExpressRoute connects the NSX-T Data Center network segments and the Azure VMware Solution management segments.
>[!IMPORTANT] >Everyone has a different environment, and some will need to allow these routes to propagate back into the on-premises network.
azure-web-pubsub Howto Authorize From Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-authorize-from-application.md
This sample shows how to assign a `Web PubSub Service Owner` role to a service p
1. Click **Add > Add role assignment**.
-1. On the **Roles** tab, select `Web PubSub App Server`.
+1. On the **Roles** tab, select `Web PubSub Service Owner`.
1. Click **Next**.
To learn more about how to assign and manage Azure role assignments, see these a
- [Assign Azure roles using Azure CLI](../role-based-access-control/role-assignments-cli.md) - [Assign Azure roles using Azure Resource Manager templates](../role-based-access-control/role-assignments-template.md)
-## Sample codes
+## Use Postman to get the Azure AD token
+1. Launch Postman
+
+2. For the method, select **POST**.
+
+3. For the **URI**, enter `https://login.microsoftonline.com/<TENANT ID>/oauth2/token`. Replace `<TENANT ID>` with the **Directory (tenant) ID** value in the **Overview** tab of the application you created earlier.
+
+4. On the **Headers** tab, add **Content-Type** key and `application/x-www-form-urlencoded` for the value.
+
+![Screenshot of the basic info using postman to get the token.](./media/howto-authorize-from-application/get-azure-ad-token-using-postman.png)
+
+5. Switch to the **Body** tab, and add the following keys and values.
+ 1. Select **x-www-form-urlencoded**.
+ 2. Add `grant_type` key, and type `client_credentials` for the value.
+ 3. Add `client_id` key, and paste the value of **Application (client) ID** in the **Overview** tab of the application you created earlier.
+ 4. Add `client_secret` key, and paste the value of client secret you noted down earlier.
+ 5. Add `resource` key, and type `https://webpubsub.azure.com` for the value.
+
+![Screenshot of the body parameters when using postman to get the token.](./media/howto-authorize-from-application/get-azure-ad-token-using-postman-body.png)
+
+6. Select **Send** to send the request to get the token. You see the token in the `access_token` field.
+
+![Screenshot of the response token when using postman to get the token.](./media/howto-authorize-from-application/get-azure-ad-token-using-postman-response.png)
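If you prefer a script over Postman, the following is a rough equivalent of the same token request in Node.js 18 or later (which includes a built-in `fetch`). The placeholder tenant ID, client ID, and client secret are values you substitute from your own app registration:

```javascript
// Request an Azure AD token for Web PubSub using the client credentials grant.
const body = new URLSearchParams({
  grant_type: 'client_credentials',
  client_id: '<APPLICATION (CLIENT) ID>',
  client_secret: '<CLIENT SECRET>',
  resource: 'https://webpubsub.azure.com'
});

fetch('https://login.microsoftonline.com/<TENANT ID>/oauth2/token', {
  method: 'POST',
  headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
  body
})
  .then(response => response.json())
  // The token is returned in the access_token field, as in the Postman response.
  .then(token => console.log(token.access_token));
```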
+
+## Sample code using Azure AD auth
We officially support 4 programming languages:
See the following related articles: - [Overview of Azure AD for Web PubSub](concept-azure-ad-authorization.md)-- [Authorize request to Web PubSub resources with Azure AD from managed identities](howto-authorize-from-managed-identity.md)
+- [Authorize request to Web PubSub resources with Azure AD from managed identities](howto-authorize-from-managed-identity.md)
azure-web-pubsub Reference Functions Bindings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/reference-functions-bindings.md
Previously updated : 07/04/2022 Last updated : 04/04/2023 # Azure Web PubSub trigger and bindings for Azure Functions
Working with the trigger and bindings requires you reference the appropriate pac
> Install the client library from [NuGet](https://www.nuget.org/) with specified package and version. > > ```bash
-> func extensions install --package Microsoft.Azure.WebJobs.Extensions.WebPubSub --version 1.0.0
+> func extensions install --package Microsoft.Azure.WebJobs.Extensions.WebPubSub
> ``` [NuGet package]: https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.WebPubSub
Use the function trigger to handle requests from Azure Web PubSub service.
```cs [FunctionName("WebPubSubTrigger")] public static void Run(
- [WebPubSubTrigger("<hub>", WebPubSubEventType.User, "message")]
- UserEventRequest request,
- WebPubSubConnectionContext context,
- string data,
- WebPubSubDataType dataType)
+ [WebPubSubTrigger("<hub>", WebPubSubEventType.User, "message")] UserEventRequest request)
{
- Console.WriteLine($"Request from: {context.UserId}");
- Console.WriteLine($"Request message data: {data}");
- Console.WriteLine($"Request message dataType: {dataType}");
+ Console.WriteLine($"Request from: {request.ConnectionContext.UserId}");
+ Console.WriteLine($"Request message data: {request.Data}");
+ Console.WriteLine($"Request message dataType: {request.DataType}");
} ``` The `WebPubSubTrigger` binding also supports return values in synchronous scenarios, for example, the system `Connect` event and user events, where the server can check and deny the client request, or send messages to the caller directly. The `Connect` event respects `ConnectEventResponse` and `EventErrorResponse`, and user events respect `UserEventResponse` and `EventErrorResponse`; return types that don't match the current scenario are ignored. If `EventErrorResponse` is returned, the service drops the client connection. ```cs
-[FunctionName("WebPubSubTriggerReturnValue")]
-public static MessageResponse Run(
- [WebPubSubTrigger("<hub>", WebPubSubEventType.User, "message")]
- UserEventRequest request,
- ConnectionContext context,
- string data,
- WebPubSubDataType dataType)
+[FunctionName("WebPubSubTriggerReturnValueFunction")]
+public static UserEventResponse Run(
+ [WebPubSubTrigger("hub", WebPubSubEventType.User, "message")] UserEventRequest request)
{
- return new UserEventResponse
- {
- Data = BinaryData.FromString("ack"),
- DataType = WebPubSubDataType.Text
- };
+ return request.CreateResponse(BinaryData.FromString("ack"), WebPubSubDataType.Text);
} ```
Here's an `WebPubSubTrigger` attribute in a method signature:
```csharp [FunctionName("WebPubSubTrigger")] public static void Run([WebPubSubTrigger("<hub>", <WebPubSubEventType>, "<event-name>")]
-WebPubSubConnectionContext context, ILogger log)
+ WebPubSubConnectionContext context, ILogger log)
{ ... }
In weakly typed language like JavaScript, `name` in `function.json` will be used
|clientCertificates|`IList<ClientCertificate>`|A list of certificate thumbprint from clients in system `connect` request|-| |reason|`string`|Reason in system `disconnected` request|-|
+> [!IMPORTANT]
+> In C#, a parameter that supports multiple types, that is, `request` or `data` declared as a type other than the default `BinaryData`, __MUST__ be placed first in the parameter list for the function binding to work correctly.
### Return response `WebPubSubTrigger` respects the customer-returned response for synchronous events, that is, the `connect` event and user events. Only a matching response is sent back to the service; otherwise, it's ignored. In addition, the `WebPubSubTrigger` return object lets users call `SetState()` and `ClearStates()` to manage the metadata for the connection. The extension merges the results from the return value with the original ones from the request's `WebPubSubConnectionContext.States`. A value in an existing key is overwritten, and a value in a new key is added.
azure-web-pubsub Reference Server Sdk Js https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/reference-server-sdk-js.md
await serviceClient.sendToAll({ message: "Hello world!" }, { onResponse });
You can set the following environment variable to get the debug logs when using this library. -- Getting debug logs from the SignalR client library
+- Getting debug logs from the Azure Web PubSub client library
```bash export AZURE_LOG_LEVEL=verbose
app.listen(3000, () =>
You can set the following environment variable to get the debug logs when using this library. -- Getting debug logs from the SignalR client library
+- Getting debug logs from the Azure Web PubSub client library
```bash export AZURE_LOG_LEVEL=verbose
backup Archive Tier Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/archive-tier-support.md
Title: Azure Backup - Archive tier overview description: Learn about Archive tier support for Azure Backup. Previously updated : 03/17/2023 Last updated : 04/06/2023
To ensure that all recovery points are moved to Archive tier,
If the list of recovery points is blank, then all the eligible/recommended recovery points are moved to the vault Archive tier.
+### Can I use 'File Recovery' option to restore specific files in Azure VM backup for archived recovery points?
+
+No. Currently, the **File Recovery** option doesn't support restoring specific files from an archived recovery point of an Azure VM backup.
+ ## Next steps - [Use Archive tier](use-archive-tier-support.md)
backup Backup Azure Arm Restore Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-arm-restore-vms.md
Title: Restore VMs by using the Azure portal
description: Restore an Azure virtual machine from a recovery point by using the Azure portal, including the Cross Region Restore feature. Previously updated : 02/14/2023 Last updated : 04/06/2023
To begin using the feature, read the [Before You Begin section](./backup-create-
To see if CRR is enabled, follow the instructions in [Configure Cross Region Restore](backup-create-rs-vault.md#set-cross-region-restore).
+>[!Note]
+>Cross-region restore is currently not supported for machines running on Ultra disks. [Learn more about Ultra disk backup supportability](backup-support-matrix-iaas.md#ultra-disk-backup).
+ ### View backup items in secondary region If CRR is enabled, you can view the backup items in the secondary region.
backup Backup Create Recovery Services Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-create-recovery-services-vault.md
Title: Create and configure Recovery Services vaults description: Learn how to create and configure Recovery Services vaults, and how to restore in a secondary region by using Cross Region Restore. Previously updated : 12/14/2022 Last updated : 04/06/2023
Before you begin, consider the following information:
A vault created with GRS redundancy includes the option to configure the Cross Region Restore feature. Every GRS vault has a banner that links to the documentation.
+>[!Note]
+>Cross-region restore is currently not supported for machines running on Ultra disks. [Learn more about Ultra disk backup supportability](backup-support-matrix-iaas.md#ultra-disk-backup).
+ ![Screenshot that shows the banner about backup configuration.](./media/backup-azure-arm-restore-vms/banner.png) To configure Cross Region Restore for the vault:
backup Backup Reports Email https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-reports-email.md
Title: Email Azure Backup Reports description: Create automated tasks to receive periodic reports via email Previously updated : 04/06/2022 Last updated : 04/06/2023
To perform the authorization, follow the steps below:
6. To test whether the logic app works after authorization, you can go back to the logic app, open **Overview** and select **Run Trigger** in the top pane, to test whether an email is being generated successfully.
+>[!Note]
+>The *sender* account associated with the email is the same as the account that is used to authorize the Office 365 connection during configuration of the email report. To change the sender, you need to use a different account to authorize the connection.
+ ## Contents of the email * All the charts and graphs shown in the portal are available as inline content in the email. [Learn more](configure-reports.md) about the information shown in Backup Reports.
backup Backup Support Matrix Iaas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-support-matrix-iaas.md
Title: Support matrix for Azure VM backups description: Get a summary of support settings and limitations for backing up Azure VMs by using the Azure Backup service. Previously updated : 04/05/2023 Last updated : 04/06/2023
Adding a disk to a protected VM | Supported.
Resizing a disk on a protected VM | Supported. Shared storage| Backing up VMs by using Cluster Shared Volumes (CSV) or Scale-Out File Server isn't supported. CSV writers are likely to fail during backup. On restore, disks that contain CSV volumes might not come up. [Shared disks](../virtual-machines/disks-shared-enable.md) | Not supported.
-<a name="ultra-disk-backup">Ultra SSD disks</a> | Supported with [Enhanced policy](backup-azure-vms-enhanced-policy.md). The support is currently in preview. <br><br> To enroll your subscription for this feature, [fill this form](https://forms.office.com/r/1GLRnNCntU).
+<a name="ultra-disk-backup">Ultra SSD disks</a> | Supported with [Enhanced policy](backup-azure-vms-enhanced-policy.md). The support is currently in preview. <br><br> To enroll your subscription for this feature, [fill this form](https://forms.office.com/r/1GLRnNCntU). <br><br> - Configuration of Ultra disk protection is supported via Recovery Services vault only. This configuration is currently not supported via virtual machine blade. <br><br> - Cross-region restore is currently not supported for machines using Ultra disks.
[Temporary disks](../virtual-machines/managed-disks-overview.md#temporary-disk) | Azure Backup doesn't back up temporary disks. NVMe/[ephemeral disks](../virtual-machines/ephemeral-os-disks.md) | Not supported. [Resilient File System (ReFS)](/windows-server/storage/refs/refs-overview) restore | Supported. Volume Shadow Copy Service (VSS) supports app-consistent backups on ReFS.
batch Batch Automatic Scaling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-automatic-scaling.md
Title: Automatically scale compute nodes in an Azure Batch pool description: Enable automatic scaling on a cloud pool to dynamically adjust the number of compute nodes in the pool. Previously updated : 12/13/2021 Last updated : 04/06/2023
Azure Batch can automatically scale pools based on parameters that you define, saving you time and money. With automatic scaling, Batch dynamically adds nodes to a pool as task demands increase, and removes compute nodes as task demands decrease.
-To enable automatic scaling on a pool of compute nodes, you associate the pool with an *autoscale formula* that you define. The Batch service uses the autoscale formula to determine how many nodes are needed to execute your workload. These nodes may be dedicated nodes or [Azure Spot nodes](batch-spot-vms.md). Batch will then periodically review service metrics data and use it to adjust the number of nodes in the pool based on your formula and at an interval that you define.
+To enable automatic scaling on a pool of compute nodes, you associate the pool with an *autoscale formula* that you define. The Batch service uses the autoscale formula to determine how many nodes are needed to execute your workload. These nodes may be dedicated nodes or [Azure Spot nodes](batch-spot-vms.md). Batch periodically reviews service metrics data and uses it to adjust the number of nodes in the pool based on your formula and at an interval that you define.
You can enable automatic scaling when you create a pool, or apply it to an existing pool. Batch enables you to evaluate your formulas before assigning them to pools and to monitor the status of automatic scaling runs. Once you configure a pool with automatic scaling, you can make changes to the formula later.
$NodeDeallocationOption = taskcompletion;
#### Preempted nodes
-This example creates a pool that starts with 25 Spot nodes. Every time a Spot node is preempted, it is replaced with a dedicated node. As with the first example, the `maxNumberofVMs` variable prevents the pool from exceeding 25 VMs. This example is useful for taking advantage of Spot VMs while also ensuring that only a fixed number of preemptions will occur for the lifetime of the pool.
+This example creates a pool that starts with 25 Spot nodes. Every time a Spot node is preempted, it's replaced with a dedicated node. As with the first example, the `maxNumberofVMs` variable prevents the pool from exceeding 25 VMs. This example is useful for taking advantage of Spot VMs while also ensuring that only a fixed number of preemptions occur for the lifetime of the pool.
``` maxNumberofVMs = 25;
$TargetLowPriorityNodes = min(maxNumberofVMs , maxNumberofVMs - $TargetDedicated
$NodeDeallocationOption = taskcompletion; ```
-You'll learn more about [how to create autoscale formulas](#write-an-autoscale-formula) and see additional [example autoscale formulas](#example-autoscale-formulas) later in this topic.
+You'll learn more about [how to create autoscale formulas](#write-an-autoscale-formula) and see more [example autoscale formulas](#example-autoscale-formulas) later in this topic.
## Variables
User-defined variables are variables that you define. In the example formula sho
> [!NOTE] > Service-defined variables are always preceded by a dollar sign ($). For user-defined variables, the dollar sign is optional.
-The following tables show the read-write and read-only variables that are defined by the Batch service.
+The following tables show the read-write and read-only variables defined by the Batch service.
### Read-write service-defined variables
You can get and set the values of these service-defined variables to manage the
| | | | $TargetDedicatedNodes |The target number of dedicated compute nodes for the pool. This is specified as a target because a pool may not always achieve the desired number of nodes. For example, if the target number of dedicated nodes is modified by an autoscale evaluation before the pool has reached the initial target, the pool may not reach the target. <br /><br /> A pool in an account created in Batch service mode may not achieve its target if the target exceeds a Batch account node or core quota. A pool in an account created in user subscription mode may not achieve its target if the target exceeds the shared core quota for the subscription.| | $TargetLowPriorityNodes |The target number of Spot compute nodes for the pool. This specified as a target because a pool may not always achieve the desired number of nodes. For example, if the target number of Spot nodes is modified by an autoscale evaluation before the pool has reached the initial target, the pool may not reach the target. A pool may also not achieve its target if the target exceeds a Batch account node or core quota. <br /><br /> For more information on Spot compute nodes, see [Use Spot VMs with Batch](batch-spot-vms.md). |
-| $NodeDeallocationOption |The action that occurs when compute nodes are removed from a pool. Possible values are:<ul><li>**requeue**: The default value. Ends tasks immediately and puts them back on the job queue so that they are rescheduled. This action ensures the target number of nodes is reached as quickly as possible. However, it may be less efficient, as any running tasks will be interrupted and will then have to be completely restarted. <li>**terminate**: Ends tasks immediately and removes them from the job queue.<li>**taskcompletion**: Waits for currently running tasks to finish and then removes the node from the pool. Use this option to avoid tasks being interrupted and requeued, wasting any work the task has done.<li>**retaineddata**: Waits for all the local task-retained data on the node to be cleaned up before removing the node from the pool.</ul> |
+| $NodeDeallocationOption |The action that occurs when compute nodes are removed from a pool. Possible values are:<ul><li>**requeue**: The default value. Ends tasks immediately and puts them back on the job queue so that they're rescheduled. This action ensures the target number of nodes is reached as quickly as possible. However, it may be less efficient, as any running tasks are interrupted and restarted. <li>**terminate**: Ends tasks immediately and removes them from the job queue.<li>**taskcompletion**: Waits for currently running tasks to finish and then removes the node from the pool. Use this option to avoid tasks being interrupted and requeued, wasting any work the task has done.<li>**retaineddata**: Waits for all the local task-retained data on the node to be cleaned up before removing the node from the pool.</ul> |
> [!NOTE] > The `$TargetDedicatedNodes` variable can also be specified using the alias `$TargetDedicated`. Similarly, the `$TargetLowPriorityNodes` variable can be specified using the alias `$TargetLowPriority`. If both the fully named variable and its alias are set by the formula, the value assigned to the fully named variable will take precedence.
You can get and set the values of these service-defined variables to manage the
You can get the value of these service-defined variables to make adjustments that are based on metrics from the Batch service. > [!IMPORTANT]
-> Job release tasks are not currently included in variables that provide task counts, such as $ActiveTasks and $PendingTasks. Depending on your autoscale formula, this can result in nodes being removed with no nodes available to run job release tasks.
+> Job release tasks aren't currently included in variables that provide task counts, such as $ActiveTasks and $PendingTasks. Depending on your autoscale formula, this can result in nodes being removed with no nodes available to run job release tasks.
> [!TIP] > These read-only service-defined variables are *objects* that provide various methods to access data associated with each. For more information, see [Obtain sample data](#obtain-sample-data) later in this article.
You can get the value of these service-defined variables to make adjustments tha
| Variable | Description | | | | | $CPUPercent |The average percentage of CPU usage. |
-| $WallClockSeconds |The number of seconds consumed. |
-| $MemoryBytes |The average number of megabytes used. |
-| $DiskBytes |The average number of gigabytes used on the local disks. |
-| $DiskReadBytes |The number of bytes read. |
-| $DiskWriteBytes |The number of bytes written. |
-| $DiskReadOps |The count of read disk operations performed. |
-| $DiskWriteOps |The count of write disk operations performed. |
-| $NetworkInBytes |The number of inbound bytes. |
-| $NetworkOutBytes |The number of outbound bytes. |
-| $SampleNodeCount |The count of compute nodes. |
-| $ActiveTasks |The number of tasks that are ready to execute but are not yet executing. This includes all tasks that are in the active state and whose dependencies have been satisfied. Any tasks that are in the active state but whose dependencies have not been satisfied are excluded from the $ActiveTasks count. For a multi-instance task, $ActiveTasks will include the number of instances set on the task.|
+| $WallClockSeconds |The number of seconds consumed. Retiring after 2024-Mar-31. |
+| $MemoryBytes |The average number of megabytes used. Retiring after 2024-Mar-31. |
+| $DiskBytes |The average number of gigabytes used on the local disks. Retiring after 2024-Mar-31. |
+| $DiskReadBytes |The number of bytes read. Retiring after 2024-Mar-31. |
+| $DiskWriteBytes |The number of bytes written. Retiring after 2024-Mar-31. |
+| $DiskReadOps |The count of read disk operations performed. Retiring after 2024-Mar-31. |
+| $DiskWriteOps |The count of write disk operations performed. Retiring after 2024-Mar-31. |
+| $NetworkInBytes |The number of inbound bytes. Retiring after 2024-Mar-31. |
+| $NetworkOutBytes |The number of outbound bytes. Retiring after 2024-Mar-31. |
+| $SampleNodeCount |The count of compute nodes. Retiring after 2024-Mar-31. |
+| $ActiveTasks |The number of tasks that are ready to execute but aren't yet executing. This includes all tasks that are in the active state and whose dependencies have been satisfied. Any tasks that are in the active state but whose dependencies haven't been satisfied are excluded from the $ActiveTasks count. For a multi-instance task, $ActiveTasks includes the number of instances set on the task.|
| $RunningTasks |The number of tasks in a running state. | | $PendingTasks |The sum of $ActiveTasks and $RunningTasks. | | $SucceededTasks |The number of tasks that finished successfully. |
You can get the value of these service-defined variables to make adjustments tha
| $CurrentLowPriorityNodes |The current number of Spot compute nodes, including any nodes that have been preempted. | | $PreemptedNodeCount | The number of nodes in the pool that are in a preempted state. |
+> [!WARNING]
+> Select service-defined variables will be retired after **31 March 2024** as noted in the table above. After the retirement
+> date, these service-defined variables will no longer be populated with sample data. Please discontinue use of these variables
+> before this date.
+ > [!WARNING] > `$PreemptedNodeCount` is currently not available and will return `0` valued data.
These operations are allowed on the types that are listed in the previous sectio
| timeinterval *operator* timeinterval |<, <=, ==, >=, >, != |double | | double *operator* double |&&, &#124;&#124; |double |
-When testing a double with a ternary operator (`double ? statement1 : statement2`), nonzero is **true**, and zero is **false**.
+When you test a double with a ternary operator (`double ? statement1 : statement2`), nonzero evaluates as **true** and zero evaluates as **false**.
## Functions
The core operation of an autoscale formula is to obtain task and resource metric
### Methods
-Autoscale formulas act on samples of metric data provided by the Batch service. A formula will grow or shrink the pool size based on the values that it obtains. Service-defined variables are objects that provide methods to access data that is associated with that object. For example, the following expression shows a request to get the last five minutes of CPU usage:
+Autoscale formulas act on samples of metric data provided by the Batch service. A formula grows or shrinks the number of compute nodes in the pool based on the values that it obtains. Service-defined variables are objects that provide methods to access data associated with that object. For example, the following expression shows a request to get the last five minutes of CPU usage:
``` $CPUPercent.GetSample(TimeInterval_Minute * 5)
The following methods may be used to obtain sample data about service-defined va
| Method | Description | | | |
-| GetSample() |The `GetSample()` method returns a vector of data samples.<br/><br/>A sample is 30 seconds worth of metrics data. In other words, samples are obtained every 30 seconds. But as noted below, there is a delay between when a sample is collected and when it is available to a formula. As such, not all samples for a given time period may be available for evaluation by a formula.<ul><li>`doubleVec GetSample(double count)`: Specifies the number of samples to obtain from the most recent samples that were collected. `GetSample(1)` returns the last available sample. For metrics like `$CPUPercent`, however, `GetSample(1)` shouldn't be used, because it's impossible to know *when* the sample was collected. It could be recent, or, because of system issues, it might be much older. In such cases, it's better to use a time interval as shown below.<li>`doubleVec GetSample((timestamp or timeinterval) startTime [, double samplePercent])`: Specifies a time frame for gathering sample data. Optionally, it also specifies the percentage of samples that must be available in the requested time frame. For example, `$CPUPercent.GetSample(TimeInterval_Minute * 10)` would return 20 samples if all samples for the last 10 minutes are present in the `CPUPercent` history. If the last minute of history wasn't available, only 18 samples would be returned. In this case `$CPUPercent.GetSample(TimeInterval_Minute * 10, 95)` would fail because only 90 percent of the samples are available, but `$CPUPercent.GetSample(TimeInterval_Minute * 10, 80)` would succeed.<li>`doubleVec GetSample((timestamp or timeinterval) startTime, (timestamp or timeinterval) endTime [, double samplePercent])`: Specifies a time frame for gathering data, with both a start time and an end time. As mentioned above, there is a delay between when a sample is collected and when it becomes available to a formula. Consider this delay when you use the `GetSample` method. See `GetSamplePercent` below. |
+| GetSample() |The `GetSample()` method returns a vector of data samples.<br/><br/>A sample is 30 seconds worth of metrics data. In other words, samples are obtained every 30 seconds. But as noted below, there's a delay between when a sample is collected and when it's available to a formula. As such, not all samples for a given time period may be available for evaluation by a formula.<ul><li>`doubleVec GetSample(double count)`: Specifies the number of samples to obtain from the most recent samples that were collected. `GetSample(1)` returns the last available sample. For metrics like `$CPUPercent`, however, `GetSample(1)` shouldn't be used, because it's impossible to know *when* the sample was collected. It could be recent, or, because of system issues, it might be much older. In such cases, it's better to use a time interval as shown below.<li>`doubleVec GetSample((timestamp or timeinterval) startTime [, double samplePercent])`: Specifies a time frame for gathering sample data. Optionally, it also specifies the percentage of samples that must be available in the requested time frame. For example, `$CPUPercent.GetSample(TimeInterval_Minute * 10)` would return 20 samples if all samples for the last 10 minutes are present in the `CPUPercent` history. If the last minute of history wasn't available, only 18 samples would be returned. In this case `$CPUPercent.GetSample(TimeInterval_Minute * 10, 95)` would fail because only 90 percent of the samples are available, but `$CPUPercent.GetSample(TimeInterval_Minute * 10, 80)` would succeed.<li>`doubleVec GetSample((timestamp or timeinterval) startTime, (timestamp or timeinterval) endTime [, double samplePercent])`: Specifies a time frame for gathering data, with both a start time and an end time. As mentioned above, there's a delay between when a sample is collected and when it becomes available to a formula. Consider this delay when you use the `GetSample` method. See `GetSamplePercent` below. |
| GetSamplePeriod() |Returns the period of samples that were taken in a historical sample data set. | | Count() |Returns the total number of samples in the metric history. | | HistoryBeginTime() |Returns the time stamp of the oldest available data sample for the metric. |
The following methods may be used to obtain sample data about service-defined va
### Samples
-The Batch service periodically takes samples of task and resource metrics and makes them available to your autoscale formulas. These samples are recorded every 30 seconds by the Batch service. However, there is typically a delay between when those samples were recorded and when they are made available to (and can be read by) your autoscale formulas. Additionally, samples may not be recorded for a particular interval because of factors such as network or other infrastructure issues.
+The Batch service periodically takes samples of task and resource metrics and makes them available to your autoscale formulas. These samples are recorded every 30 seconds by the Batch service. However, there's typically a delay between when those samples were recorded and when they're made available to (and read by) your autoscale formulas. Additionally, samples may not be recorded for a particular interval because of factors such as network or other infrastructure issues.
### Sample percentage
-When `samplePercent` is passed to the `GetSample()` method or the `GetSamplePercent()` method is called, _percent_ refers to a comparison between the total possible number of samples that are recorded by the Batch service and the number of samples that are available to your autoscale formula.
+When `samplePercent` is passed to the `GetSample()` method or the `GetSamplePercent()` method is called, _percent_ refers to a comparison between the total possible number of samples recorded by the Batch service and the number of samples that are available to your autoscale formula.
Let's look at a 10-minute timespan as an example. Because samples are recorded every 30 seconds within that 10-minute timespan, the maximum total number of samples recorded by Batch would be 20 samples (2 per minute). However, due to the inherent latency of the reporting mechanism and other issues within Azure, there may be only 15 samples that are available to your autoscale formula for reading. So, for example, for that 10-minute period, only 75% of the total number of samples recorded may be available to your formula. ### GetSample() and sample ranges
-Your autoscale formulas will grow and shrink your pools by adding or removing nodes. Because nodes cost you money, be sure that your formulas use an intelligent method of analysis that is based on sufficient data. We recommend that you use a trending-type analysis in your formulas. This type grows and shrinks your pools based on a range of collected samples.
+Your autoscale formulas grow or shrink your pools by adding or removing nodes. Because nodes cost you money, be sure that your formulas use an intelligent method of analysis that is based on sufficient data. We recommend that you use a trending-type analysis in your formulas. This type grows and shrinks your pools based on a range of collected samples.
To do so, use `GetSample(interval look-back start, interval look-back end)` to return a vector of samples:
To do so, use `GetSample(interval look-back start, interval look-back end)` to r
$runningTasksSample = $RunningTasks.GetSample(1 * TimeInterval_Minute, 6 * TimeInterval_Minute); ```
-When the above line is evaluated by Batch, it returns a range of samples as a vector of values. For example:
+When Batch evaluates the above line, it returns a range of samples as a vector of values. For example:
``` $runningTasksSample=[1,1,1,1,1,1,1,1,1,1];
$runningTasksSample=[1,1,1,1,1,1,1,1,1,1];
Once you've collected the vector of samples, you can then use functions like `min()`, `max()`, and `avg()` to derive meaningful values from the collected range.
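For example, a formula can take `max()` of the collected vector and feed the result into a node target. If you want to check what such a formula would produce before applying it, the Batch Python SDK exposes an `evaluate_auto_scale` operation. The following sketch is a minimal example, assuming the `azure-batch` package is installed; the account name, key, endpoint, and pool ID are placeholders, and the pool must already have autoscaling enabled for evaluation to run.

```python
from azure.batch import BatchServiceClient
from azure.batch.batch_auth import SharedKeyCredentials

# Placeholder credentials and endpoint; replace with your own Batch account values.
credentials = SharedKeyCredentials("<account-name>", "<account-key>")
batch_client = BatchServiceClient(
    credentials, batch_url="https://<account-name>.<region>.batch.azure.com"
)

# Derive a single target from the collected range instead of relying on a single sample.
formula = """
$runningTasksSample = $RunningTasks.GetSample(1 * TimeInterval_Minute, 6 * TimeInterval_Minute);
$TargetDedicatedNodes = min(400, max($runningTasksSample));
"""

# evaluate_auto_scale reports the values the formula would produce without resizing the pool.
result = batch_client.pool.evaluate_auto_scale(pool_id="<pool-id>", auto_scale_formula=formula)
print(result.results)
```

Evaluating a formula this way is a convenient sanity check before you commit it to the pool with an enable-autoscale request.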
-For additional security, you can force a formula evaluation to fail if less than a certain sample percentage is available for a particular time period. When you force a formula evaluation to fail, you instruct Batch to cease further evaluation of the formula if the specified percentage of samples is not available. In this case, no change is made to the pool size. To specify a required percentage of samples for the evaluation to succeed, specify it as the third parameter to `GetSample()`. Here, a requirement of 75 percent of samples is specified:
+To exercise extra caution, you can force a formula evaluation to fail if less than a certain sample percentage is available for a particular time period. When you force a formula evaluation to fail, you instruct Batch to cease further evaluation of the formula if the specified percentage of samples isn't available. In this case, no change is made to the pool size. To specify a required percentage of samples for the evaluation to succeed, specify it as the third parameter to `GetSample()`. Here, a requirement of 75 percent of samples is specified:
``` $runningTasksSample = $RunningTasks.GetSample(60 * TimeInterval_Second, 120 * TimeInterval_Second, 75);
$runningTasksSample = $RunningTasks.GetSample(60 * TimeInterval_Second, 120 * Ti
Because there may be a delay in sample availability, you should always specify a time range with a look-back start time that is older than one minute. It takes approximately one minute for samples to propagate through the system, so samples in the range `(0 * TimeInterval_Second, 60 * TimeInterval_Second)` may not be available. Again, you can use the percentage parameter of `GetSample()` to force a particular sample percentage requirement. > [!IMPORTANT]
-> We strongly recommend that you **avoid relying *only* on `GetSample(1)` in your autoscale formulas**. This is because `GetSample(1)` essentially says to the Batch service, "Give me the last sample you have, no matter how long ago you retrieved it." Since it is only a single sample, and it may be an older sample, it may not be representative of the larger picture of recent task or resource state. If you do use `GetSample(1)`, make sure that it's part of a larger statement and not the only data point that your formula relies on.
+> We strongly recommend that you **avoid relying *only* on `GetSample(1)` in your autoscale formulas**. This is because `GetSample(1)` essentially says to the Batch service, "Give me the last sample you have, no matter how long ago you retrieved it." Since it's only a single sample, and it may be an older sample, it may not be representative of the larger picture of recent task or resource state. If you do use `GetSample(1)`, make sure that it's part of a larger statement and not the only data point that your formula relies on.
## Write an autoscale formula
First, let's define the requirements for our new autoscale formula. The formula
- Always restrict the maximum number of dedicated nodes to 400. - When reducing the number of nodes, don't remove nodes that are running tasks; if necessary, wait until tasks have finished before removing nodes.
-The first statement in our formula will increase the number of nodes during high CPU usage. We'll define a statement that populates a user-defined variable (`$totalDedicatedNodes`) with a value that is 110 percent of the current target number of dedicated nodes, but only if the minimum average CPU usage during the last 10 minutes was above 70 percent. Otherwise, it uses the value for the current number of dedicated nodes.
+The first statement in our formula increases the number of nodes during high CPU usage. We define a statement that populates a user-defined variable (`$totalDedicatedNodes`) with a value that is 110 percent of the current target number of dedicated nodes, but only if the minimum average CPU usage during the last 10 minutes was above 70 percent. Otherwise, it uses the value for the current number of dedicated nodes.
``` $totalDedicatedNodes =
$totalDedicatedNodes =
($CurrentDedicatedNodes * 0.9) : $totalDedicatedNodes; ```
-Now, we'll limit the target number of dedicated compute nodes to a maximum of 400.
+Now, we limit the target number of dedicated compute nodes to a maximum of 400.
``` $TargetDedicatedNodes = min(400, $totalDedicatedNodes) ```
-Finally, we'll ensure that nodes aren't removed until their tasks are finished.
+Finally, we ensure that nodes aren't removed until their tasks are finished.
``` $NodeDeallocationOption = taskcompletion;
When you enable autoscaling on an existing pool, keep in mind:
- If autoscaling is currently disabled on the pool, you must specify a valid autoscale formula when you issue the request. You can optionally specify an automatic scaling interval. If you don't specify an interval, the default value of 15 minutes is used. - If autoscaling is currently enabled on the pool, you can specify a new formula, a new interval, or both. You must specify at least one of these properties. - If you specify a new automatic scaling interval, the existing schedule is stopped and a new schedule is started. The new schedule's start time is the time at which the request to enable autoscaling was issued.
- - If you omit either the autoscale formula or interval, the Batch service will continue to use the current value of that setting.
+ - If you omit either the autoscale formula or interval, the Batch service continues to use the current value of that setting.
> [!NOTE] > If you specified values for the *targetDedicatedNodes* or *targetLowPriorityNodes* parameters of the **CreatePool** method when you created the pool in .NET, or for the comparable parameters in another language, then those values are ignored when the autoscale formula is evaluated.
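As a concrete illustration of such a request, the following sketch enables autoscaling on an existing pool with a formula and a 15-minute evaluation interval, using the Batch Python SDK's `enable_auto_scale` operation. It assumes an existing authenticated `BatchServiceClient` named `batch_client` (as in the earlier sketch); the pool ID is a placeholder, and the formula is a simplified version of the one built in the previous section (the scale-down statement is omitted for brevity).

```python
from datetime import timedelta

# Scale up on sustained high CPU, cap at 400 dedicated nodes, and wait for
# running tasks to finish before removing nodes.
formula = """
$totalDedicatedNodes =
    (min($CPUPercent.GetSample(TimeInterval_Minute * 10)) > 0.7) ?
    ($CurrentDedicatedNodes * 1.1) : $CurrentDedicatedNodes;
$TargetDedicatedNodes = min(400, $totalDedicatedNodes);
$NodeDeallocationOption = taskcompletion;
"""

# Applying a new formula or interval replaces the pool's current autoscale settings.
batch_client.pool.enable_auto_scale(
    pool_id="<pool-id>",
    auto_scale_formula=formula,
    auto_scale_evaluation_interval=timedelta(minutes=15),
)
```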
AutoScaleRun.Results:
## Get information about autoscale runs
-To ensure that your formula is performing as expected, we recommend that you periodically check the results of the autoscaling runs that Batch performs on your pool. To do so, get (or refresh) a reference to the pool, then examine the properties of its last autoscale run.
+We recommend that you periodically check how the Batch service evaluates your autoscale formula. To do so, get
+(or refresh) a reference to the pool, then examine the properties of its last autoscale run.
In Batch .NET, the [CloudPool.AutoScaleRun](/dotnet/api/microsoft.azure.batch.cloudpool.autoscalerun) property has several properties that provide information about the latest automatic scaling run performed on the pool:
Error:
``` ## Get autoscale run history using pool autoscale events
-You can also check automatic scaling history by querying [PoolAutoScaleEvent](batch-pool-autoscale-event.md). This event is emitted by Batch Service to record each occurrence of autoscale formula evaluation and execution, which can be helpful to troubleshoot potential issues.
+You can also check automatic scaling history by querying [PoolAutoScaleEvent](batch-pool-autoscale-event.md). Batch emits this event to record each occurrence of autoscale formula evaluation and execution, which can be helpful to troubleshoot potential issues.
Sample event for PoolAutoScaleEvent: ```json
This example shows a C# example with an autoscale formula that sets the pool siz
Specifically, this formula does the following: - Sets the initial pool size to four nodes.-- Does not adjust the pool size within the first 10 minutes of the pool's lifecycle.
+- Doesn't adjust the pool size within the first 10 minutes of the pool's lifecycle.
- After 10 minutes, obtains the max value of the number of running and active tasks within the past 60 minutes. - If both values are 0 (indicating that no tasks were running or active in the last 60 minutes), the pool size is set to 0. - If either value is greater than zero, no change is made.
batch Batch Custom Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-custom-images.md
Title: Use a managed image to create a custom image pool description: Create a Batch custom image pool from a managed image to provision compute nodes with the software and data for your application. Previously updated : 11/18/2020 Last updated : 04/06/2023 ms.devlang: csharp
ms.devlang: csharp
To create a custom image pool for your Batch pool's virtual machines (VMs), you can use a managed image to create an [Azure Compute Gallery image](batch-sig-images.md). Using just a managed image is also supported, but only for API versions up to and including 2019-08-01.
-> [!IMPORTANT]
-> In most cases, you should create custom images using the Azure Compute Gallery. By using the Azure Compute Gallery, you can provision pools faster, scale larger quantities of VMs, and have improved reliability when provisioning VMs. To learn more, see [Use the Azure Compute Gallery to create a custom pool](batch-sig-images.md).
+> [!WARNING]
+> Support for creating a Batch pool using a managed image will be retired after **31 March 2026**. Please migrate to
+> hosting your custom images in an Azure Compute Gallery, and use those images to create a [custom image pool in Batch](batch-sig-images.md).
+> For more information, see the [migration guide](batch-custom-image-pools-to-azure-compute-gallery-migration-guide.md).
This topic explains how to create a custom image pool using only a managed image.
To scale Batch pools reliably with a managed image, we recommend creating the ma
### Prepare a VM
-If you are creating a new VM for the image, use a first party Azure Marketplace image supported by Batch as the base image for your managed image. Only first party images can be used as a base image. To get a full list of Azure Marketplace image references supported by Azure Batch, see the [List node agent SKUs](/java/api/com.microsoft.azure.batch.protocol.accounts.listnodeagentskus) operation.
+If you're creating a new VM for the image, use a first party Azure Marketplace image supported by Batch as the base image for your managed image. Only first party images can be used as a base image. To get a full list of Azure Marketplace image references supported by Azure Batch, see the [List node agent SKUs](/java/api/com.microsoft.azure.batch.protocol.accounts.listnodeagentskus) operation.
> [!NOTE] > You can't use a third-party image that has additional license and purchase terms as your base image. For information about these Marketplace images, see the guidance for [Linux](../virtual-machines/linux/cli-ps-findimage.md#check-the-purchase-plan-information) or [Windows](../virtual-machines/windows/cli-ps-findimage.md#view-purchase-plan-properties)VMs. - Ensure the VM is created with a managed disk. This is the default storage setting when you create a VM.-- Do not install Azure extensions, such as the Custom Script extension, on the VM. If the image contains a pre-installed extension, Azure may encounter problems when deploying the Batch pool.
+- Don't install Azure extensions, such as the Custom Script extension, on the VM. If the image contains a preinstalled extension, Azure may encounter problems when deploying the Batch pool.
- When using attached data disks, you need to mount and format the disks from within a VM to use them. - Ensure that the base OS image you provide uses the default temp drive. The Batch node agent currently expects the default temp drive.-- Ensure that the OS disk is not encrypted.-- Once the VM is running, connect to it via RDP (for Windows) or SSH (for Linux). Install any necessary software or copy desired data.
+- Ensure that the OS disk isn't encrypted.
+- Once the VM is running, connect to it via RDP (for Windows) or SSH (for Linux). Install any necessary software or copy desired data.
### Create a VM snapshot
Request Body
## Considerations for large pools
-If you plan to create a pool with hundreds of VMs or more using a custom image, it is important to follow the preceding guidance to use an image created from a VM snapshot.
+If you plan to create a pool with hundreds of VMs or more using a custom image, it's important to follow the preceding guidance to use an image created from a VM snapshot.
Also note the following considerations:
Also note the following considerations:
- **Resize timeout** - If your pool contains a fixed number of nodes (doesn't autoscale), increase the resizeTimeout property of the pool to a value such as 20-30 minutes. If your pool doesn't reach its target size within the timeout period, perform another [resize operation](/rest/api/batchservice/pool/resize). If you plan a pool with more than 300 compute nodes, you might need to resize the pool multiple times to reach the target size.
-
-By using the [Azure Compute Gallery](batch-sig-images.md), you can create larger pools with your customized images along with more Shared Image replicas. Using Shared Images, the time it takes for the pool to reach the steady state is up to 25% faster, and the VM idle latency is up to 30% shorter.
+
+By using the [Azure Compute Gallery](batch-sig-images.md), you can create larger pools with your customized images, use more
+Shared Image replicas, and gain performance benefits such as decreased time for nodes to become ready.
## Considerations for using Packer
-Creating a managed image resource directly with Packer can only be done with user subscription mode Batch accounts. For Batch service mode accounts, you need to create a VHD first, then import the VHD to a managed image resource. Depending on your pool allocation mode (user subscription, or Batch service), your steps to create a managed image resource will vary.
+Creating a managed image resource directly with Packer can only be done with user subscription mode Batch accounts. For Batch service mode accounts, you need to create a VHD first, then import the VHD to a managed image resource. Depending on your pool allocation mode (user subscription or Batch service), your steps to create a managed image resource vary.
Ensure that the resource used to create the managed image exists for the lifetimes of any pool referencing the custom image. Failure to do so can result in pool allocation failures and/or resize failures.
-If the image or the underlying resource is removed, you may get an error similar to: `There was an error encountered while performing the last resize on the pool. Please try resizing the pool again. Code: AllocationFailed`. If you get this error, ensure that the underlying resource has not been removed.
+If the image or the underlying resource is removed, you may get an error similar to: `There was an error encountered while performing the last resize on the pool. Please try resizing the pool again. Code: AllocationFailed`. If you get this error, ensure that the underlying resource hasn't been removed.
For more information on using Packer to create a VM, see [Build a Linux image with Packer](../virtual-machines/linux/build-image-with-packer.md) or [Build a Windows image with Packer](../virtual-machines/windows/build-image-with-packer.md).
batch Batch Docker Container Workloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-docker-container-workloads.md
Title: Container workloads
+ Title: Container workloads on Azure Batch
description: Learn how to run and scale apps from container images on Azure Batch. Create a pool of compute nodes that support running container tasks. Previously updated : 03/20/2023 Last updated : 04/05/2023 ms.devlang: csharp, python
-# Run container applications on Azure Batch
+
+# Use Azure Batch to run container workloads
Azure Batch lets you run and scale large numbers of batch computing jobs on Azure. Batch tasks can run directly on virtual machines (nodes) in a Batch pool, but you can also set up a Batch pool to run tasks in Docker-compatible containers on the nodes. This article shows you how to create a pool of compute nodes that support running container tasks, and then run container tasks on the pool.
The code examples here use the Batch .NET and Python SDKs. You can also use othe
## Why use containers?
-Using containers provides an easy way to run Batch tasks without having to manage an environment and dependencies to run applications. Containers deploy applications as lightweight, portable, self-sufficient units that can run in several different environments. For example, build and test a container locally, then upload the container image to a registry in Azure or elsewhere. The container deployment model ensures that the runtime environment of your application is always correctly installed and configured wherever you host the application. Container-based tasks in Batch can also take advantage of features of non-container tasks, including application packages and management of resource files and output files.
+Containers provide an easy way to run Batch tasks without having to manage an environment and dependencies to run applications. Containers deploy applications as lightweight, portable, self-sufficient units that can run in several different environments. For example, build and test a container locally, then upload the container image to a registry in Azure or elsewhere. The container deployment model ensures that the runtime environment of your application is always correctly installed and configured wherever you host the application. Container-based tasks in Batch can also take advantage of features of non-container tasks, including application packages and management of resource files and output files.
## Prerequisites
You should be familiar with container concepts and how to create a Batch pool an
- **Accounts**: In your Azure subscription, you need to create a [Batch account](accounts.md) and optionally an Azure Storage account. -- **A supported VM image**: Containers are only supported in pools created with the Virtual Machine Configuration, from a supported image (listed in the next section). If you provide a custom image, see the considerations in the following section and the requirements in [Use a managed custom image to create a pool of virtual machines](batch-custom-images.md).
+- **A supported virtual machine (VM) image**: Containers are only supported in pools created with the Virtual Machine Configuration, from a supported image (listed in the next section). If you provide a custom image, see the considerations in the following section and the requirements in [Use a managed image to create a custom image pool](batch-custom-images.md).
Keep in mind the following limitations: -- Batch provides RDMA support only for containers running on Linux pools.-- For Windows container workloads, we recommend choosing a multicore VM size for your pool.
+- Batch provides remote direct memory access (RDMA) support only for containers that run on Linux pools.
+- For Windows container workloads, you should choose a multicore VM size for your pool.
-## Supported virtual machine images
+## Supported VM images
Use one of the following supported Windows or Linux images to create a pool of VM compute nodes for container workloads. For more information about Marketplace images that are compatible with Batch, see [List of virtual machine images](batch-linux-nodes.md#list-of-virtual-machine-images). ### Windows support
-Batch supports Windows server images that have container support designations. Typically these image sku names are suffixed with `-with-containers` or `-with-containers-smalldisk`. Additionally, [the API to list all supported images in Batch](batch-linux-nodes.md#list-of-virtual-machine-images) will denote a `DockerCompatible` capability if the image supports Docker containers.
+Batch supports Windows server images that have container support designations. Typically, these image SKU names are suffixed with `-with-containers` or `-with-containers-smalldisk`. Additionally, [the API to list all supported images in Batch](batch-linux-nodes.md#list-of-virtual-machine-images) denotes a `DockerCompatible` capability if the image supports Docker containers.
You can also create custom images from VMs running Docker on Windows.
For Linux container workloads, Batch currently supports the following Linux imag
These images are only supported for use in Azure Batch pools and are geared for Docker container execution. They feature: -- A pre-installed Docker-compatible [Moby](https://github.com/moby/moby) container runtime-- Pre-installed NVIDIA GPU drivers and NVIDIA container runtime, to streamline deployment on Azure N-series VMs-- VM images with the suffix of '-rdma' are pre-configured with support for InfiniBand RDMA VM sizes. These VM images should not be used with VM sizes that do not have InfiniBand support.
+- A pre-installed Docker-compatible [Moby container runtime](https://github.com/moby/moby).
+- Pre-installed NVIDIA GPU drivers and NVIDIA container runtime, to streamline deployment on Azure N-series VMs.
+- VM images with the suffix of `-rdma` are pre-configured with support for InfiniBand RDMA VM sizes. These VM images shouldn't be used with VM sizes that don't have InfiniBand support.
-You can also create custom images from VMs running Docker on one of the Linux distributions that is compatible with Batch. If you choose to provide your own custom Linux image, see the instructions in [Use a managed custom image to create a pool of virtual machines](batch-custom-images.md).
+You can also create custom images from VMs running Docker on one of the Linux distributions that's compatible with Batch. If you choose to provide your own custom Linux image, see the instructions in [Use a managed image to create a custom image pool](batch-custom-images.md).
-For Docker support on a custom image, install [Docker Community Edition (CE)](https://www.docker.com/community-edition) or [Docker Enterprise Edition (EE)](https://docs.docker.com/).
+For Docker support on a custom image, install [Docker Pro](https://www.docker.com/products/pro/) or the open-source [Docker Community Edition](https://www.docker.com/community).
Additional considerations for using a custom Linux image: - To take advantage of the GPU performance of Azure N-series sizes when using a custom image, pre-install NVIDIA drivers. Also, you need to install the Docker Engine Utility for NVIDIA GPUs, [NVIDIA Docker](https://github.com/NVIDIA/nvidia-docker).-- To access the Azure RDMA network, use an RDMA-capable VM size. Necessary RDMA drivers are installed in the CentOS HPC and Ubuntu images supported by Batch. Additional configuration may be needed to run MPI workloads. See [Use RDMA-capable or GPU-enabled instances in Batch pool](batch-pool-compute-intensive-sizes.md).
+- To access the Azure RDMA network, use an RDMA-capable VM size. Necessary RDMA drivers are installed in the CentOS HPC and Ubuntu images supported by Batch. Additional configuration may be needed to run MPI workloads. See [Use RDMA or GPU instances in Batch pool](batch-pool-compute-intensive-sizes.md).
## Container configuration for Batch pool
-To enable a Batch pool to run container workloads, you must specify [ContainerConfiguration](/dotnet/api/microsoft.azure.batch.containerconfiguration) settings in the pool's [VirtualMachineConfiguration](/dotnet/api/microsoft.azure.batch.virtualmachineconfiguration) object. (This article provides links to the Batch .NET API reference. Corresponding settings are in the [Batch Python](/python/api/overview/azure/batch) API.)
+To enable a Batch pool to run container workloads, you must specify [ContainerConfiguration](/dotnet/api/microsoft.azure.batch.containerconfiguration) settings in the pool's [VirtualMachineConfiguration](/dotnet/api/microsoft.azure.batch.virtualmachineconfiguration) object. This article provides links to the Batch .NET API reference. Corresponding settings are in the [Batch Python](/python/api/overview/azure/batch) API.
-You can create a container-enabled pool with or without prefetched container images, as shown in the following examples. The pull (or prefetch) process lets you pre-load container images from either Docker Hub or another container registry on the Internet. For best performance, use an [Azure container registry](../container-registry/container-registry-intro.md) in the same region as the Batch account.
+You can create a container-enabled pool with or without prefetched container images, as shown in the following examples. The pull (or prefetch) process lets you preload container images from either Docker Hub or another container registry on the Internet. For best performance, use an [Azure container registry](../container-registry/container-registry-intro.md) in the same region as the Batch account.
-The advantage of prefetching container images is that when tasks first start running they don't have to wait for the container image to download. The container configuration pulls container images to the VMs when the pool is created. Tasks that run on the pool can then reference the list of container images and container run options.
+The advantage of prefetching container images is that when tasks first start running, they don't have to wait for the container image to download. The container configuration pulls container images to the VMs when the pool is created. Tasks that run on the pool can then reference the list of container images and container run options.
### Pool without prefetched container images To configure a container-enabled pool without prefetched container images, define `ContainerConfiguration` and `VirtualMachineConfiguration` objects as shown in the following examples. These examples use the Ubuntu Server for Azure Batch container pools image from the Marketplace.
-**Note**: Ubuntu server version used in the example is for illustration purposes. Feel free to change the node_agent_sku_id to the version you are using.
+**Note**: The Ubuntu Server version used in this example is for illustration purposes. You can change *node_agent_sku_id* to the version you're using.
```python image_ref_to_use = batch.models.ImageReference(
CloudPool pool = batchClient.PoolOperations.CreatePool(
### Prefetch images for container configuration
-To prefetch container images on the pool, add the list of container images (`container_image_names`, in Python) to the `ContainerConfiguration`.
+To prefetch container images on the pool, add the list of container images (`container_image_names` in Python) to the `ContainerConfiguration`.
The following basic Python example shows how to prefetch a standard Ubuntu container image from [Docker Hub](https://hub.docker.com).
CloudPool pool = batchClient.PoolOperations.CreatePool(
### Managed identity support for ACR
-When accessing containers stored in [Azure Container Registry](https://azure.microsoft.com/services/container-registry), either a username/password or a managed identity can be used to authenticate with the service. To use a managed identity, first ensure that the identity has been [assigned to the pool](managed-identity-pools.md) and that the identity has the `AcrPull` role assigned for the container registry you wish to access. Then, simply tell Batch which identity to use when authenticating with ACR.
+When you access containers stored in [Azure Container Registry](https://azure.microsoft.com/services/container-registry), either a username/password or a managed identity can be used to authenticate with the service. To use a managed identity, first ensure that the identity has been [assigned to the pool](managed-identity-pools.md) and that the identity has the `AcrPull` role assigned for the container registry you wish to access. Then, simply tell Batch which identity to use when authenticating with ACR.
```csharp ContainerRegistry containerRegistry = new ContainerRegistry(
CloudPool pool = batchClient.PoolOperations.CreatePool(
To run a container task on a container-enabled pool, specify container-specific settings. Settings include the image to use, registry, and container run options. -- Use the `ContainerSettings` property of the task classes to configure container-specific settings. These settings are defined by the [TaskContainerSettings](/dotnet/api/microsoft.azure.batch.taskcontainersettings) class. Note that the `--rm` container option doesn't require an additional `--runtime` option since it is taken care of by Batch.
+- Use the `ContainerSettings` property of the task classes to configure container-specific settings. These settings are defined by the [TaskContainerSettings](/dotnet/api/microsoft.azure.batch.taskcontainersettings) class. Note that the `--rm` container option doesn't require an additional `--runtime` option since it's taken care of by Batch.
-- If you run tasks on container images, the [cloud task](/dotnet/api/microsoft.azure.batch.cloudtask) and [job manager task](/dotnet/api/microsoft.azure.batch.cloudjob.jobmanagertask) require container settings. However, the [start task](/dotnet/api/microsoft.azure.batch.starttask), [job preparation task](/dotnet/api/microsoft.azure.batch.cloudjob.jobpreparationtask), and [job release task](/dotnet/api/microsoft.azure.batch.cloudjob.jobreleasetask) do not require container settings (that is, they can run within a container context or directly on the node).
+- If you run tasks on container images, the [cloud task](/dotnet/api/microsoft.azure.batch.cloudtask) and [job manager task](/dotnet/api/microsoft.azure.batch.cloudjob.jobmanagertask) require container settings. However, the [start task](/dotnet/api/microsoft.azure.batch.starttask), [job preparation task](/dotnet/api/microsoft.azure.batch.cloudjob.jobpreparationtask), and [job release task](/dotnet/api/microsoft.azure.batch.cloudjob.jobreleasetask) don't require container settings (that is, they can run within a container context or directly on the node).
- For Windows, tasks must be run with [ElevationLevel](/rest/api/batchservice/task/add#elevationlevel) set to `admin`. -- For Linux, Batch will map the user/group permission to the container. If access to any folder within the container requires Administrator permission, you may need to run the task as pool scope with admin elevation level. This will ensure Batch runs the task as root in the container context. Otherwise, a non-admin user may not have access to those folders.
+- For Linux, Batch maps the user/group permission to the container. If access to any folder within the container requires Administrator permission, you may need to run the task as pool scope with admin elevation level. This ensures that Batch runs the task as root in the container context. Otherwise, a non-admin user might not have access to those folders.
-- For container pools with GPU-enabled hardware, Batch will automatically enable GPU for container tasks, so you shouldn't include the `ΓÇôgpus` argument.
+- For container pools with GPU-enabled hardware, Batch automatically enables GPU for container tasks, so you shouldn't include the `--gpus` argument.
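As an illustration of the settings described in the preceding list, the following Python sketch adds a container task that runs with pool-scope admin elevation. It assumes an existing authenticated `BatchServiceClient` named `batch_client`, a container-enabled pool, and a job bound to that pool; the image name, command line, job ID, and task ID are placeholders.

```python
import azure.batch.models as batchmodels

container_settings = batchmodels.TaskContainerSettings(
    image_name="<registry>.azurecr.io/<image>:<tag>",
    container_run_options="--rm",
)

task = batchmodels.TaskAddParameter(
    id="containertask1",
    command_line="python /app/main.py",
    container_settings=container_settings,
    # Pool-scope admin elevation so the task runs as root in the container context,
    # which avoids the folder-permission issue described above for Linux pools.
    user_identity=batchmodels.UserIdentity(
        auto_user=batchmodels.AutoUserSpecification(
            scope=batchmodels.AutoUserScope.pool,
            elevation_level=batchmodels.ElevationLevel.admin,
        )
    ),
)

batch_client.task.add(job_id="<job-id>", task=task)
```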
### Container task command line When you run a container task, Batch automatically uses the [docker create](https://docs.docker.com/engine/reference/commandline/create/) command to create a container using the image specified in the task. Batch then controls task execution in the container.
-As with non-container Batch tasks, you set a command line for a container task. Because Batch automatically creates the container, the command line only specifies the command or commands that will run in the container.
+As with non-container Batch tasks, you set a command line for a container task. Because Batch automatically creates the container, the command line only specifies the command or commands that run in the container.
If the container image for a Batch task is configured with an [ENTRYPOINT](https://docs.docker.com/engine/reference/builder/#exec-form-entrypoint-example) script, you can set your command line to either use the default ENTRYPOINT or override it:
Optional [ContainerRunOptions](/dotnet/api/microsoft.azure.batch.taskcontainerse
### Container task working directory
-A Batch container task executes in a working directory in the container that is very similar to the directory Batch sets up for a regular (non-container) task. Note that this working directory is different from the [WORKDIR](https://docs.docker.com/engine/reference/builder/#workdir) if configured in the image, or the default container working directory (`C:\` on a Windows container, or `/` on a Linux container).
+A Batch container task executes in a working directory in the container that's very similar to the directory that Batch sets up for a regular (non-container) task. Note that this working directory is different from the [WORKDIR](https://docs.docker.com/engine/reference/builder/#workdir) if configured in the image, or the default container working directory (`C:\` on a Windows container, or `/` on a Linux container).
For a Batch container task: -- All directories recursively below the `AZ_BATCH_NODE_ROOT_DIR` on the host node (the root of Azure Batch directories) are mapped into the container-- All task environment variables are mapped into the container
+- All directories recursively below the `AZ_BATCH_NODE_ROOT_DIR` on the host node (the root of Azure Batch directories) are mapped into the container.
+- All task environment variables are mapped into the container.
- The task working directory `AZ_BATCH_TASK_WORKING_DIR` on the node is set the same as for a regular task and mapped into the container. These mappings allow you to work with container tasks in much the same way as non-container tasks. For example, install applications using application packages, access resource files from Azure Storage, use task environment settings, and persist task output files after the container stops.
containerTask.ContainerSettings = cmdContainerSettings;
## Next steps -- For information on installing and using Docker CE on Linux, see the [Docker](https://docs.docker.com/engine/installation/) documentation.-- Learn how to [Use a managed custom image to create a pool of virtual machines](batch-custom-images.md).-- Learn more about the [Moby project](https://mobyproject.org/), a framework for creating container-based systems.
+- For information on installing and using Docker CE on Linux, see the [Docker documentation](https://docs.docker.com/engine/installation).
+- Learn how to [Use a managed image to create a custom image pool](batch-custom-images.md).
+- Learn more about the [Moby project](https://mobyproject.org), a framework for creating container-based systems.
batch Batch Rendering Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-rendering-applications.md
Where applicable, pay-for-use licensing is available for the pre-installed rende
Some applications only support Windows, but most are supported on both Windows and Linux.
-> [!IMPORTANT]
+> [!WARNING]
> The rendering VM images and pay-for-use licensing have been [deprecated and will be retired on February 29, 2024](https://azure.microsoft.com/updates/azure-batch-rendering-vm-images-licensing-will-be-retired-on-29-february-2024/). To use Batch for rendering, [a custom VM image and standard application licensing should be used.](batch-rendering-functionality.md#batch-pools-using-custom-vm-images-and-standard-application-licensing) ## Applications on latest CentOS 7 rendering image
batch Batch Rendering Functionality https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-rendering-functionality.md
Most rendering applications will require licenses obtained from a license server
## Batch pools using rendering VM images
-> [!IMPORTANT]
+> [!WARNING]
> The rendering VM images and pay-for-use licensing have been [deprecated and will be retired on February 29, 2024](https://azure.microsoft.com/updates/azure-batch-rendering-vm-images-licensing-will-be-retired-on-29-february-2024/). To use Batch for rendering, [a custom VM image and standard application licensing should be used.](batch-rendering-functionality.md#batch-pools-using-custom-vm-images-and-standard-application-licensing) ### Rendering application installation
batch Batch Rendering Using https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-rendering-using.md
# Using Azure Batch rendering
-> [!IMPORTANT]
+> [!WARNING]
> The rendering VM images and pay-for-use licensing have been [deprecated and will be retired on February 29, 2024](https://azure.microsoft.com/updates/azure-batch-rendering-vm-images-licensing-will-be-retired-on-29-february-2024/). To use Batch for rendering, [a custom VM image and standard application licensing should be used.](batch-rendering-functionality.md#batch-pools-using-custom-vm-images-and-standard-application-licensing) There are several ways to use Azure Batch rendering:
There are several ways to use Azure Batch rendering:
* For each of the rendering applications, a number of pool and job templates are provided that can be used to easily create pools and to submit jobs. A set of templates is listed in the application UI, with the template files being accessed from GitHub. * Custom templates can be authored from scratch or the supplied templates from GitHub can be copied and modified. * Client application plug-ins:
- * Plug-ins are available that allow Batch rendering to be used from directly within the client design and modeling applications. The plug-ins mainly invoke the Batch Explorer application with contextual information about the current 3D model and includes features to help manage assets.
+ * Plug-ins are available that allow Batch rendering to be used from directly within the client design and modeling applications. The plug-ins mainly invoke the Batch Explorer application with contextual information about the current 3D model and include features to help manage assets.
For end users who aren't developers or Azure experts, the best and simplest way to try Azure Batch rendering is to use the Batch Explorer application, either directly or invoked from a client application plug-in.
batch Batch Spot Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-spot-vms.md
Title: Run workloads on cost-effective Spot VMs description: Learn how to provision Spot VMs to reduce the cost of Azure Batch workloads. Previously updated : 03/15/2023 Last updated : 04/06/2023
Batch offers two types of low-cost pre-emptible VMs:
The type of node you get depends on your Batch account's pool allocation mode, which is settable during account creation. Batch accounts that use the **user subscription** pool allocation mode always get Spot VMs. Batch accounts that use the **Batch managed** pool allocation mode always get low-priority VMs.
+> [!WARNING]
+> Support for low-priority VMs will be retired after **30 September 2025**. Please
+> [migrate to using Spot VMs in Batch](low-priority-vms-retirement-migration-guide.md) before then.
+ Azure Spot VMs and Batch low-priority VMs are similar but have a few differences in behavior. | | Spot VMs | Low-priority VMs |
Azure Spot VMs and Batch low-priority VMs are similar but have a few differences
Azure Batch provides several capabilities that make it easy to consume and benefit from Spot VMs: -- Batch pools can contain both dedicated VMs and Spot VMs. The number of each type of VM can be specified when a pool is created, or changed at any time for an existing pool, using the explicit resize operation or using auto-scale. Job and task submission can remain unchanged, regardless of the VM types in the pool. You can also configure a pool to completely use Spot VMs to run jobs as cheaply as possible, but spin up dedicated VMs if the capacity drops below a minimum threshold, to keep jobs running.
+- Batch pools can contain both dedicated VMs and Spot VMs. The number of each type of VM can be specified when a pool is created, or changed at any time for an existing pool, using the explicit resize operation or using autoscale. Job and task submission can remain unchanged, regardless of the VM types in the pool. You can also configure a pool to completely use Spot VMs to run jobs as cheaply as possible, but spin up dedicated VMs if the capacity drops below a minimum threshold, to keep jobs running.
- Batch pools automatically seek the target number of Spot VMs. If VMs are preempted or unavailable, Batch attempts to replace the lost capacity and return to the target. - When tasks are interrupted, Batch detects and automatically requeues tasks to run again. - Spot VMs have a separate vCPU quota that differs from the one for dedicated VMs. The quota for Spot VMs is higher than the quota for dedicated VMs, because Spot VMs cost less. For more information, see [Batch service quotas and limits](batch-quota-limit.md#resource-quotas).
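For example, mixing dedicated and Spot capacity on an existing pool is a single resize call with the Batch Python SDK. The sketch below is a minimal example, assuming an existing authenticated `BatchServiceClient` named `batch_client`; the pool ID and node counts are placeholders. Note that the API expresses Spot capacity through the same `target_low_priority_nodes` property used for low-priority VMs.

```python
import azure.batch.models as batchmodels

# Keep a small dedicated baseline and run the bulk of the work on Spot/low-priority nodes.
batch_client.pool.resize(
    pool_id="<pool-id>",
    pool_resize_parameter=batchmodels.PoolResizeParameter(
        target_dedicated_nodes=2,
        target_low_priority_nodes=50,
    ),
)
```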
To view these metrics in the Azure portal
- Spot VMs in Batch don't support setting a max price and don't support price-based evictions. They can only be evicted for capacity reasons. - Spot VMs are only available for Virtual Machine Configuration pools and not for Cloud Service Configuration pools, which are [deprecated](https://azure.microsoft.com/updates/azure-batch-cloudserviceconfiguration-pools-will-be-retired-on-29-february-2024/). - Spot VMs aren't available for some clouds, VM sizes, and subscription offer types. See more about [Spot limitations](../virtual-machines/spot-vms.md#limitations).-- Currently, [Ephemeral OS disks](create-pool-ephemeral-os-disk.md) are not supported with Spot VMs due to the service managed
+- Currently, [Ephemeral OS disks](create-pool-ephemeral-os-disk.md) aren't supported with Spot VMs due to the service managed
eviction policy of Stop-Deallocate. ## Next steps
chaos-studio Chaos Studio Fault Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-fault-library.md
Configuring the shutdown fault:
} ```
+## Disable Autoscale
+
+| Property | Value |
+| | |
+| Capability name | DisableAutoscale |
+| Target type | Microsoft-AutoscaleSettings |
+| Description | Disables the [autoscale service](/azure/azure-monitor/autoscale/autoscale-overview). When autoscale is disabled, resources such as Virtual Machine Scale Sets, Web Apps, Service Bus, and [more](/azure/azure-monitor/autoscale/autoscale-overview#supported-services-for-autoscale) aren't automatically added or removed based on the load of the application. |
+| Prerequisites | The autoscaleSettings resource that's enabled on the target resource must be onboarded to Chaos Studio. |
+| Urn | urn:csci:microsoft:autoscalesettings:disableAutoscale/1.0 |
+| Fault type | Continuous |
+| Parameters (key, value) | |
+| enableOnComplete | Boolean. Configures whether autoscaling will be re-enabled once the action is done. Default is `true`. |
++
+```json
+{
+  "name": "BranchOne",
+  "actions": [
+    {
+      "type": "continuous",
+      "name": "urn:csci:microsoft:autoscaleSetting:disableAutoscale/1.0",
+      "parameters": [
+        {
+          "key": "enableOnComplete",
+          "value": "true"
+        }
+      ],
+      "duration": "PT2M",
+      "selectorId": "Selector1"
+    }
+  ]
+}
+```
+ ## Key Vault Deny Access | Property | Value |
chaos-studio Chaos Studio Fault Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-fault-providers.md
The following are the supported resource types for faults, the target types, and suggested roles to use when giving an experiment permission to a resource of that type.
-| Resource Type | Target name/type | Suggested role assignment |
-| - | - | - |
-| Microsoft.Cache/Redis (service-direct) | Microsoft-AzureCacheForRedis | Redis Cache Contributor |
-| Microsoft.ClassicCompute/domainNames (service-direct) | Microsoft-DomainNames | Classic Virtual Machine Contributor |
-| Microsoft.Compute/virtualMachines (agent-based) | Microsoft-Agent | Reader |
-| Microsoft.Compute/virtualMachineScaleSets (agent-based) | Microsoft-Agent | Reader |
-| Microsoft.Compute/virtualMachines (service-direct) | Microsoft-VirtualMachine | Virtual Machine Contributor |
-| Microsoft.Compute/virtualMachineScaleSets (service-direct) | Microsoft-VirtualMachineScaleSet | Virtual Machine Contributor |
-| Microsoft.ContainerService/managedClusters (service-direct) | Microsoft-AzureKubernetesServiceChaosMesh | Azure Kubernetes Service Cluster Admin Role |
-| Microsoft.DocumentDb/databaseAccounts (CosmosDB, service-direct) | Microsoft-CosmosDB | Cosmos DB Operator |
-| Microsoft.KeyVault/vaults (service-direct) | Microsoft-KeyVault | Key Vault Contributor |
-| Microsoft.Network/networkSecurityGroups (service-direct) | Microsoft-NetworkSecurityGroup | Network Contributor |
+| Resource Type | Target name/type | Suggested role assignment |
+|-|--|-|
+| Microsoft.Cache/Redis (service-direct) | Microsoft-AzureCacheForRedis | Redis Cache Contributor |
+| Microsoft.ClassicCompute/domainNames (service-direct) | Microsoft-DomainNames | Classic Virtual Machine Contributor |
+| Microsoft.Compute/virtualMachines (agent-based) | Microsoft-Agent | Reader |
+| Microsoft.Compute/virtualMachineScaleSets (agent-based) | Microsoft-Agent | Reader |
+| Microsoft.Compute/virtualMachines (service-direct) | Microsoft-VirtualMachine | Virtual Machine Contributor |
+| Microsoft.Compute/virtualMachineScaleSets (service-direct) | Microsoft-VirtualMachineScaleSet | Virtual Machine Contributor |
+| Microsoft.ContainerService/managedClusters (service-direct) | Microsoft-AzureKubernetesServiceChaosMesh | Azure Kubernetes Service Cluster Admin Role |
+| Microsoft.DocumentDb/databaseAccounts (CosmosDB, service-direct) | Microsoft-CosmosDB | Cosmos DB Operator |
+| Microsoft.Insights/autoscalesettings (service-direct) | Microsoft-AutoScaleSettings | Web Plan Contributor |
+| Microsoft.KeyVault/vaults (service-direct) | Microsoft-KeyVault | Key Vault Contributor |
+| Microsoft.Network/networkSecurityGroups (service-direct) | Microsoft-NetworkSecurityGroup | Network Contributor |
cognitive-services Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/models.md
These models can be used with Completion API requests. `gpt-35-turbo` is the onl
| Model ID | Base model Regions | Fine-Tuning Regions | Max Request (tokens) | Training Data (up to) | | | | - | -- | - |
-| ada | N/A | South Central US <sup>2</sup> | 2,049 | Oct 2019|
+| ada | N/A | East US <sup>2</sup> | 2,049 | Oct 2019|
| text-ada-001 | East US, South Central US, West Europe | N/A | 2,049 | Oct 2019|
-| babbage | N/A | South Central US<sup>2</sup> | 2,049 | Oct 2019 |
+| babbage | N/A | East US<sup>2</sup> | 2,049 | Oct 2019 |
| text-babbage-001 | East US, South Central US, West Europe | N/A | 2,049 | Oct 2019 |
-| curie | N/A | South Central US<sup>2</sup> | 2,049 | Oct 2019 |
+| curie | N/A | East US<sup>2</sup> | 2,049 | Oct 2019 |
| text-curie-001 | East US, South Central US, West Europe | N/A | 2,049 | Oct 2019 | | davinci<sup>1</sup> | N/A | Currently unavailable | 2,049 | Oct 2019| | text-davinci-001 | South Central US, West Europe | N/A | | |
These models can be used with Completion API requests. `gpt-35-turbo` is the onl
| gpt-35-turbo<sup>3</sup> (ChatGPT) (preview) | East US, South Central US | N/A | 4,096 | Sep 2021 | <sup>1</sup> The model is available by request only. Currently we aren't accepting new requests to use the model.
-<br><sup>2</sup> East US and West Europe were previously available, but due to high demand they are currently unavailable for new customers to use for fine-tuning. Please use US South Central region for fine-tuning.
+<br><sup>2</sup> South Central US and West Europe were previously available, but due to high demand they are currently unavailable for new customers to use for fine-tuning. Please use the East US region for fine-tuning.
<br><sup>3</sup> Currently, only version `0301` of this model is available. This version of the model will be deprecated on 8/1/2023 in favor of newer version of the gpt-35-model. See [ChatGPT model versioning](../how-to/chatgpt.md#model-versioning) for more details. ### GPT-4 Models
cognitive-services Understand Embeddings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/understand-embeddings.md
An embedding is a special format of data representation that can be easily utili
## Embedding models
-Different Azure OpenAI embedding models are specifically created to be good at a particular task. **Similarity embeddings** are good at capturing semantic similarity between two or more pieces of text. **Text search embeddings** help measure long documents are relevant to a short query. **Code search embeddings** are useful for embedding code snippets and embedding natural language search queries.
+Different Azure OpenAI embedding models are specifically created to be good at a particular task. **Similarity embeddings** are good at capturing semantic similarity between two or more pieces of text. **Text search embeddings** help measure whether long documents are relevant to a short query. **Code search embeddings** are useful for embedding code snippets and embedding natural language search queries.
Embeddings make it easier to do machine learning on large inputs representing words by capturing the semantic similarities in a vector space. Therefore, we can use embeddings to determine if two text chunks are semantically related or similar, and provide a score to assess similarity.
Azure OpenAI embeddings rely on cosine similarity to compute similarity between
## Next steps
-Learn more about using Azure OpenAI and embeddings to perform document search with our [embeddings tutorial](../tutorials/embeddings.md).
+Learn more about using Azure OpenAI and embeddings to perform document search with our [embeddings tutorial](../tutorials/embeddings.md).
communication-services Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/guest/capabilities.md
In this article, you will learn which capabilities are supported for Teams exter
| | Remove a participant | ✔️ | | | Manage breakout rooms | ❌ | | | Participation in breakout rooms | ❌ |
-| | Admit participants in the lobby into the Teams meeting | ❌ |
+| | Admit participants in the lobby into the Teams meeting | ✔️ |
| | Be admitted from the lobby into the Teams meeting | ✔️ | | | Promote participant to a presenter or attendee | ❌ | | | Be promoted to presenter or attendee | ✔️ |
In this article, you will learn which capabilities are supported for Teams exter
| | See Large gallery view | ❌ | | | Receive video stream from Teams media bot | ❌ | | | Receive adjusted stream for "content from Camera" | ❌ |
-| | Add and remove video stream from spotlight | ❌ |
-| | Allow video stream to be selected for spotlight | ❌ |
+| | Add and remove video stream from spotlight | ✔️ |
+| | Allow video stream to be selected for spotlight | ✔️ |
| | Apply Teams background effects | ❌ | | Recording & transcription | Manage Teams convenient recording | ❌ | | | Receive information of call being recorded | ✔️ |
In this article, you will learn which capabilities are supported for Teams exter
| | Manage Teams closed captions | ❌ | | | Support for compliance recording | ✔️ | | | [Azure Communication Services recording](../../voice-video-calling/call-recording.md) | ❌ |
-| Engagement | Raise and lower hand | ❌ |
-| | Indicate other participants' raised and lowered hands | ❌ |
+| Engagement | Raise and lower hand | ✔️ |
+| | Indicate other participants' raised and lowered hands | ✔️ |
| | Trigger reactions | ❌ | | | Indicate other participants' reactions | ❌ | | Integrations | Control Teams third-party applications | ❌ |
communication-services Teams Administration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/guest/teams-administration.md
Title: Teams controls for Teams external user
-description: Teams administrator controls to impact Azure Communication Services support for Teams external users
+description: Teams administrator controls to impact Azure Communication Services support for Teams external users.
Last updated 7/9/2022
# Teams controls
-Teams administrators control organization-wide policies and manage and assign user policies. Teams meeting policies are tied to the organizer of the Teams meeting. Teams meetings also have options to customize specific Teams meetings further.
+In this article, you learn what tools Microsoft 365 provides to control the user experience in Microsoft Teams meetings: what those tools are, how they interact, and which roles and licenses you need to use them.
+Let's start with a high-level decision tree diagram that describes whether a specific feature is allowed for a meeting participant. Only a subset of the controls applies to any individual feature. Take call recording in a Teams meeting as an example: Microsoft 365 administrators can't control this feature with the tenant configuration, but they can control it via:
+- The policy assigned to the users
+- The sensitivity label selected for the meeting
+- The meeting template selected for the meeting
+- The meeting options defined by the meeting organizer
+- The role of the user in the meeting
-## Teams policies
-Teams administrators have the following policies to control the experience for Teams external users in Teams meetings.
+![Decision tree for enabling functionality](../virtual-visits/media/decision-tree-functionality-enabled.svg)
-|Setting name|Policy scope|Description| Supported |
-| - | -| -| |
-| [Anonymous users can join a meeting](/microsoftteams/meeting-settings-in-teams#allow-anonymous-users-to-join-meetings) | organization-wide | If disabled, Teams external users can't join Teams meeting | ✔️ |
-| [Let anonymous people join a meeting](/microsoftteams/meeting-settings-in-teams#allow-anonymous-users-to-join-meetings) | per-organizer | If disabled, Teams external users can't join Teams meeting | ✔️ |
-| [Let anonymous people start a meeting](/microsoftteams/meeting-settings-in-teams#allow-anonymous-users-to-join-meetings)| per-organizer | If enabled, Teams external users can start a Teams meeting without Teams user | ✔️ |
-| [Automatically admit people](/microsoftteams/meeting-policies-participants-and-guests#automatically-admit-people) | per-organizer | If set to "Everyone", Teams external users can bypass lobby. Otherwise, Teams external users have to wait in the lobby until an authenticated user admits them.| ✔️ |
-| [Who can present in meetings](/microsoftteams/meeting-policies-in-teams-general#designated-presenter-role-mode) | per-user | If set to "Everyone", Teams external users join Teams meeting as presenters. Otherwise, they join as attendees. | ✔️ |
-| [Blocked anonymous join client types](/powershell/module/skype/set-csteamsmeetingpolicy) | per-organizer | If property "BlockedAnonymousJoinClientTypes" is set to "Teams" or "Null", the Teams external users via Azure Communication Services can join Teams meeting | ✔️ |
+## Tenant configurations
+Tenant configurations are organization-wide settings that affect everyone in the tenant. There's only one configuration of each type; administrators can't create a new one, only modify the existing one. Microsoft Teams provides, for example, a configuration for [federation with Azure Communication Services](/powershell/module/teams/set-csteamsacsfederationconfiguration), [federation with Skype for Business](/powershell/module/skype/set-cstenantfederationconfiguration), and [control of Teams meetings](/powershell/module/skype/set-csteamsmeetingconfiguration) (this last configuration is being deprecated). Teams administrators can use these configurations as safeguards to easily disable capabilities for everyone in the tenant.
+- Required role: Teams or global administrator
+- Licensing: standard licensing
+- Tools: Teams Admin Center or PowerShell
+
+|Setting name | Description| Tenant configuration |Property |
+|--|--|--|--|
+|Enable federation with Azure Communication Services| If enabled, Azure Communication Services users can join Teams meetings as Communication Services users even if anonymous Teams users aren't allowed to join.| [CsTeamsAcsFederationConfiguration](/PowerShell/module/teams/set-csteamsacsfederationconfiguration)| EnableAcsUsers|
+|List federated Azure Communication Services resources | Users from the listed Azure Communication Services resources can join Teams meetings even when anonymous Teams users aren't allowed to join. |[CsTeamsAcsFederationConfiguration](/PowerShell/module/teams/set-csteamsacsfederationconfiguration)| AllowedAcsResources |
+|[Anonymous users can join a meeting](/microsoftteams/meeting-settings-in-teams#allow-anonymous-users-to-join-meetings) | If disabled, Teams external users can't join Teams meetings. | [CsTeamsMeetingConfiguration](/PowerShell/module/skype/set-csteamsmeetingconfiguration) | DisableAnonymousJoin |
Your custom application should consider user authentication and other security measures to protect Teams meetings. Be mindful of the security implications of enabling anonymous users to join meetings. Use the [Teams security guide](/microsoftteams/teams-security-guide#addressing-threats-to-teams-meetings) to configure capabilities available to anonymous users.
-## Teams meeting options
+## Tenant policies
+Tenant policies are configurations that can be assigned to specific users or groups of users. A policy consists of properties with one of the following scopes: per-organizer, per-user, or both. The scope controls whose policy is considered when evaluating a feature's availability: the organizer's, the participant's, or both.
+Popular tenant policies are meeting, calling, messaging, and external access policies. A tenant has, by default, a Global policy assigned to everyone in the tenant. However, an administrator can create a new policy of a specific type, define a custom configuration, and assign it to users or groups of users. The following priority order applies when selecting which policy applies to a user:
+1. Directly assigned policy: The policy is assigned directly to the user.
+2. Group-assigned policy: The policy is assigned to a group the user is part of.
+3. Organization-wide policy: The Global policy applies.
+
+![Decision tree for selection of policy for evaluation](../virtual-visits/media/decision-tree-policy-selection.svg)
+
+A policy assigned to an organizer can disable a feature in all meetings that user organizes. A capability disabled by policy can't be enabled with other tools. For example, administrators can use the global meeting policy to allow recording for everyone, then create a new meeting policy called "external customers" that disables recording, and assign the new policy to a group of users who conduct calls with external customers.
+- Required role: Teams or global administrator
+- Licensing: standard licensing
+- Tools: Teams Admin Center or PowerShell
+
+|Setting name | Policy scope|Description |Tenant policy| property |
+|--|--|--|--|--|
+|[Let anonymous people join a meeting](/microsoftteams/meeting-settings-in-teams#allow-anonymous-users-to-join-meetings) | per-organizer | If disabled, Teams external users can't join Teams meetings. | [CsExternalAccessPolicy](/PowerShell/module/skype/set-csexternalaccesspolicy)| EnableAcsFederationAccess |
+|Blocked anonymous join client types | per-organizer | If the property "BlockedAnonymousJoinClientTypes" is set to "Teams" or "Null", the Teams external users via Azure Communication Services can join Teams meeting. | [CsTeamsMeetingPolicy](/powershell/module/skype/set-csteamsmeetingpolicy) | BlockedAnonymousJoinClientTypes |
+|[Anonymous users can join a meeting](/microsoftteams/meeting-settings-in-teams#allow-anonymous-users-to-join-meetings) | per-organizer | If disabled, Teams external users can't join Teams meetings. |[CsTeamsMeetingPolicy](/powershell/module/skype/set-csteamsmeetingpolicy)| AllowAnonymousUsersToJoinMeeting|
+|[Let anonymous people start a meeting](/microsoftteams/meeting-settings-in-teams#allow-anonymous-users-to-join-meetings)| per-organizer | If enabled, Teams external users can start a Teams meeting without a Teams user. | [CsTeamsMeetingPolicy](/powershell/module/skype/set-csteamsmeetingpolicy) |AllowAnonymousUsersToStartMeeting|
+|Anonymous users can dial out to phone users | per-organizer | If enabled, Teams external users can add phone participants to the meeting.| [CsTeamsMeetingPolicy](/powershell/module/skype/set-csteamsmeetingpolicy) |AllowAnonymousUsersToDialOut|
+|[Automatically admit people](/microsoftteams/meeting-policies-participants-and-guests#automatically-admit-people)| per-organizer| If set to "Everyone", Teams external users can bypass the lobby. Otherwise, Teams external users must wait in the lobby until an organizer, coorganizer, or presenter admits them.| [CsTeamsMeetingPolicy](/powershell/module/skype/set-csteamsmeetingpolicy) |AutoAdmittedUsers|
+
+## Sensitivity label
+[Sensitivity labels allow Teams admins](/microsoft-365/compliance/sensitivity-labels-meetings) to protect and regulate access to sensitive organizational content during Microsoft Teams meetings. Compliance administrators can create sensitivity labels in the [Microsoft Purview compliance portal](/microsoft-365/compliance/go-to-the-securitycompliance-center) and assign them via policy to users or groups of users. These labels can then be assigned to Teams meetings via meeting templates or meeting options.
+Sensitivity labels control a subset of meeting options, add new controls such as preventing copy and paste in the chat, and might require justification for changing the sensitivity label of a meeting.
+- Required role: Compliance or global administrator to manage sensitivity labels and policy. Teams or global administrators to manage meeting templates. Meeting organizer to manage meeting options.
+- Licensing: Microsoft Purview
+- Tools: Microsoft Purview compliance portal
+
+Meeting options that can be controlled via sensitivity label:
+- Who can bypass the lobby
+- Who can present
+- Who can record
+- Encryption for meeting video and audio
+- Automatically record
+- Video watermark for screen sharing and camera streams
+- Prevent copying of meeting chat
+- Prevent or allow copying of chat contents to the clipboard
+
+## Meeting templates
+Teams administrators can use meeting templates to control meeting settings that the meeting organizer usually controls. With templates, Teams administrators can create consistent meeting experiences in their organization and help enforce compliance requirements and business rules. Meeting templates can be used to enforce settings or to set default values. The administrator can lock an individual template option so the meeting organizer can't change it, or leave it unlocked so the organizer can change it if needed.
+- Required role: Teams or global administrator to manage meeting templates. Meeting organizer to select meeting template for the meeting.
+- Licensing: Teams Premium
+- Tools: Teams Admin Center
-Teams meeting organizers can also configure the Teams meeting options to adjust the experience for participants. The following options are supported in Azure Communication Services for external users:
+
+|Group|Teams meeting option | Description|
+|--|--|--|
+|Security|Sensitivity label| Specifies the sensitivity label to be used for the meeting.
+||Lobby - Who can bypass the lobby? | Specify who can bypass the lobby and join the meeting directly.
+||People calling in by phone can bypass the lobby| Specifies whether phone users can bypass the lobby.
+||Enable meeting end-to-end encryption | Specifies if the meeting is encrypted.
+||Enable Watermarks|Specifies if watermarks are used for camera feeds and content that's shared on screen in the meeting.
+|Audio and video|Enable mic for attendees? | When off, you can unmute individual attendees as needed.
+||Enable camera for attendees? | When on, meeting attendees can turn on video.
+|Recording and transcription|Record meetings automatically| Specifies whether the meeting is recorded automatically.
+||Who can record meetings?| Specifies who can record the meeting.
+|Roles|Notify when callers join and leave|A sound plays when someone calling in by phone joins or leaves the meeting.
+|Meeting engagement|Allow meeting chat| Specifies if the meeting chat is available. It can also be used to prevent chatting before and after the meeting.
+||Allow reactions| Specifies if attendees can use reactions in the meeting.
+||Enable Q&A | Specifies if attendees can use the Q&A feature to ask questions during the meeting.
+||Manage what attendees see | Specifies if meeting organizers can preview and approve content being shared on screen before other meeting participants can see it.
+
+## Meeting roles
+[Teams meeting roles](https://support.microsoft.com/office/roles-in-a-teams-meeting-c16fa7d0-1666-4dde-8686-0a0bfe16e019) define permissions that Teams meeting participants have. Microsoft Teams provides roles: organizer, coorganizer, presenter, and attendee. The meeting organizer can assign default roles to the participants. Coorganizers and presenters share most organizer permissions, while attendees are more controlled. There can be only one meeting organizer.
+- Required role: Organizers, coorganizers, and presenters can change the roles of individual participants. Each user can see their own role.
+- Licensing: standard licensing
+- Tools: Microsoft Teams and Graph API (only default roles)
+
+You can find the list of permissions per role [here](https://support.microsoft.com/office/roles-in-a-teams-meeting-c16fa7d0-1666-4dde-8686-0a0bfe16e019), along with more details about the [coorganizer role](https://support.microsoft.com/office/add-co-organizers-to-a-meeting-in-teams-0de2c31c-8207-47ff-ae2a-fc1792d466e2).
+
+## Meeting options
+[Meeting options](https://support.microsoft.com/office/participant-settings-for-a-teams-meeting-53261366-dbd5-45f9-aae9-a70e6354f88e) allow the meeting organizer and coorganizers to customize the meeting experience before and during the meeting. Default values, editability, and visibility of individual features depend on the tenant configuration, policies, sensitivity label, and meeting template.
+- Required role: Organizer or coorganizer can change available meeting options.
+- Licensing: standard licensing
+- Tools: Microsoft Teams and Graph API (only before the meeting starts)
+
+Here's an overview of Meeting options:
|Option name|Description| Supported | | | | |
-| [Automatically admit people](/microsoftteams/meeting-policies-participants-and-guests#automatically-admit-people) | If set to "Everyone", Teams external users can bypass lobby. Otherwise, Teams external users have to wait in the lobby until an authenticated user admits them.| ✔️ |
+| [Automatically admit people](/microsoftteams/meeting-policies-participants-and-guests#automatically-admit-people) | If set to "Everyone", Teams external users can bypass the lobby. Otherwise, Teams external users have to wait in the lobby until an authenticated user admits them.| ✔️ |
| [Always let callers bypass the lobby](/microsoftteams/meeting-policies-participants-and-guests#allow-dial-in-users-to-bypass-the-lobby)| Participants joining through phone can bypass lobby | Not applicable | | Announce when callers join or leave| Participants hear announcement sounds when phone participants join and leave the meeting | ✔️ |
-| [Choose co-organizers](https://support.microsoft.com/office/add-co-organizers-to-a-meeting-in-teams-0de2c31c-8207-47ff-ae2a-fc1792d466e2)| Not applicable to external users | ✔️ |
+| [Choose coorganizers](https://support.microsoft.com/office/add-co-organizers-to-a-meeting-in-teams-0de2c31c-8207-47ff-ae2a-fc1792d466e2)| Not applicable to external users | ✔️ |
| [Who can present in meetings](/microsoftteams/meeting-policies-in-teams-general#designated-presenter-role-mode) | Controls who in the Teams meeting can have assigned presenter role. | ✔️ |
-|[Manage what attendees see](https://support.microsoft.com/office/spotlight-someone-s-video-in-a-teams-meeting-58be74a4-efac-4e89-a212-8d198182081e)|Teams organizer, co-organizer and presenter can spotlight videos for everyone. Azure Communication Services does not receive the spotlight signals. |❌|
-|[Allow mic for attendees](https://support.microsoft.com/office/manage-attendee-audio-and-video-permissions-in-teams-meetings-f9db15e1-f46f-46da-95c6-34f9f39e671a)|If external user is attendee, then this option controls whether external user can send local audio |✔️|
-|[Allow camera for attendees](https://support.microsoft.com/office/manage-attendee-audio-and-video-permissions-in-teams-meetings-f9db15e1-f46f-46da-95c6-34f9f39e671a)|If external user is attendee, then this option controls whether external user can send local video |✔️|
+|[Manage what attendees see](https://support.microsoft.com/office/spotlight-someone-s-video-in-a-teams-meeting-58be74a4-efac-4e89-a212-8d198182081e)|Teams organizer, coorganizer and presenter can spotlight videos for everyone. |✔️|
+|[Allow mic for attendees](https://support.microsoft.com/office/manage-attendee-audio-and-video-permissions-in-teams-meetings-f9db15e1-f46f-46da-95c6-34f9f39e671a)|If an external user is an attendee, then this option controls whether the external user can send local audio |✔️|
+|[Allow camera for attendees](https://support.microsoft.com/office/manage-attendee-audio-and-video-permissions-in-teams-meetings-f9db15e1-f46f-46da-95c6-34f9f39e671a)|If an external user is an attendee, then this option controls whether the external user can send local video |✔️|
|[Record automatically](/graph/api/resources/onlinemeeting)|Records meeting when anyone starts the meeting. The user in the lobby does not start a recording.|✔️| |Allow meeting chat|If enabled, external users can use the chat associated with the Teams meeting.|✔️| |[Allow reactions](/microsoftteams/meeting-policies-in-teams-general#meeting-reactions)|If enabled, external users can use reactions in the Teams meeting |❌| |[RTMP-IN](/microsoftteams/stream-teams-meetings)|If enabled, organizers can stream meetings and webinars to external endpoints by providing a Real-Time Messaging Protocol (RTMP) URL and key to the built-in Custom Streaming app in Teams. |Not applicable| |[Provide CART Captions](https://support.microsoft.com/office/use-cart-captions-in-a-microsoft-teams-meeting-human-generated-captions-2dd889e8-32a8-4582-98b8-6c96cf14eb47)|Communication access real-time translation (CART) is a service in which a trained CART captioner listens to the speech and instantaneously translates all speech to text. As a meeting organizer, you can set up and offer CART captioning to your audience instead of the Microsoft Teams built-in live captions that are automatically generated.|❌|-
+|[Apply a watermark to everyone's video feed](https://support.microsoft.com/office/watermark-for-teams-meetings-a9166432-f429-4a19-9a72-c9e8fdf4f589)| When a watermark is applied to everyone's video feed, Azure Communication Services users receive no video instead. |❌|
+|[Apply a watermark to shared content](https://support.microsoft.com/office/watermark-for-teams-meetings-a9166432-f429-4a19-9a72-c9e8fdf4f589)| When a watermark is applied to shared content, Azure Communication Services users receive no screen-sharing video instead. | ❌|
+|[End-to-end encryption](https://support.microsoft.com/office/use-end-to-end-encryption-for-teams-meetings-a8326d15-d187-49c4-ac99-14c17dbd617c)| Enable end-to-end encryption for the Teams meeting. Audio and video streams are encrypted end-to-end. Azure Communication Services can't join meetings with end-to-end encryption. |❌|
+|[Who can record](https://support.microsoft.com/office/record-a-meeting-in-teams-34dfbe7f-b07d-4a27-b4c6-de62f1348c24)| Select which roles in the meeting can manage Teams recording. Azure Communication Services does not provide API for Teams recording. | ❌|
+|[Enable Q&A](https://support.microsoft.com/office/q-a-in-teams-meetings-f3c84c72-57c3-4b6d-aea5-67b11face787)| Allow Q&A in the Teams meeting |❌|
+|[Enable language interpretation](https://support.microsoft.com/office/use-language-interpretation-in-a-teams-meeting-b9fdde0f-1896-48ba-8540-efc99f5f4b2e) |Allows professional interpreters to translate what a speaker says into another language in real-time. |❌|
+|[Enable Green room](https://support.microsoft.com/office/green-room-for-teams-meetings-5b744652-789f-42da-ad56-78a68e8460d5)| Use a green room to prepare with other presenters, practice sharing materials, and more before attendees enter the meeting. |❌|
## Next steps
communication-services Teams User Calling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/teams-user-calling.md
The following list presents the set of features that are currently available in
| | Adding Teams user honors Teams guest access configuration | ✔️ | | | Add a phone number | ✔️ | | | Remove a participant | ✔️ |
+| | Admit participants in the lobby into the Teams meeting | ✔️ |
+| | Be admitted from the lobby into the Teams meeting | ✔️ |
| | Adding Teams users honors information barriers | ✔️ | | Device Management | Ask for permission to use audio and/or video | ✔️ | | | Get camera list | ✔️ |
The following list presents the set of features that are currently available in
| | See Large gallery view | ❌ | | | Receive video stream from Teams media bot | ❌ | | | Receive adjusted stream for "content from Camera" | ❌ |
-| | Add and remove video stream from spotlight | ❌ |
-| | Allow video stream to be selected for spotlight | ❌ |
+| | Add and remove video stream from spotlight | ✔️ |
+| | Allow video stream to be selected for spotlight | ✔️ |
| | Apply Teams background effects | ❌ | | Recording & transcription | Manage Teams convenient recording | ❌ | | | Receive information of call being recorded | ✔️ |
The following list presents the set of features that are currently available in
| | Receive information of call being transcribed | ✔️ | | | Manage Teams closed captions | ❌ | | | Support for compliance recording | ✔️ |
-| Engagement | Raise and lower hand | ❌ |
-| | Indicate other participants' raised and lowered hands | ❌ |
+| Engagement | Raise and lower hand | ✔️ |
+| | Indicate other participants' raised and lowered hands | ✔️ |
| | Trigger reactions | ❌ | | | Indicate other participants' reactions | ❌ | | Integrations | Control Teams third-party applications | ❌ |
communication-services Meeting Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/teams-user/meeting-capabilities.md
The following list of capabilities is allowed when Teams user participates in Te
| | Remove a participant | ✔️ | | | Manage breakout rooms | ❌ | | | Participation in breakout rooms | ❌ |
-| | Admit participants in the lobby into the Teams meeting | ❌ |
+| | Admit participants in the lobby into the Teams meeting | ✔️ |
| | Be admitted from the lobby into the Teams meeting | ✔️ | | | Promote participant to a presenter or attendee | ❌ | | | Be promoted to presenter or attendee | ✔️ |
The following list of capabilities is allowed when Teams user participates in Te
| | See Large gallery view | ❌ | | | Receive video stream from Teams media bot | ❌ | | | Receive adjusted stream for "content from Camera" | ❌ |
-| | Add and remove video stream from spotlight | ❌ |
-| | Allow video stream to be selected for spotlight | ❌ |
+| | Add and remove video stream from spotlight | ✔️ |
+| | Allow video stream to be selected for spotlight | ✔️ |
| | Apply Teams background effects | ❌ | | Recording & transcription | Manage Teams convenient recording | ❌ | | | Receive information of call being recorded | ✔️ |
The following list of capabilities is allowed when Teams user participates in Te
| | Manage Teams closed captions | ❌ | | | Support for compliance recording | ✔️ | | | [Azure Communication Services recording](../../voice-video-calling/call-recording.md) | ❌ |
-| Engagement | Raise and lower hand | ❌ |
-| | Indicate other participants' raised and lowered hands | ❌ |
+| Engagement | Raise and lower hand | ✔️ |
+| | Indicate other participants' raised and lowered hands | ✔️ |
| | Trigger reactions | ❌ | | | Indicate other participants' reactions | ❌ | | Integrations | Control Teams third-party applications | ❌ |
communication-services Govern Meeting Experience https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/virtual-visits/govern-meeting-experience.md
+
+ Title: Govern Teams meeting experience with Azure Communication Services
+description: Learn how you can design Teams meeting experience for virtual appointments applications.
+++++ Last updated : 04/03/2023++++++
+# Govern meeting experience
+In this article, you learn how to use existing Microsoft 365 tools to control the experience of your virtual visits with Microsoft Teams. You learn the best practices and considerations for individual approaches. Administrators and meeting organizers can control the experience in Teams meetings by the following tools: tenant configuration, tenant policy, assigned role, meeting options, meeting templates, and sensitivity labels. You can learn more about individual tools in [this article](../guest/teams-administration.md).
+
+## Policies, roles & meeting options
+Teams policies, meeting roles, and meeting options are part of standard Microsoft Teams. A meeting organizer and coorganizer can customize the experience via meeting options. Organizations have two options to prevent hosts of virtual appointments from changing the meeting experience:
+1. Control the experience via policy.
+1. Demote the host to the presenter role.
+
+We recommend using a policy to control the Teams meeting experience. Here's how it works:
+The Teams administrator creates a new meeting policy that defines the desired experience in the Teams meeting and assigns it to the selected Teams users who conduct virtual appointments. When a Teams user creates a Teams meeting, the assigned policy restricts, hides, or disables features in the Teams meeting for all participants.
+
+|Pros| Cons|
+|--|--|
+| Teams user can't modify the experience defined by the administrator | The policy affects all meetings organized by the Teams user |
+| If an Azure Communication Services user joins the meeting, Azure Monitor Logs get Call Summary and Call Diagnostics for the organizer. | The host can't relax restrictions for meetings that don't require them. |
+
+Another approach is to use a dedicated user account in the tenant to schedule Teams meetings for virtual appointments. This user is the organizer that shapes the experience via the assigned meeting policy and then customizes it via meeting options. Hosts are invited as presenters and therefore can't control meeting options. This approach significantly reduces the flexibility of the experience and isn't recommended.
+
+|Pros| Cons|
+|--|--|
+|All Teams users can have relaxed policies assigned, because part of the enforcement is done through meeting options | There's a risk that customers with the presenter role can demote the host to an attendee. |
+|Teams user can't modify the experience defined by the administrator | If an Azure Communication Services user joins the meeting, Azure Monitor Logs won't get Call Summary and Call Diagnostics for the host unless the Azure Communication Services resource is in the same tenant. |
+||[Teams user can create only 2000 meetings a month](/graph/throttling-limits#cloud-communication-service-limits), which limits the scalability of the solution. |
+||Impacts analytics and reporting. |
+||The host can't relax restrictions for meetings that don't require them. |
+
+## Meeting templates – Teams Premium
+Teams Premium provides a new way to design the meeting experience. Meeting templates allow developers to configure the experience specifically for virtual appointments, and Teams Premium also provides more features and controls that shape the meeting experience. When a Teams user schedules a virtual appointment, some meeting options can have a preselected default value, some values can be locked, and some can be hidden entirely.
+
+|Pros| Cons|
+|--|--|
+|Teams user can't modify the experience defined by the administrator | The host can't relax restrictions for meetings that don't require them. |
+|If an Azure Communication Services user joins the meeting, Azure Monitor Logs get Call Summary and Call Diagnostics for the organizer.|
+|Provides more meeting options to control the experience. |
+
+## Sensitivity label – Microsoft Purview
+Microsoft Purview allows administrators to protect information with sensitivity labels. Teams meetings can have sensitivity labels assigned via a template or when the meeting is created. If the meeting template permits it, the meeting organizer can change the experience via a sensitivity label during the meeting and, when lowering the requirements, can justify this action. This tool gives the meeting host more flexibility while still complying with the organization's requirements.
+
+|Pros| Cons|
+|--|--|
+|Teams users can't modify the experience defined by the administrators. | |
+|If an Azure Communication Services user joins the meeting, Azure Monitor Logs get Call Summary and Call Diagnostics for the organizer. | |
+|Provides more meeting options to control the experience. | |
+|The host can lower the requirements when the meeting doesn't require strict controls. | |
+
+## Next steps
+- [Overview of virtual appointments](./overview.md)
+- [Build your own virtual appointments](../../../tutorials/virtual-visits/sample-builder.md)
+- [Learn about Teams controls](../guest/teams-administration.md).
+- [Plan user experience in Teams meetings](./plan-user-experience.md)
communication-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/virtual-visits/overview.md
+
+ Title: Overview of virtual appointments with Azure Communication Services
+description: Learn concepts for virtual appointments applications.
+++++ Last updated : 04/03/2023++++++
+# Overview of virtual appointments
+
+Virtual appointments are a communication pattern where a **consumer** and a **business** assemble for a scheduled appointment. The **organizational boundary** between consumer and business, and **scheduled** nature of the interaction, are key attributes of most virtual appointments. Many industries operate virtual appointments: meetings with a healthcare provider, a loan officer, or a product support technician.
+
+## Personas
+
+No matter the industry, there are at least three personas involved in a virtual appointment, each with certain tasks to accomplish:
+- **Office Manager.** The office manager configures the business' availability and booking rules for providers and consumers.
+- **Provider.** The provider gets on the call with the consumer. They must be able to view upcoming virtual appointments and join the virtual appointment and engage in communication.
+- **Consumer.** The consumer schedules and motivates the appointment. They must be able to schedule an appointment, receive reminders of the appointment (typically through SMS or email), and join the virtual appointment and engage in communication.
+
+## Architecture options
+
+Azure and Teams are interoperable. This interoperability gives organizations choice in how they deliver virtual appointments using Microsoft's cloud. Three examples include:
+
+- **Microsoft 365** provides a zero-code suite for virtual appointments using Microsoft [Teams](https://www.microsoft.com/microsoft-teams/group-chat-software/) and [Bookings](https://www.microsoft.com/microsoft-365/business/scheduling-and-booking-app). This is the easiest option but customization is limited. [Check out this video for an introduction.](https://www.youtube.com/watch?v=zqfGrwW2lEw)
+- **Microsoft 365 + Azure hybrid.** Combine Microsoft 365 Teams and Bookings with a custom Azure application for the consumer experience. Organizations take advantage of Microsoft 365's employee familiarity but customize and embed the consumer appointment experience in their own application.
+- **Azure custom.** Build the entire solution on Azure primitives: the business experience, the consumer experience, and scheduling systems.
+
+![Diagram of virtual appointment implementation options.](../../../tutorials/media/virtual-visits/virtual-visit-options.svg)
+
+These three **implementation options** are columns in the table below, while each row provides a **use case** and the **enabling technologies**.
+
+|*Persona* | **Use Case** | **Microsoft 365** | **Microsoft 365 + Azure hybrid** | **Azure Custom** |
+|--||--|||
+| *Manager* | Configure Business Availability | Bookings | Bookings | Custom |
+| *Provider* | Managing upcoming appointments | Outlook & Teams | Outlook & Teams | Custom |
+| *Provider* | Join the appointment | Teams | Teams | ACS Calling & Chat |
+| *Consumer* | Schedule an appointment | Bookings | Bookings | ACS Rooms |
+| *Consumer*| Be reminded of an appointment | Bookings | Bookings | ACS SMS |
+| *Consumer*| Join the appointment | Teams or virtual appointments | ACS Calling & Chat | ACS Calling & Chat |
+
+There are other ways to customize and combine Microsoft tools to deliver a virtual appointments experience:
+- **Replace Bookings with a custom scheduling experience with Graph.** You can build your own consumer-facing scheduling experience that controls Microsoft 365 meetings with Graph APIs (see the sketch after this list).
+- **Replace Teams' provider experience with Azure.** You can still use Microsoft 365 and Bookings to manage meetings but have the business user launch a custom Azure application to join the Teams meeting. This might be useful where you want to split or customize virtual appointment interactions from day-to-day employee Teams activity.
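+
+As a minimal sketch of the custom-scheduling approach (assuming your app already holds a Microsoft Graph access token with the delegated `OnlineMeetings.ReadWrite` permission; the `graphToken` variable is a placeholder), the following JavaScript creates an online meeting that your scheduling experience could surface to the consumer:
+
+```js
+// Create a Teams online meeting through Microsoft Graph for a custom scheduling flow.
+const response = await fetch('https://graph.microsoft.com/v1.0/me/onlineMeetings', {
+  method: 'POST',
+  headers: {
+    'Authorization': `Bearer ${graphToken}`, // assumed: token acquired via MSAL or similar
+    'Content-Type': 'application/json'
+  },
+  body: JSON.stringify({
+    subject: 'Virtual appointment',
+    startDateTime: '2023-05-01T14:00:00Z',
+    endDateTime: '2023-05-01T14:30:00Z'
+  })
+});
+
+const meeting = await response.json();
+console.log(meeting.joinWebUrl); // hand this link to the consumer experience
+```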
+
+## Next steps
+- [Build your own virtual appointments](../../../tutorials/virtual-visits/sample-builder.md)
+- [Learn about Teams controls](../guest/teams-administration.md).
+- [Govern user experience in Teams meetings](./govern-meeting-experience.md)
+- [Plan user experience in Teams meetings](./plan-user-experience.md)
communication-services Plan User Experience https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/virtual-visits/plan-user-experience.md
+
+ Title: Plan user experience for virtual appointments
+
+description: Plan the user experience for virtual appointments with Azure Communication Services and Microsoft Teams.
++ Last updated : 4/3/2023+++++
+# Plan user experience
+You can configure a Microsoft Teams tenant to allow Azure Communication Services users to join Teams meetings scheduled by the organization. In this article, you learn how to optimize the user experience when connecting with Azure Communication Services. Microsoft Teams users might have actions available that aren't supported by the application. Missing support in Communication Services, or a missing implementation in the application, can cause gaps in feature parity. For those cases, we provide best practices for improving the virtual appointment experience.
+
+## Default user experience
+The Teams user that schedules a Teams meeting defines the default experience of the Teams meeting. The Teams user interface shows all available features based on the configuration of the organizer. You can learn the configuration via Teams Admin Center or PowerShell. This information can then be used to customize the experience of the application powered by Azure Communication Services. On the other hand, features not supported by the application can be disabled by Teams policy, meeting template, sensitivity label, or meeting options. Combining these two principles allows you to provide the best user experience for all participants.
+
+How to improve the user experience:
+1. Learn the default configuration of the organizer in the tenant.
+1. Adjust custom applications based on the configuration.
+1. List supported features in your application.
+1. Adjust Teams tenant configurations, tenant policy, meeting templates, sensitivity labels, and meeting options based on the supported list.
+
+You can learn more about Microsoft Teams controls [here](../guest/teams-administration.md).
+
+## Role assignment changes
+Organizers, coorganizers, and presenters can promote and demote participants during the meeting. This role change can lead to the loss or gain of new functionality. Developers can subscribe to the `roleChanged` event of the `Call` object to update the user interface based on the role. Developers can find an assigned role in the property `role` of the object `Call`. You can learn more details [here](./../../../how-tos/calling-sdk/manage-role-assignment.md). You can find available actions for individual roles [here](https://support.microsoft.com/office/roles-in-a-teams-meeting-c16fa7d0-1666-4dde-8686-0a0bfe16e019).
+
+## Teams meeting option changes
+The meeting organizer can change Teams meeting options before and during the meeting. Developers can read meeting options before the meeting starts with [Graph API for `onlineMeeting` resource](/graph/api/onlinemeeting-get). As developers currently don't have a way to read changes during the meeting, we recommend limiting the changes during the meeting to ensure the best user experience.
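+
+As a hedged sketch (assuming a Graph access token with the delegated `OnlineMeetings.Read` permission obtained for the organizer, and the meeting's join URL; `graphToken` and `joinWebUrl` are placeholders), you could read a few of the organizer's meeting options like this:
+
+```js
+// Look up the onlineMeeting by its join URL and inspect a few meeting options.
+const filter = encodeURIComponent(`JoinWebUrl eq '${joinWebUrl}'`);
+const response = await fetch(`https://graph.microsoft.com/v1.0/me/onlineMeetings?$filter=${filter}`, {
+  headers: { 'Authorization': `Bearer ${graphToken}` }
+});
+const meeting = (await response.json()).value[0];
+
+// Use these values to decide which controls to show in your custom UI.
+console.log(meeting.allowedPresenters);          // for example, "everyone" or "organizer"
+console.log(meeting.lobbyBypassSettings.scope);  // who can bypass the lobby
+```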
+
+## Next steps
+- [Read about onlineMeeting Graph API](/graph/api/onlinemeeting-get)
+- [Learn about Teams controls](../guest/teams-administration.md).
+- [Govern user experience in Teams meetings](./govern-meeting-experience.md)
communication-services Manage Role Assignment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/manage-role-assignment.md
+
+ Title: Manage role assignments
+
+description: Use Azure Communication Services SDKs to track assigned Teams meeting role.
+++++ Last updated : 04/03/2023++
+# How to manage Teams meeting role
+
+In this article, you learn how users who joined a Teams meeting or a Room can determine their currently assigned role and handle role changes.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- A deployed Communication Services resource. [Create a Communication Services resource](../../quickstarts/create-communication-resource.md).
+- A user access token to enable the calling client. For more information, see [Create and manage access tokens](../../quickstarts/identity/access-tokens.md).
+- Optional: Complete the quickstart to [add voice calling to your application](../../quickstarts/voice-video-calling/getting-started-with-calling.md)
+
+## Join Teams meeting
+In the following code, you learn how to create `CallClient` and `CallAgent`, which are necessary for the next steps. Then we join the Teams meeting, which creates a `Call` instance.
+
+```js
+const { CallClient } = require('@azure/communication-calling');
+const { AzureCommunicationTokenCredential} = require('@azure/communication-common');
+
+const userToken = '<USER_TOKEN>';
+const callClient = new CallClient();
+const tokenCredential = new AzureCommunicationTokenCredential(userToken);
+const callAgent = await callClient.createCallAgent(tokenCredential);
+const deviceManager = await callClient.getDeviceManager();
+
+const meetingCall = callAgent.join({ meetingLink: '<MEETING_LINK>' });
+```
+
+## Learn the current role
+
+You create a `Call` instance when you join the Teams meeting or Room with the Calling SDK. This object has a property `role` that can have one of the following values:
+- Unknown
+- Attendee
+- Presenter
+- Organizer
+- Consumer
+
+```js
+const role = meetingCall.role;
+```
+
+## Subscribe to role changes
+
+During the Teams meeting or Room, your role can be changed. To learn about the change, subscribe to an event, `roleChanged`, on the `Call` object.
+
+```js
+meetingCall.on('roleChanged', () => {
+    const updatedRole = meetingCall.role;
+    // Update UI based on updatedRole
+});
+```
+
+## Next steps
+- [Learn how to manage calls](./manage-calls.md)
+- [Learn how to manage video](./manage-video.md)
+- [Learn how to record calls](./record-calls.md)
+- [Learn how to transcribe calls](./call-transcription.md)
communication-services Sample Builder https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/virtual-visits/sample-builder.md
+
+ Title: Build Virtual appointments with Azure Communication Services
+description: Build your own virtual appointments application with Azure Communication Services
+++++ Last updated : 04/03/2023++++++
+# Sample builder
+
+This tutorial describes concepts for virtual appointment applications. After completing this tutorial and the associated [Sample Builder](https://aka.ms/acs-sample-builder), you'll understand the common use cases that a virtual appointments application delivers and the Microsoft technologies that can help you build them, and you'll have built a sample application integrating Microsoft 365 and Azure that you can use to demo and explore further. You can learn more concepts about virtual appointments in the [overview](../../concepts/interop/virtual-visits/overview.md).
+
+This tutorial focuses on Microsoft 365 and Azure hybrid solutions. These hybrid configurations are popular because they combine employee familiarity with Microsoft 365 with the ability to customize the consumer experience. They're also a good launching point for understanding more complex and customized architectures. The following diagram shows the user steps for a virtual appointment:
+
+![High-level architecture of a hybrid virtual appointments solution.](../media/virtual-visits/virtual-visit-arch.svg)
+1. Consumer schedules the appointment using Microsoft 365 Bookings.
+2. Consumer gets an appointment reminder through SMS and Email.
+3. Provider joins the appointment using Microsoft Teams.
+4. Consumer uses a link from the Bookings reminders to launch the Contoso consumer app and join the underlying Teams meeting.
+5. The users communicate with each other using voice, video, and text chat in a meeting.
+
+## Building a virtual appointment sample
+In this section, we're going to use a Sample Builder tool to deploy a Microsoft 365 + Azure hybrid virtual appointments application to an Azure subscription. This application is a desktop and mobile-friendly browser experience, with code that you can use to explore and make the final product.
+
+### Step 1 - Configure bookings
+
+This sample uses the Microsoft 365 Bookings app to power the consumer scheduling experience and create meetings for providers. Thus the first step is creating a Bookings calendar and getting the Booking page URL from https://outlook.office.com/bookings/calendar.
+
+![Screenshot of Booking configuration experience.](../media/virtual-visits/bookings-url.png)
+
+Make sure online meeting is enabled for the calendar by going to https://outlook.office.com/bookings/services.
+
+![Screenshot of Booking services configuration experience.](../media/virtual-visits/bookings-services.png)
+
+And then, make sure "Add online meeting" is enabled.
+
+![Screenshot of Booking services online meeting configuration experience.](../media/virtual-visits/bookings-services-online-meeting.png)
++
+### Step 2 – Sample Builder
+Use the Sample Builder to customize the consumer experience. You can reach the Sample Builder using this [link](https://aka.ms/acs-sample-builder) or by navigating to the page within the Azure Communication Services resource in the Azure portal. Step through the Sample Builder wizard:
+1. Select the Industry template.
+1. Configure the call experience (Chat or Screen Sharing availability).
+1. Change themes and text to match your application style and get valuable feedback through post-call survey options.
+
+You can preview your configuration live from the page in both Desktop and Mobile browser form factors.
+
+[ ![Screenshot of Sample builder start page.](../media/virtual-visits/sample-builder-themes.png)](../media/virtual-visits/sample-builder-themes.png#lightbox)
++
+### Step 3 - Deploy
+At the end of the Sample Builder wizard, you can **Deploy to Azure** or download the code as a zip. The sample builder code is publicly available on [GitHub](https://github.com/Azure-Samples/communication-services-virtual-visits-js).
+
+[ ![Screenshot of Sample builder deployment page.](../media/virtual-visits/sample-builder-landing.png)](../media/virtual-visits/sample-builder-landing.png#lightbox)
+
+The deployment launches an Azure Resource Manager (ARM) template that deploys the themed application you configured.
+
+![Screenshot of Sample builder arm template.](../media/virtual-visits/sample-builder-arm.png)
+
+After walking through the ARM template, you can **Go to resource group**.
+
+![Screenshot of a completed Azure Resource Manager Template.](../media/virtual-visits/azure-complete-deployment.png)
+
+### Step 4 - Test
+The Sample Builder creates three resources in the selected Azure subscriptions. The **App Service** is the consumer front end, powered by Azure Communication Services.
+
+![Screenshot of produced azure resources in azure portal.](../media/virtual-visits/azure-resources.png)
+
+Opening the App Service's URL and navigating to `https://<YOUR URL>/VISIT` allows you to try out the consumer experience and join a Teams meeting. `https://<YOUR URL>/BOOK` embeds the Booking experience for consumer schedule.
+
+![Screenshot of final view of azure app service.](../media/virtual-visits/azure-resource-final.png)
+
+### Step 5 - Set deployed app URL in Bookings
+
+Enter the application URL followed by "/visit" in the "Deployed App URL" field at https://outlook.office.com/bookings/businessinformation.
+
+## Going to production
+The Sample Builder gives you the basics of a Microsoft 365 and Azure virtual appointment: consumer scheduling via Bookings, consumer joining via a custom app, and the provider joining via Teams. However, there are several things to consider as you take this scenario to production.
+
+### Launching patterns
+Consumers want to jump directly to the virtual appointment from the scheduling reminders they receive from Bookings. In Bookings, you can provide a URL prefix that is used in reminders. If your prefix is `https://<YOUR URL>/VISIT`, Bookings will point users to `https://<YOUR URL>/VISIT?MEETINGURL=<MEETING URL>`.
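+
+For illustration, a consumer page could read that query parameter and join the underlying Teams meeting with the Calling SDK (a sketch that assumes a `callAgent` already created as in the calling quickstart):
+
+```js
+// Pull the Teams meeting link that Bookings appended to the reminder URL and join it.
+const params = new URLSearchParams(window.location.search);
+const meetingUrl = params.get('MEETINGURL');
+
+if (meetingUrl) {
+  const call = callAgent.join({ meetingLink: meetingUrl });
+}
+```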
+
+### Integrate into your existing app
+The app service generated by the Sample Builder is a stand-alone artifact designed for desktop and mobile browsers. However, you may already have a website or mobile application and need to migrate these experiences to the existing codebase. The code generated by the Sample Builder should help, but you can also use the following:
+- **UI SDKs –** [Production Ready Web and Mobile](../../concepts/ui-library/ui-library-overview.md) components to build graphical applications.
+- **Core SDKs –** The underlying [Call](../../quickstarts/voice-video-calling/get-started-teams-interop.md) and [Chat](../../quickstarts/chat/meeting-interop.md) services can be accessed, and you can build any kind of user experience.
+
+### Identity & security
+The Sample Builder's consumer experience doesn't authenticate the end user but provides [Azure Communication Services user access tokens](../../quickstarts/identity/access-tokens.md) to any random visitor. In most scenarios you want to implement an authentication scheme.
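+
+As a minimal sketch of a hardened token endpoint (assuming a Node.js backend with an Express `app` and your own `requireAuthentication` middleware, both hypothetical names; the `ACS_CONNECTION_STRING` environment variable is also a placeholder), tokens are minted only after the visitor is authenticated:
+
+```js
+// Issue Azure Communication Services access tokens only to authenticated users.
+const { CommunicationIdentityClient } = require('@azure/communication-identity');
+
+const identityClient = new CommunicationIdentityClient(process.env.ACS_CONNECTION_STRING);
+
+app.get('/api/token', requireAuthentication, async (req, res) => {
+  // Create a Communication Services identity and a short-lived VoIP token for it.
+  const { user, token, expiresOn } = await identityClient.createUserAndToken(['voip']);
+  res.json({ userId: user.communicationUserId, token, expiresOn });
+});
+```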
+
+## Next steps
+- [Overview of virtual appointments](../../concepts/interop/virtual-visits/overview.md)
+- [Learn about Teams controls](../../concepts/interop/guest/teams-administration.md).
+- [Govern user experience in Teams meetings](../../concepts/interop/virtual-visits/govern-meeting-experience.md)
+- [Plan user experience in Teams meetings](../../concepts/interop/virtual-visits/plan-user-experience.md)
communications-gateway Request Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/request-changes.md
If you notice problems with Azure Communications Gateway or you need Microsoft t
Azure provides unlimited support for subscription management, which includes billing, quota adjustments, and account transfers. For technical support, you need a support plan, such as [Microsoft Unified Support](https://www.microsoft.com/en-us/unifiedsupport/overview) or [Premier Support](https://www.microsoft.com/en-us/unifiedsupport/premier).
-## Pre-requisites
+## Prerequisites
+
+Perform initial troubleshooting to help determine if you should raise an issue with Azure Communications Gateway or a different component. We provide some examples where you should raise an issue with Azure Communications Gateway. Raising issues for the correct component helps resolve your issues faster.
+
+Raise an issue with Azure Communications Gateway if you experience an issue with:
+- SIP and RTP exchanged by Azure Communications Gateway and your network.
+- Your Azure bill relating to Azure Communications Gateway.
+- The API Bridge, including the API Bridge Number Management Portal.
You must have an **Owner**, **Contributor**, or **Support Request Contributor** role in your Azure Communications Gateway subscription, or a custom role with [Microsoft.Support/*](../role-based-access-control/resource-provider-operations.md#microsoftsupport) at the subscription level.
You must have an **Owner**, **Contributor**, or **Support Request Contributor**
1. A new **Service** option will appear giving you the option to select either **My services** or **All services**. Select **My services**. 1. In **Service type** select **Azure Communications Gateway** from the drop-down menu. 1. A new **Problem type** option will appear. Select the problem type that most accurately describes your issue from the drop-down menu.
-1. A new **Problem subtype** option will appear. Select the problem subtype that most accurately describes your issue from the drop-down menu.
+ * Select **API Bridge Issue** if your API Bridge Number Management Portal is returning errors when you try to gain access or carry out actions.
+ * Select **Configuration and Setup** if you experience issues during initial provisioning and onboarding, or if you want to change configuration for an existing deployment.
+ * Select **Monitoring** for issues with metrics and logs.
+ * Select **Voice Call Issue** if calls aren't connecting, have poor quality, or show unexpected behavior.
+ * Select **Other issue or question** if your issue or question doesn't apply to any of the other problem types.
+1. A new **Problem subtype** option will appear. Select the problem subtype that most accurately describes your issue from the drop-down menu. If the problem type you selected only has one subtype, the subtype is automatically selected.
1. Select **Next**. ## 3. Assess the recommended solutions Based on the information you provided, we might show you recommended solutions you can use to try to resolve the problem. In some cases, we might even run a quick diagnostic. Solutions are written by Azure engineers and will solve most common problems.
-If you're still unable to resolve the issue, continue creating your support request by selecting **Next**.
+If you're still unable to resolve the issue, continue creating your support request by selecting **Return to support request** then selecting **Next**.
## 4. Enter additional details
container-apps Health Probes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/health-probes.md
Container Apps support the following probes:
For a full listing of the specification supported in Azure Container Apps, refer to [Azure REST API specs](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/app/resource-manager/Microsoft.App/stable/2022-03-01/CommonDefinitions.json#L119-L236).
+> [!NOTE]
+> TCP startup probes are not supported for Consumption workload profiles in the [Consumption + Dedicated plan structure](./plans.md#consumption-dedicated).
+ ## HTTP probes HTTP probes allow you to implement custom logic to check the status of application dependencies before reporting a healthy status. Configure your health probe endpoints to respond with an HTTP status code greater than or equal to `200` and less than `400` to indicate success. Any other response code outside this range indicates a failure.
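To sanity check that an endpoint falls in this success range before you wire it into a probe, you can query it directly. The following is a minimal sketch, assuming a hypothetical `/healthz` endpoint listening on port 8080 inside the container:
```bash
# Query the health endpoint and capture only the HTTP status code.
STATUS=$(curl -s -o /dev/null -w "%{http_code}" http://localhost:8080/healthz)

# Container Apps treats 200-399 as success; anything else is reported as a failure.
if [ "$STATUS" -ge 200 ] && [ "$STATUS" -lt 400 ]; then
  echo "Probe endpoint healthy (HTTP $STATUS)"
else
  echo "Probe endpoint unhealthy (HTTP $STATUS)"
fi
```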
container-apps Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/networking.md
HTTP applications scale based on the number of HTTP requests and connections. En
Under the [ingress](azure-resource-manager-api-spec.md#propertiesconfiguration) section, you can configure the following settings: -- **Accessibility level**: You can set your container app as externally or internally accessible in the environment. An environment variable `CONTAINER_APP_ENV_DNS_SUFFIX` is used to automatically resolve the FQDN suffix for your environment.
+- **Accessibility level**: You can set your container app as externally or internally accessible in the environment. An environment variable `CONTAINER_APP_ENV_DNS_SUFFIX` is used to automatically resolve the FQDN suffix for your environment. When communicating between Container Apps within the same environment, you may also use the app name. For more information on how to access your apps, see [ingress](./ingress-overview.md#domain-names). A short example of both addressing styles follows this list.
- **Traffic split rules**: You can define traffic splitting rules between different revisions of your application. For more information, see [Traffic splitting](traffic-splitting.md).
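The following is a minimal sketch of both addressing styles from inside a container in the same environment. The peer app name `orders` and the `/api/status` path are hypothetical, and the first form assumes the peer app exposes ingress through the environment's default domain:
```bash
# Reach a peer app through the environment's default domain.
# CONTAINER_APP_ENV_DNS_SUFFIX is injected into each container automatically.
curl "https://orders.${CONTAINER_APP_ENV_DNS_SUFFIX}/api/status"

# For traffic that stays inside the same environment, the app name alone also resolves.
curl "http://orders/api/status"
```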
The second URL grants access to the log streaming service and the console. If ne
## Ports and IP addresses
->[!NOTE]
-> The subnet associated with a Container App Environment requires a CIDR prefix of `/23` or larger.
- The following ports are exposed for inbound connections. | Use | Port(s) |
The static IP address of the Container Apps environment can be found in the Azur
## Managed resources
-When you deploy an internal or an external environment into your own network, a new resource group prefixed with `MC_` is created in the Azure subscription where your environment is hosted. This resource group contains infrastructure components managed by the Azure Container Apps platform, and shouldn't be modified. The resource group contains Public IP addresses used specifically for outbound connectivity from your environment and a load balancer. The resource group name can be configured during container app environment creation. In addition to the [Azure Container Apps billing](./billing.md), you're billed for:
+When you deploy an internal or an external environment into your own network, a new resource group is created in the Azure subscription where your environment is hosted. This resource group contains infrastructure components managed by the Azure Container Apps platform, and it shouldn't be modified.
-- Two standard static [public IPs](https://azure.microsoft.com/pricing/details/ip-addresses/), one for ingress and one for egress. If you need more IPs for egress due to SNAT issues, [open a support ticket to request an override](https://azure.microsoft.com/support/create-ticket/).
+#### Consumption only architecture
+The name of the resource group created in the Azure subscription where your environment is hosted is prefixed with `MC_` by default, and the resource group name *cannot* be customized during container app environment creation. The resource group contains Public IP addresses used specifically for outbound connectivity from your environment and a load balancer.
+In addition to the [Azure Container Apps billing](./billing.md), you're billed for:
+- Two standard static [public IPs](https://azure.microsoft.com/pricing/details/ip-addresses/), one for ingress and one for egress. If you need more IPs for egress due to SNAT issues, [open a support ticket to request an override](https://azure.microsoft.com/support/create-ticket/).
- Two standard [Load Balancers](https://azure.microsoft.com/pricing/details/load-balancer/) if using an internal environment, or one standard [Load Balancer](https://azure.microsoft.com/pricing/details/load-balancer/) if using an external environment. Each load balancer has fewer than six rules. The cost of data processed (GB) includes both ingress and egress for management operations.
+#### Workload profiles architecture
+The name of the resource group created in the Azure subscription where your environment is hosted is prefixed with `me_` by default, and the resource group name *can* be customized during container app environment creation. For external environments, the resource group contains a public IP address used specifically for inbound connectivity to your external environment and a load balancer. For internal environments, the resource group only contains a Load Balancer.
+
+In addition to the [Azure Container Apps billing](./billing.md), you're billed for:
+- One standard static [public IP](https://azure.microsoft.com/pricing/details/ip-addresses/) for ingress in external environments and one standard [Load Balancer](https://azure.microsoft.com/pricing/details/load-balancer/).
+- The cost of data processed (GB) includes both ingress and egress for management operations.
## Next steps
container-apps Workload Profiles Manage Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/workload-profiles-manage-cli.md
Use the following commands to create an environment with a workload profile.
--ingress external \ --image mcr.microsoft.com/azuredocs/containerapps-helloworld:latest \ --environment "<ENVIRONMENT_NAME>" \
- --workload-profile-name "consumption"
+ --workload-profile-name "Consumption"
``` This command deploys the application to the built-in Consumption workload profile. If you want to create an app in a dedicated workload profile, you first need to [add the profile to the environment](#add-profiles).
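As a rough sketch of the dedicated case, assuming a profile named `bigcompute` has already been added to the environment (the app name, resource group, and profile name here are placeholders):
```bash
# Deploy the same sample image into a dedicated workload profile
# that was previously added to the environment.
az containerapp create \
  --name my-dedicated-app \
  --resource-group my-resource-group \
  --environment "<ENVIRONMENT_NAME>" \
  --image mcr.microsoft.com/azuredocs/containerapps-helloworld:latest \
  --ingress external \
  --target-port 80 \
  --workload-profile-name "bigcompute"
```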
cosmos-db Migrate Dotnet V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/migrate-dotnet-v3.md
Previously updated : 11/09/2022 Last updated : 04/04/2023 ms.devlang: csharp
Some settings in `ConnectionPolicy` have been renamed or replaced by `CosmosClie
|`SetCurrentLocation`|`CosmosClientOptions.ApplicationRegion` can be used to achieve the same effect.| |`PreferredLocations`|`CosmosClientOptions.ApplicationPreferredRegions` can be used to achieve the same effect.| |`UserAgentSuffix`|`CosmosClientBuilder.ApplicationName` can be used to achieve the same effect.|
+|`UseMultipleWriteLocations`|Removed. The SDK automatically detects if the account supports multiple write endpoints.|
### Indexing policy
cosmos-db Synapse Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/synapse-link.md
Azure Synapse Link isn't recommended if you're looking for traditional data ware
* Currently Azure Synapse Workspaces don't support linked services using `Managed Identity`. Always use the `MasterKey` option.
+* Currently, multi-region write accounts aren't recommended for production environments.
+++ ## Security Azure Synapse Link enables you to run near real-time analytics over your mission-critical data in Azure Cosmos DB. It's vital to make sure that critical business data is stored securely across both transactional and analytical stores. Azure Synapse Link for Azure Cosmos DB is designed to help meet these security requirements through the following features:
cost-management-billing Migrate Ea Price Sheet Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/automate/migrate-ea-price-sheet-api.md
description: This article has information to help you migrate from the EA Price Sheet API. Previously updated : 01/24/2022 Last updated : 04/05/2023
cost-management-billing Analyze Cost Data Azure Cost Management Power Bi Template App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/analyze-cost-data-azure-cost-management-power-bi-template-app.md
Title: Analyze Azure costs with the Power BI App
description: This article explains how to install and use the Cost Management Power BI App. Previously updated : 04/08/2022 Last updated : 04/05/2023
cost-management-billing Aws Integration Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/aws-integration-manage.md
description: This article helps you understand how to use cost analysis and budgets in Cost Management to manage your AWS costs and usage. Previously updated : 09/15/2021 Last updated : 04/05/2023 -+
cost-management-billing Aws Integration Set Up Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/aws-integration-set-up-configure.md
description: This article walks you through setting up and configuring AWS Cost and Usage report integration with Cost Management. Previously updated : 04/28/2022 Last updated : 04/05/2023
cost-management-billing Cost Analysis Common Uses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/cost-analysis-common-uses.md
description: This article explains how you can get results for common cost analysis tasks in Cost Management. Previously updated : 12/20/2021 Last updated : 04/05/2023
cost-management-billing Cost Mgt Alerts Monitor Usage Spending https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/cost-mgt-alerts-monitor-usage-spending.md
description: This article describes how cost alerts help you monitor usage and spending in Cost Management. Previously updated : 06/07/2022 Last updated : 04/05/2023
cost-management-billing Export Cost Data Storage Account Sas Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/export-cost-data-storage-account-sas-key.md
Title: Export cost data with an Azure Storage account SAS key
description: This article helps partners create a SAS key and configure Cost Management exports. Previously updated : 03/22/2022 Last updated : 04/05/2023
cost-management-billing Get Started Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/get-started-partners.md
description: This article explains how partners use Cost Management features and how they enable access for their customers. Previously updated : 11/10/2021 Last updated : 04/05/2023
cost-management-billing Quick Create Budget Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/quick-create-budget-template.md
tags: azure-resource-manager
Previously updated : 01/07/2022 Last updated : 04/05/2023
cost-management-billing Account Admin Tasks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/account-admin-tasks.md
tags: billing
Previously updated : 12/10/2021 Last updated : 04/05/2023 - # Account Administrator tasks in the Azure portal
cost-management-billing Avoid Charges Free Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/avoid-charges-free-account.md
tags: billing
Previously updated : 03/22/2022 Last updated : 04/05/2023
cost-management-billing Azurestudents Subscription Disabled https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/azurestudents-subscription-disabled.md
tags: billing
Previously updated : 09/15/2021 Last updated : 04/05/2023
cost-management-billing Change Azure Account Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/change-azure-account-profile.md
tags: billing
Previously updated : 03/22/2022 Last updated : 03/04/2023 - # Change contact information for an Azure billing account
cost-management-billing Create Subscription Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/create-subscription-request.md
Previously updated : 05/25/2022 Last updated : 04/05/2023
cost-management-billing Direct Ea Administration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/direct-ea-administration.md
Title: EA Billing administration on the Azure portal
description: This article explains the common tasks that an enterprise administrator accomplishes in the Azure portal. Previously updated : 02/27/2023 Last updated : 04/06/2023
This article explains the common tasks that an Enterprise Agreement (EA) adminis
## Manage your enrollment
-To manage your service, the initial enterprise administrator opens the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_GTM/ModernBillingMenuBlade/AllBillingScopes) and signs in using the email address from the invitation email.
+To start managing the EA enrollment, the initial enterprise administrator signs in to the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_GTM/ModernBillingMenuBlade/AllBillingScopes) using the account where they received the invitation email.
-If you've been set up as the enterprise administrator, then go to the Azure portal and sign in with your work, school, or Microsoft account email address and password.
+If you've been set up as the enterprise administrator, then go to the Azure portal and sign in with your work, school, or Microsoft account.
If you have more than one billing account, select a billing account from billing scope menu. You can view your billing account properties and policy from the left menu.
Only existing EA admins can create other enterprise administrators. Use one of t
:::image type="content" source="./media/direct-ea-administration/add-enterprise-admin-navigate.png" alt-text="Screenshot showing navigation to Enterprise administrator." lightbox="./media/direct-ea-administration/add-enterprise-admin-navigate.png" ::: 1. Complete the Add role assignment form and then select **Add**.
-Make sure that you have the user's email address and preferred authentication method handy, such as a work, school, or Microsoft account.
+Make sure that you have the user's account details and preferred authentication method handy, such as a work, school, or Microsoft account.
An EA admin can manage access for existing enterprise administrators by selecting the ellipsis (**…**) symbol to right of each user. They can **Edit** and **Delete** existing users.
If you're not an EA admin, contact your EA admin to request that they add you to
If your enterprise administrator can't assist you, create an [Azure support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest). Provide the following information: - Enrollment number-- Email address to add, and authentication type (work, school, or Microsoft account)-- Email approval from an existing enterprise administrator
+- Account details and authentication type (work, school, or Microsoft account)
+- Approval from an existing enterprise administrator
>[!NOTE] > - We recommend that you have at least one active Enterprise Administrator at all times. If no active Enterprise Administrator is available, contact your partner to change the contact information on the Volume License agreement. Your partner can make changes to the customer contact information by using the Contact Information Change Request (CICR) process available in the eAgreements (VLCM) tool.
As an enterprise administrator:
1. Select the department where you want to add an administrator. 1. In the department view, select **Access Control (IAM)**. 1. Select **+ Add**, then select **Department administrator**.
-1. Enter the email address and other required information.
+1. Enter the account details and other required information.
1. For read-only access, set the **Read-Only** option to **Yes**, and then select **Add**. :::image type="content" source="./media/direct-ea-administration/add-department-admin.png" alt-text="Screenshot showing navigation to Department administrator." lightbox="./media/direct-ea-administration/add-department-admin.png" :::
Check out the [EA admin manage accounts](https://www.youtube.com/watch?v=VKWAEx6
After the account owner receives an account ownership email, they need to confirm their ownership.
-1. Sign in to the email account associated with the work, school, or Microsoft account that was set as the account owner.
-1. Open the email notification titled _Invitation to Activate your Account on the Microsoft Azure Service_.
-1. Select the **Activate Account** link in the invitation.
+1. The account owner receives an email notification titled _Invitation to Activate your Account on the Microsoft Azure Service_. Select the **Activate Account** link in the invitation.
1. Sign in to the Azure portal. 1. On the Activate Account page, select **Yes, I wish to continue** to confirm account ownership. :::image type="content" source="./media/direct-ea-administration/activate-account.png" alt-text="Screenshot showing the Activate Account page." lightbox="./media/direct-ea-administration/activate-account.png" ::: After account ownership is confirmed, you can create subscriptions and purchase resources with the subscriptions.
-### To activate an enrollment account with a .onmicrosoft.com email account
+### To activate an enrollment account with a .onmicrosoft.com account
-If you're a new EA account owner with a .onmicrosoft.com email account, you might not have a forwarding email address by default. In that situation, you might not receive the activation email. If this situation applies to you, use the following steps to activate your account ownership.
+If you're a new EA account owner with a .onmicrosoft.com account, you might not have a forwarding email address by default. In that situation, you might not receive the activation email. If this situation applies to you, use the following steps to activate your account ownership.
1. Sign into the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_GTM/ModernBillingMenuBlade/AllBillingScopes). 1. Navigate to **Cost Management + Billing** and select a billing scope.
If you're a new EA account owner with a .onmicrosoft.com email account, you migh
1. In the left menu under **Settings**, select **Activate Account**. 1. On the Activate Account page, select **Yes, I wish to continue** and the select **Activate this account**. :::image type="content" source="./media/direct-ea-administration/activate-account.png" alt-text="Screenshot showing the Activate Account page for onmicrosoft.com accounts." lightbox="./media/direct-ea-administration/activate-account.png" :::
-1. After the activation process completes, copy and paste the following link to your browser. The page will open and create a subscription that's associated with your enrollment.
+1. After the activation process completes, copy and paste the following link to your browser. The page opens and creates a subscription that's associated with your enrollment.
`https://signup.azure.com/signup?offer=MS-AZR-0017P&appId=IbizaCatalogBlade` ## Change Azure subscription or account ownership
It might take up to eight hours for the account to appear in the Azure portal.
### To confirm account ownership
-1. Sign into the email account associated with the work, school, or Microsoft account that you associated in the previous procedure.
-1. Open the email notification titled _Invitation to Activate your Account on the Microsoft Azure Service_.
+1. After you complete the previous steps, the Account Owner receives an email notification titled _Invitation to Activate your Account on the Microsoft Azure Service_.
1. Select the **Activate account** link in the invitation. 1. Sign in to the Azure portal. 1. On the Activate Account page, select **Yes, I wish to continue** to confirm account ownership.
When you transfer an Azure in Open subscription to an Enterprise Agreement, you
## Subscription transfers with support plans
-If your Enterprise Agreement doesn't have a support plan and try to transfer an existing Microsoft Online Support Agreement (MOSA) subscription that has a support plan, the subscription won't automatically transfer. You'll need to repurchase a support plan for your EA enrollment during the grace period, which is by the end of the following month.
+If your Enterprise Agreement doesn't have a support plan and you try to transfer an existing Microsoft Online Support Agreement (MOSA) subscription that has a support plan, the subscription doesn't automatically transfer. You need to repurchase a support plan for your EA enrollment during the grace period, which is by the end of the following month.
## Manage department and account spending with budgets
As an EA admin, you can allow account owners in your organization to create subs
:::image type="content" source="./media/direct-ea-administration/dev-test-option.png" alt-text="Screenshot showing navigation to the Dev/Test option." lightbox="./media/direct-ea-administration/dev-test-option.png" :::
-When a user is added as an account owner, any Azure subscriptions associated with the user that are based on either the pay-as-you-go Dev/Test offer or the monthly credit offers for Visual Studio subscribers get converted to the EA Dev/Test offer. Subscriptions based on other offer types, such as pay-as-you-go, that are associated with the account owner get converted to Microsoft Azure Enterprise offers.
+When a user is added as an account owner, any Azure subscriptions associated with the user that are based on either the pay-as-you-go Dev/Test offer or the monthly credit offers for Visual Studio subscribers get converted to the EA Dev/Test offer. Subscriptions based on other offer types, such as pay-as-you-go, associated with the account owner get converted to Microsoft Azure Enterprise offers.
Currently, the Dev/Test Offer isn't applicable to Azure Gov customers.
By default, notification contacts are subscribed for the coverage period end dat
## Azure sponsorship offer
-The Azure sponsorship offer is a limited sponsored Microsoft Azure account. It's available by e-mail invitation only to limited customers selected by Microsoft. If you're entitled to the Microsoft Azure sponsorship offer, you'll receive an e-mail invitation to your account ID.
+The Azure sponsorship offer is a limited sponsored Microsoft Azure account. It's available by e-mail invitation only to limited customers selected by Microsoft. If you're entitled to the Microsoft Azure sponsorship offer, you receive an e-mail invitation to your account ID.
If you need assistance, create a [support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview) in the Azure portal.
An enrollment in the Enterprise Agreement program providing Microsoft products i
A web-based service that enables participating sites to authenticate a user with a single set of credentials. **Microsoft Azure Enterprise Enrollment Amendment (enrollment amendment)**<br>
-An amendment signed by an enterprise, which provides them access to Azure as part of their enterprise enrollment.
+An amendment signed by an enterprise, which provides them with access to Azure as part of their enterprise enrollment.
**Resource quantity consumed**<br> The quantity of an individual Azure service that was used in a month.
For organizations that have set up Azure Active Directory with federation to the
## Enrollment status **New**<br>
-This status is assigned to an enrollment that was created within 24 hours and will be updated to a Pending status within 24 hours.
+This status is assigned to an enrollment that was created within 24 hours and is updated to a Pending status within 24 hours.
**Pending**<br>
-The enrollment administrator needs to sign in to the Azure portal. Once signed in, the enrollment will switch to an Active status.
+The enrollment administrator needs to sign in to the Azure portal. After they sign in, the enrollment switches to an Active status.
**Active**<br>
-The enrollment is Active and accounts and subscriptions can be created in the Azure portal. The enrollment will remain active until the Enterprise Agreement end date.
+The enrollment is Active and accounts and subscriptions can be created in the Azure portal. The enrollment remains active until the Enterprise Agreement end date.
**Indefinite extended term**<br> An indefinite extended term takes place after the Enterprise Agreement end date has passed. It enables Azure EA customers who are opted in to the extended term to continue to use Azure services indefinitely at the end of their Enterprise Agreement.
Before the Azure EA enrollment reaches the Enterprise Agreement end date, the en
- Confirm disablement of all services associated with the enrollment. **Expired**<br>
-The Azure EA customer is opted out of the extended term, and the Azure EA enrollment has reached the Enterprise Agreement end date. The enrollment will expire, and all associated services will be disabled.
+The Azure EA customer is opted out of the extended term, and the Azure EA enrollment has reached the Enterprise Agreement end date. The enrollment expires, and all associated services are disabled.
**Transferred**<br> Enrollments where all associated accounts and services have been transferred to a new enrollment appear with a transferred status.
cost-management-billing Direct Ea Billing Invoice Documents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/direct-ea-billing-invoice-documents.md
tags: billing
Previously updated : 11/14/2021 Last updated : 04/05/2023
cost-management-billing Ea Partner Portal Administration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/ea-partner-portal-administration.md
Title: Azure EA portal administration for partners description: Describes portal administration topics pertaining to Partners -+ Previously updated : 09/15/2021 Last updated : 04/05/2023
cost-management-billing Elevate Access Global Admin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/elevate-access-global-admin.md
tags: billing
Previously updated : 5/18/2022 Last updated : 04/05/2023
cost-management-billing Grant Access To Create Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/grant-access-to-create-subscription.md
Previously updated : 06/01/2022 Last updated : 04/05/2023
cost-management-billing How To Create Azure Support Request Ea https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/how-to-create-azure-support-request-ea.md
Title: How to create an Azure support request for an Enterprise Agreement issue description: Enterprise Agreement customers who need assistance can use the Azure portal to find self-service solutions and to create and manage support requests. Previously updated : 02/03/2022 Last updated : 04/05/2023
cost-management-billing Manage Billing Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/manage-billing-access.md
tags: billing
Previously updated : 03/22/2022 Last updated : 04/05/2023 - # Manage access to billing information for Azure
cost-management-billing Manage Tax Information https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/manage-tax-information.md
tags: billing
Previously updated : 05/23/2022 Last updated : 04/05/2023
cost-management-billing Mca Enterprise Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/mca-enterprise-operations.md
tags: billing
Previously updated : 09/15/2021 Last updated : 04/05/2023
cost-management-billing Mca Section Invoice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/mca-section-invoice.md
tags: billing
Previously updated : 03/03/2022 Last updated : 04/05/2023
cost-management-billing Open Banking Strong Customer Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/open-banking-strong-customer-authentication.md
Title: Open Banking (PSD2) and Strong Customer Authentication (SCA) for Azure customers description: This article explains why multi-factor authentication is required for some Azure purchases and how to complete authentication. -+ tags: billing Previously updated : 09/15/2021 Last updated : 04/05/2023
cost-management-billing Programmatically Create Subscription Microsoft Partner Agreement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/programmatically-create-subscription-microsoft-partner-agreement.md
Previously updated : 09/01/2021- Last updated : 04/05/2023+
cost-management-billing Programmatically Create Subscription Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/programmatically-create-subscription-preview.md
Previously updated : 03/22/2022 Last updated : 04/05/2023
cost-management-billing Programmatically Create Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/programmatically-create-subscription.md
Previously updated : 03/22/2022 Last updated : 04/05/2023
cost-management-billing Review Enterprise Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/review-enterprise-billing.md
Previously updated : 09/15/2021 Last updated : 04/05/2023 #Customer intent: As an administrator or developer, I want to use REST APIs to review billing data for all subscriptions and departments in the enterprise enrollment.
cost-management-billing Review Subscription Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/review-subscription-billing.md
Previously updated : 12/13/2021 Last updated : 04/05/2023 # Customer intent: As an administrator or developer, I want to use REST APIs to review subscription billing data for a specified period.
cost-management-billing Spending Limit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/spending-limit.md
tags: billing
Previously updated : 04/08/2022 Last updated : 04/05/2023
cost-management-billing Switch Azure Offer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/switch-azure-offer.md
tags: billing,top-support-issue
Previously updated : 03/22/2022 Last updated : 04/05/2023
cost-management-billing Troubleshoot Account Not Found https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/troubleshoot-account-not-found.md
tags: billing
Previously updated : 10/07/2021 Last updated : 04/05/2023
cost-management-billing Troubleshoot Cant Find Invoice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/troubleshoot-cant-find-invoice.md
tags: billing
Previously updated : 03/22/2022 Last updated : 04/05/2023
cost-management-billing Troubleshoot Csp Billing Issues Usage File Pivot Tables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/troubleshoot-csp-billing-issues-usage-file-pivot-tables.md
tags: billing
Previously updated : 10/07/2021 Last updated : 04/05/2023
cost-management-billing Troubleshoot Customer Agreement Billing Issues Usage File Pivot Tables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/troubleshoot-customer-agreement-billing-issues-usage-file-pivot-tables.md
tags: billing
Previously updated : 10/07/2021 Last updated : 04/05/2023
cost-management-billing Troubleshoot Ea Billing Issues Usage File Pivot Tables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/troubleshoot-ea-billing-issues-usage-file-pivot-tables.md
tags: billing
Previously updated : 10/07/2021 Last updated : 04/05/2023
cost-management-billing Troubleshoot Sign In Issue https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/troubleshoot-sign-in-issue.md
tags: billing
Previously updated : 03/22/2022 Last updated : 04/05/2023
cost-management-billing Upgrade Azure Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/upgrade-azure-subscription.md
tags: billing
Previously updated : 03/22/2022 Last updated : 04/05/2023 - # Upgrade your Azure free account or Azure for Students Starter account
cost-management-billing Manage Tenants https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/microsoft-customer-agreement/manage-tenants.md
tags: billing
Previously updated : 03/22/2022 Last updated : 04/05/2023
cost-management-billing Microsoft Customer Agreement Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/microsoft-customer-agreement/microsoft-customer-agreement-get-started.md
tags: billing
Previously updated : 03/22/2022 Last updated : 04/05/2023
cost-management-billing Troubleshoot Subscription Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/microsoft-customer-agreement/troubleshoot-subscription-access.md
tags: billing
Previously updated : 03/22/2022 Last updated : 04/05/2023
cost-management-billing Create Sql License Assignments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/scope-level/create-sql-license-assignments.md
Title: Create SQL Server license assignments for Azure Hybrid Benefit
description: This article explains how to create SQL Server license assignments for Azure Hybrid Benefit. Previously updated : 12/06/2022 Last updated : 04/06/2023
# Create SQL Server license assignments for Azure Hybrid Benefit
-The new centralized Azure Hybrid Benefit experience in the Azure portal supports SQL Server license assignments at the account level or at a particular subscription level. When the assignment is created at the account level, Azure Hybrid Benefit discounts are automatically applied to SQL resources in all subscriptions in the account up to the license count specified in the assignment.
+The new centralized Azure Hybrid Benefit (preview) experience in the Azure portal supports SQL Server license assignments at the account level or at a particular subscription level. When the assignment is created at the account level, Azure Hybrid Benefit discounts are automatically applied to SQL resources in all subscriptions in the account up to the license count specified in the assignment.
+
+> [!IMPORTANT]
+> Centrally-managed Azure Hybrid Benefit is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
For each license assignment, a scope is selected and then licenses are assigned to the scope. Each scope can have multiple license entries.
The prerequisite roles differ depending on the agreement type.
| Agreement type | Required role | Supported offers | | | | |
-| Enterprise Agreement | _Enterprise Administrator_<p> If you are an Enterprise admin with read-only access, you'll need your organization to give you **full** access to assign licenses using centrally managed Azure Hybrid Benefit. <p>If you're not an Enterprise admin, you must be assigned that role by your organization (with full access). For more information about how to become a member of the role, see [Add another enterprise administrator](../manage/ea-portal-administration.md#create-another-enterprise-administrator). | - MS-AZR-0017P (Microsoft Azure Enterprise)<br>- MS-AZR-USGOV-0017P (Azure Government Enterprise) |
-| Microsoft Customer Agreement | *Billing account owner*<br> *Billing account contributor* <br> *Billing profile owner*<br> *Billing profile contributor*<br> If you don't have one of the roles above, your organization must assign one to you. For more information about how to become a member of the roles, see [Manage billing roles](../manage/understand-mca-roles.md#manage-billing-roles-in-the-azure-portal). | MS-AZR-0017G (Microsoft Azure Plan)|
+| Enterprise Agreement | _Enterprise Administrator_<p> If you're an Enterprise admin with read-only access, you need your organization to give you **full** access to assign licenses using centrally managed Azure Hybrid Benefit. <p>If you're not an Enterprise admin, your organization must assign you that role (with full access). For more information about how to become a member of the role, see [Add another enterprise administrator](../manage/ea-portal-administration.md#create-another-enterprise-administrator). | - MS-AZR-0017P (Microsoft Azure Enterprise)<br>- MS-AZR-USGOV-0017P (Azure Government Enterprise) |
+| Microsoft Customer Agreement | *Billing account owner*<br> *Billing account contributor* <br> *Billing profile owner*<br> *Billing profile contributor*<br> If you don't have one of the preceding roles, your organization must assign one to you. For more information about how to become a member of the roles, see [Manage billing roles](../manage/understand-mca-roles.md#manage-billing-roles-in-the-azure-portal). | MS-AZR-0017G (Microsoft Azure Plan)|
| WebDirect / Pay-as-you-go | Not available | None | | CSP / Partner led customers | Not available | None |
The prerequisite roles differ depending on the agreement type.
## Create a SQL license assignment
-In the following procedure, you navigate from **Cost Management + Billing** to **Reservations + Hybrid Benefit**. Don't navigate to **Reservations** from the Azure home page. By doing so you won't have the necessary scope to view the license assignment experience.
+In the following procedure, you navigate from **Cost Management + Billing** to **Reservations + Hybrid Benefit**. Don't navigate to **Reservations** from the Azure home page. If you do, you don't have the necessary scope to view the license assignment experience.
1. Sign in to the Azure portal and navigate to **Cost Management + Billing**. :::image type="content" source="./media/create-sql-license-assignments/select-cost-management.png" alt-text="Screenshot showing Azure portal navigation to Cost Management + Billing." lightbox="./media/create-sql-license-assignments/select-cost-management.png" :::
In the following procedure, you navigate from **Cost Management + Billing** to *
:::image type="content" source="./media/create-sql-license-assignments/select-azure-hybrid-benefit.png" alt-text="Screenshot showing Azure Hybrid Benefit selection." lightbox="./media/create-sql-license-assignments/select-azure-hybrid-benefit.png" ::: 1. On the next screen, select **Begin to assign licenses**. :::image type="content" source="./media/create-sql-license-assignments/get-started-centralized.png" alt-text="Screenshot showing Add SQL hybrid benefit selection" lightbox="./media/create-sql-license-assignments/get-started-centralized.png" :::
- If you don't see the page and instead see the message `You are not the Billing Admin on the selected billing scope` then you don't have the required permission to assign a license. If so, you need to get the required permission. For more information, see [Prerequisites](#prerequisites).
+ If you don't see the page, and instead see the message `You are not the Billing Admin on the selected billing scope` then you don't have the required permission to assign a license. If so, you need to get the required permission. For more information, see [Prerequisites](#prerequisites).
1. Choose a scope and then enter the license count to use for each SQL Server edition. If you don't have any licenses to assign for a specific edition, enter zero. > [!NOTE] > You are accountable to determine that the entries that you make in the scope-level managed license experience are accurate and will satisfy your licensing obligations. The license usage information is shown to assist you as you make your license assignments. However, the information shown could be incomplete or inaccurate due to various factors.
After you create SQL license assignments, your experience with Azure Hybrid Bene
## Cancel a license assignment
-Review your license situation before you cancel your license assignments. When you cancel a license assignment, you no longer receive discounts for them. Consequently, your Azure bill might increase. If you cancel the last remaining license assignment, Azure Hybrid Benefit management reverts to the individual resource level.
+Review your license situation before you cancel your license assignments. When you cancel a license assignment, you no longer receive discounts for them. So, your Azure bill might increase. If you cancel the last remaining license assignment, Azure Hybrid Benefit management reverts to the individual resource level.
### To cancel a license assignment
cost-management-billing Analyze Unexpected Charges https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/analyze-unexpected-charges.md
Previously updated : 03/10/2022 Last updated : 04/05/2023 - # Identify anomalies and unexpected changes in cost
cost-management-billing Mca Download Tax Document https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/mca-download-tax-document.md
tags: billing Previously updated : 09/15/2021 Last updated : 04/05/2023 - # View and download tax documents for your Azure invoice
cost-management-billing Mca Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/mca-overview.md
Previously updated : 05/26/2022 Last updated : 04/05/2023
cost-management-billing Mosp New Customer Experience https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/mosp-new-customer-experience.md
Title: Get started with your updated Azure billing account description: Get started with your updated Azure billing account to understand changes in the new billing and cost management experience -+ Previously updated : 10/07/2021 Last updated : 04/05/2023
cost-management-billing Mpa Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/mpa-overview.md
Previously updated : 10/07/2021 Last updated : 04/05/2023
cost-management-billing Plan Manage Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/plan-manage-costs.md
tags: billing
Previously updated : 10/20/2021 Last updated : 04/05/2023
cost-management-billing Review Customer Agreement Bill https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/review-customer-agreement-bill.md
Title: Review your Microsoft Customer Agreement bill - Azure description: Learn how to review your bill and resource usage and to verify charges for your Microsoft Customer Agreement invoice. -+ tags: billing Previously updated : 09/15/2021 Last updated : 04/05/2023
cost-management-billing Review Individual Bill https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/review-individual-bill.md
tags: billing
Previously updated : 03/22/2022 Last updated : 04/05/2023
cost-management-billing Review Partner Agreement Bill https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/review-partner-agreement-bill.md
tags: billing
Previously updated : 10/07/2021 Last updated : 04/05/2023
cost-management-billing Understand Azure Marketplace Charges https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/understand-azure-marketplace-charges.md
tags: billing
Previously updated : 10/20/2021 Last updated : 04/05/2023 - # Understand your Azure external services charges
cost-management-billing Understand Invoice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/understand-invoice.md
Title: Understand your Azure invoice description: Learn how to read and understand the usage and bill for your Azure subscription -+ tags: billing Previously updated : 05/18/2022 Last updated : 04/05/2023
data-factory Change Data Capture Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/change-data-capture-troubleshoot.md
Previously updated : 03/29/2023 Last updated : 04/06/2023
EXEC sys.sp_cdc_enable_table
If your SQL source doesn't have SQL Server CDC with net_changes enabled or doesn't have any time-based incremental columns, then the tables in your source will be unavailable for selection.
-## Issue: The debug cluster is not available from a warm pool.
+## Issue: The debug cluster isn't available from a warm pool.
-The debug cluster is not available from a warm pool. There will be a wait time in the order of 1+ minutes.
+The debug cluster isn't available from a warm pool. There will be a wait time in the order of 1+ minutes.
## Issue: Trouble in tracking delete operations.
-Currently CDC resource supports delete operations for following sink types – Azure SQL Database & Delta. To achieve this, in the column mapping page, please select **keys** column that can be used to determine if a row from the source matches a row from the sink. 
+Currently, the CDC resource supports delete operations for the following sink types – Azure SQL Database and Delta. To achieve this, on the column mapping page, select a **keys** column that can be used to determine if a row from the source matches a row from the sink.
## Issue: My CDC resource fails when target SQL table has identity columns. Getting following error on running a CDC when your target sink table has identity columns,
-*_Cannot insert explicit value for identity column in table 'TableName' when IDENTITY_INSERT is set to OFF._*
+*_Can't insert explicit value for identity column in table 'TableName' when IDENTITY_INSERT is set to OFF._*
-Run below query to determine if you have an identity column in your SQL based target.
+Run the following query to determine if you have an identity column in your SQL based target.
**Query 4**
FROM sys.identity_columns
WHERE OBJECT_NAME(object_id) = 'TableName' ```
-To resolve this user can follow either of the steps
+To resolve this, the user can follow either of these steps:
1. Set IDENTITY_INSERT to ON by running the following query at the database level and rerun the CDC Mapper
SET IDENTITY_INSERT dbo.TableName ON;
2. The user can remove the specific identity column from the mapping while performing inserts.
+## Issue: Trouble using Self-hosted integration runtime.
+
+Currently, the Self-hosted integration runtime isn't supported in the CDC resource. If you're trying to connect to an on-premises source, use an Azure integration runtime with a managed virtual network.
+ ## Next steps - [Learn more about the change data capture resource](concepts-change-data-capture-resource.md)
data-factory Concepts Change Data Capture Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-change-data-capture-resource.md
Previously updated : 02/17/2023 Last updated : 04/06/2023 # Change data capture resource overview
The new Change Data Capture resource in ADF allows for full fidelity change data
## Known limitations * Currently, when creating source/target mappings, each source and target is only allowed to be used once. * Complex types are currently unsupported.
+* Self-hosted integration runtime (SHIR) is currently unsupported.
For more information on known limitations and troubleshooting assistance, please reference [this troubleshooting guide](change-data-capture-troubleshoot.md).
data-factory Data Flow Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-troubleshoot-guide.md
Previously updated : 11/02/2022 Last updated : 03/28/2023 # Troubleshoot mapping data flows in Azure Data Factory
This section lists common error codes and messages reported by mapping data flow
- **Cause**: Broadcast has a default timeout of 60 seconds in debug runs and 300 seconds in job runs. On the broadcast join, the stream chosen for broadcast is too large to produce data within this limit. If a broadcast join isn't used, the default broadcast by dataflow can reach the same limit. - **Recommendation**: Turn off the broadcast option or avoid broadcasting large data streams for which the processing can take more than 60 seconds. Choose a smaller stream to broadcast. Large Azure SQL Data Warehouse tables and source files aren't typically good choices. In the absence of a broadcast join, use a larger cluster if this error occurs.
-### Error code: DF-Executor-ColumnUnavailable
+### Error code: DF-Executor-ColumnNotFound
- **Message**: Column name used in expression is unavailable or invalid. - **Cause**: An invalid or unavailable column name is used in an expression.
This section lists common error codes and messages reported by mapping data flow
- **Cause**: The size of the data far exceeds the limit of the node memory. - **Recommendation**: Increase the core count and switch to the memory optimized compute type.
-### Error code: DF-Executor-ParseError
+### Error code: DF-Executor-ExpressionParseError
- **Message**: Expression cannot be parsed. - **Cause**: An expression generated parsing errors because of incorrect formatting.
This section lists common error codes and messages reported by mapping data flow
- **Cause**: Invalid store configuration is provided. - **Recommendation**: Check the parameter value assignment in the pipeline. A parameter expression may contain invalid characters.
+### Error code: DF-Executor-StringValueNotInQuotes
+
+- **Message**: Column operands are not allowed in literal expressions.
+- **Cause**: The value for a string parameter or an expected string value is not enclosed in single quotes.
+- **Recommendation**: Near the mentioned line number(s) in the data flow script, ensure the value for a string parameter or an expected string value is enclosed in single quotes.
+ ### Error code: DF-Executor-SystemImplicitCartesian - **Message**: Implicit cartesian product for INNER join is not supported, use CROSS JOIN instead. Columns used in join should create a unique key for rows.
This section lists common error codes and messages reported by mapping data flow
- **Cause**: Privileged access approval is needed to copy data. It's a user configuration issue. - **Recommendation**: Ask the tenant admin to approve your **Data Access Request** in Office365 in privileged access management (PAM) module.
+### Error code: DF-Executor-DSLParseError
+
+- **Message**: Data flow script cannot be parsed.
+- **Cause**: The data flow script has parsing errors.
+- **Recommendation**: Check for errors (example: missing symbol(s), unwanted symbol(s)) near mentioned line number(s) in the data flow script.
+
+### Error code: DF-Executor-IncorrectQuery
+
+- **Message**: Incorrect syntax. SQL Server error encountered while reading from the given table or while executing the given query.
+- **Cause**: The query submitted was syntactically incorrect.
+- **Recommendation**: Check the syntactical correctness of the given query. Ensure the query string isn't quoted when it's referenced as a pipeline parameter.
+
+### Error code: DF-Executor-ParameterParseError
+- **Message**: Parameter stream has parsing errors. Not honoring the datatype of parameter(s) could be one of the causes.
+- **Cause**: Parsing errors in given parameter(s).
+- **Recommendation**: Check the parameters that have errors, ensure that you use the appropriate functions, and make sure the values match the declared data types.
+ ### Error code: DF-File-InvalidSparkFolder - **Message**: Failed to read footer for file.
This section lists common error codes and messages reported by mapping data flow
### Error code: DF-GEN2-InvalidAccountConfiguration -- **Message**: Either one of account key or tenant/spnId/spnCredential/spnCredentialType or miServiceUri/miServiceToken should be specified.
+- **Message**: Either one of account key or SAS token or tenant/spnId/spnCredential/spnCredentialType or userAuth or miServiceUri/miServiceToken should be specified.
- **Cause**: An invalid credential is provided in the ADLS Gen2 linked service. - **Recommendation**: Update the ADLS Gen2 linked service to have the right credential configuration.
This section lists common error codes and messages reported by mapping data flow
### Error code: DF-JSON-WrongDocumentForm -- **Message**: Malformed records are detected in schema inference. Parse Mode: FAILFAST.
+- **Message**: Malformed records are detected in schema inference. Parse Mode: FAILFAST. It could be because of a wrong selection in document form to parse json file(s). Please try a different 'Document form' (Single document/Document per line/Array of documents) on the json source.
- **Cause**: Wrong document form is selected to parse JSON file(s). - **Recommendation**: Try different **Document form** (**Single document**/**Document per line**/**Array of documents**) in JSON settings. Most cases of parsing errors are caused by wrong configuration.
+### Error code: DF-MICROSOFT365-CONSENTPENDING
+- **Message**: Admin Consent is pending.
+- **Cause**: Admin Consent is missing.
+- **Recommendation**: Provide the consent and then rerun the pipeline. To provide consent, refer to [PAM requests](/graph/data-connect-faq#how-can-i-approve-pam-requests-via-the-microsoft-365-admin-center).
+ ### Error code: DF-MSSQL-ErrorRowsFound - **Cause**: Error/Invalid rows were found while writing to Azure SQL Database sink.
This section lists common error codes and messages reported by mapping data flow
- **Cause**: An exception happened while writing error rows to the storage. - **Recommendation**: Please check your rejected data linked service configuration.
+### Error code: DF-SQLDW-IncorrectLinkedServiceConfiguration
+
+- **Message**: The linked service is incorrectly configured as type 'Azure Synapse Analytics' instead of 'Azure SQL Database'. Please create a new linked service of type 'Azure SQL Database'<br>
+Note: Please check that the given database is of type 'Dedicated SQL pool (formerly SQL DW)' for linked service type 'Azure Synapse Analytics'.
+- **Cause**: The linked service is incorrectly configured as type **Azure Synapse Analytics** instead of **Azure SQL Database**. 
+- **Recommendation**: Create a new linked service of type **Azure SQL Database**, and check that the given database is of type Dedicated SQL pool (formerly SQL DW) for linked service type **Azure Synapse Analytics**.
+ ### Error code: DF-SQLDW-InternalErrorUsingMSI - **Message**: An internal error occurred while authenticating against Managed Service Identity in Azure Synapse Analytics instance. Please restart the Azure Synapse Analytics instance or contact Azure Synapse Analytics Dedicated SQL Pool support if this problem persists.
This section lists common error codes and messages reported by mapping data flow
- **Cause**: The pipeline expression passed in the Data Flow activity isn't being processed correctly because of a syntax error. - **Recommendation**: Check data flow activity name. Check expressions in activity monitoring to verify the expressions. For example, data flow activity name can't have a space or a hyphen.
+### Error code: 127
+- **Message**: The spark job of Dataflow completed, but the runtime state is either null or still InProgress..
+- **Cause**: Transient issue with microservices involved in the execution can cause the run to fail.
+- **Recommendation**: Refer to [scenario 3 transient issues](#scenario-3-transient-issues).
+ ### Error code: 2011 - **Message**: The activity was running on Azure Integration Runtime and failed to decrypt the credential of data store or compute connected via a Self-hosted Integration Runtime. Please check the configuration of linked services associated with this activity, and make sure to use the proper integration runtime type.
firewall-manager Dns Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/dns-settings.md
Previously updated : 02/17/2021 Last updated : 04/06/2023 # Azure Firewall policy DNS settings
-You can configure a custom DNS server and enable DNS proxy for Azure Firewall policies. You can configure these settings when you deploy the firewall or later from the **DNS settings** page.
+You can configure a custom DNS server and enable DNS proxy for Azure Firewall policies. You can configure these settings when you deploy the firewall, or later from the **DNS** page under **Settings**.
## DNS servers
-A DNS server maintains and resolves domain names to IP addresses. By default, Azure Firewall uses Azure DNS for name resolution. The **DNS server** setting lets you configure your own DNS servers for Azure Firewall name resolution. You can configure a single or multiple servers.
-
-### Configure custom DNS servers
-
-1. Select your firewall policy.
-2. Under **Settings**, select **DNS Settings**.
-3. Under **DNS servers**, you can type or add existing DNS servers that have been previously specified in your Virtual Network.
-4. Select **Save**.
-5. The firewall now directs DNS traffic to the specified DNS server(s) for name resolution.
+A DNS server maintains and resolves domain names to IP addresses. By default, Azure Firewall uses Azure DNS for name resolution. The **DNS servers** setting lets you configure your own DNS servers for Azure Firewall name resolution. You can configure a single or multiple servers.
## DNS proxy
DNS Proxy configuration requires three steps:
2. Optionally configure your custom DNS server or use the provided default. 3. Finally, you must configure the Azure Firewall's private IP address as a Custom DNS address in your virtual network DNS server settings. This ensures DNS traffic is directed to Azure Firewall.
-### Configure DNS proxy
+## Configure firewall policy DNS
+
+1. Select your firewall policy.
+2. Under **Settings**, select **DNS**.
+1. Select **Enabled** to enable DNS settings for this policy.
+1. Under **DNS servers**, you can accept the **Default (Azure provided)** setting, or select **Custom** to add custom DNS servers you'll configure for your virtual network.
+1. Under **DNS Proxy**, select **Enabled** to enable DNS Proxy if you configured a custom DNS server.
+1. Select **Apply**.
-To configure DNS proxy, you must configure your virtual network DNS servers setting to use the firewall private IP address. Then, enable DNS Proxy in Azure Firewall policy **DNS settings**.
-#### Configure virtual network DNS servers
+## Configure virtual network
+
+To configure DNS proxy, you must also configure your virtual network DNS servers setting to use the firewall private IP address.
+
+### Configure virtual network DNS servers
1. Select the virtual network where the DNS traffic will be routed through the Azure Firewall. 2. Under **Settings**, select **DNS servers**.
To configure DNS proxy, you must configure your virtual network DNS servers sett
4. Enter the firewall's private IP address. 5. Select **Save**.
-#### Enable DNS proxy
-
-1. Select your Azure Firewall policy.
-2. Under **Settings**, select **DNS settings**.
-3. By default, **DNS Proxy** is disabled. When enabled, the firewall listens on port 53 and forwards DNS requests to the configured DNS servers.
-4. Review the **DNS servers** configuration to make sure that the settings are appropriate for your environment.
-5. Select **Save**.
## Next steps
firewall-manager Fqdn Filtering Network Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/fqdn-filtering-network-rules.md
Previously updated : 01/26/2022 Last updated : 04/06/2023
firewall-manager Rule Processing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/rule-processing.md
Previously updated : 06/30/2020 Last updated : 04/06/2023
firewall Easy Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/easy-upgrade.md
Previously updated : 03/15/2023 Last updated : 04/06/2023
This new capability is available through the Azure portal as shown here. It's al
:::image type="content" source="media/premium-features/upgrade.png" alt-text="Screenshot showing SKU upgrade." lightbox="media/premium-features/upgrade.png"::: > [!NOTE]
-> This new upgrade/downgrade capability also supports the Azure Firewall Basic SKU.
+> This new upgrade/downgrade capability doesn't currently support the [Azure Firewall Basic SKU](overview.md#azure-firewall-basic).
## Next steps
healthcare-apis How To Use Mapping Debugger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-mapping-debugger.md
Previously updated : 04/04/2023 Last updated : 04/06/2023
> [!NOTE] > [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
-In this article, you'll learn how to use the MedTech service Mapping debugger. The Mapping debugger is a self-service tool that is used for creating, updating, and troubleshooting the MedTech service device and FHIR destination mappings. The Mapping debugger enables you to easily view and make inline adjustments in real-time, without ever having to leave the Azure portal. The Mapping debugger can also be used for uploading test device messages to see how they'll look after being processed into normalized messages and transformed into FHIR Observations.
+In this article, learn how to use the MedTech service Mapping debugger. The Mapping debugger is a self-service tool that is used for creating, updating, and troubleshooting the MedTech service device and FHIR destination mappings. The Mapping debugger enables you to easily view and make inline adjustments in real-time, without ever having to leave the Azure portal. The Mapping debugger can also be used for uploading test device messages to see how they'll look after being processed into normalized messages and transformed into FHIR Observations.
> [!TIP] > To learn about how the MedTech service transforms and persists device message data into the FHIR service see, [Overview of the MedTech service device message processing stages](overview-of-device-message-processing-stages.md).
+The following video presents an overview of the Mapping debugger:
+
+> [!VIDEO https://youtube.com/embed/OEGuCSGnECY]
+ ## Overview of the Mapping debugger
-1. To access the MedTech service's Mapping debugger, select **Mapping debugger** within your MedTech service on the Azure portal. For this article, we'll be using a MedTech service named **mt-azuredocsdemo**. You'll select your own MedTech service. From this screen, we can see the Mapping debugger is presenting the device and FHIR destination mappings associated with this MedTech service and has provided a **Validation** of those mappings.
+1. To access the MedTech service's Mapping debugger, select **Mapping debugger** within your MedTech service on the Azure portal. For this article, we're using a MedTech service named **mt-azuredocsdemo**. Select your own MedTech service. From this screen, we can see the Mapping debugger is presenting the device and FHIR destination mappings associated with this MedTech service and has provided a **Validation** of those mappings.
:::image type="content" source="media\how-to-use-mapping-debugger\mapping-debugger-main-screen.png" alt-text="Screenshot of the Mapping debugger main screen." lightbox="media\how-to-use-mapping-debugger\mapping-debugger-main-screen.png":::
In this article, you'll learn how to use the MedTech service Mapping debugger. T
## How to troubleshoot the device and FHIR destination mappings using the Mapping debugger
-1. If there are errors with the device or FHIR destination mappings, the Mapping debugger will display the issues. In this example, we can see that there are error *warnings* at **Line 12** in the **Device mapping** and at **Line 20** in the **FHIR destination mapping**.
+1. If there are errors with the device or FHIR destination mappings, the Mapping debugger displays the issues. In this example, we can see that there are error *warnings* at **Line 12** in the **Device mapping** and at **Line 20** in the **FHIR destination mapping**.
:::image type="content" source="media\how-to-use-mapping-debugger\mapping-debugger-with-errors.png" alt-text="Screenshot of the Mapping debugger with device and FHIR destination mappings warnings." lightbox="media\how-to-use-mapping-debugger\mapping-debugger-with-errors.png":::
-2. If you place your mouse cursor over an error warning, the Mapping debugger will provide you with more error information.
+2. If you place your mouse cursor over an error warning, the Mapping debugger provides you with more error information.
:::image type="content" source="media\how-to-use-mapping-debugger\mapping-debugger-with-error-details.png" alt-text="Screenshot of the Mapping debugger with error details for the device mappings warning." lightbox="media\how-to-use-mapping-debugger\mapping-debugger-with-error-details.png":::
-3. Using the suggestions provided by the Mapping debugger, we've now fixed the error warnings and are ready to select **Save** to commit our updated device and FHIR destination mappings to the MedTech service.
+3. We've used the suggestions provided by the Mapping debugger, and the error warnings are fixed. We're ready to select **Save** to commit our updated device and FHIR destination mappings to the MedTech service.
:::image type="content" source="media\how-to-use-mapping-debugger\mapping-debugger-save-mappings.png" alt-text="Screenshot of the Mapping debugger and the Save button." lightbox="media\how-to-use-mapping-debugger\mapping-debugger-save-mappings.png"::: > [!NOTE] > The MedTech service only saves the mappings that have been changed/updated. For example: If you only made a change to the **device mapping**, only those changes are saved to your MedTech service and no changes would be saved to the FHIR destination mapping. This is by design and to help with performance of the MedTech service.
-4. Once the device and FHIR destination mappings are successfully saved, you'll receive confirmation from **Notifications** within the Azure portal.
+4. Once the device and FHIR destination mappings are successfully saved, a confirmation appears in **Notifications** within the Azure portal.
:::image type="content" source="media\how-to-use-mapping-debugger\mapping-debugger-successful-save.png" alt-text="Screenshot of the Mapping debugger and a successful the save of the device and FHIR destination mappings." lightbox="media\how-to-use-mapping-debugger\mapping-debugger-successful-save.png"::: ## View a normalized message and FHIR Observation
-1. The Mapping debugger gives you the ability to view sample outputs of the normalization and FHIR transformation processes by supplying a test device message. Select **Upload** and **Test device message**.
+1. The Mapping debugger gives you the ability to view sample outputs of the normalization and FHIR transformation stages by supplying a test device message. Select **Upload** and **Test device message**.
:::image type="content" source="media\how-to-use-mapping-debugger\mapping-debugger-select-upload-and-test-device-message.png" alt-text="Screenshot of the Mapping debugger and test device message box." lightbox="media\how-to-use-mapping-debugger\mapping-debugger-select-upload-and-test-device-message.png":::
-2. The **Select a file** box will open. For this example, we'll select **Enter manually**.
+2. The **Select a file** box opens. For this example, select **Enter manually**.
:::image type="content" source="media\how-to-use-mapping-debugger\mapping-debugger-select-test-device-message-manual.png" alt-text="Screenshot of the Mapping debugger and Select a file box." lightbox="media\how-to-use-mapping-debugger\mapping-debugger-select-test-device-message-manual.png":::
In this article, you'll learn how to use the MedTech service Mapping debugger. T
:::image type="content" source="media\how-to-use-mapping-debugger\mapping-debugger-input-test-device-message.png" alt-text="Screenshot of the Enter manually box with a validated test device message in the box." lightbox="media\how-to-use-mapping-debugger\mapping-debugger-input-test-device-message.png":::
-4. Once a conforming test device message is uploaded, the **View normalized message** and **View FHIR observation** buttons will become available so that you may view the sample outputs of the normalization and FHIR transformation processes. These sample outputs can be used to validate your device and FHIR destination mappings are properly configured for processing events according to your requirement.
+4. Once a conforming test device message is uploaded, the **View normalized message** and **View FHIR observation** buttons become available so that you may view the sample outputs of the normalization and FHIR transformation stages. These sample outputs can be used to validate that your device and FHIR destination mappings are properly configured for processing device messages according to your requirements.
:::image type="content" source="media\how-to-use-mapping-debugger\mapping-debugger-normalized-and-FHIR-selections-available.png" alt-text="Screenshot View normalized message and View FHIR observation available." lightbox="media\how-to-use-mapping-debugger\mapping-debugger-normalized-and-FHIR-selections-available.png":::
In this article, you'll learn how to use the MedTech service Mapping debugger. T
## Next steps
-In this article, you learned about how to use the Mapping debugger to edit/troubleshoot the MedTech service device and FHIR destination mappings and view normalized message and FHIR Observation from a test device message.
+In this article, you were given an overview of the Mapping debugger and learned how to use it to edit and troubleshoot the MedTech service device and FHIR destination mappings.
To learn how to troubleshoot MedTech service deployment errors, see
healthcare-apis Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/overview.md
Previously updated : 04/04/2023 Last updated : 04/06/2023
This article provides an introductory overview of the MedTech service. The MedTe
The MedTech service is important because data can be difficult to access or lost when it comes from diverse or incompatible devices, systems, or formats. If this information isn't easy to access, it may have a negative effect on gaining key insights and capturing trends. The ability to transform many types of device data into a unified FHIR format enables the MedTech service to successfully link device data with other datasets to support the end user. As a result, this capability can facilitate the discovery of important clinical insights and trend capture. It can also help make connections to new device applications and enable advanced research projects.
+The following video presents an overview of the MedTech service:
+
+> [!VIDEO https://youtube.com/embed/_nMirYYU0pg]
+ ## How the MedTech service works The following diagram outlines the basic elements of how the MedTech service transforms device data into a standardized FHIR resource in the cloud.
import-export Storage Import Export Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/import-export/storage-import-export-service.md
Previously updated : 10/27/2022 Last updated : 03/31/2023 # What is Azure Import/Export service?
Supply your own disk drives and transfer data with the Azure Import/Export servi
If you want to transfer data using disk drives supplied by Microsoft, you can use [Azure Data Box Disk](../databox/data-box-disk-overview.md) to import data into Azure. Microsoft ships up to 5 encrypted solid-state disk drives (SSDs) with a 40 TB total capacity per order, to your datacenter through a regional carrier. You can quickly configure disk drives, copy data to disk drives over a USB 3.0 connection, and ship the disk drives back to Azure. For more information, go to [Azure Data Box Disk overview](../databox/data-box-disk-overview.md).
+> [!NOTE]
+> Import/Export jobs are now part of the Azure Data Box resource. Follow [this tutorial](storage-import-export-data-to-blobs.md#step-2-create-an-import-job) to create a new Import/Export job under Data Box.
+ ## Azure Import/Export use cases Consider using Azure Import/Export service when uploading or downloading data over the network is too slow, or getting additional network bandwidth is cost-prohibitive. Use this service in the following scenarios:
logic-apps Create Single Tenant Workflows Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-single-tenant-workflows-visual-studio-code.md
Install the following tools and versions for your specific operating system: Win
1. On the **User** tab, go to **Features** **>** **Extensions**.
- 1. Confirm that **Auto Check Updates** and **Auto Update** are selected.
+ 1. Confirm that **Auto Check Updates** is selected, and that **Auto Update** is set to **All Extensions**.
-By default, the following settings are enabled and set for the Azure Logic Apps (Standard) extension:
+1. Confirm that the **Azure Logic Apps Standard: Project Runtime** setting for the Azure Logic Apps (Standard) extension is set to version **~4**:
-* **Azure Logic Apps Standard: Project Runtime**, which is set to version **~3**
-
- > [!NOTE]
- > This version is required to use the [Inline Code Operations actions](../logic-apps/logic-apps-add-run-inline-code.md).
-
-* **Azure Logic Apps Standard: Experimental View Manager**, which enables the latest designer in Visual Studio Code. If you experience problems on the designer, such as dragging and dropping items, turn off this setting.
-
-To find and confirm these settings, follow these steps:
+ > [!NOTE]
+ > This version is required to use the [Inline Code Operations actions](../logic-apps/logic-apps-add-run-inline-code.md).
-1. On the **File** menu, go to **Preferences** **>** **Settings**.
+ 1. On the **File** menu, go to **Preferences** **>** **Settings**.
-1. On the **User** tab, go to **>** **Extensions** **>** **Azure Logic Apps (Standard)**.
+ 1. On the **User** tab, go to **>** **Extensions** **>** **Azure Logic Apps (Standard)**.
- For example, you can find the **Azure Logic Apps Standard: Project Runtime** setting here or use the search box to find other settings:
+ For example, you can find the **Azure Logic Apps Standard: Project Runtime** setting here or use the search box to find other settings:
- ![Screenshot that shows Visual Studio Code settings for "Azure Logic Apps (Standard)" extension.](./media/create-single-tenant-workflows-visual-studio-code/azure-logic-apps-settings.png)
+ ![Screenshot that shows Visual Studio Code settings for "Azure Logic Apps (Standard)" extension.](./media/create-single-tenant-workflows-visual-studio-code/azure-logic-apps-settings.png)
<a name="connect-azure-account"></a>
machine-learning How To Attach Kubernetes Anywhere https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-attach-kubernetes-anywhere.md
With Arc Kubernetes cluster, you can build, train, and deploy models in any infr
| Usage pattern | Location of data | Motivation | Infra setup & Azure Machine Learning implementation | | -- | -- | -- | -- | Train model in cloud, deploy model on-premises | Cloud | Make use of cloud compute. Either because of elastic compute needs or special hardware such as a GPU.<br/>Model must be deployed on-premises because of security, compliance, or latency requirements | 1. Azure managed compute in cloud.<br/>2. Customer managed Kubernetes on-premises.<br/>3. Fully automated MLOps in hybrid mode, including training and model deployment steps transitioning seamlessly from cloud to on-premises and vice versa.<br/>4. Repeatable, with all assets tracked properly. Model retrained when necessary, and model deployment updated automatically after retraining. |
+| Train model on-premises and cloud, deploy to both cloud and on-premises | Cloud | Organizations wanting to combine on-premises investments with cloud scalability. Bring cloud and on-premises compute under single pane of glass. Single source of truth for data is located in cloud, can be replicated to on-prem (i.e., lazily on usage or proactively). Cloud compute primary usage is when on-prem resources aren't available (in use, maintenance) or don't have specific hardware requirements (GPU). | 1. Azure managed compute in cloud.<br />2. Customer managed Kubernetes on-premises.<br />3. Fully automated MLOps in hybrid mode, including training and model deployment steps transitioning seamlessly from cloud to on-premises and vice versa.<br />4. Repeatable, with all assets tracked properly. Model retrained when necessary, and model deployment updated automatically after retraining.|
| Train model on-premises, deploy model in cloud | On-premises | Data must remain on-premises due to data-residency requirements.<br/>Deploy model in the cloud for global service access or for compute elasticity for scale and throughput. | 1. Azure managed compute in cloud.<br/>2. Customer managed Kubernetes on-premises.<br/>3. Fully automated MLOps in hybrid mode, including training and model deployment steps transitioning seamlessly from cloud to on-premises and vice versa.<br/>4. Repeatable, with all assets tracked properly. Model retrained when necessary, and model deployment updated automatically after retraining. | | Bring your own AKS in Azure | Cloud | More security and controls.<br/>All private IP machine learning to prevent data exfiltration. | 1. AKS cluster behind an Azure VNet.<br/>2. Create private endpoints in the same VNet for Azure Machine Learning workspace and its associated resources.<br/>3. Fully automated MLOps. | | Full ML lifecycle on-premises | On-premises | Secure sensitive data or proprietary IP, such as ML models and code/scripts. | 1. Outbound proxy server connection on-premises.<br/>2. Azure ExpressRoute and Azure Arc private link to Azure resources.<br/>3. Customer managed Kubernetes on-premises.<br/>4. Fully automated MLOps. |
machine-learning How To Batch Scoring Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-batch-scoring-script.md
Batch endpoints allow you to deploy models to perform long-running inference at scale. To indicate how batch endpoints should use your model over the input data to create predictions, you need to create and specify a scoring script (also known as batch driver script). In this article, you will learn how to use scoring scripts in different scenarios and their best practices. > [!TIP]
-> MLflow models don't require a scoring script as it is autogenerated for you. For more details about how batch endpoints work with MLflow models, see the dedicated tutorial [Using MLflow models in batch deployments](how-to-mlflow-batch.md). If you want to change the default inference routine, write an scoring script for your MLflow models as explained at [Using MLflow models with a scoring script](how-to-mlflow-batch.md#customizing-mlflow-models-deployments-with-a-scoring-script).
+> MLflow models don't require a scoring script as it is autogenerated for you. For more details about how batch endpoints work with MLflow models, see the dedicated tutorial [Using MLflow models in batch deployments](how-to-mlflow-batch.md).
> [!WARNING] > If you are deploying an Automated ML model under a batch endpoint, notice that the scoring script that Automated ML provides only works for Online Endpoints and it is not designed for batch execution. Please follow this guideline to learn how to create one depending on what your model does. ## Understanding the scoring script
-The scoring script is a Python file (`.py`) that contains the logic about how to run the model and read the input data submitted by the batch deployment executor driver. Each model deployment has to provide a scoring script, however, an endpoint may host multiple deployments using different scoring script versions.
+The scoring script is a Python file (`.py`) that contains the logic for how to run the model and read the input data submitted by the batch deployment executor. Each model deployment provides the scoring script (along with any other dependencies required) at creation time. It is usually indicated as follows:
+
+# [Azure CLI](#tab/cli)
+
+__deployment.yml__
++
+# [Python](#tab/python)
+
+```python
+deployment = BatchDeployment(
+ ...
+ code_path="code",
+ scoring_script="batch_driver.py",
+ ...
+)
+```
+
+# [Studio](#tab/azure-studio)
+
+When creating a new deployment, you will be prompted for a scoring script and dependencies as follows:
++
+For MLflow models, scoring scripts are automatically generated but you can indicate one by checking the following option:
+
+
+ The scoring script must contain two methods:
The `run()` method should return a Pandas `DataFrame` or an array/list. Each ret
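To make this contract concrete, here's a minimal sketch of a batch driver script. The model file name (`model.pkl`), the CSV input format, and the scikit-learn style `predict()` call are assumptions for illustration only; adapt them to your own model and data.

```python
import os
import pickle

import pandas as pd


def init():
    """Runs once per worker before any mini-batch is processed."""
    global model
    # AZUREML_MODEL_DIR points to the folder where the deployed model is unpacked.
    model_path = os.path.join(os.environ["AZUREML_MODEL_DIR"], "model.pkl")  # assumed file name
    with open(model_path, "rb") as f:
        model = pickle.load(f)


def run(mini_batch):
    """Runs once per mini-batch; mini_batch is a list of input file paths to score."""
    results = []
    for file_path in mini_batch:
        data = pd.read_csv(file_path)      # assumes CSV inputs
        predictions = model.predict(data)  # assumes a scikit-learn style model
        results.extend(predictions.tolist())
    # Each element of the returned list becomes a row in the job's predictions file.
    return results
```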
> [!IMPORTANT] > __How to write predictions?__ >
-> Use __arrays__ when you need to output a single prediction. Use __pandas DataFrames__ when you need to return multiple pieces of information. For instance, for tabular data, you may want to append your predictions to the original record. Use a pandas DataFrame for this case. For file datasets, __we still recommend to output a pandas DataFrame__ as they provide a more robust approach to read the results.
->
-> Although pandas DataFrame may contain column names, they are not included in the output file. If needed, please see [Customize outputs in batch deployments](how-to-deploy-model-custom-output.md).
+> Whatever you return in the `run()` function will be appended to the output predictions file generated by the batch job. It is important to return the right data type from this function. Return __arrays__ when you need to output a single prediction. Return __pandas DataFrames__ when you need to return multiple pieces of information. For instance, for tabular data you may want to append your predictions to the original record. Use a pandas DataFrame for this case. Although a pandas DataFrame may contain column names, they are not included in the output file.
+>
+> If you need to write predictions in a different way, you can [customize outputs in batch deployments](how-to-deploy-model-custom-output.md).
> [!WARNING] > Do not output complex data types (or lists of complex data types) in the `run` function. Those outputs will be transformed to strings and they will be hard to read.
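As a hedged sketch of the DataFrame case described in the note above, a `run()` that appends predictions to the original tabular records could look like the following; it assumes CSV inputs and a `model` object loaded in `init()`.

```python
import os

import pandas as pd


def run(mini_batch):
    results = []
    for file_path in mini_batch:
        data = pd.read_csv(file_path)             # assumes CSV inputs
        data["prediction"] = model.predict(data)  # assumes a model loaded in init()
        data["source_file"] = os.path.basename(file_path)
        results.append(data)
    # Rows of the returned DataFrame are written to the predictions file;
    # column names are not included in the output.
    return pd.concat(results)
```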
For an example about how to achieve it see [Text processing with batch deploymen
### Using models that are folders
-When authoring scoring scripts, the environment variable `AZUREML_MODEL_DIR` is typically used in the `init()` function to load the model. However, some models may contain its files inside of a folder. When reading the files in this variable, you may need to account for that. You can identify the folder where your MLflow model is placed as follows:
+The environment variable `AZUREML_MODEL_DIR` contains the path to where the selected model is located and it is typically used in the `init()` function to load the model into memory. However, some models may contain their files inside a folder. When reading the files in this variable, you may need to account for that. You can identify the folder where your MLflow model is placed as follows:
1. Go to [Azure Machine Learning portal](https://ml.azure.com).
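Complementing the portal steps above, here's a minimal `init()` sketch for a model whose files live in a subfolder, such as an MLflow model. The folder discovery via the `MLmodel` descriptor file and the use of `mlflow.pyfunc.load_model` are illustrative assumptions; adjust them to how your model is actually packaged.

```python
import glob
import os

import mlflow


def init():
    global model
    # AZUREML_MODEL_DIR is the root folder where the registered model is unpacked.
    model_dir = os.environ["AZUREML_MODEL_DIR"]
    # If the model files sit in a nested folder, locate it by searching for the
    # MLmodel descriptor file; otherwise fall back to the root folder.
    mlmodel_files = glob.glob(os.path.join(model_dir, "**", "MLmodel"), recursive=True)
    model_path = os.path.dirname(mlmodel_files[0]) if mlmodel_files else model_dir
    model = mlflow.pyfunc.load_model(model_path)
```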
machine-learning How To Create Manage Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-manage-compute-instance.md
Where the file *create-instance.yml* is:
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)] ```python
+from azure.ai.ml.entities import ComputeInstance, ComputeSchedules, ComputeStartStopSchedule, RecurrenceTrigger, RecurrencePattern
from azure.ai.ml import MLClient from azure.ai.ml.constants import TimeZone
-from azure.ai.ml.entities import ComputeInstance, AmlCompute, ComputeSchedules, ComputeStartStopSchedule, RecurrencePattern, RecurrenceTrigger
from azure.identity import DefaultAzureCredential
-from dateutil import tz
-import datetime
-# Enter details of your Azure Machine Learning workspace
-subscription_id = "<guid>"
-resource_group = "sample-rg"
-workspace = "sample-ws"
+
+subscription_id = "sub-id"
+resource_group = "rg-name"
+workspace = "ws-name"
# get a handle to the workspace
ml_client = MLClient(
    DefaultAzureCredential(), subscription_id, resource_group, workspace
)
-ci_minimal_name = "sampleCI"
-mytz = tz.gettz("Asia/Kolkata")
-now = datetime.datetime.now(tz = mytz)
-starttime = now + datetime.timedelta(minutes=25)
-triggers = RecurrenceTrigger(frequency="day", interval=1, schedule=RecurrencePattern(hours=17, minutes=30))
-myschedule = ComputeStartStopSchedule(start_time=starttime, time_zone=TimeZone.INDIA_STANDARD_TIME, trigger=triggers, action="Stop")
+
+ci_minimal_name = "ci-name"
+
+rec_trigger = RecurrenceTrigger(start_time="yyyy-mm-ddThh:mm:ss", time_zone=TimeZone.INDIA_STANDARD_TIME, frequency="week", interval=1, schedule=RecurrencePattern(week_days=["Friday"], hours=15, minutes=[30]))
+myschedule = ComputeStartStopSchedule(trigger=rec_trigger, action="start")
com_sch = ComputeSchedules(compute_start_stop=[myschedule])
-ci_minimal = ComputeInstance(name=ci_minimal_name, schedules=com_sch)
-ml_client.begin_create_or_update(ci_minimal)
+
+my_compute = ComputeInstance(name=ci_minimal_name, schedules=com_sch)
+ml_client.compute.begin_create_or_update(my_compute)
``` ### Create a schedule with a Resource Manager template
migrate How To Discover Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-discover-applications.md
ms. Previously updated : 02/24/2023 Last updated : 03/06/2023
This article describes how to discover installed software inventory, web apps, a
Performing software inventory helps identify and tailor a migration path to Azure for your workloads. Software inventory uses the Azure Migrate appliance to perform discovery, using server credentials. It's completely agentless - no agents are installed on the servers to collect this data.
-> [!Note]
-> Currently the discovery of ASP.NET web apps is only available with appliance used for discovery of servers running in your VMware enviornment. These feature is not available for servers running in your Hyper-V enviornment and for physical servers or servers running on other clouds like AWS, GCP etc.
## Before you start
postgresql Concepts Major Version Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-major-version-upgrade.md
-# Major Version Upgrade with PostgreSQL Flexible Server Preview
+# Major Version Upgrade for PostgreSQL Flexible Server Preview
[!INCLUDE [applies-to-postgresql-Flexible-server](../includes/applies-to-postgresql-Flexible-server.md)]
+> [!NOTE]
+> Major Version Upgrade for PostgreSQL Flexible Server is currently in preview.
## Overview Azure Database for PostgreSQL Flexible server supports PostgreSQL versions 11, 12, 13, and 14. The Postgres community releases a new major version containing new features about once a year. Additionally, each major version receives periodic bug fixes in the form of minor releases. Minor version upgrades include changes that are backward-compatible with existing applications. Azure Database for PostgreSQL Flexible service periodically updates the minor versions during the customer's maintenance window. Major version upgrades are more complicated than minor version upgrades as they can include internal changes and new features that may not be backward-compatible with existing applications.
postgresql Concepts Query Performance Insight https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-query-performance-insight.md
Last updated 4/1/2023
-# Query Performance Insight (Preview)
+# Query Performance Insight Preview
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
+> [!NOTE]
+> Query Performance Insight for PostgreSQL Flexible Server is currently in preview.
+ Query Performance Insight provides intelligent query analysis for Azure Postgres Flexible server databases. It helps identify the top resource consuming and long-running queries in your workload. This helps you find the queries to optimize to improve overall workload performance and efficiently use the resource that you are paying for. Query Performance Insight helps you spend less time troubleshooting database performance by providing: >[!div class="checklist"]
Query Performance Insight provides intelligent query analysis for Azure Postgres
3. **[Log analytics workspace](howto-configure-and-access-logs.md)** is configured for storing 3 log categories including - PostgreSQL Sessions logs, PostgreSQL Query Store and Runtime and PostgreSQL Query Store Wait Statistics. To configure log analytics, refer [Log analytics workspace](howto-configure-and-access-logs.md#configure-diagnostic-settings). > [!NOTE]
-> The **Query Store data is not being transmitted to the log analytics workspace**. The PostgreSQL logs (Sessions data / Query Store Runtime / Query Store Wait Statistics) is not being sent to the log analytics workspace, which is necessary to use Query Performance Insight . To configure the logging settings for category PostgreSQL sessions and send the data to a log analytics workspace.
+> The **Query Store data is not being transmitted to the log analytics workspace**. The PostgreSQL logs (Sessions data / Query Store Runtime / Query Store Wait Statistics) aren't sent to the log analytics workspace, which is necessary to use Query Performance Insight. Configure the logging settings for the PostgreSQL sessions category and send the data to a log analytics workspace.
## Using Query Performance Insight
postgresql How To Migrate From Oracle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/how-to-migrate-from-oracle.md
For more information about this migration scenario, see the following resources.
| Resource | Description | | -- | |
-| [Oracle to Azure PostgreSQL migration cookbook](https://github.com/Microsoft/DataMigrationTeam/blob/master/Whitepapers/Oracle%20to%20Azure%20PostgreSQL%20Migration%20Cookbook.pdf) | This document helps architects, consultants, database administrators, and related roles quickly migrate workloads from Oracle to Azure Database for PostgreSQL by using ora2pg. |
+| [Oracle to Azure PostgreSQL migration cookbook](https://www.microsoft.com/en-us/download/details.aspx?id=103473) | This document helps architects, consultants, database administrators, and related roles quickly migrate workloads from Oracle to Azure Database for PostgreSQL by using ora2pg. |
| [Oracle to Azure PostgreSQL migration workarounds](https://github.com/Microsoft/DataMigrationTeam/blob/master/Whitepapers/Oracle%20to%20Azure%20Database%20for%20PostgreSQL%20Migration%20Workarounds.pdf) | This document helps architects, consultants, database administrators, and related roles quickly fix or work around issues while migrating workloads from Oracle to Azure Database for PostgreSQL. | | [Steps to install ora2pg on Windows or Linux](https://github.com/microsoft/DataMigrationTeam/blob/master/Whitepapers/Steps%20to%20Install%20ora2pg%20on%20Windows%20and%20Linux.pdf) | This document provides a quick installation guide for migrating schema and data from Oracle to Azure Database for PostgreSQL by using ora2pg on Windows or Linux. For more information, see the [ora2pg documentation](http://ora2pg.darold.net/documentation.html). |
private-5g-core Complete Private Mobile Network Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/complete-private-mobile-network-prerequisites.md
For each site you're deploying, do the following.
:::zone pivot="ase-pro-2"
-The following table contains the ports you need to open for Azure Private 5G Core local access. This includes local management access and control plane signaling.
+The following tables contain the ports you need to open for Azure Private 5G Core local access. This includes local management access and control plane signaling.
You must set these up in addition to the [ports required for Azure Stack Edge (ASE)](/azure/databox-online/azure-stack-edge-pro-2-system-requirements#networking-port-requirements).
+#### Azure Private 5G Core
+ | Port | ASE interface | Description| |--|--|--| | TCP 443 Inbound | Management (LAN) | Access to local monitoring tools (packet core dashboards and distributed tracing). |
You must set these up in addition to the [ports required for Azure Stack Edge (A
:::zone pivot="ase-pro-gpu"
-The following table contains the ports you need to open for Azure Private 5G Core local access. This includes local management access and control plane signaling.
+The following tables contain the ports you need to open for Azure Private 5G Core local access. This includes local management access and control plane signaling.
+
+You must set these up in addition to the [ports required for Azure Stack Edge (ASE)](/azure/databox-online/azure-stack-edge-pro-2-system-requirements#networking-port-requirements).
-You must set these up in addition to the [ports required for Azure Stack Edge (ASE)](../databox-online/azure-stack-edge-gpu-system-requirements.md#networking-port-requirements).
+#### Azure Private 5G Core
| Port | ASE interface | Description| |--|--|--|
You must set these up in addition to the [ports required for Azure Stack Edge (A
| SCTP 36412 Inbound | Port 5 (Access network) | Control plane access signaling (S1-MME interface). </br>Only required for 4G deployments. | | UDP 2152 In/Outbound | Port 5 (Access network) | Access network user plane data (N3 interface for 5G, S1-U for 4G). | | All IP traffic | Port 6 (Data networks) | Data network user plane data (N6 interface for 5G, SGi for 4G). |+
+#### Port requirements for Azure Stack Edge
+
+|Port No.|In/Out|Port Scope|Required|Notes|
+|--|--|--|--|--|
+|UDP 123 (NTP)|Out|WAN|In some cases|This port is only required if you are using a local NTP server or internet-based server for ASE.|
+|UDP 53 (DNS)|Out|WAN|In some cases| See [Configure Domain Name System (DNS) servers](#configure-domain-name-system-dns-servers). |
+|TCP 5985 (WinRM)|Out/In|LAN|Yes|Required for WinRM to connect ASE via PowerShell during AP5GC deployment.</br> See [Commission an AKS cluster](commission-cluster.md). |
+|TCP 5986 (WinRM)|Out/In|LAN|Yes|Required for WinRM to connect ASE via PowerShell during AP5GC deployment.</br> See [Commission an AKS cluster](commission-cluster.md). |
+|UDP 67 (DHCP)|Out|LAN|Yes|
+|TCP 445 (SMB)|In|LAN|No|ASE for AP5GC does not require a local file server.|
+|TCP 2049 (NFS)|In|LAN|No|ASE for AP5GC does not require a local file server.|
+
+#### Port requirements for IoT Edge
+
+|Port No.|In/Out|Port Scope|Required|Notes|
+|--|--|--|--|--|
+|TCP 443 (HTTPS)|Out|WAN|No|This configuration is only required when using manual scripts or Azure IoT Device Provisioning Service (DPS).|
+
+#### Port requirements for Kubernetes on Azure Stack Edge
+
+|Port No.|In/Out|Port Scope|Required|Notes|
+|--|--|--|--|--|
+|TCP 31000 (HTTPS)|In|LAN|Yes|Required for Kubernetes dashboard to monitor your device.|
+|TCP 6443 (HTTPS)|In|LAN|Yes|Required for kubectl access|
:::zone-end
Do the following for each site you want to add to your private mobile network. D
| 2. | Order and prepare your Azure Stack Edge Pro 2 device. | [Tutorial: Prepare to deploy Azure Stack Edge Pro 2](../databox-online/azure-stack-edge-pro-2-deploy-prep.md) | | 3. | Rack and cable your Azure Stack Edge Pro device. </br></br>When carrying out this procedure, you must ensure that the device has its ports connected as follows:</br></br>- Port 2 - management</br>- Port 3 - access network</br>- Port 4 - data networks| [Tutorial: Install Azure Stack Edge Pro 2](/azure/databox-online/azure-stack-edge-pro-2-deploy-install?pivots=single-node.md) | | 4. | Connect to your Azure Stack Edge Pro 2 device using the local web UI. | [Tutorial: Connect to Azure Stack Edge Pro 2](/azure/databox-online/azure-stack-edge-pro-2-deploy-connect?pivots=single-node.md) |
-| 5. | Configure the network for your Azure Stack Edge Pro 2 device. When carrying out the *Enable compute network* step of this procedure, ensure you use the port you've connected to your management network. </br></br>**Do not** configure virtual switches, virtual networks or compute IPs.| [Tutorial: Configure network for Azure Stack Edge Pro 2](/azure/databox-online/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy?pivots=single-node.md)|
+| 5. | Configure the network for your Azure Stack Edge Pro 2 device. When carrying out the *Enable compute network* step of this procedure, ensure you use the port you've connected to your management network.</br> </br> **Note:** When an ASE is used in an Azure Private 5G Core service, Port 2 is used for management rather than data. The tutorial linked assumes a generic ASE that uses Port 2 for data. </br></br> Verify the outbound connections from Azure Stack Edge Pro device to the Azure Arc endpoints are opened. </br></br>**Do not** configure virtual switches, virtual networks or compute IPs. | [Tutorial: Configure network for Azure Stack Edge Pro with GPU](/azure/databox-online/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy?pivots=single-node.md)</br></br>[Azure Arc Network Requirements](/azure/azure-arc/kubernetes/quickstart-connect-cluster?tabs=azure-cli%2Cazure-cloud)</br></br>[Azure Arc Agent Network Requirements](/azure/architecture/hybrid/arc-hybrid-kubernetes)|
| 6. | Configure a name, DNS name, and (optionally) time settings. </br></br>**Do not** configure an update. | [Tutorial: Configure the device settings for Azure Stack Edge Pro 2](../databox-online/azure-stack-edge-pro-2-deploy-set-up-device-update-time.md) |
-| 7. | Configure certificates and configure encryption-at-rest for your Azure Stack Edge Pro 2 device. After changing the certificates, you may have to reopen the local UI in a new browser window to prevent the old cached certificates from causing problems.| [Tutorial: Configure certificates for your Azure Stack Edge Pro 2](/azure/databox-online/azure-stack-edge-pro-2-deploy-configure-certificates) |
+| 7. | Configure certificates and configure encryption-at-rest for your Azure Stack Edge Pro 2 device. After changing the certificates, you may have to reopen the local UI in a new browser window to prevent the old cached certificates from causing problems.| [Tutorial: Configure certificates for your Azure Stack Edge Pro 2](/azure/databox-online/azure-stack-edge-pro-2-deploy-configure-certificates?pivots=single-node) |
| 8. | Activate your Azure Stack Edge Pro 2 device. </br></br>**Do not** follow the section to *Deploy Workloads*. | [Tutorial: Activate Azure Stack Edge Pro 2](../databox-online/azure-stack-edge-pro-2-deploy-activate.md) | | 9. | Configure compute on your Azure Stack Edge Pro 2 device. | [Tutorial: Configure compute on Azure Stack Edge Pro 2](../databox-online/azure-stack-edge-pro-2-deploy-configure-compute.md) | | 10. | Enable VM management from the Azure portal. </br></br>Enabling this immediately after activating the Azure Stack Edge Pro 2 device occasionally causes an error. Wait one minute and retry. | Navigate to the ASE resource in the Azure portal, go to **Edge services**, select **Virtual machines** and select **Enable**. |
Do the following for each site you want to add to your private mobile network. D
| 2. | Order and prepare your Azure Stack Edge Pro GPU device. | [Tutorial: Prepare to deploy Azure Stack Edge Pro with GPU](../databox-online/azure-stack-edge-gpu-deploy-prep.md) | | 3. | Rack and cable your Azure Stack Edge Pro device. </br></br>When carrying out this procedure, you must ensure that the device has its ports connected as follows:</br></br>- Port 5 - access network</br>- Port 6 - data networks</br></br>Additionally, you must have a port connected to your management network. You can choose any port from 2 to 4. | [Tutorial: Install Azure Stack Edge Pro with GPU](/azure/databox-online/azure-stack-edge-gpu-deploy-install?pivots=single-node.md) | | 4. | Connect to your Azure Stack Edge Pro device using the local web UI. | [Tutorial: Connect to Azure Stack Edge Pro with GPU](/azure/databox-online/azure-stack-edge-gpu-deploy-connect?pivots=single-node.md) |
-| 5. | Configure the network for your Azure Stack Edge Pro device. When carrying out the *Enable compute network* step of this procedure, ensure you use the port you've connected to your management network. </br></br>**Do not** configure virtual switches, virtual networks or compute IPs.</br></br> In addition, you can configure your Azure Stack Edge Pro device to run behind a web proxy. | [Tutorial: Configure network for Azure Stack Edge Pro with GPU](/azure/databox-online/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy?pivots=single-node.md) </br></br> [(Optionally) Configure web proxy for Azure Stack Edge Pro](/azure/databox-online/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy?pivots=single-node#configure-web-proxy)|
+| 5. | Configure the network for your Azure Stack Edge Pro device. When carrying out the *Enable compute network* step of this procedure, ensure you use the port you've connected to your management network.</br> </br> **Note:** When an ASE is used in an Azure Private 5G Core service, Port 2 is used for management rather than data. The tutorial linked assumes a generic ASE that uses Port 2 for data. </br></br> Verify the outbound connections from Azure Stack Edge Pro device to the Azure Arc endpoints are opened. </br></br>**Do not** configure virtual switches, virtual networks or compute IPs. | [Tutorial: Configure network for Azure Stack Edge Pro with GPU](/azure/databox-online/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy?pivots=single-node.md)</br></br>[Azure Arc Network Requirements](/azure/azure-arc/kubernetes/quickstart-connect-cluster?tabs=azure-cli%2Cazure-cloud)</br></br>[Azure Arc Agent Network Requirements](/azure/architecture/hybrid/arc-hybrid-kubernetes)|
| 6. | Configure a name, DNS name, and (optionally) time settings. </br></br>**Do not** configure an update. | [Tutorial: Configure the device settings for Azure Stack Edge Pro with GPU](../databox-online/azure-stack-edge-gpu-deploy-set-up-device-update-time.md) | | 7. | Configure certificates for your Azure Stack Edge Pro GPU device. After changing the certificates, you may have to reopen the local UI in a new browser window to prevent the old cached certificates from causing problems.| [Tutorial: Configure certificates for your Azure Stack Edge Pro with GPU](/azure/databox-online/azure-stack-edge-gpu-deploy-configure-certificates?pivots=single-node.md) | | 8. | Activate your Azure Stack Edge Pro GPU device. </br></br>**Do not** follow the section to *Deploy Workloads*. | [Tutorial: Activate Azure Stack Edge Pro with GPU](../databox-online/azure-stack-edge-gpu-deploy-activate.md) |
private-link Disable Private Endpoint Network Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/disable-private-endpoint-network-policy.md
Previously updated : 07/26/2022 Last updated : 04/05/2023 ms.devlang: azurecli
This section describes how to disable subnet private endpoint policies using an
+> [!IMPORTANT]
+> There are limitations to private endpoints in relation to the network policy feature and Network Security Groups and User Defined Routes. For more information, see [Limitations](private-endpoint-overview.md#limitations).
+ ## Next steps - Learn more about [Azure private endpoint](private-endpoint-overview.md)
private-link Private Endpoint Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-endpoint-dns.md
For Azure services, use the recommended zone names as described in the following
| Azure Data Explorer (Microsoft.Kusto) | privatelink.{regionName}.kusto.windows.net | {regionName}.kusto.windows.net | | Azure Static Web Apps (Microsoft.Web/staticSites) / staticSites | privatelink.azurestaticapps.net </br> privatelink.{partitionId}.azurestaticapps.net | azurestaticapps.net </br> {partitionId}.azurestaticapps.net | | Azure Migrate (Microsoft.Migrate) / migrate projects, assessment project and discovery site | privatelink.prod.migration.windowsazure.com | prod.migration.windowsazure.com |
-| Azure API Management (Microsoft.ApiManagement/service) / gateway | privatelink.azure-api.net </br> privatelink.developer.azure-api.net | azure-api.net </br> developer.azure-api.net |
+| Azure API Management (Microsoft.ApiManagement/service) / gateway | privatelink.azure-api.net | azure-api.net |
| Microsoft PowerBI (Microsoft.PowerBI/privateLinkServicesForPowerBI) | privatelink.analysis.windows.net </br> privatelink.pbidedicated.windows.net </br> privatelink.tip1.powerquery.microsoft.com | analysis.windows.net </br> pbidedicated.windows.net </br> tip1.powerquery.microsoft.com | | Azure Bot Service (Microsoft.BotService/botServices) / Bot | privatelink.directline.botframework.com | directline.botframework.com </br> europe.directline.botframework.com | | Azure Bot Service (Microsoft.BotService/botServices) / Token | privatelink.token.botframework.com | token.botframework.com </br> europe.token.botframework.com |
purview Available Metadata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/available-metadata.md
Last updated 01/31/2023
-# Available metadata
+# Available metadata for Power BI in the Microsoft Purview Data Catalog
This article has a list of the metadata that is available for a Power BI tenant in the Microsoft Purview governance portal.
This article has a list of the metadata that is available for a Power BI tenant
| IsOnDedicatedCapacity | Automatic | Power BI | Power BI Workspace | No | workspace.IsOnDedicatedCapacity | | users | Automatic | Power BI | Power BI Workspace | No | workspace.Users | - ## Next steps - [Connect to and manage a Power BI tenant](register-scan-power-bi-tenant.md)
purview Concept Classification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-classification.md
Classification is the process of organizing data into *logical categories* that
* Organize and understand the variety of data classes that are important in your organization and where they're stored. * Understand the risks associated with your most important data assets and then take appropriate measures to mitigate them.
-As shown in the following image, it's possible to apply classifications at both the asset level and the schema level for the *Customers* table in Azure SQL Database.
+The following image shows a classification applied while scanning the *Customers* table in Azure SQL Database.
:::image type="content" source="./media/concept-classification/classification-customers-example-1.png" alt-text="Screenshot that shows the classification of the 'Customers' table in Azure SQL Database." lightbox="./media/concept-classification/classification-customers-example-1.png":::
purview Register Scan Azure Arc Enabled Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-arc-enabled-sql-server.md
Previously updated : 11/23/2022 Last updated : 04/05/2023
-# Connect to and manage Azure Arc-enabled SQL Server in Microsoft Purview (public preview)
-
+# Connect to and manage Azure Arc-enabled SQL Server in Microsoft Purview
This article shows how to register an Azure Arc-enabled SQL Server instance. It also shows how to authenticate and interact with Azure Arc-enabled SQL Server in Microsoft Purview. For more information about Microsoft Purview, read the [introductory article](overview.md).
This article shows how to register an Azure Arc-enabled SQL Server instance. It
|Metadata extraction|Full scan|Incremental scan|Scoped scan|Classification|Access policy|Lineage|Data sharing| |||||||||
-| [Yes](#register) | [Yes](#scan) | [Yes](#scan) | [Yes](#scan) | [Yes](#scan) | [Yes - GA](#access-policy) | Limited** | No |
+| [Yes](#register)(GA) | [Yes](#scan)(preview) | [Yes](#scan)(preview) | [Yes](#scan)(preview) | [Yes](#scan)(preview) | [Yes](#access-policy)(GA) | Limited** | No |
\** Lineage is supported if the dataset is used as a source/sink in the [Azure Data Factory copy activity](how-to-link-azure-data-factory.md).
purview Supported Classifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/supported-classifications.md
Currently the address model supports the following formats in the same column:
- landmark, city ### Person's Gender
-Person's Gender machine learning model has been trained using US Census data and other public data sources in English language.
+Person's Gender machine learning model has been trained using US Census data and other public data sources in English language. It supports classifying 50+ genders out of the box.
+
+#### Keywords
+- sex
+- gender
+- orientation
+ ### Person's Age Person's Age machine learning model detects age of an individual specified in various different formats. The qualifiers for days, months, and years must be in English language. #### Keywords-- Age
+- age
#### Supported formats - {%y} y, {%m} m
route-server Quickstart Configure Route Server Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/quickstart-configure-route-server-powershell.md
description: In this quickstart, you learn how to create and configure an Azure
Previously updated : 07/28/2022 Last updated : 04/06/2023 -+ # Quickstart: Create and configure Route Server using Azure PowerShell
$subnetConfig = Add-AzVirtualNetworkSubnetConfig @subnet
$virtualnetwork | Set-AzVirtualNetwork
-$vnetInfo = Get-AzVirtualNetwork -Name myVirtualNetwork
+$vnetInfo = Get-AzVirtualNetwork -Name myVirtualNetwork -ResourceGroupName myRouteServerRG
$subnetId = (Get-AzVirtualNetworkSubnetConfig -Name RouteServerSubnet -VirtualNetwork $vnetInfo).Id ```
$subnetId = (Get-AzVirtualNetworkSubnetConfig -Name RouteServerSubnet -VirtualNe
To establish BGP peering from the Route Server to your NVA use [Add-AzRouteServerPeer](/powershell/module/az.network/add-azrouteserverpeer):
-The ΓÇ£your_nva_ipΓÇ¥ is the virtual network IP assigned to the NVA. The ΓÇ£your_nva_asnΓÇ¥ is the Autonomous System Number (ASN) configured in the NVA. The ASN can be any 16-bit number other than the ones in the range of 65515-65520. This range of ASNs are reserved by Microsoft.
+The `your_nva_ip` is the virtual network IP assigned to the NVA. The `your_nva_asn` is the Autonomous System Number (ASN) configured in the NVA. The ASN can be any 16-bit number other than the ones in the range of 65515-65520. This range of ASNs is reserved by Microsoft.
```azurepowershell-interactive $peer = @{
- PeerName = 'myNVA"
- PeerIp = '192.168.0.1'
- PeerAsn = '65501'
+ PeerName = 'myNVA'
+ PeerIp = 'your_nva_ip'
+ PeerAsn = 'your_nva_asn'
    RouteServerName = 'myRouteServer'
    ResourceGroupName = 'myRouteServerRG'
}
$routeserver = @{
Get-AzRouteServer @routeserver ```
-The output will look like the following:
+The output looks like the following:
``` RouteServerAsn : 65515
RouteServerIps : {10.5.10.4, 10.5.10.5}
If you have an ExpressRoute and an Azure VPN gateway in the same virtual network and you want them to exchange routes, you can enable route exchange on the Azure Route Server.
+> [!IMPORTANT]
+> Azure VPN gateway must be configured in **active-active** mode and have the ASN set to 65515.
+ 1. To enable route exchange between Azure Route Server and the gateway(s), use [Update-AzRouteServer](/powershell/module/az.network/update-azrouteserver) with the *-AllowBranchToBranchTraffic* flag: ```azurepowershell-interactive
route-server Routing Preference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/routing-preference.md
Previously updated : 03/28/2023- Last updated : 04/06/2023+ # Azure Route Server routing preference
Azure Route Server enables dynamic routing between network virtual appliances (N
When **branch-to-branch** is enabled and Route Server learns multiple routes for the same on-premises destination prefix across site-to-site (S2S) VPN, ExpressRoute, and SD-WAN NVAs, users can configure connection preferences to influence Route Server route selection.
+> [!IMPORTANT]
+> Routing preference is only available for Route Servers deployed on or after April 7, 2023. Support for existing Route Servers, deployed before April 7, 2023, will be backfilled at a later date. For any questions, please open [a support request in the Azure Portal](https://aka.ms/azsupt).
+ ## Routing preference configuration When Route Server has multiple routes to an on-premises destination prefix, Route Server selects the best route(s) in order of preference, as follows:
When Route Server has multiple routes to an on-premises destination prefix, Rout
## Next steps - Learn how to [configure Azure Route Server](quickstart-configure-route-server-portal.md).-- Learn how to [monitor Azure Route Server](monitor-route-server.md).
+- Learn how to [monitor Azure Route Server](monitor-route-server.md).
sentinel Notebook Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/notebook-get-started.md
Last updated 01/09/2023
-# Tutorial: Get started with Jupyter notebooks and MSTICPy in Microsoft Sentinel
+# Get started with Jupyter notebooks and MSTICPy in Microsoft Sentinel
-This tutorial describes how to run the **Getting Started Guide For Microsoft Sentinel ML Notebooks** notebook, which sets up basic configurations for running Jupyter notebooks in Microsoft Sentinel and running simple data queries.
+This article describes how to run the **Getting Started Guide For Microsoft Sentinel ML Notebooks** notebook, which sets up basic configurations for running Jupyter notebooks in Microsoft Sentinel and running simple data queries.
The **Getting Started Guide for Microsoft Sentinel ML Notebooks** notebook uses MSTICPy, a Python library of Cybersecurity tools built by Microsoft, which provides threat hunting and investigation functionality.
MSTICPy reduces the amount of code that customers need to write for Microsoft Se
- Visualization tools using event timelines, process trees, and geo mapping. - Advanced analyses, such as time series decomposition, anomaly detection, and clustering.
-The steps in this tutorial describe how to run the **Getting Started Guide for Microsoft Sentinel ML Notebooks** notebook in your Azure ML workspace via Microsoft Sentinel. You can also use this tutorial as guidance for performing similar steps to run notebooks in other environments, including locally.
+The steps in this article describe how to run the **Getting Started Guide for Microsoft Sentinel ML Notebooks** notebook in your Azure ML workspace via Microsoft Sentinel. You can also use this article as guidance for performing similar steps to run notebooks in other environments, including locally.
For more information, see [Use notebooks to power investigations](hunting.md#use-notebooks-to-power-investigations) and [Use Jupyter notebooks to hunt for security threats](notebooks.md).
For more information, see [Use notebooks to power investigations](hunting.md#use
- To use notebooks in Microsoft Sentinel, make sure that you have the required permissions. For more information, see [Manage access to Microsoft Sentinel notebooks](notebooks.md#manage-access-to-microsoft-sentinel-notebooks). -- To perform the steps in this tutorial, you'll need Python 3.6 or later. In Azure ML you can use either a Python 3.8 kernel (recommended) or a Python 3.6 kernel.
+- To perform the steps in this article, you'll need Python 3.6 or later. In Azure ML you can use either a Python 3.8 kernel (recommended) or a Python 3.6 kernel.
- This notebook uses the [MaxMind GeoLite2](https://www.maxmind.com) geolocation lookup service for IP addresses. To use the MaxMind GeoLite2 service, you'll need an account key. You can sign up for a free account and key at the [Maxmind signup page](https://www.maxmind.com/en/geolite2/signup).
You can also try out other notebooks stored in the [Microsoft Sentinel Notebooks
- The [Entity Explorer series](https://github.com/Azure/Azure-Sentinel-Notebooks/) of notebooks, which allow for a deep drill-down into details about a host, account, IP address, and other entities. > [!TIP]
-> If you use the notebook described in this tutorial in another Jupyter environment, you can use any kernel that supports Python 3.6 or later.
+> If you use the notebook described in this article in another Jupyter environment, you can use any kernel that supports Python 3.6 or later.
> > To use MSTICPy notebooks outside of Microsoft Sentinel and Azure Machine Learning (ML), you'll also need to configure your Python environment. Install Python 3.6 or later with the Anaconda distribution, which includes many of the required packages. >
sentinel Powerbi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/powerbi.md
Last updated 01/09/2023
-# Tutorial: Create a Power BI report from Microsoft Sentinel data
+# Create a Power BI report from Microsoft Sentinel data
[Power BI](https://powerbi.microsoft.com/) is a reporting and analytics platform that turns data into coherent, immersive, interactive visualizations. Power BI lets you easily connect to data sources, visualize and discover relationships, and share insights with whoever you want. You can base Power BI reports on data from Microsoft Sentinel Log Analytics workspaces, and share those reports with people who don't have access to Microsoft Sentinel. For example, you might want to share information about failed sign-in attempts with app owners, without granting them Microsoft Sentinel access. Power BI visualizations can provide the data at a glance.
-In this tutorial, you:
+In this article, you:
> [!div class="checklist"] > * Export a Log Analytics Kusto query to a Power BI M language query.
In this tutorial, you:
People you granted access in the Power BI service, and members of the Teams channel, can see the report without needing Microsoft Sentinel permissions. > [!NOTE]
-> This tutorial provides a scenario-based procedure for a top customer ask: viewing analysis reports in PowerBI for your Microsoft Sentinel data. For more information, see [Connect data sources](connect-data-sources.md) and [Visualize collected data](get-visibility.md).
+> This article provides a scenario-based procedure to view analysis reports in PowerBI for your Microsoft Sentinel data. For more information, see [Connect data sources](connect-data-sources.md) and [Visualize collected data](get-visibility.md).
> ## Prerequisites
-To complete this tutorial, you need:
+To complete the steps in this article, you need:
- At least read access to a Microsoft Sentinel Log Analytics workspace that monitors sign-in attempts. - A Power BI account that has read access to the Log Analytics workspace.
spring-apps Troubleshoot Exit Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/troubleshoot-exit-code.md
The exit code indicates the reason the application terminated. The following lis
This error code is most often generated by an out-of-memory error. For more information, see [App restart issues caused by out-of-memory issues](./how-to-fix-app-restart-issues-caused-by-out-of-memory.md).
- You can also find more information from the application log by using the Azure CLI [az spring app logs](/cli/azure/spring/app#az-spring-app-logs) command.
+ You can also get details from the application log by using the Azure CLI [az spring app logs](/cli/azure/spring/app#az-spring-app-logs) command. For more information, see [Stream Azure Spring Apps application console logs in real time](./how-to-log-streaming.md).
## Next steps
storage Blob V11 Samples Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-v11-samples-dotnet.md
+
+ Title: Azure Blob Storage code samples using .NET version 11.x client libraries
+
+description: View code samples that use the Azure Blob Storage client library for .NET version 11.x.
+++++ Last updated : 04/03/2023+++
+# Azure Blob Storage code samples using .NET version 11.x client libraries
+
+This article shows code samples that use version 11.x of the Azure Blob Storage client library for .NET.
++
+## Create a snapshot
+
+Related article: [Create and manage a blob snapshot in .NET](snapshots-manage-dotnet.md)
+
+To create a snapshot of a block blob using version 11.x of the Azure Storage client library for .NET, use one of the following methods:
+
+- [CreateSnapshot](/dotnet/api/microsoft.azure.storage.blob.cloudblockblob.createsnapshot)
+- [CreateSnapshotAsync](/dotnet/api/microsoft.azure.storage.blob.cloudblockblob.createsnapshotasync)
+
+The following code example shows how to create a snapshot with version 11.x. This example specifies additional metadata for the snapshot when it is created.
+
+```csharp
+private static async Task CreateBlockBlobSnapshot(CloudBlobContainer container)
+{
+ // Create a new block blob in the container.
+ CloudBlockBlob baseBlob = container.GetBlockBlobReference("sample-base-blob.txt");
+
+ // Add blob metadata.
+ baseBlob.Metadata.Add("ApproxBlobCreatedDate", DateTime.UtcNow.ToString());
+
+ try
+ {
+ // Upload the blob to create it, with its metadata.
+ await baseBlob.UploadTextAsync(string.Format("Base blob: {0}", baseBlob.Uri.ToString()));
+
+ // Sleep 5 seconds.
+ System.Threading.Thread.Sleep(5000);
+
+ // Create a snapshot of the base blob.
+ // You can specify metadata at the time that the snapshot is created.
+ // If no metadata is specified, then the blob's metadata is copied to the snapshot.
+ Dictionary<string, string> metadata = new Dictionary<string, string>();
+ metadata.Add("ApproxSnapshotCreatedDate", DateTime.UtcNow.ToString());
+        CloudBlockBlob snapshot = await baseBlob.CreateSnapshotAsync(metadata, null, null, null);
+ Console.WriteLine(snapshot.SnapshotQualifiedStorageUri.PrimaryUri);
+ }
+ catch (StorageException e)
+ {
+ Console.WriteLine(e.Message);
+ Console.ReadLine();
+ throw;
+ }
+}
+```
+
+## Delete snapshots
+
+Related article: [Create and manage a blob snapshot in .NET](snapshots-manage-dotnet.md)
+
+To delete a blob and its snapshots using version 11.x of the Azure Storage client library for .NET, use one of the following blob deletion methods, and include the [DeleteSnapshotsOption](/dotnet/api/microsoft.azure.storage.blob.deletesnapshotsoption) enum:
+
+- [Delete](/dotnet/api/microsoft.azure.storage.blob.cloudblob.delete)
+- [DeleteAsync](/dotnet/api/microsoft.azure.storage.blob.cloudblob.deleteasync)
+- [DeleteIfExists](/dotnet/api/microsoft.azure.storage.blob.cloudblob.deleteifexists)
+- [DeleteIfExistsAsync](/dotnet/api/microsoft.azure.storage.blob.cloudblob.deleteifexistsasync)
+
+The following code example shows how to delete a blob and its snapshots in .NET, where `blockBlob` is an object of type [CloudBlockBlob][dotnet_CloudBlockBlob]:
+
+```csharp
+await blockBlob.DeleteIfExistsAsync(DeleteSnapshotsOption.IncludeSnapshots, null, null, null);
+```
+
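+As a minimal sketch (not from the related article), `blockBlob` could be obtained as follows before making the call above; the `connectionString` variable and the container and blob names are hypothetical:
+
+```csharp
+// Hypothetical names, for illustration only.
+CloudStorageAccount storageAccount = CloudStorageAccount.Parse(connectionString);
+CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
+CloudBlobContainer container = blobClient.GetContainerReference("sample-container");
+CloudBlockBlob blockBlob = container.GetBlockBlobReference("sample-blob.txt");
+
+// Delete the blob and all of its snapshots, if the blob exists.
+await blockBlob.DeleteIfExistsAsync(DeleteSnapshotsOption.IncludeSnapshots, null, null, null);
+```
+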
+## Create a stored access policy
+
+Related article: [Create a stored access policy with .NET](../common/storage-stored-access-policy-define-dotnet.md)
+
+To create a stored access policy on a container with version 11.x of the .NET client library for Azure Storage, call one of the following methods:
+
+- [CloudBlobContainer.SetPermissions](/dotnet/api/microsoft.azure.storage.blob.cloudblobcontainer.setpermissions)
+- [CloudBlobContainer.SetPermissionsAsync](/dotnet/api/microsoft.azure.storage.blob.cloudblobcontainer.setpermissionsasync)
+
+The following example creates a stored access policy that is in effect for one day and that grants read, write, and list permissions:
+
+```csharp
+private static async Task CreateStoredAccessPolicyAsync(CloudBlobContainer container, string policyName)
+{
+ // Create a new stored access policy and define its constraints.
+    // The access policy provides read, write, and list permissions.
+ SharedAccessBlobPolicy sharedPolicy = new SharedAccessBlobPolicy()
+ {
+ // When the start time for the SAS is omitted, the start time is assumed to be the time when Azure Storage receives the request.
+ SharedAccessExpiryTime = DateTime.UtcNow.AddHours(24),
+ Permissions = SharedAccessBlobPermissions.Read | SharedAccessBlobPermissions.List |
+ SharedAccessBlobPermissions.Write
+ };
+
+ // Get the container's existing permissions.
+ BlobContainerPermissions permissions = await container.GetPermissionsAsync();
+
+ // Add the new policy to the container's permissions, and set the container's permissions.
+ permissions.SharedAccessPolicies.Add(policyName, sharedPolicy);
+ await container.SetPermissionsAsync(permissions);
+}
+```
+
+## Create a service SAS for a blob container
+
+Related article: [Create a service SAS for a container or blob with .NET](sas-service-create-dotnet.md)
+
+To create a service SAS for a container, call the [CloudBlobContainer.GetSharedAccessSignature](/dotnet/api/microsoft.azure.storage.blob.cloudblobcontainer.getsharedaccesssignature) method.
+
+```csharp
+private static string GetContainerSasUri(CloudBlobContainer container,
+ string storedPolicyName = null)
+{
+ string sasContainerToken;
+
+ // If no stored policy is specified, create a new access policy and define its constraints.
+ if (storedPolicyName == null)
+ {
+ // Note that the SharedAccessBlobPolicy class is used both to define
+ // the parameters of an ad hoc SAS, and to construct a shared access policy
+ // that is saved to the container's shared access policies.
+ SharedAccessBlobPolicy adHocPolicy = new SharedAccessBlobPolicy()
+ {
+ // When the start time for the SAS is omitted, the start time is assumed
+ // to be the time when the storage service receives the request. Omitting
+ // the start time for a SAS that is effective immediately helps to avoid clock skew.
+ SharedAccessExpiryTime = DateTime.UtcNow.AddHours(24),
+ Permissions = SharedAccessBlobPermissions.Write | SharedAccessBlobPermissions.List
+ };
+
+ // Generate the shared access signature on the container,
+ // setting the constraints directly on the signature.
+ sasContainerToken = container.GetSharedAccessSignature(adHocPolicy, null);
+
+ Console.WriteLine("SAS for blob container (ad hoc): {0}", sasContainerToken);
+ Console.WriteLine();
+ }
+ else
+ {
+ // Generate the shared access signature on the container. In this case,
+ // all of the constraints for the shared access signature are specified
+ // on the stored access policy, which is provided by name. It is also possible
+ // to specify some constraints on an ad hoc SAS and others on the stored access policy.
+ sasContainerToken = container.GetSharedAccessSignature(null, storedPolicyName);
+
+ Console.WriteLine("SAS for container (stored access policy): {0}", sasContainerToken);
+ Console.WriteLine();
+ }
+
+ // Return the URI string for the container, including the SAS token.
+ return container.Uri + sasContainerToken;
+}
+```
+
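+As a usage sketch (assuming the URI string returned by `GetContainerSasUri` is stored in a hypothetical `sasUri` variable), a client could then work with the container using only the SAS:
+
+```csharp
+// Build a container client from the SAS URI alone.
+CloudBlobContainer containerWithSas = new CloudBlobContainer(new Uri(sasUri));
+
+// List blobs using only the permissions granted by the SAS.
+BlobContinuationToken token = null;
+BlobResultSegment results = await containerWithSas.ListBlobsSegmentedAsync(token);
+foreach (IListBlobItem item in results.Results)
+{
+    Console.WriteLine(item.Uri);
+}
+```
+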
+## Create a service SAS for a blob
+
+Related article: [Create a service SAS for a container or blob with .NET](sas-service-create-dotnet.md)
+
+To create a service SAS for a blob, call the [CloudBlob.GetSharedAccessSignature](/dotnet/api/microsoft.azure.storage.blob.cloudblob.getsharedaccesssignature) method.
+
+```csharp
+private static string GetBlobSasUri(CloudBlobContainer container,
+ string blobName,
+ string policyName = null)
+{
+ string sasBlobToken;
+
+ // Get a reference to a blob within the container.
+ // Note that the blob may not exist yet, but a SAS can still be created for it.
+ CloudBlockBlob blob = container.GetBlockBlobReference(blobName);
+
+ if (policyName == null)
+ {
+ // Create a new access policy and define its constraints.
+ // Note that the SharedAccessBlobPolicy class is used both to define the parameters
+ // of an ad hoc SAS, and to construct a shared access policy that is saved to
+ // the container's shared access policies.
+ SharedAccessBlobPolicy adHocSAS = new SharedAccessBlobPolicy()
+ {
+ // When the start time for the SAS is omitted, the start time is assumed to be
+ // the time when the storage service receives the request. Omitting the start time
+ // for a SAS that is effective immediately helps to avoid clock skew.
+ SharedAccessExpiryTime = DateTime.UtcNow.AddHours(24),
+ Permissions = SharedAccessBlobPermissions.Read |
+ SharedAccessBlobPermissions.Write |
+ SharedAccessBlobPermissions.Create
+ };
+
+ // Generate the shared access signature on the blob,
+ // setting the constraints directly on the signature.
+ sasBlobToken = blob.GetSharedAccessSignature(adHocSAS);
+
+ Console.WriteLine("SAS for blob (ad hoc): {0}", sasBlobToken);
+ Console.WriteLine();
+ }
+ else
+ {
+ // Generate the shared access signature on the blob. In this case, all of the constraints
+ // for the SAS are specified on the container's stored access policy.
+ sasBlobToken = blob.GetSharedAccessSignature(null, policyName);
+
+ Console.WriteLine("SAS for blob (stored access policy): {0}", sasBlobToken);
+ Console.WriteLine();
+ }
+
+ // Return the URI string for the container, including the SAS token.
+ return blob.Uri + sasBlobToken;
+}
+```
+
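+As a usage sketch (assuming the URI string returned by `GetBlobSasUri` is stored in a hypothetical `blobSasUri` variable), a client could read and write the blob with the SAS alone:
+
+```csharp
+// Build a blob client from the SAS URI alone.
+CloudBlockBlob blobWithSas = new CloudBlockBlob(new Uri(blobSasUri));
+
+// Write and then read content using only the permissions granted by the SAS.
+await blobWithSas.UploadTextAsync("Written with a service SAS.");
+string content = await blobWithSas.DownloadTextAsync();
+Console.WriteLine(content);
+```
+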
+## Create an account SAS
+
+Related article: [Create an account SAS with .NET](../common/storage-account-sas-create-dotnet.md)
+
+To create an account SAS for a container, call the [CloudStorageAccount.GetSharedAccessSignature](/dotnet/api/microsoft.azure.storage.cloudstorageaccount.getsharedaccesssignature) method.
+
+The following code example creates an account SAS that is valid for the Blob and File services, and gives the client read, write, and list permissions to access service-level APIs. The account SAS restricts the protocol to HTTPS, so requests must be made over HTTPS. Remember to replace placeholder values in angle brackets with your own values:
+
+```csharp
+static string GetAccountSASToken()
+{
+ // To create the account SAS, you need to use Shared Key credentials. Modify for your account.
+ const string ConnectionString = "DefaultEndpointsProtocol=https;AccountName=<storage-account>;AccountKey=<account-key>";
+ CloudStorageAccount storageAccount = CloudStorageAccount.Parse(ConnectionString);
+
+ // Create a new access policy for the account.
+ SharedAccessAccountPolicy policy = new SharedAccessAccountPolicy()
+ {
+ Permissions = SharedAccessAccountPermissions.Read |
+ SharedAccessAccountPermissions.Write |
+ SharedAccessAccountPermissions.List,
+ Services = SharedAccessAccountServices.Blob | SharedAccessAccountServices.File,
+ ResourceTypes = SharedAccessAccountResourceTypes.Service,
+ SharedAccessExpiryTime = DateTime.UtcNow.AddHours(24),
+ Protocols = SharedAccessProtocol.HttpsOnly
+ };
+
+ // Return the SAS token.
+ return storageAccount.GetSharedAccessSignature(policy);
+}
+```
+
+## Use an account SAS from a client
+
+Related article: [Create an account SAS with .NET](../common/storage-account-sas-create-dotnet.md)
+
+In this snippet, replace the `<storage-account>` placeholder with the name of your storage account.
+
+```csharp
+static void UseAccountSAS(string sasToken)
+{
+ // Create new storage credentials using the SAS token.
+ StorageCredentials accountSAS = new StorageCredentials(sasToken);
+ // Use these credentials and the account name to create a Blob service client.
+ CloudStorageAccount accountWithSAS = new CloudStorageAccount(accountSAS, "<storage-account>", endpointSuffix: null, useHttps: true);
+ CloudBlobClient blobClientWithSAS = accountWithSAS.CreateCloudBlobClient();
+
+ // Now set the service properties for the Blob client created with the SAS.
+ blobClientWithSAS.SetServiceProperties(new ServiceProperties()
+ {
+ HourMetrics = new MetricsProperties()
+ {
+ MetricsLevel = MetricsLevel.ServiceAndApi,
+ RetentionDays = 7,
+ Version = "1.0"
+ },
+ MinuteMetrics = new MetricsProperties()
+ {
+ MetricsLevel = MetricsLevel.ServiceAndApi,
+ RetentionDays = 7,
+ Version = "1.0"
+ },
+ Logging = new LoggingProperties()
+ {
+ LoggingOperations = LoggingOperations.All,
+ RetentionDays = 14,
+ Version = "1.0"
+ }
+ });
+
+ // The permissions granted by the account SAS also permit you to retrieve service properties.
+ ServiceProperties serviceProperties = blobClientWithSAS.GetServiceProperties();
+ Console.WriteLine(serviceProperties.HourMetrics.MetricsLevel);
+ Console.WriteLine(serviceProperties.HourMetrics.RetentionDays);
+ Console.WriteLine(serviceProperties.HourMetrics.Version);
+}
+```
+
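+The two methods above could be wired together as follows (a small sketch, not from the related article):
+
+```csharp
+// Generate the account SAS, then use it from the client.
+string sasToken = GetAccountSASToken();
+UseAccountSAS(sasToken);
+```
+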
+## Optimistic concurrency for blobs
+
+Related article: [Managing Concurrency in Blob storage](concurrency-manage.md)
+
+```csharp
+public void DemonstrateOptimisticConcurrencyBlob(string containerName, string blobName)
+{
+ Console.WriteLine("Demonstrate optimistic concurrency");
+
+ // Parse connection string and create container.
+ CloudStorageAccount storageAccount = CloudStorageAccount.Parse(ConnectionString);
+ CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
+ CloudBlobContainer container = blobClient.GetContainerReference(containerName);
+ container.CreateIfNotExists();
+
+ // Create test blob. The default strategy is last writer wins, so
+ // write operation will overwrite existing blob if present.
+ CloudBlockBlob blockBlob = container.GetBlockBlobReference(blobName);
+ blockBlob.UploadText("Hello World!");
+
+ // Retrieve the ETag from the newly created blob.
+ string originalETag = blockBlob.Properties.ETag;
+ Console.WriteLine("Blob added. Original ETag = {0}", originalETag);
+
+    // This code simulates an update by another client.
+ string helloText = "Blob updated by another client.";
+ // No ETag was provided, so original blob is overwritten and ETag updated.
+ blockBlob.UploadText(helloText);
+ Console.WriteLine("Blob updated. Updated ETag = {0}", blockBlob.Properties.ETag);
+
+ // Now try to update the blob using the original ETag value.
+ try
+ {
+ Console.WriteLine(@"Attempt to update blob using original ETag
+ to generate if-match access condition");
+ blockBlob.UploadText(helloText, accessCondition: AccessCondition.GenerateIfMatchCondition(originalETag));
+ }
+ catch (StorageException ex)
+ {
+ if (ex.RequestInformation.HttpStatusCode == (int)HttpStatusCode.PreconditionFailed)
+ {
+ Console.WriteLine(@"Precondition failure as expected.
+ Blob's ETag does not match.");
+ }
+ else
+ {
+ throw;
+ }
+ }
+ Console.WriteLine();
+}
+```
+
+## Pessimistic concurrency for blobs
+
+Related article: [Managing Concurrency in Blob storage](concurrency-manage.md)
+
+```csharp
+public void DemonstratePessimisticConcurrencyBlob(string containerName, string blobName)
+{
+ Console.WriteLine("Demonstrate pessimistic concurrency");
+
+ // Parse connection string and create container.
+ CloudStorageAccount storageAccount = CloudStorageAccount.Parse(ConnectionString);
+ CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
+ CloudBlobContainer container = blobClient.GetContainerReference(containerName);
+ container.CreateIfNotExists();
+
+ CloudBlockBlob blockBlob = container.GetBlockBlobReference(blobName);
+ blockBlob.UploadText("Hello World!");
+ Console.WriteLine("Blob added.");
+
+ // Acquire lease for 15 seconds.
+ string lease = blockBlob.AcquireLease(TimeSpan.FromSeconds(15), null);
+ Console.WriteLine("Blob lease acquired. Lease = {0}", lease);
+
+ // Update blob using lease. This operation should succeed.
+ const string helloText = "Blob updated";
+ var accessCondition = AccessCondition.GenerateLeaseCondition(lease);
+ blockBlob.UploadText(helloText, accessCondition: accessCondition);
+ Console.WriteLine("Blob updated using an exclusive lease");
+
+    // Simulate another client attempting to update the blob without providing a lease.
+ try
+ {
+ // Operation will fail as no valid lease was provided.
+ Console.WriteLine("Now try to update blob without valid lease.");
+ blockBlob.UploadText("Update operation will fail without lease.");
+ }
+ catch (StorageException ex)
+ {
+ if (ex.RequestInformation.HttpStatusCode == (int)HttpStatusCode.PreconditionFailed)
+ {
+ Console.WriteLine(@"Precondition failure error as expected.
+ Blob lease not provided.");
+ }
+ else
+ {
+ throw;
+ }
+ }
+
+ // Release lease proactively.
+ blockBlob.ReleaseLease(accessCondition);
+ Console.WriteLine();
+}
+```
+
+## Build a highly available app with Blob Storage
+
+Related article: [Tutorial: Build a highly available application with Blob storage](storage-create-geo-redundant-storage.md).
+
+### Download the sample
+
+Download the [sample project](https://github.com/Azure-Samples/storage-dotnet-circuit-breaker-pattern-ha-apps-using-ra-grs/archive/master.zip), extract (unzip) the storage-dotnet-circuit-breaker-pattern-ha-apps-using-ra-grs.zip file, then navigate to the **v11** folder to find the project files.
+
+You can also use [git](https://git-scm.com/) to download a copy of the application to your development environment. The sample project in the v11 folder contains a console application.
+
+```bash
+git clone https://github.com/Azure-Samples/storage-dotnet-circuit-breaker-pattern-ha-apps-using-ra-grs.git
+```
+
+### Configure the sample
+
+In the application, you must provide the connection string for your storage account. You can store this connection string within an environment variable on the local machine running the application. Follow one of the examples below depending on your Operating System to create the environment variable.
+
+In the Azure portal, navigate to your storage account. Select **Access keys** under **Settings** in your storage account. Copy the **connection string** from the primary or secondary key. Run one of the following commands based on your operating system, replacing \<yourconnectionstring\> with your actual connection string. This command saves an environment variable to the local machine. In Windows, the environment variable isn't available until you reload the **Command Prompt** or shell you're using.
+
+### Run the console application
+
+In Visual Studio, press **F5** or select **Start** to begin debugging the application. Visual Studio automatically restores missing NuGet packages if package restore is configured. To learn more, visit [Installing and reinstalling packages with package restore](/nuget/consume-packages/package-restore#package-restore-overview).
+
+A console window launches and the application begins running. The application uploads the **HelloWorld.png** image from the solution to the storage account. The application checks to ensure the image has replicated to the secondary RA-GZRS endpoint. It then begins downloading the image up to 999 times. Each read is represented by a **P** or an **S**, where **P** represents the primary endpoint and **S** represents the secondary endpoint.
+
+![Screenshot of Console application output.](media/storage-create-geo-redundant-storage/figure3.png)
+
+In the sample code, the `RunCircuitBreakerAsync` task in the `Program.cs` file is used to download an image from the storage account using the [DownloadToFileAsync](/dotnet/api/microsoft.azure.storage.blob.cloudblob.downloadtofileasync) method. Prior to the download, an [OperationContext](/dotnet/api/microsoft.azure.cosmos.table.operationcontext) is defined. The operation context defines event handlers that fire when a download completes successfully, or if a download fails and is retrying.
+
+### Understand the sample code
+
+#### Retry event handler
+
+The `OperationContextRetrying` event handler is called when the download of the image fails and is set to retry. If the maximum number of retries defined in the application is reached, the [LocationMode](/dotnet/api/microsoft.azure.storage.blob.blobrequestoptions.locationmode) of the request is changed to `SecondaryOnly`. This setting forces the application to attempt to download the image from the secondary endpoint. This configuration reduces the time taken to request the image as the primary endpoint isn't retried indefinitely.
+
+```csharp
+private static void OperationContextRetrying(object sender, RequestEventArgs e)
+{
+ retryCount++;
+ Console.WriteLine("Retrying event because of failure reading the primary. RetryCount = " + retryCount);
+
+ // Check if we have had more than n retries in which case switch to secondary.
+ if (retryCount >= retryThreshold)
+ {
+
+ // Check to see if we can fail over to secondary.
+ if (blobClient.DefaultRequestOptions.LocationMode != LocationMode.SecondaryOnly)
+ {
+ blobClient.DefaultRequestOptions.LocationMode = LocationMode.SecondaryOnly;
+ retryCount = 0;
+ }
+ else
+ {
+ throw new ApplicationException("Both primary and secondary are unreachable. Check your application's network connection. ");
+ }
+ }
+}
+```
+
+#### Request completed event handler
+
+The `OperationContextRequestCompleted` event handler is called when the download of the image is successful. If the application is using the secondary endpoint, the application continues to use this endpoint up to 20 times. After 20 times, the application sets the [LocationMode](/dotnet/api/microsoft.azure.storage.blob.blobrequestoptions.locationmode) back to `PrimaryThenSecondary` and retries the primary endpoint. If a request is successful, the application continues to read from the primary endpoint.
+
+```csharp
+private static void OperationContextRequestCompleted(object sender, RequestEventArgs e)
+{
+ if (blobClient.DefaultRequestOptions.LocationMode == LocationMode.SecondaryOnly)
+ {
+ // You're reading the secondary. Let it read the secondary [secondaryThreshold] times,
+ // then switch back to the primary and see if it's available now.
+ secondaryReadCount++;
+ if (secondaryReadCount >= secondaryThreshold)
+ {
+ blobClient.DefaultRequestOptions.LocationMode = LocationMode.PrimaryThenSecondary;
+ secondaryReadCount = 0;
+ }
+ }
+}
+```
+
+## Upload large amounts of random data to Azure storage
+
+Related article: [Upload large amounts of random data in parallel to Azure storage](storage-blob-scalable-app-upload-files.md)
+
+The minimum and maximum number of threads are set to 100 to ensure that a large number of concurrent connections are allowed.
+
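+The code that applies these limits isn't included in the snippet below. A minimal sketch, assuming the limits are raised in the application's startup code (with `System.Threading` and `System.Net` imported), might look like this:
+
+```csharp
+// Sketch only: raise the thread pool and outbound connection limits
+// before starting the uploads. 100 matches the value described above.
+ThreadPool.SetMinThreads(100, 100);
+ThreadPool.SetMaxThreads(100, 100);
+
+// Allow up to 100 concurrent connections to the storage endpoint.
+ServicePointManager.DefaultConnectionLimit = 100;
+```
+
+The upload method itself follows:
+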
+```csharp
+private static async Task UploadFilesAsync()
+{
+ // Create five randomly named containers to store the uploaded files.
+ CloudBlobContainer[] containers = await GetRandomContainersAsync();
+
+ var currentdir = System.IO.Directory.GetCurrentDirectory();
+
+ // Path to the directory to upload
+ string uploadPath = currentdir + "\\upload";
+
+ // Start a timer to measure how long it takes to upload all the files.
+ Stopwatch time = Stopwatch.StartNew();
+
+ try
+ {
+ Console.WriteLine("Iterating in directory: {0}", uploadPath);
+
+ int count = 0;
+ int max_outstanding = 100;
+ int completed_count = 0;
+
+ // Define the BlobRequestOptions on the upload.
+ // This includes defining an exponential retry policy to ensure that failed connections
+ // are retried with a back off policy. As multiple large files are being uploaded using
+ // large block sizes, this can cause an issue if an exponential retry policy is not defined.
+ // Additionally, parallel operations are enabled with a thread count of 8.
+ // This should be a multiple of the number of processor cores in the machine.
+ // Lastly, MD5 hash validation is disabled for this example, improving the upload speed.
+ BlobRequestOptions options = new BlobRequestOptions
+ {
+ ParallelOperationThreadCount = 8,
+ DisableContentMD5Validation = true,
+ StoreBlobContentMD5 = false
+ };
+
+ // Create a new instance of the SemaphoreSlim class to
+ // define the number of threads to use in the application.
+ SemaphoreSlim sem = new SemaphoreSlim(max_outstanding, max_outstanding);
+
+ List<Task> tasks = new List<Task>();
+ Console.WriteLine("Found {0} file(s)", Directory.GetFiles(uploadPath).Count());
+
+ // Iterate through the files
+ foreach (string path in Directory.GetFiles(uploadPath))
+ {
+ var container = containers[count % 5];
+ string fileName = Path.GetFileName(path);
+ Console.WriteLine("Uploading {0} to container {1}", path, container.Name);
+ CloudBlockBlob blockBlob = container.GetBlockBlobReference(fileName);
+
+ // Set the block size to 100MB.
+ blockBlob.StreamWriteSizeInBytes = 100 * 1024 * 1024;
+
+ await sem.WaitAsync();
+
+ // Create a task for each file to upload. The tasks are
+ // added to a collection and all run asynchronously.
+ tasks.Add(blockBlob.UploadFromFileAsync(path, null, options, null).ContinueWith((t) =>
+ {
+ sem.Release();
+ Interlocked.Increment(ref completed_count);
+ }));
+
+ count++;
+ }
+
+ // Run all the tasks asynchronously.
+ await Task.WhenAll(tasks);
+
+ time.Stop();
+
+ Console.WriteLine("Upload has been completed in {0} seconds. Press any key to continue", time.Elapsed.TotalSeconds.ToString());
+
+ Console.ReadLine();
+ }
+ catch (DirectoryNotFoundException ex)
+ {
+ Console.WriteLine("Error parsing files in the directory: {0}", ex.Message);
+ }
+ catch (Exception ex)
+ {
+ Console.WriteLine(ex.Message);
+ }
+}
+```
+
+In addition to the threading and connection limit settings, the [BlobRequestOptions](/dotnet/api/microsoft.azure.storage.blob.blobrequestoptions) for the [UploadFromStreamAsync](/dotnet/api/microsoft.azure.storage.blob.cloudblockblob.uploadfromstreamasync) method are configured to use parallelism and disable MD5 hash validation. The files are uploaded in 100-MB blocks. This configuration provides better performance, but can be costly on a poorly performing network: if there's a failure, the entire 100-MB block is retried. The options are summarized in the following table, and a sketch of applying the retry policy appears after it.
+
+|Property|Value|Description|
+||||
+|[ParallelOperationThreadCount](/dotnet/api/microsoft.azure.storage.blob.blobrequestoptions.paralleloperationthreadcount)| 8| The setting breaks the blob into blocks when uploading. For highest performance, this value should be eight times the number of cores. |
+|[DisableContentMD5Validation](/dotnet/api/microsoft.azure.storage.blob.blobrequestoptions.disablecontentmd5validation)| true| This property disables checking the MD5 hash of the content uploaded. Disabling MD5 validation produces a faster transfer, but doesn't confirm the validity or integrity of the files being transferred. |
+|[StoreBlobContentMD5](/dotnet/api/microsoft.azure.storage.blob.blobrequestoptions.storeblobcontentmd5)| false| This property determines if an MD5 hash is calculated and stored with the file. |
+| [RetryPolicy](/dotnet/api/microsoft.azure.storage.blob.blobrequestoptions.retrypolicy)| 2-second backoff with 10 max retries |Determines the retry policy of requests. Connection failures are retried; in this example, an [ExponentialRetry](/dotnet/api/microsoft.azure.batch.common.exponentialretry) policy is configured with a 2-second backoff and a maximum retry count of 10. This setting is important when your application gets close to hitting the scalability targets for Blob storage. For more information, see [Scalability and performance targets for Blob storage](../blobs/scalability-targets.md). |
+
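+The retry policy in the table isn't set in the snippet above. A minimal sketch of applying it to the same options object, assuming the `Microsoft.Azure.Storage.RetryPolicies` namespace from the 11.x library is imported, might look like this:
+
+```csharp
+// Sketch only: exponential retry with a 2-second backoff and up to 10 attempts.
+BlobRequestOptions optionsWithRetry = new BlobRequestOptions
+{
+    ParallelOperationThreadCount = 8,
+    DisableContentMD5Validation = true,
+    StoreBlobContentMD5 = false,
+    RetryPolicy = new ExponentialRetry(TimeSpan.FromSeconds(2), 10)
+};
+```
+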
+## Download large amounts of random data from Azure storage
+
+Related article: [Download large amounts of random data from Azure storage](storage-blob-scalable-app-download-files.md)
+
+The application reads the containers located in the storage account specified in the **storageconnectionstring**. It iterates through the blobs in the containers, 10 at a time, using the [ListBlobsSegmentedAsync](/dotnet/api/microsoft.azure.storage.blob.cloudblobclient.listblobssegmentedasync) method, and downloads them to the local machine using the [DownloadToFileAsync](/dotnet/api/microsoft.azure.storage.blob.cloudblob.downloadtofileasync) method.
+
+The following table shows the [BlobRequestOptions](/dotnet/api/microsoft.azure.storage.blob.blobrequestoptions) defined for each blob as it is downloaded.
+
+|Property|Value|Description|
+||||
+|[DisableContentMD5Validation](/dotnet/api/microsoft.azure.storage.blob.blobrequestoptions.disablecontentmd5validation)| true| This property disables checking the MD5 hash of the downloaded content. Disabling MD5 validation produces a faster transfer, but doesn't confirm the validity or integrity of the files being transferred. |
+|[StoreBlobContentMD5](/dotnet/api/microsoft.azure.storage.blob.blobrequestoptions.storeblobcontentmd5)| false| This property determines if an MD5 hash is calculated and stored. |
+
+```csharp
+private static async Task DownloadFilesAsync()
+{
+ CloudBlobClient blobClient = GetCloudBlobClient();
+
+ // Define the BlobRequestOptions on the download, including disabling MD5
+ // hash validation for this example, this improves the download speed.
+ BlobRequestOptions options = new BlobRequestOptions
+ {
+ DisableContentMD5Validation = true,
+ StoreBlobContentMD5 = false
+ };
+
+ // Retrieve the list of containers in the storage account.
+ // Create a directory and configure variables for use later.
+ BlobContinuationToken continuationToken = null;
+ List<CloudBlobContainer> containers = new List<CloudBlobContainer>();
+ do
+ {
+ var listingResult = await blobClient.ListContainersSegmentedAsync(continuationToken);
+ continuationToken = listingResult.ContinuationToken;
+ containers.AddRange(listingResult.Results);
+ }
+ while (continuationToken != null);
+
+ var directory = Directory.CreateDirectory("download");
+ BlobResultSegment resultSegment = null;
+ Stopwatch time = Stopwatch.StartNew();
+
+ // Download the blobs
+ try
+ {
+ List<Task> tasks = new List<Task>();
+ int max_outstanding = 100;
+ int completed_count = 0;
+
+ // Create a new instance of the SemaphoreSlim class to
+ // define the number of threads to use in the application.
+ SemaphoreSlim sem = new SemaphoreSlim(max_outstanding, max_outstanding);
+
+ // Iterate through the containers
+ foreach (CloudBlobContainer container in containers)
+ {
+ do
+ {
+ // Return the blobs from the container, 10 at a time.
+ resultSegment = await container.ListBlobsSegmentedAsync(null, true, BlobListingDetails.All, 10, continuationToken, null, null);
+ continuationToken = resultSegment.ContinuationToken;
+ {
+ foreach (var blobItem in resultSegment.Results)
+ {
+
+ if (((CloudBlob)blobItem).Properties.BlobType == BlobType.BlockBlob)
+ {
+ // Get the blob and add a task to download the blob asynchronously from the storage account.
+ CloudBlockBlob blockBlob = container.GetBlockBlobReference(((CloudBlockBlob)blobItem).Name);
+ Console.WriteLine("Downloading {0} from container {1}", blockBlob.Name, container.Name);
+ await sem.WaitAsync();
+ tasks.Add(blockBlob.DownloadToFileAsync(directory.FullName + "\\" + blockBlob.Name, FileMode.Create, null, options, null).ContinueWith((t) =>
+ {
+ sem.Release();
+ Interlocked.Increment(ref completed_count);
+ }));
+
+ }
+ }
+ }
+ }
+ while (continuationToken != null);
+ }
+
+ // Creates an asynchronous task that completes when all the downloads complete.
+ await Task.WhenAll(tasks);
+ }
+ catch (Exception e)
+ {
+ Console.WriteLine("\nError encountered during transfer: {0}", e.Message);
+ }
+
+ time.Stop();
+ Console.WriteLine("Download has been completed in {0} seconds. Press any key to continue", time.Elapsed.TotalSeconds.ToString());
+ Console.ReadLine();
+}
+```
+
+## Enable Azure Storage Analytics logs (classic)
+
+Related article: [Enable and manage Azure Storage Analytics logs (classic)](../common/manage-storage-analytics-logs.md)
+
+```csharp
+var storageAccount = CloudStorageAccount.Parse(connStr);
+var queueClient = storageAccount.CreateCloudQueueClient();
+var serviceProperties = queueClient.GetServiceProperties();
+
+serviceProperties.Logging.LoggingOperations = LoggingOperations.All;
+serviceProperties.Logging.RetentionDays = 2;
+
+queueClient.SetServiceProperties(serviceProperties);
+```
+
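+The snippet above targets the Queue service. An equivalent sketch for the Blob service, following the same pattern (an assumption, not shown in the related article), would be:
+
+```csharp
+// Sketch only: enable logging for the Blob service with a 2-day retention.
+var blobClient = storageAccount.CreateCloudBlobClient();
+var blobServiceProperties = blobClient.GetServiceProperties();
+
+blobServiceProperties.Logging.LoggingOperations = LoggingOperations.All;
+blobServiceProperties.Logging.RetentionDays = 2;
+
+blobClient.SetServiceProperties(blobServiceProperties);
+```
+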
+## Modify log data retention period
+
+Related article: [Enable and manage Azure Storage Analytics logs (classic)](../common/manage-storage-analytics-logs.md)
+
+The following example prints to the console the retention period for blob and queue storage services.
+
+```csharp
+var storageAccount = CloudStorageAccount.Parse(connectionString);
+
+var blobClient = storageAccount.CreateCloudBlobClient();
+var queueClient = storageAccount.CreateCloudQueueClient();
+
+var blobserviceProperties = blobClient.GetServiceProperties();
+var queueserviceProperties = queueClient.GetServiceProperties();
+
+Console.WriteLine("Retention period for logs from the blob service is: " +
+ blobserviceProperties.Logging.RetentionDays.ToString());
+
+Console.WriteLine("Retention period for logs from the queue service is: " +
+ queueserviceProperties.Logging.RetentionDays.ToString());
+```
+
+The following example changes the retention period for logs for the blob and queue storage services to 4 days.
+
+```csharp
+
+blobserviceProperties.Logging.RetentionDays = 4;
+queueserviceProperties.Logging.RetentionDays = 4;
+
+blobClient.SetServiceProperties(blobserviceProperties);
+queueClient.SetServiceProperties(queueserviceProperties);
+```
+
+## Enable Azure Storage Analytics metrics (classic)
+
+Related article: [Enable and manage Azure Storage Analytics metrics (classic)](../common/manage-storage-analytics-metrics.md)
+
+```csharp
+var storageAccount = CloudStorageAccount.Parse(connStr);
+var queueClient = storageAccount.CreateCloudQueueClient();
+var serviceProperties = queueClient.GetServiceProperties();
+
+serviceProperties.HourMetrics.MetricsLevel = MetricsLevel.Service;
+serviceProperties.HourMetrics.RetentionDays = 10;
+
+queueClient.SetServiceProperties(serviceProperties);
+```
+
+## Configure Transport Layer Security (TLS) for a client application
+
+Related article: [Configure Transport Layer Security (TLS) for a client application](../common/transport-layer-security-configure-client-version.md)
+
+The following sample shows how to enable TLS 1.2 in a .NET client using version 11.x of the Azure Storage client library:
+
+```csharp
+static void EnableTls12()
+{
+ // Enable TLS 1.2 before connecting to Azure Storage
+ System.Net.ServicePointManager.SecurityProtocol = System.Net.SecurityProtocolType.Tls12;
+
+ // Add your connection string here.
+ string connectionString = "";
+
+ // Connect to Azure Storage and create a new container.
+ CloudStorageAccount storageAccount = CloudStorageAccount.Parse(connectionString);
+ CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
+
+ CloudBlobContainer container = blobClient.GetContainerReference("sample-container");
+ container.CreateIfNotExists();
+}
+```
+
+## Monitor, diagnose, and troubleshoot Microsoft Azure Storage (classic)
+
+Related article: [Monitor, diagnose, and troubleshoot Microsoft Azure Storage (classic)](../common/storage-monitoring-diagnosing-troubleshooting.md)
+
+If the Storage Client Library throws a **StorageException** in the client, the **RequestInformation** property contains a **RequestResult** object that includes a **ServiceRequestID** property. You can also access a **RequestResult** object from an **OperationContext** instance.
+
+The code sample below demonstrates how to set a custom **ClientRequestId** value by attaching an **OperationContext** object to the request to the storage service. It also shows how to retrieve the **ServerRequestId** value from the response message.
+
+```csharp
+//Parse the connection string for the storage account.
+const string ConnectionString = "DefaultEndpointsProtocol=https;AccountName=account-name;AccountKey=account-key";
+CloudStorageAccount storageAccount = CloudStorageAccount.Parse(ConnectionString);
+CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
+
+// Create an Operation Context that includes custom ClientRequestId string based on constants defined within the application along with a Guid.
+OperationContext oc = new OperationContext();
+oc.ClientRequestID = String.Format("{0} {1} {2} {3}", HOSTNAME, APPNAME, USERID, Guid.NewGuid().ToString());
+
+try
+{
+ CloudBlobContainer container = blobClient.GetContainerReference("democontainer");
+ ICloudBlob blob = container.GetBlobReferenceFromServer("testImage.jpg", null, null, oc);
+ var downloadToPath = string.Format("./{0}", blob.Name);
+ using (var fs = File.OpenWrite(downloadToPath))
+ {
+ blob.DownloadToStream(fs, null, null, oc);
+ Console.WriteLine("\t Blob downloaded to file: {0}", downloadToPath);
+ }
+}
+catch (StorageException storageException)
+{
+ Console.WriteLine("Storage exception {0} occurred", storageException.Message);
+ // Multiple results may exist due to client side retry logic - each retried operation will have a unique ServiceRequestId
+ foreach (var result in oc.RequestResults)
+ {
+ Console.WriteLine("HttpStatus: {0}, ServiceRequestId {1}", result.HttpStatusCode, result.ServiceRequestID);
+ }
+}
+```
+
+## Investigating client performance issues - disable the Nagle algorithm
+
+Related article: [Monitor, diagnose, and troubleshoot Microsoft Azure Storage (classic)](../common/storage-monitoring-diagnosing-troubleshooting.md)
+
+```csharp
+var storageAccount = CloudStorageAccount.Parse(connStr);
+ServicePoint queueServicePoint = ServicePointManager.FindServicePoint(storageAccount.QueueEndpoint);
+queueServicePoint.UseNagleAlgorithm = false;
+```
+
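+The snippet above disables the Nagle algorithm for the Queue endpoint. The same setting could be applied to the Blob endpoint (a sketch, not from the related article):
+
+```csharp
+// Sketch only: disable the Nagle algorithm for the Blob service endpoint as well.
+ServicePoint blobServicePoint = ServicePointManager.FindServicePoint(storageAccount.BlobEndpoint);
+blobServicePoint.UseNagleAlgorithm = false;
+```
+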
+## Investigating network latency issues - configure Cross Origin Resource Sharing (CORS)
+
+Related article: [Monitor, diagnose, and troubleshoot Microsoft Azure Storage (classic)](../common/storage-monitoring-diagnosing-troubleshooting.md)
+
+```csharp
+CloudBlobClient client = new CloudBlobClient(blobEndpoint, new StorageCredentials(accountName, accountKey));
+// Set the service properties.
+ServiceProperties sp = client.GetServiceProperties();
+sp.DefaultServiceVersion = "2013-08-15";
+CorsRule cr = new CorsRule();
+cr.AllowedHeaders.Add("*");
+cr.AllowedMethods = CorsHttpMethods.Get | CorsHttpMethods.Put;
+cr.AllowedOrigins.Add("http://www.contoso.com");
+cr.ExposedHeaders.Add("x-ms-*");
+cr.MaxAgeInSeconds = 5;
+sp.Cors.CorsRules.Clear();
+sp.Cors.CorsRules.Add(cr);
+client.SetServiceProperties(sp);
+```
+
+## Creating an empty page blob of a specified size
+
+Related article: [Overview of Azure page blobs](storage-blob-pageblob-overview.md)
+
+To create a page blob, we first create a **CloudBlobClient** object, with the base URI for accessing the blob storage for your storage account (*pbaccount* in figure 1) along with the **StorageCredentialsAccountAndKey** object, as shown in the following example. The example then shows creating a reference to a **CloudBlobContainer** object, and then creating the container (*testvhds*) if it doesn't already exist. Then using the **CloudBlobContainer** object, create a reference to a **CloudPageBlob** object by specifying the page blob name (os4.vhd) to access. To create the page blob, call [CloudPageBlob.Create](/dotnet/api/microsoft.azure.storage.blob.cloudpageblob.create), passing in the max size for the blob to create. The *blobSize* must be a multiple of 512 bytes.
+
+```csharp
+using Microsoft.Azure;
+using Microsoft.Azure.Storage;
+using Microsoft.Azure.Storage.Blob;
+
+long OneGigabyteAsBytes = 1024 * 1024 * 1024;
+// Retrieve storage account from connection string.
+CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
+ CloudConfigurationManager.GetSetting("StorageConnectionString"));
+
+// Create the blob client.
+CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
+
+// Retrieve a reference to a container.
+CloudBlobContainer container = blobClient.GetContainerReference("testvhds");
+
+// Create the container if it doesn't already exist.
+container.CreateIfNotExists();
+
+CloudPageBlob pageBlob = container.GetPageBlobReference("os4.vhd");
+pageBlob.Create(16 * OneGigabyteAsBytes);
+```
+
+## Resizing a page blob
+
+Related article: [Overview of Azure page blobs](storage-blob-pageblob-overview.md)
+
+To resize a page blob after creation, use the [Resize](/dotnet/api/microsoft.azure.storage.blob.cloudpageblob.resize) method. The requested size should be a multiple of 512 bytes.
+
+```csharp
+pageBlob.Resize(32 * OneGigabyteAsBytes);
+```
+
+## Writing pages to a page blob
+
+Related article: [Overview of Azure page blobs](storage-blob-pageblob-overview.md)
+
+To write pages, use the [CloudPageBlob.WritePages](/dotnet/api/microsoft.azure.storage.blob.cloudpageblob.beginwritepages) method.
+
+```csharp
+pageBlob.WritePages(dataStream, startingOffset);
+```
+
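+The `dataStream` and `startingOffset` values above are placeholders. As a sketch, assuming a 512-byte-aligned buffer written at the start of the blob (using `System.IO.MemoryStream`), the call might look like this:
+
+```csharp
+// Sketch only: write one 512-byte page at offset 0.
+// Both the offset and the stream length must be multiples of 512 bytes.
+byte[] pageData = new byte[512];
+new Random().NextBytes(pageData);
+
+using (var dataStream = new MemoryStream(pageData))
+{
+    pageBlob.WritePages(dataStream, 0);
+}
+```
+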
+## Reading pages from a page blob
+
+Related article: [Overview of Azure page blobs](storage-blob-pageblob-overview.md)
+
+To read pages, use the [CloudPageBlob.DownloadRangeToByteArray](/dotnet/api/microsoft.azure.storage.blob.icloudblob.downloadrangetobytearray) method to read a range of bytes from the page blob.
+
+```csharp
+byte[] buffer = new byte[rangeSize];
+pageBlob.DownloadRangeToByteArray(buffer, bufferOffset, pageBlobOffset, rangeSize);
+```
+
+To determine which pages are backed by data, use [CloudPageBlob.GetPageRanges](/dotnet/api/microsoft.azure.storage.blob.cloudpageblob.getpageranges). You can then enumerate the returned ranges and download the data in each range.
+
+```csharp
+IEnumerable<PageRange> pageRanges = pageBlob.GetPageRanges();
+
+foreach (PageRange range in pageRanges)
+{
+ // Calculate the range size
+ int rangeSize = (int)(range.EndOffset + 1 - range.StartOffset);
+
+ byte[] buffer = new byte[rangeSize];
+
+    // Read from the correct starting offset in the page blob and
+    // place the data at the start of the buffer byte array
+    pageBlob.DownloadRangeToByteArray(buffer, 0, range.StartOffset, rangeSize);
+
+ // Then use the buffer for the page range just read
+}
+```
storage Blob V11 Samples Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-v11-samples-javascript.md
+
+ Title: Azure Blob Storage code samples using JavaScript version 11.x client libraries
+
+description: View code samples that use the Azure Blob Storage client library for JavaScript version 11.x.
+++++ Last updated : 04/03/2023+++
+# Azure Blob Storage code samples using JavaScript version 11.x client libraries
+
+This article shows code samples that use version 11.x of the Azure Blob Storage client library for JavaScript.
++
+## Build a highly available app with Blob Storage
+
+Related article: [Tutorial: Build a highly available application with Blob storage](storage-create-geo-redundant-storage.md)
+
+### Download the sample
+
+[Download the sample project](https://github.com/Azure-Samples/storage-node-v10-ha-ra-grs) and unzip the file. You can also use [git](https://git-scm.com/) to download a copy of the application to your development environment. The sample project contains a basic Node.js application.
+
+```bash
+git clone https://github.com/Azure-Samples/storage-node-v10-ha-ra-grs.git
+```
+
+### Configure the sample
+
+To run this sample, you must add your storage account credentials to the `.env.example` file and then rename it to `.env`.
+
+```
+AZURE_STORAGE_ACCOUNT_NAME=<replace with your storage account name>
+AZURE_STORAGE_ACCOUNT_ACCESS_KEY=<replace with your storage account access key>
+```
+
+You can find this information in the Azure portal by navigating to your storage account and selecting **Access keys** in the **Settings** section.
+
+Install the required dependencies by opening a command prompt, navigating to the sample folder, then entering `npm install`.
+
+### Run the console application
+
+To run the sample, open a command prompt, navigate to the sample folder, then enter `node index.js`.
+
+The sample creates a container in your Blob storage account, uploads **HelloWorld.png** into the container, then repeatedly checks whether the container and image have replicated to the secondary region. After replication, it prompts you to enter **D** or **Q** (followed by ENTER) to download or quit. Your output should look similar to the following example:
+
+```
+Created container successfully: newcontainer1550799840726
+Uploaded blob: HelloWorld.png
+Checking to see if container and blob have replicated to secondary region.
+[0] Container has not replicated to secondary region yet: newcontainer1550799840726 : ContainerNotFound
+[1] Container has not replicated to secondary region yet: newcontainer1550799840726 : ContainerNotFound
+...
+[31] Container has not replicated to secondary region yet: newcontainer1550799840726 : ContainerNotFound
+[32] Container found, but blob has not replicated to secondary region yet.
+...
+[67] Container found, but blob has not replicated to secondary region yet.
+[68] Blob has replicated to secondary region.
+Ready for blob download. Enter (D) to download or (Q) to quit, followed by ENTER.
+> D
+Attempting to download blob...
+Blob downloaded from primary endpoint.
+> Q
+Exiting...
+Deleted container newcontainer1550799840726
+```
+
+### Understand the code sample
+
+With the Node.js V10 SDK, callback handlers are unnecessary. Instead, the sample creates a pipeline configured with retry options and a secondary endpoint. This configuration allows the application to automatically switch to the secondary pipeline if it fails to reach your data through the primary pipeline.
+
+```javascript
+const accountName = process.env.AZURE_STORAGE_ACCOUNT_NAME;
+const storageAccessKey = process.env.AZURE_STORAGE_ACCOUNT_ACCESS_KEY;
+const sharedKeyCredential = new SharedKeyCredential(accountName, storageAccessKey);
+
+const primaryAccountURL = `https://${accountName}.blob.core.windows.net`;
+const secondaryAccountURL = `https://${accountName}-secondary.blob.core.windows.net`;
+
+const pipeline = StorageURL.newPipeline(sharedKeyCredential, {
+ retryOptions: {
+ maxTries: 3,
+ tryTimeoutInMs: 10000,
+ retryDelayInMs: 500,
+ maxRetryDelayInMs: 1000,
+ secondaryHost: secondaryAccountURL
+ }
+});
+```
storage Blob V2 Samples Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-v2-samples-python.md
+
+ Title: Azure Blob Storage code samples using Python version 2.1 client libraries
+
+description: View code samples that use the Azure Blob Storage client library for Python version 2.1.
+++++ Last updated : 04/03/2023+++
+# Azure Blob Storage code samples using Python version 2.1 client libraries
+
+This article shows code samples that use version 2.1 of the Azure Blob Storage client library for Python.
++
+## Build a highly available app with Blob Storage
+
+Related article: [Tutorial: Build a highly available application with Blob storage](storage-create-geo-redundant-storage.md)
+
+### Download the sample
+
+[Download the sample project](https://github.com/Azure-Samples/storage-python-circuit-breaker-pattern-ha-apps-using-ra-grs/archive/master.zip) and extract (unzip) the storage-python-circuit-breaker-pattern-ha-apps-using-ra-grs.zip file. You can also use [git](https://git-scm.com/) to download a copy of the application to your development environment. The sample project contains a basic Python application.
+
+```bash
+git clone https://github.com/Azure-Samples/storage-python-circuit-breaker-pattern-ha-apps-using-ra-grs.git
+```
+
+### Configure the sample
+
+In the application, you must provide your storage account credentials. You can store this information in environment variables on the local machine running the application. Follow one of the examples below depending on your Operating System to create the environment variables.
+
+In the Azure portal, navigate to your storage account. Select **Access keys** under **Settings** in your storage account. Paste the **Storage account name** and **Key** values into the following commands, replacing the \<youraccountname\> and \<youraccountkey\> placeholders. This command saves the environment variables to the local machine. In Windows, the environment variable isn't available until you reload the **Command Prompt** or shell you're using.
+
+#### Linux
+
+```bash
+export accountname=<youraccountname>
+export accountkey=<youraccountkey>
+```
+
+#### Windows
+
+```powershell
+setx accountname "<youraccountname>"
+setx accountkey "<youraccountkey>"
+```
+
+### Run the console application
+
+To run the application on a terminal or command prompt, go to the **circuitbreaker.py** directory, then enter `python circuitbreaker.py`. The application uploads the **HelloWorld.png** image from the solution to the storage account. The application checks to ensure the image has replicated to the secondary RA-GZRS endpoint. It then begins downloading the image up to 999 times. Each read is represented by a **P** or an **S**. Where **P** represents the primary endpoint and **S** represents the secondary endpoint.
+
+![Screenshot of the console app running.](media/storage-create-geo-redundant-storage/figure3.png)
+
+In the sample code, the `run_circuit_breaker` method in the `circuitbreaker.py` file is used to download an image from the storage account using the [get_blob_to_path](/python/api/azure-storage-blob/azure.storage.blob.baseblobservice.baseblobservice#get-blob-to-path-container-name--blob-name--file-path--open-mode--wbsnapshot-none--start-range-none--end-range-none--validate-content-false--progress-callback-none--max-connections-2--lease-id-none--if-modified-since-none--if-unmodified-since-none--if-match-none--if-none-match-none--timeout-none-) method.
+
+The Storage object retry function is set to a linear retry policy. The retry function determines whether to retry a request, and specifies the number of seconds to wait before retrying the request. Set the **retry\_to\_secondary** value to true if the request should be retried against the secondary endpoint when the initial request to the primary endpoint fails. In the sample application, a custom retry policy is defined in the `retry_callback` function of the storage object.
+
+Before the download, the Service object [retry_callback](/python/api/azure-storage-common/azure.storage.common.storageclient.storageclient) and [response_callback](/python/api/azure-storage-common/azure.storage.common.storageclient.storageclient) functions are defined. These functions define event handlers that fire when a download completes successfully, or when a download fails and is retried.
+
+### Understand the code sample
+
+#### Retry event handler
+
+The `retry_callback` event handler is called when the download of the image fails and is set to retry. If the maximum number of retries defined in the application is reached, the [LocationMode](/python/api/azure-storage-common/azure.storage.common.models.locationmode) of the request is changed to `SECONDARY`. This setting forces the application to attempt to download the image from the secondary endpoint. This configuration reduces the time taken to request the image because the primary endpoint isn't retried indefinitely.
+
+```python
+def retry_callback(retry_context):
+ global retry_count
+ retry_count = retry_context.count
+ sys.stdout.write(
+ "\nRetrying event because of failure reading the primary. RetryCount= {0}".format(retry_count))
+ sys.stdout.flush()
+
+ # Check if we have more than n-retries in which case switch to secondary
+ if retry_count >= retry_threshold:
+
+ # Check to see if we can fail over to secondary.
+ if blob_client.location_mode != LocationMode.SECONDARY:
+ blob_client.location_mode = LocationMode.SECONDARY
+ retry_count = 0
+ else:
+ raise Exception("Both primary and secondary are unreachable. "
+ "Check your application's network connection.")
+```
+
+#### Request completed event handler
+
+The `response_callback` event handler is called when the download of the image is successful. If the application is using the secondary endpoint, the application continues to use this endpoint up to 20 times. After 20 times, the application sets the [LocationMode](/python/api/azure-storage-common/azure.storage.common.models.locationmode) back to `PRIMARY` and retries the primary endpoint. If a request is successful, the application continues to read from the primary endpoint.
+
+```python
+def response_callback(response):
+ global secondary_read_count
+ if blob_client.location_mode == LocationMode.SECONDARY:
+
+ # You're reading the secondary. Let it read the secondary [secondaryThreshold] times,
+ # then switch back to the primary and see if it is available now.
+ secondary_read_count += 1
+ if secondary_read_count >= secondary_threshold:
+ blob_client.location_mode = LocationMode.PRIMARY
+ secondary_read_count = 0
+```
storage Concurrency Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/concurrency-manage.md
The outline of this process is as follows:
The following code examples show how to construct an **If-Match** condition on the write request that checks the ETag value for a blob. Azure Storage evaluates whether the blob's current ETag is the same as the ETag provided on the request and performs the write operation only if the two ETag values match. If another process has updated the blob in the interim, then Azure Storage returns an HTTP 412 (Precondition Failed) status message.
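As a rough sketch of this ETag check with the v12 `Azure.Storage.Blobs` library, assuming a hypothetical helper that receives an existing `BlobClient`, the conditional write might look like the following; the class, method, and sample strings are illustrative only.

```csharp
using System;
using System.Threading.Tasks;
using Azure;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

public static class OptimisticConcurrencySketch
{
    // Hypothetical helper: demonstrates an ETag-based conditional write.
    public static async Task UpdateIfUnchangedAsync(BlobClient blobClient)
    {
        // Create the blob and record its ETag.
        Response<BlobContentInfo> created = await blobClient.UploadAsync(
            BinaryData.FromString("Hello World!"), overwrite: true);
        ETag originalETag = created.Value.ETag;

        try
        {
            // Write only if the blob's current ETag still matches the one we read.
            var options = new BlobUploadOptions
            {
                Conditions = new BlobRequestConditions { IfMatch = originalETag }
            };
            await blobClient.UploadAsync(BinaryData.FromString("Updated content"), options);
        }
        catch (RequestFailedException ex) when (ex.Status == 412)
        {
            // HTTP 412 Precondition Failed: another client modified the blob
            // after the ETag was read, so this write was rejected.
            Console.WriteLine("Precondition failed; the blob was changed by another client.");
        }
    }
}
```

The `when (ex.Status == 412)` filter narrows the catch to the precondition failure that the service returns when the ETag no longer matches.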
-# [.NET v12 SDK](#tab/dotnet)
- :::code language="csharp" source="~/azure-storage-snippets/blobs/howto/dotnet/dotnet-v12/Concurrency.cs" id="Snippet_DemonstrateOptimisticConcurrencyBlob":::
-# [.NET v11 SDK](#tab/dotnetv11)
-
-```csharp
-public void DemonstrateOptimisticConcurrencyBlob(string containerName, string blobName)
-{
- Console.WriteLine("Demonstrate optimistic concurrency");
-
- // Parse connection string and create container.
- CloudStorageAccount storageAccount = CloudStorageAccount.Parse(ConnectionString);
- CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
- CloudBlobContainer container = blobClient.GetContainerReference(containerName);
- container.CreateIfNotExists();
-
- // Create test blob. The default strategy is last writer wins, so
- // write operation will overwrite existing blob if present.
- CloudBlockBlob blockBlob = container.GetBlockBlobReference(blobName);
- blockBlob.UploadText("Hello World!");
-
- // Retrieve the ETag from the newly created blob.
- string originalETag = blockBlob.Properties.ETag;
- Console.WriteLine("Blob added. Original ETag = {0}", originalETag);
-
- /// This code simulates an update by another client.
- string helloText = "Blob updated by another client.";
- // No ETag was provided, so original blob is overwritten and ETag updated.
- blockBlob.UploadText(helloText);
- Console.WriteLine("Blob updated. Updated ETag = {0}", blockBlob.Properties.ETag);
-
- // Now try to update the blob using the original ETag value.
- try
- {
- Console.WriteLine(@"Attempt to update blob using original ETag
- to generate if-match access condition");
- blockBlob.UploadText(helloText, accessCondition: AccessCondition.GenerateIfMatchCondition(originalETag));
- }
- catch (StorageException ex)
- {
- if (ex.RequestInformation.HttpStatusCode == (int)HttpStatusCode.PreconditionFailed)
- {
- Console.WriteLine(@"Precondition failure as expected.
- Blob's ETag does not match.");
- }
- else
- {
- throw;
- }
- }
- Console.WriteLine();
-}
-```
--- Azure Storage also supports other conditional headers, including **If-Modified-Since**, **If-Unmodified-Since**, and **If-None-Match**. For more information, see [Specifying Conditional Headers for Blob Service Operations](/rest/api/storageservices/specifying-conditional-headers-for-blob-service-operations). ## Pessimistic concurrency for blobs
Leases enable different synchronization strategies to be supported, including ex
The following code examples show how to acquire an exclusive lease on a blob, update the content of the blob by providing the lease ID, and then release the lease. If the lease is active and the lease ID isn't provided on a write request, then the write operation fails with error code 412 (Precondition Failed).
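A minimal sketch of this lease pattern, assuming the v12 `BlobLeaseClient` and a blob that already exists, might look like the following; the helper name and content string are illustrative only.

```csharp
using System;
using System.Threading.Tasks;
using Azure;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;
using Azure.Storage.Blobs.Specialized;

public static class BlobLeaseSketch
{
    // Hypothetical helper: acquire a lease, write with the lease ID, then release.
    public static async Task UpdateUnderLeaseAsync(BlobClient blobClient)
    {
        // Acquire an exclusive 15-second lease on the blob (assumes the blob exists).
        BlobLeaseClient leaseClient = blobClient.GetBlobLeaseClient();
        Response<BlobLease> lease = await leaseClient.AcquireAsync(TimeSpan.FromSeconds(15));

        try
        {
            // Provide the lease ID so the write is allowed while the lease is active.
            var options = new BlobUploadOptions
            {
                Conditions = new BlobRequestConditions { LeaseId = lease.Value.LeaseId }
            };
            await blobClient.UploadAsync(BinaryData.FromString("Updated under lease"), options);
        }
        finally
        {
            // Release the lease so other clients can write again.
            await leaseClient.ReleaseAsync();
        }
    }
}
```

A write that omits the lease ID while the lease is active fails with the same 412 (Precondition Failed) status described above.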
-# [.NET v12 SDK](#tab/dotnet)
- :::code language="csharp" source="~/azure-storage-snippets/blobs/howto/dotnet/dotnet-v12/Concurrency.cs" id="Snippet_DemonstratePessimisticConcurrencyBlob":::
-# [.NET v11 SDK](#tab/dotnetv11)
-
-```csharp
-public void DemonstratePessimisticConcurrencyBlob(string containerName, string blobName)
-{
- Console.WriteLine("Demonstrate pessimistic concurrency");
-
- // Parse connection string and create container.
- CloudStorageAccount storageAccount = CloudStorageAccount.Parse(ConnectionString);
- CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
- CloudBlobContainer container = blobClient.GetContainerReference(containerName);
- container.CreateIfNotExists();
-
- CloudBlockBlob blockBlob = container.GetBlockBlobReference(blobName);
- blockBlob.UploadText("Hello World!");
- Console.WriteLine("Blob added.");
-
- // Acquire lease for 15 seconds.
- string lease = blockBlob.AcquireLease(TimeSpan.FromSeconds(15), null);
- Console.WriteLine("Blob lease acquired. Lease = {0}", lease);
-
- // Update blob using lease. This operation should succeed.
- const string helloText = "Blob updated";
- var accessCondition = AccessCondition.GenerateLeaseCondition(lease);
- blockBlob.UploadText(helloText, accessCondition: accessCondition);
- Console.WriteLine("Blob updated using an exclusive lease");
-
- // Simulate another client attempting to update to blob without providing lease.
- try
- {
- // Operation will fail as no valid lease was provided.
- Console.WriteLine("Now try to update blob without valid lease.");
- blockBlob.UploadText("Update operation will fail without lease.");
- }
- catch (StorageException ex)
- {
- if (ex.RequestInformation.HttpStatusCode == (int)HttpStatusCode.PreconditionFailed)
- {
- Console.WriteLine(@"Precondition failure error as expected.
- Blob lease not provided.");
- }
- else
- {
- throw;
- }
- }
-
- // Release lease proactively.
- blockBlob.ReleaseLease(accessCondition);
- Console.WriteLine();
-}
-```
--- ## Pessimistic concurrency for containers Leases on containers enable the same synchronization strategies that are supported for blobs, including exclusive write/shared read, exclusive write/exclusive read, and shared write/exclusive read. For containers, however, the exclusive lock is enforced only on delete operations. To delete a container with an active lease, a client must include the active lease ID with the delete request. All other container operations succeed on a leased container without the lease ID.
Leases on containers enable the same synchronization strategies that are support
- [Specifying conditional headers for Blob service operations](/rest/api/storageservices/specifying-conditional-headers-for-blob-service-operations) - [Lease Container](/rest/api/storageservices/lease-container) - [Lease Blob](/rest/api/storageservices/lease-blob)+
+## Resources
+
+For related code samples using deprecated .NET version 11.x SDKs, see [Code samples using .NET version 11.x](blob-v11-samples-dotnet.md#optimistic-concurrency-for-blobs).
storage Sas Service Create Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/sas-service-create-dotnet.md
This article shows how to use the storage account key to create a service SAS fo
The following code example creates a SAS for a container. If the name of an existing stored access policy is provided, that policy is associated with the SAS. If no stored access policy is provided, then the code creates an ad hoc SAS on the container.
-### [.NET v12 SDK](#tab/dotnet)
- A service SAS is signed with the account access key. Use the [StorageSharedKeyCredential](/dotnet/api/azure.storage.storagesharedkeycredential) class to create the credential that is used to sign the SAS. In the following example, populate the constants with your account name, account key, and container name:
Next, create a new [BlobSasBuilder](/dotnet/api/azure.storage.sas.blobsasbuilder
:::code language="csharp" source="~/azure-storage-snippets/blobs/howto/dotnet/dotnet-v12/Sas.cs" id="Snippet_GetServiceSasUriForContainer":::
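For orientation, a minimal sketch of an ad hoc container SAS built with `BlobSasBuilder` and signed with `StorageSharedKeyCredential` might look like the following; the helper name and placeholder values are assumptions.

```csharp
using System;
using Azure.Storage;
using Azure.Storage.Sas;

public static class ContainerSasSketch
{
    // Hypothetical helper: returns a container URI that includes an ad hoc SAS token.
    public static Uri GetContainerSasUri(string accountName, string accountKey, string containerName)
    {
        var credential = new StorageSharedKeyCredential(accountName, accountKey);

        var sasBuilder = new BlobSasBuilder
        {
            BlobContainerName = containerName,
            Resource = "c", // "c" indicates the SAS applies to the container
            ExpiresOn = DateTimeOffset.UtcNow.AddHours(24)
        };
        sasBuilder.SetPermissions(BlobContainerSasPermissions.Write | BlobContainerSasPermissions.List);

        // Sign the SAS with the account key and append it to the container URI.
        string sasToken = sasBuilder.ToSasQueryParameters(credential).ToString();
        return new Uri($"https://{accountName}.blob.core.windows.net/{containerName}?{sasToken}");
    }
}
```

To associate the SAS with a stored access policy instead, set the builder's `Identifier` property to the policy name and let the policy supply the expiry time and permissions.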
-### [.NET v11 SDK](#tab/dotnetv11)
-
-To create a service SAS for a container, call the [CloudBlobContainer.GetSharedAccessSignature](/dotnet/api/microsoft.azure.storage.blob.cloudblobcontainer.getsharedaccesssignature) method.
-
-```csharp
-private static string GetContainerSasUri(CloudBlobContainer container,
- string storedPolicyName = null)
-{
- string sasContainerToken;
-
- // If no stored policy is specified, create a new access policy and define its constraints.
- if (storedPolicyName == null)
- {
- // Note that the SharedAccessBlobPolicy class is used both to define
- // the parameters of an ad hoc SAS, and to construct a shared access policy
- // that is saved to the container's shared access policies.
- SharedAccessBlobPolicy adHocPolicy = new SharedAccessBlobPolicy()
- {
- // When the start time for the SAS is omitted, the start time is assumed
- // to be the time when the storage service receives the request. Omitting
- // the start time for a SAS that is effective immediately helps to avoid clock skew.
- SharedAccessExpiryTime = DateTime.UtcNow.AddHours(24),
- Permissions = SharedAccessBlobPermissions.Write | SharedAccessBlobPermissions.List
- };
-
- // Generate the shared access signature on the container,
- // setting the constraints directly on the signature.
- sasContainerToken = container.GetSharedAccessSignature(adHocPolicy, null);
-
- Console.WriteLine("SAS for blob container (ad hoc): {0}", sasContainerToken);
- Console.WriteLine();
- }
- else
- {
- // Generate the shared access signature on the container. In this case,
- // all of the constraints for the shared access signature are specified
- // on the stored access policy, which is provided by name. It is also possible
- // to specify some constraints on an ad hoc SAS and others on the stored access policy.
- sasContainerToken = container.GetSharedAccessSignature(null, storedPolicyName);
-
- Console.WriteLine("SAS for container (stored access policy): {0}", sasContainerToken);
- Console.WriteLine();
- }
-
- // Return the URI string for the container, including the SAS token.
- return container.Uri + sasContainerToken;
-}
-```
--- ## Create a service SAS for a blob The following code example creates a SAS on a blob. If the name of an existing stored access policy is provided, that policy is associated with the SAS. If no stored access policy is provided, then the code creates an ad hoc SAS on the blob.
-# [.NET v12 SDK](#tab/dotnet)
- A service SAS is signed with the account access key. Use the [StorageSharedKeyCredential](/dotnet/api/azure.storage.storagesharedkeycredential) class to create the credential that is used to sign the SAS. In the following example, populate the constants with your account name, account key, and container name:
Next, create a new [BlobSasBuilder](/dotnet/api/azure.storage.sas.blobsasbuilder
:::code language="csharp" source="~/azure-storage-snippets/blobs/howto/dotnet/dotnet-v12/Sas.cs" id="Snippet_GetServiceSasUriForBlob":::
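A similar sketch for a blob-level SAS, again with an assumed helper name and placeholder values:

```csharp
using System;
using Azure.Storage;
using Azure.Storage.Sas;

public static class BlobSasSketch
{
    // Hypothetical helper: returns a blob URI that includes an ad hoc SAS token.
    public static Uri GetBlobSasUri(string accountName, string accountKey, string containerName, string blobName)
    {
        var credential = new StorageSharedKeyCredential(accountName, accountKey);

        var sasBuilder = new BlobSasBuilder
        {
            BlobContainerName = containerName,
            BlobName = blobName,
            Resource = "b", // "b" indicates the SAS applies to a single blob
            ExpiresOn = DateTimeOffset.UtcNow.AddHours(24)
        };
        sasBuilder.SetPermissions(BlobSasPermissions.Read | BlobSasPermissions.Write | BlobSasPermissions.Create);

        // Sign the SAS with the account key and append it to the blob URI.
        string sasToken = sasBuilder.ToSasQueryParameters(credential).ToString();
        return new Uri($"https://{accountName}.blob.core.windows.net/{containerName}/{blobName}?{sasToken}");
    }
}
```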
-# [.NET v11 SDK](#tab/dotnetv11)
-
-To create a service SAS for a blob, call the [CloudBlob.GetSharedAccessSignature](/dotnet/api/microsoft.azure.storage.blob.cloudblob.getsharedaccesssignature) method.
-
-```csharp
-private static string GetBlobSasUri(CloudBlobContainer container,
- string blobName,
- string policyName = null)
-{
- string sasBlobToken;
-
- // Get a reference to a blob within the container.
- // Note that the blob may not exist yet, but a SAS can still be created for it.
- CloudBlockBlob blob = container.GetBlockBlobReference(blobName);
-
- if (policyName == null)
- {
- // Create a new access policy and define its constraints.
- // Note that the SharedAccessBlobPolicy class is used both to define the parameters
- // of an ad hoc SAS, and to construct a shared access policy that is saved to
- // the container's shared access policies.
- SharedAccessBlobPolicy adHocSAS = new SharedAccessBlobPolicy()
- {
- // When the start time for the SAS is omitted, the start time is assumed to be
- // the time when the storage service receives the request. Omitting the start time
- // for a SAS that is effective immediately helps to avoid clock skew.
- SharedAccessExpiryTime = DateTime.UtcNow.AddHours(24),
- Permissions = SharedAccessBlobPermissions.Read |
- SharedAccessBlobPermissions.Write |
- SharedAccessBlobPermissions.Create
- };
-
- // Generate the shared access signature on the blob,
- // setting the constraints directly on the signature.
- sasBlobToken = blob.GetSharedAccessSignature(adHocSAS);
-
- Console.WriteLine("SAS for blob (ad hoc): {0}", sasBlobToken);
- Console.WriteLine();
- }
- else
- {
- // Generate the shared access signature on the blob. In this case, all of the constraints
- // for the SAS are specified on the container's stored access policy.
- sasBlobToken = blob.GetSharedAccessSignature(null, policyName);
-
- Console.WriteLine("SAS for blob (stored access policy): {0}", sasBlobToken);
- Console.WriteLine();
- }
-
- // Return the URI string for the container, including the SAS token.
- return blob.Uri + sasBlobToken;
-}
-```
--- ## Create a service SAS for a directory In a storage account with a hierarchical namespace enabled, you can create a service SAS for a directory. To create the service SAS, make sure you have installed version 12.5.0 or later of the [Azure.Storage.Files.DataLake](https://www.nuget.org/packages/Azure.Storage.Files.DataLake/) package.
-The following example shows how to create a service SAS for a directory with the v12 client library for .NET:
+The following example shows how to create a service SAS for a directory:
:::code language="csharp" source="~/azure-storage-snippets/blobs/howto/dotnet/dotnet-v12/Sas.cs" id="Snippet_GetServiceSasUriForDirectory":::
The following example shows how to create a service SAS for a directory with the
- [Grant limited access to Azure Storage resources using shared access signatures (SAS)](../common/storage-sas-overview.md) - [Create a service SAS](/rest/api/storageservices/create-service-sas)+
+## Resources
+
+For related code samples using deprecated .NET version 11.x SDKs, see [Code samples using .NET version 11.x](blob-v11-samples-dotnet.md#create-a-service-sas-for-a-blob-container).
storage Snapshots Manage Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/snapshots-manage-dotnet.md
For more information about blob snapshots in Azure Storage, see [Blob snapshots]
## Create a snapshot
-# [.NET v12 SDK](#tab/dotnet)
-
-To create a snapshot of a block blob using version 12.x of the Azure Storage client library for .NET, use one of the following methods:
+To create a snapshot of a block blob, use one of the following methods:
- [CreateSnapshot](/dotnet/api/azure.storage.blobs.specialized.blobbaseclient.createsnapshot) - [CreateSnapshotAsync](/dotnet/api/azure.storage.blobs.specialized.blobbaseclient.createsnapshotasync)
-The following code example shows how to create a snapshot with version 12.x. Include a reference to the [Azure.Identity](https://www.nuget.org/packages/azure.identity) library to use your Azure AD credentials to authorize requests to the service. For more information about using the [DefaultAzureCredential](/dotnet/api/azure.identity.defaultazurecredential) class to authorize a managed identity to access Azure Storage, see [Azure Identity client library for .NET](/dotnet/api/overview/azure/identity-readme).
+The following code example shows how to create a snapshot. Include a reference to the [Azure.Identity](https://www.nuget.org/packages/azure.identity) library to use your Azure AD credentials to authorize requests to the service. For more information about using the [DefaultAzureCredential](/dotnet/api/azure.identity.defaultazurecredential) class to authorize a managed identity to access Azure Storage, see [Azure Identity client library for .NET](/dotnet/api/overview/azure/identity-readme).
```csharp private static async Task CreateBlockBlobSnapshot(string accountName, string containerName, string blobName, Stream data)
private static async Task CreateBlockBlobSnapshot(string accountName, string con
} ```
-# [.NET v11 SDK](#tab/dotnet11)
-
-To create a snapshot of a block blob using version 11.x of the Azure Storage client library for .NET, use one of the following methods:
--- [CreateSnapshot](/dotnet/api/microsoft.azure.storage.blob.cloudblockblob.createsnapshot)-- [CreateSnapshotAsync](/dotnet/api/microsoft.azure.storage.blob.cloudblockblob.createsnapshotasync)-
-The following code example shows how to create a snapshot with version 11.x. This example specifies additional metadata for the snapshot when it is created.
-
-```csharp
-private static async Task CreateBlockBlobSnapshot(CloudBlobContainer container)
-{
- // Create a new block blob in the container.
- CloudBlockBlob baseBlob = container.GetBlockBlobReference("sample-base-blob.txt");
-
- // Add blob metadata.
- baseBlob.Metadata.Add("ApproxBlobCreatedDate", DateTime.UtcNow.ToString());
-
- try
- {
- // Upload the blob to create it, with its metadata.
- await baseBlob.UploadTextAsync(string.Format("Base blob: {0}", baseBlob.Uri.ToString()));
-
- // Sleep 5 seconds.
- System.Threading.Thread.Sleep(5000);
-
- // Create a snapshot of the base blob.
- // You can specify metadata at the time that the snapshot is created.
- // If no metadata is specified, then the blob's metadata is copied to the snapshot.
- Dictionary<string, string> metadata = new Dictionary<string, string>();
- metadata.Add("ApproxSnapshotCreatedDate", DateTime.UtcNow.ToString());
- await baseBlob.CreateSnapshotAsync(metadata, null, null, null);
- Console.WriteLine(snapshot.SnapshotQualifiedStorageUri.PrimaryUri);
- }
- catch (StorageException e)
- {
- Console.WriteLine(e.Message);
- Console.ReadLine();
- throw;
- }
-}
-```
--- ## Delete snapshots To delete a blob, you must first delete any snapshots of that blob. You can delete a snapshot individually, or specify that all snapshots be deleted when the source blob is deleted. If you attempt to delete a blob that still has snapshots, an error results.
-# [.NET v12 SDK](#tab/dotnet)
-
-To delete a blob and its snapshots using version 12.x of the Azure Storage client library for .NET, use one of the following methods, and include the [DeleteSnapshotsOption](/dotnet/api/azure.storage.blobs.models.deletesnapshotsoption) enum:
+To delete a blob and its snapshots, use one of the following methods, and include the [DeleteSnapshotsOption](/dotnet/api/azure.storage.blobs.models.deletesnapshotsoption) enum:
- [Delete](/dotnet/api/azure.storage.blobs.specialized.blobbaseclient.delete) - [DeleteAsync](/dotnet/api/azure.storage.blobs.specialized.blobbaseclient.deleteasync) - [DeleteIfExists](/dotnet/api/azure.storage.blobs.specialized.blobbaseclient.deleteifexists) - [DeleteIfExistsAsync](/dotnet/api/azure.storage.blobs.specialized.blobbaseclient.deleteifexistsasync)
-The following code example shows how to delete a blob and its snapshots in .NET, where `blobClient` is an object of type [BlobClient](/dotnet/api/azure.storage.blobs.blobclient)):
+The following code example shows how to delete a blob and its snapshots in .NET, where `blobClient` is an object of type [BlobClient](/dotnet/api/azure.storage.blobs.blobclient):
```csharp await blobClient.DeleteIfExistsAsync(DeleteSnapshotsOption.IncludeSnapshots, null, default); ```
-# [.NET v11 SDK](#tab/dotnet11)
-
-To delete a blob and its snapshots using version 11.x of the Azure Storage client library for .NET, use one of the following blob deletion methods, and include the [DeleteSnapshotsOption](/dotnet/api/microsoft.azure.storage.blob.deletesnapshotsoption) enum:
--- [Delete](/dotnet/api/microsoft.azure.storage.blob.cloudblob.delete)-- [DeleteAsync](/dotnet/api/microsoft.azure.storage.blob.cloudblob.deleteasync)-- [DeleteIfExists](/dotnet/api/microsoft.azure.storage.blob.cloudblob.deleteifexists)-- [DeleteIfExistsAsync](/dotnet/api/microsoft.azure.storage.blob.cloudblob.deleteifexistsasync)-
-The following code example shows how to delete a blob and its snapshots in .NET, where `blockBlob` is an object of type [CloudBlockBlob][dotnet_CloudBlockBlob]:
-
-```csharp
-await blockBlob.DeleteIfExistsAsync(DeleteSnapshotsOption.IncludeSnapshots, null, null, null);
-```
--- ## Next steps - [Blob snapshots](snapshots-overview.md) - [Blob versions](versioning-overview.md) - [Soft delete for blobs](./soft-delete-blob-overview.md)+
+## Resources
+
+For related code samples using deprecated .NET version 11.x SDKs, see [Code samples using .NET version 11.x](blob-v11-samples-dotnet.md#create-a-snapshot).
storage Storage Blob Pageblob Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-pageblob-overview.md
The following diagram describes the overall relationships between account, conta
#### Creating an empty page blob of a specified size
-# [.NET v12 SDK](#tab/dotnet)
- First, get a reference to a container. To create a page blob, call the GetPageBlobClient method, and then call the [PageBlobClient.Create](/dotnet/api/azure.storage.blobs.specialized.pageblobclient.create) method. Pass in the max size for the blob to create. That size must be a multiple of 512 bytes. :::code language="csharp" source="~/azure-storage-snippets/blobs/howto/dotnet/dotnet-v12/CRUD.cs" id="Snippet_CreatePageBlob":::
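As a minimal sketch, assuming an existing `BlobContainerClient` and a hypothetical helper, creating the page blob might look like this:

```csharp
using System.Threading.Tasks;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Specialized;

public static class PageBlobCreateSketch
{
    const long OneGigabyteAsBytes = 1024L * 1024 * 1024;

    // Hypothetical helper: creates an empty 16-GiB page blob in an existing container.
    public static async Task<PageBlobClient> CreatePageBlobAsync(BlobContainerClient containerClient, string blobName)
    {
        PageBlobClient pageBlobClient = containerClient.GetPageBlobClient(blobName);

        // The size must be a multiple of 512 bytes.
        await pageBlobClient.CreateAsync(16 * OneGigabyteAsBytes);
        return pageBlobClient;
    }
}
```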
-# [.NET v11 SDK](#tab/dotnet11)
-
-To create a page blob, we first create a **CloudBlobClient** object, with the base URI for accessing the blob storage for your storage account (*pbaccount* in figure 1) along with the **StorageCredentialsAccountAndKey** object, as shown in the following example. The example then shows creating a reference to a **CloudBlobContainer** object, and then creating the container (*testvhds*) if it doesn't already exist. Then using the **CloudBlobContainer** object, create a reference to a **CloudPageBlob** object by specifying the page blob name (os4.vhd) to access. To create the page blob, call [CloudPageBlob.Create](/dotnet/api/microsoft.azure.storage.blob.cloudpageblob.create), passing in the max size for the blob to create. The *blobSize* must be a multiple of 512 bytes.
-
-```csharp
-using Microsoft.Azure;
-using Microsoft.Azure.Storage;
-using Microsoft.Azure.Storage.Blob;
-
-long OneGigabyteAsBytes = 1024 * 1024 * 1024;
-// Retrieve storage account from connection string.
-CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
- CloudConfigurationManager.GetSetting("StorageConnectionString"));
-
-// Create the blob client.
-CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
-
-// Retrieve a reference to a container.
-CloudBlobContainer container = blobClient.GetContainerReference("testvhds");
-
-// Create the container if it doesn't already exist.
-container.CreateIfNotExists();
-
-CloudPageBlob pageBlob = container.GetPageBlobReference("os4.vhd");
-pageBlob.Create(16 * OneGigabyteAsBytes);
-```
--- #### Resizing a page blob
-# [.NET v12 SDK](#tab/dotnet)
- To resize a page blob after creation, use the [Resize](/dotnet/api/azure.storage.blobs.specialized.pageblobclient.resize) method. The requested size should be a multiple of 512 bytes. :::code language="csharp" source="~/azure-storage-snippets/blobs/howto/dotnet/dotnet-v12/CRUD.cs" id="Snippet_ResizePageBlob":::
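A minimal sketch of the resize call, assuming an existing `PageBlobClient`:

```csharp
using System.Threading.Tasks;
using Azure.Storage.Blobs.Specialized;

public static class PageBlobResizeSketch
{
    // Hypothetical helper: grows an existing page blob to 32 GiB.
    public static async Task GrowAsync(PageBlobClient pageBlobClient)
    {
        // The new size must be a multiple of 512 bytes.
        await pageBlobClient.ResizeAsync(32L * 1024 * 1024 * 1024);
    }
}
```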
-# [.NET v11 SDK](#tab/dotnet11)
-
-To resize a page blob after creation, use the [Resize](/dotnet/api/microsoft.azure.storage.blob.cloudpageblob.resize) method. The requested size should be a multiple of 512 bytes.
-
-```csharp
-pageBlob.Resize(32 * OneGigabyteAsBytes);
-```
--- #### Writing pages to a page blob
-# [.NET v12 SDK](#tab/dotnet)
- To write pages, use the [PageBlobClient.UploadPages](/dotnet/api/azure.storage.blobs.specialized.pageblobclient.uploadpages) method. :::code language="csharp" source="~/azure-storage-snippets/blobs/howto/dotnet/dotnet-v12/CRUD.cs" id="Snippet_WriteToPageBlob":::
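A minimal sketch of a page write, assuming the caller supplies 512-byte-aligned data and an existing `PageBlobClient`:

```csharp
using System.IO;
using System.Threading.Tasks;
using Azure.Storage.Blobs.Specialized;

public static class PageBlobWriteSketch
{
    // Hypothetical helper: writes pages at a 512-byte-aligned offset.
    public static async Task WritePagesAsync(PageBlobClient pageBlobClient, byte[] pageData, long startingOffset)
    {
        // pageData.Length must be a multiple of 512, and startingOffset % 512 must equal 0.
        using var dataStream = new MemoryStream(pageData);
        await pageBlobClient.UploadPagesAsync(dataStream, startingOffset);
    }
}
```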
-# [.NET v11 SDK](#tab/dotnet11)
-
-To write pages, use the [CloudPageBlob.WritePages](/dotnet/api/microsoft.azure.storage.blob.cloudpageblob.beginwritepages) method.
-
-```csharp
-pageBlob.WritePages(dataStream, startingOffset);
-```
--- This allows you to write a sequential set of pages of up to 4 MiB. The offset being written to must start on a 512-byte boundary (startingOffset % 512 == 0) and end on a 512-byte boundary minus 1. As soon as a write request for a sequential set of pages succeeds in the Blob service and is replicated for durability and resiliency, the write is committed and success is returned to the client.
The following diagram shows two separate write operations:
#### Reading pages from a page blob
-# [.NET v12 SDK](#tab/dotnet)
- To read pages, use the [PageBlobClient.Download](/dotnet/api/azure.storage.blobs.specialized.blobbaseclient.downloadto) method to read a range of bytes from the page blob. :::code language="csharp" source="~/azure-storage-snippets/blobs/howto/dotnet/dotnet-v12/CRUD.cs" id="Snippet_ReadFromPageBlob":::
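A minimal sketch of a ranged read, here using `DownloadAsync` with an `HttpRange` and an assumed `PageBlobClient`; the helper name is illustrative only:

```csharp
using System.IO;
using System.Threading.Tasks;
using Azure;
using Azure.Storage.Blobs.Models;
using Azure.Storage.Blobs.Specialized;

public static class PageBlobReadSketch
{
    // Hypothetical helper: reads a byte range from a page blob into a byte array.
    public static async Task<byte[]> ReadRangeAsync(PageBlobClient pageBlobClient, long offset, long rangeSize)
    {
        // The offset doesn't need to be 512-byte aligned for reads.
        Response<BlobDownloadInfo> response = await pageBlobClient.DownloadAsync(new HttpRange(offset, rangeSize));

        using var buffer = new MemoryStream();
        await response.Value.Content.CopyToAsync(buffer);
        return buffer.ToArray();
    }
}
```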
-# [.NET v11 SDK](#tab/dotnet11)
-
-To read pages, use the [CloudPageBlob.DownloadRangeToByteArray](/dotnet/api/microsoft.azure.storage.blob.icloudblob.downloadrangetobytearray) method to read a range of bytes from the page blob.
-
-```csharp
-byte[] buffer = new byte[rangeSize];
-pageBlob.DownloadRangeToByteArray(buffer, bufferOffset, pageBlobOffset, rangeSize);
-```
--- This allows you to download the full blob or range of bytes starting from any offset in the blob. When reading, the offset does not have to start on a multiple of 512. When reading bytes from a NUL page, the service returns zero bytes. The following figure shows a Read operation with an offset of 256 and a range size of 4352. Data returned is highlighted in orange. Zeros are returned for NUL pages.
The following figure shows a Read operation with an offset of 256 and a range si
If you have a sparsely populated blob, you may want to download only the valid page regions, to avoid paying for the egress of zero bytes and to reduce download latency.
-# [.NET v12 SDK](#tab/dotnet)
- To determine which pages are backed by data, use PageBlobClient.GetPageRanges. You can then enumerate the returned ranges and download the data in each range. :::code language="csharp" source="~/azure-storage-snippets/blobs/howto/dotnet/dotnet-v12/CRUD.cs" id="Snippet_ReadValidPageRegionsFromPageBlob":::
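A minimal sketch that enumerates the populated ranges and downloads each one, assuming an existing `PageBlobClient`:

```csharp
using System.Collections.Generic;
using System.IO;
using System.Threading.Tasks;
using Azure;
using Azure.Storage.Blobs.Models;
using Azure.Storage.Blobs.Specialized;

public static class PageRangesSketch
{
    // Hypothetical helper: downloads only the page ranges that contain data.
    public static async Task<List<byte[]>> ReadValidRangesAsync(PageBlobClient pageBlobClient)
    {
        var results = new List<byte[]>();
        PageRangesInfo rangesInfo = await pageBlobClient.GetPageRangesAsync();

        foreach (HttpRange range in rangesInfo.PageRanges)
        {
            // Download each populated range; NUL (never-written) regions are skipped.
            Response<BlobDownloadInfo> response = await pageBlobClient.DownloadAsync(range);

            using var buffer = new MemoryStream();
            await response.Value.Content.CopyToAsync(buffer);
            results.Add(buffer.ToArray());
        }

        return results;
    }
}
```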
-# [.NET v11 SDK](#tab/dotnet11)
-
-To determine which pages are backed by data, use [CloudPageBlob.GetPageRanges](/dotnet/api/microsoft.azure.storage.blob.cloudpageblob.getpageranges). You can then enumerate the returned ranges and download the data in each range.
-
-```csharp
-IEnumerable<PageRange> pageRanges = pageBlob.GetPageRanges();
-
-foreach (PageRange range in pageRanges)
-{
- // Calculate the range size
- int rangeSize = (int)(range.EndOffset + 1 - range.StartOffset);
-
- byte[] buffer = new byte[rangeSize];
-
- // Read from the correct starting offset in the page blob and
- // place the data in the bufferOffset of the buffer byte array
- pageBlob.DownloadRangeToByteArray(buffer, bufferOffset, range.StartOffset, rangeSize);
-
- // Then use the buffer for the page range just read
-}
-```
--- #### Leasing a page blob The Lease Blob operation establishes and manages a lock on a blob for write and delete operations. This operation is useful when a page blob is accessed by multiple clients, because it ensures that only one client can write to the blob at a time. Azure Disks, for example, uses this leasing mechanism to ensure that a disk is managed by only a single VM. The lock duration can be 15 to 60 seconds, or it can be infinite. For more information, see [Lease Blob](/rest/api/storageservices/lease-blob).
storage Storage Blob Scalable App Download Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-scalable-app-download-files.md
dotnet run
The `DownloadFilesAsync` task is shown in the following example:
-# [.NET v12 SDK](#tab/dotnet)
- The application reads the containers located in the storage account specified in the **storageconnectionstring**. It iterates through the blobs using the [GetBlobs](/dotnet/api/azure.storage.blobs.blobcontainerclient.getblobs) method and downloads them to the local machine using the [DownloadToAsync](/dotnet/api/azure.storage.blobs.specialized.blobbaseclient.downloadtoasync) method. :::code language="csharp" source="~/azure-storage-snippets/blobs/howto/dotnet/dotnet-v12/Scalable.cs" id="Snippet_DownloadFilesAsync":::
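As a simplified sketch of that flow, downloading sequentially rather than in parallel and using assumed helper and parameter names:

```csharp
using System.IO;
using System.Threading.Tasks;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

public static class DownloadFilesSketch
{
    // Hypothetical helper: downloads every blob in every container to a local folder.
    // Assumes flat blob names (no virtual directory separators).
    public static async Task DownloadAllBlobsAsync(BlobServiceClient blobServiceClient, string downloadPath)
    {
        Directory.CreateDirectory(downloadPath);

        // Enumerate the containers in the storage account.
        await foreach (BlobContainerItem containerItem in blobServiceClient.GetBlobContainersAsync())
        {
            BlobContainerClient containerClient = blobServiceClient.GetBlobContainerClient(containerItem.Name);

            // Enumerate the blobs in each container and download them locally.
            await foreach (BlobItem blobItem in containerClient.GetBlobsAsync())
            {
                BlobClient blobClient = containerClient.GetBlobClient(blobItem.Name);
                string localFilePath = Path.Combine(downloadPath, blobItem.Name);
                await blobClient.DownloadToAsync(localFilePath);
            }
        }
    }
}
```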
-# [.NET v11 SDK](#tab/dotnet11)
-
-The application reads the containers located in the storage account specified in the **storageconnectionstring**. It iterates through the blobs 10 at a time using the [ListBlobsSegmentedAsync](/dotnet/api/microsoft.azure.storage.blob.cloudblobclient.listblobssegmentedasync) method in the containers and downloads them to the local machine using the [DownloadToFileAsync](/dotnet/api/microsoft.azure.storage.blob.cloudblob.downloadtofileasync) method.
-
-The following table shows the [BlobRequestOptions](/dotnet/api/microsoft.azure.storage.blob.blobrequestoptions) defined for each blob as it is downloaded.
-
-|Property|Value|Description|
-||||
-|[DisableContentMD5Validation](/dotnet/api/microsoft.azure.storage.blob.blobrequestoptions.disablecontentmd5validation)| true| This property disables checking the MD5 hash of the content uploaded. Disabling MD5 validation produces a faster transfer. But does not confirm the validity or integrity of the files being transferred. |
-|[StoreBlobContentMD5](/dotnet/api/microsoft.azure.storage.blob.blobrequestoptions.storeblobcontentmd5)| false| This property determines if an MD5 hash is calculated and stored. |
-
-```csharp
-private static async Task DownloadFilesAsync()
-{
- CloudBlobClient blobClient = GetCloudBlobClient();
-
- // Define the BlobRequestOptions on the download, including disabling MD5
- // hash validation for this example, this improves the download speed.
- BlobRequestOptions options = new BlobRequestOptions
- {
- DisableContentMD5Validation = true,
- StoreBlobContentMD5 = false
- };
-
- // Retrieve the list of containers in the storage account.
- // Create a directory and configure variables for use later.
- BlobContinuationToken continuationToken = null;
- List<CloudBlobContainer> containers = new List<CloudBlobContainer>();
- do
- {
- var listingResult = await blobClient.ListContainersSegmentedAsync(continuationToken);
- continuationToken = listingResult.ContinuationToken;
- containers.AddRange(listingResult.Results);
- }
- while (continuationToken != null);
-
- var directory = Directory.CreateDirectory("download");
- BlobResultSegment resultSegment = null;
- Stopwatch time = Stopwatch.StartNew();
-
- // Download the blobs
- try
- {
- List<Task> tasks = new List<Task>();
- int max_outstanding = 100;
- int completed_count = 0;
-
- // Create a new instance of the SemaphoreSlim class to
- // define the number of threads to use in the application.
- SemaphoreSlim sem = new SemaphoreSlim(max_outstanding, max_outstanding);
-
- // Iterate through the containers
- foreach (CloudBlobContainer container in containers)
- {
- do
- {
- // Return the blobs from the container, 10 at a time.
- resultSegment = await container.ListBlobsSegmentedAsync(null, true, BlobListingDetails.All, 10, continuationToken, null, null);
- continuationToken = resultSegment.ContinuationToken;
- {
- foreach (var blobItem in resultSegment.Results)
- {
-
- if (((CloudBlob)blobItem).Properties.BlobType == BlobType.BlockBlob)
- {
- // Get the blob and add a task to download the blob asynchronously from the storage account.
- CloudBlockBlob blockBlob = container.GetBlockBlobReference(((CloudBlockBlob)blobItem).Name);
- Console.WriteLine("Downloading {0} from container {1}", blockBlob.Name, container.Name);
- await sem.WaitAsync();
- tasks.Add(blockBlob.DownloadToFileAsync(directory.FullName + "\\" + blockBlob.Name, FileMode.Create, null, options, null).ContinueWith((t) =>
- {
- sem.Release();
- Interlocked.Increment(ref completed_count);
- }));
-
- }
- }
- }
- }
- while (continuationToken != null);
- }
-
- // Creates an asynchronous task that completes when all the downloads complete.
- await Task.WhenAll(tasks);
- }
- catch (Exception e)
- {
- Console.WriteLine("\nError encountered during transfer: {0}", e.Message);
- }
-
- time.Stop();
- Console.WriteLine("Download has been completed in {0} seconds. Press any key to continue", time.Elapsed.TotalSeconds.ToString());
- Console.ReadLine();
-}
-```
--- ### Validate the connections While the files are being downloaded, you can verify the number of concurrent connections to your storage account. Open a console window and type `netstat -a | find /c "blob:https"`. This command shows the number of connections that are currently open. As you can see from the following example, over 280 connections were open while downloading files from the storage account.
storage Storage Blob Scalable App Upload Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-scalable-app-upload-files.md
The application creates five randomly named containers and begins uploading the
The `UploadFilesAsync` method is shown in the following example:
-# [.NET v12 SDK](#tab/dotnet)
- :::code language="csharp" source="~/azure-storage-snippets/blobs/howto/dotnet/dotnet-v12/Scalable.cs" id="Snippet_UploadFilesAsync":::
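As a simplified sketch of a throttled parallel upload with the v12 library, assuming a single target container and illustrative concurrency values rather than the sample's exact settings:

```csharp
using System.Collections.Generic;
using System.IO;
using System.Threading;
using System.Threading.Tasks;
using Azure.Storage;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

public static class UploadFilesSketch
{
    // Hypothetical helper: uploads all files in a folder to one container,
    // limiting how many uploads run at the same time.
    public static async Task UploadFolderAsync(BlobContainerClient containerClient, string uploadPath)
    {
        var transferOptions = new StorageTransferOptions
        {
            // Parallelize each individual upload and use large transfer chunks.
            MaximumConcurrency = 8,
            MaximumTransferSize = 100 * 1024 * 1024
        };

        var semaphore = new SemaphoreSlim(100, 100);
        var tasks = new List<Task>();

        foreach (string path in Directory.GetFiles(uploadPath))
        {
            await semaphore.WaitAsync();
            BlobClient blobClient = containerClient.GetBlobClient(Path.GetFileName(path));

            tasks.Add(Task.Run(async () =>
            {
                try
                {
                    await blobClient.UploadAsync(path, new BlobUploadOptions { TransferOptions = transferOptions });
                }
                finally
                {
                    semaphore.Release();
                }
            }));
        }

        await Task.WhenAll(tasks);
    }
}
```

Here `StorageTransferOptions` parallelizes each individual upload, while the `SemaphoreSlim` caps how many file uploads run concurrently.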
-# [.NET v11 SDK](#tab/dotnet11)
-
-The minimum and maximum number of threads are set to 100 to ensure that a large number of concurrent connections are allowed.
-
-```csharp
-private static async Task UploadFilesAsync()
-{
- // Create five randomly named containers to store the uploaded files.
- CloudBlobContainer[] containers = await GetRandomContainersAsync();
-
- var currentdir = System.IO.Directory.GetCurrentDirectory();
-
- // Path to the directory to upload
- string uploadPath = currentdir + "\\upload";
-
- // Start a timer to measure how long it takes to upload all the files.
- Stopwatch time = Stopwatch.StartNew();
-
- try
- {
- Console.WriteLine("Iterating in directory: {0}", uploadPath);
-
- int count = 0;
- int max_outstanding = 100;
- int completed_count = 0;
-
- // Define the BlobRequestOptions on the upload.
- // This includes defining an exponential retry policy to ensure that failed connections
- // are retried with a back off policy. As multiple large files are being uploaded using
- // large block sizes, this can cause an issue if an exponential retry policy is not defined.
- // Additionally, parallel operations are enabled with a thread count of 8.
- // This should be a multiple of the number of processor cores in the machine.
- // Lastly, MD5 hash validation is disabled for this example, improving the upload speed.
- BlobRequestOptions options = new BlobRequestOptions
- {
- ParallelOperationThreadCount = 8,
- DisableContentMD5Validation = true,
- StoreBlobContentMD5 = false
- };
-
- // Create a new instance of the SemaphoreSlim class to
- // define the number of threads to use in the application.
- SemaphoreSlim sem = new SemaphoreSlim(max_outstanding, max_outstanding);
-
- List<Task> tasks = new List<Task>();
- Console.WriteLine("Found {0} file(s)", Directory.GetFiles(uploadPath).Count());
-
- // Iterate through the files
- foreach (string path in Directory.GetFiles(uploadPath))
- {
- var container = containers[count % 5];
- string fileName = Path.GetFileName(path);
- Console.WriteLine("Uploading {0} to container {1}", path, container.Name);
- CloudBlockBlob blockBlob = container.GetBlockBlobReference(fileName);
-
- // Set the block size to 100MB.
- blockBlob.StreamWriteSizeInBytes = 100 * 1024 * 1024;
-
- await sem.WaitAsync();
-
- // Create a task for each file to upload. The tasks are
- // added to a collection and all run asynchronously.
- tasks.Add(blockBlob.UploadFromFileAsync(path, null, options, null).ContinueWith((t) =>
- {
- sem.Release();
- Interlocked.Increment(ref completed_count);
- }));
-
- count++;
- }
-
- // Run all the tasks asynchronously.
- await Task.WhenAll(tasks);
-
- time.Stop();
-
- Console.WriteLine("Upload has been completed in {0} seconds. Press any key to continue", time.Elapsed.TotalSeconds.ToString());
-
- Console.ReadLine();
- }
- catch (DirectoryNotFoundException ex)
- {
- Console.WriteLine("Error parsing files in the directory: {0}", ex.Message);
- }
- catch (Exception ex)
- {
- Console.WriteLine(ex.Message);
- }
-}
-```
-
-In addition to setting the threading and connection limit settings, the [BlobRequestOptions](/dotnet/api/microsoft.azure.storage.blob.blobrequestoptions) for the [UploadFromStreamAsync](/dotnet/api/microsoft.azure.storage.blob.cloudblockblob.uploadfromstreamasync) method are configured to use parallelism and disable MD5 hash validation. The files are uploaded in 100-mb blocks, this configuration provides better performance but can be costly if using a poorly performing network as if there is a failure the entire 100-mb block is retried.
-
-|Property|Value|Description|
-||||
-|[ParallelOperationThreadCount](/dotnet/api/microsoft.azure.storage.blob.blobrequestoptions.paralleloperationthreadcount)| 8| The setting breaks the blob into blocks when uploading. For highest performance, this value should be eight times the number of cores. |
-|[DisableContentMD5Validation](/dotnet/api/microsoft.azure.storage.blob.blobrequestoptions.disablecontentmd5validation)| true| This property disables checking the MD5 hash of the content uploaded. Disabling MD5 validation produces a faster transfer. But does not confirm the validity or integrity of the files being transferred. |
-|[StoreBlobContentMD5](/dotnet/api/microsoft.azure.storage.blob.blobrequestoptions.storeblobcontentmd5)| false| This property determines if an MD5 hash is calculated and stored with the file. |
-| [RetryPolicy](/dotnet/api/microsoft.azure.storage.blob.blobrequestoptions.retrypolicy)| 2-second backoff with 10 max retry |Determines the retry policy of requests. Connection failures are retried, in this example an [ExponentialRetry](/dotnet/api/microsoft.azure.batch.common.exponentialretry) policy is configured with a 2-second backoff, and a maximum retry count of 10. This setting is important when your application gets close to hitting the scalability targets for Blob storage. For more information, see [Scalability and performance targets for Blob storage](../blobs/scalability-targets.md). |
--- The following example is a truncated application output running on a Windows system. ```console
storage Storage Create Geo Redundant Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-create-geo-redundant-storage.md
In part one of the series, you learn how to:
To complete this tutorial:
-# [.NET v12 SDK](#tab/dotnet)
+# [.NET](#tab/dotnet)
- Install [Visual Studio 2022](https://www.visualstudio.com/downloads/) with the **Azure development** workload. ![Screenshot of Visual Studio Azure development workload (under Web & Cloud).](media/storage-create-geo-redundant-storage/workloads-net-v12.png)
-# [.NET v11 SDK](#tab/dotnet11)
--- Install [Visual Studio 2019](https://www.visualstudio.com/downloads/) with the **Azure development** workload.-
- ![Screenshot of Visual Studio Azure development workload (under Web & Cloud).](media/storage-create-geo-redundant-storage/workloads.png)
-
-# [Python v12 SDK](#tab/python)
+# [JavaScript](#tab/nodejs)
We're currently working to create code snippets reflecting version 12.x of the Azure Storage client libraries. For more information, see [Announcing the Azure Storage v12 Client Libraries](https://techcommunity.microsoft.com/t5/azure-storage/announcing-the-azure-storage-v12-client-libraries/ba-p/1482394).
-# [Python v2.1](#tab/python2)
--- Install [Python](https://www.python.org/downloads/)-- Download and install [Azure Storage SDK for Python](https://github.com/Azure/azure-storage-python)-
-# [Node.js v12 SDK](#tab/nodejs)
+# [Python](#tab/python)
We're currently working to create code snippets reflecting version 12.x of the Azure Storage client libraries. For more information, see [Announcing the Azure Storage v12 Client Libraries](https://techcommunity.microsoft.com/t5/azure-storage/announcing-the-azure-storage-v12-client-libraries/ba-p/1482394).
-# [Node.js v11 SDK](#tab/nodejs11)
--- Install [Node.js](https://nodejs.org).- ## Sign in to the Azure portal
Follow these steps to create a read-access geo-zone-redundant (RA-GZRS) storage
## Download the sample
-# [.NET v12 SDK](#tab/dotnet)
+# [.NET](#tab/dotnet)
Download the [sample project](https://github.com/Azure-Samples/storage-dotnet-circuit-breaker-pattern-ha-apps-using-ra-grs/archive/master.zip), extract (unzip) the storage-dotnet-circuit-breaker-pattern-ha-apps-using-ra-grs.zip file, then navigate to the v12 folder to find the project files.
You can also use [git](https://git-scm.com/) to clone the repository to your loc
git clone https://github.com/Azure-Samples/storage-dotnet-circuit-breaker-pattern-ha-apps-using-ra-grs.git ```
-# [.NET v11 SDK](#tab/dotnet11)
-
-Download the [sample project](https://github.com/Azure-Samples/storage-dotnet-circuit-breaker-pattern-ha-apps-using-ra-grs/archive/master.zip), extract (unzip) the storage-dotnet-circuit-breaker-pattern-ha-apps-using-ra-grs.zip file, then navigate to the v11 folder to find the project files.
-
-You can also use [git](https://git-scm.com/) to download a copy of the application to your development environment. The sample project in the v11 folder contains a console application.
-
-```bash
-git clone https://github.com/Azure-Samples/storage-dotnet-circuit-breaker-pattern-ha-apps-using-ra-grs.git
-```
-
-# [Python v12 SDK](#tab/python)
+# [JavaScript](#tab/nodejs)
We're currently working to create code snippets reflecting version 12.x of the Azure Storage client libraries. For more information, see [Announcing the Azure Storage v12 Client Libraries](https://techcommunity.microsoft.com/t5/azure-storage/announcing-the-azure-storage-v12-client-libraries/ba-p/1482394).
-# [Python v2.1](#tab/python2)
-
-[Download the sample project](https://github.com/Azure-Samples/storage-python-circuit-breaker-pattern-ha-apps-using-ra-grs/archive/master.zip) and extract (unzip) the storage-python-circuit-breaker-pattern-ha-apps-using-ra-grs.zip file. You can also use [git](https://git-scm.com/) to download a copy of the application to your development environment. The sample project contains a basic Python application.
-
-```bash
-git clone https://github.com/Azure-Samples/storage-python-circuit-breaker-pattern-ha-apps-using-ra-grs.git
-```
-
-# [Node.js v12 SDK](#tab/nodejs)
+# [Python](#tab/python)
We're currently working to create code snippets reflecting version 12.x of the Azure Storage client libraries. For more information, see [Announcing the Azure Storage v12 Client Libraries](https://techcommunity.microsoft.com/t5/azure-storage/announcing-the-azure-storage-v12-client-libraries/ba-p/1482394).
-# [Node.js v11 SDK](#tab/nodejs11)
-
-[Download the sample project](https://github.com/Azure-Samples/storage-node-v10-ha-ra-grs) and unzip the file. You can also use [git](https://git-scm.com/) to download a copy of the application to your development environment. The sample project contains a basic Node.js application.
-
-```bash
-git clone https://github.com/Azure-Samples/storage-node-v10-ha-ra-grs
-```
- ## Configure the sample
-# [.NET v12 SDK](#tab/dotnet)
+# [.NET](#tab/dotnet)
Application requests to Azure Blob storage must be authorized. Using the `DefaultAzureCredential` class provided by the `Azure.Identity` client library is the recommended approach for connecting to Azure services in your code. The .NET v12 code sample uses this approach. To learn more, please see the [DefaultAzureCredential overview](/dotnet/azure/sdk/authentication#defaultazurecredential). You can also authorize requests to Azure Blob Storage by using the account access key. However, this approach should be used with caution to protect access keys from being exposed.
-# [.NET v11 SDK](#tab/dotnet11)
-
-In the application, you must provide the connection string for your storage account. You can store this connection string within an environment variable on the local machine running the application. Follow one of the examples below depending on your Operating System to create the environment variable.
-
-In the Azure portal, navigate to your storage account. Select **Access keys** under **Settings** in your storage account. Copy the **connection string** from the primary or secondary key. Run one of the following commands based on your operating system, replacing \<yourconnectionstring\> with your actual connection string. This command saves an environment variable to the local machine. In Windows, the environment variable isn't available until you reload the **Command Prompt** or shell you're using.
-
-### Linux
-
-```
-export storageconnectionstring=<yourconnectionstring>
-```
-
-### Windows
-
-```powershell
-setx storageconnectionstring "<yourconnectionstring>"
-```
-
-# [Python v12 SDK](#tab/python)
+# [JavaScript](#tab/nodejs)
We're currently working to create code snippets reflecting version 12.x of the Azure Storage client libraries. For more information, see [Announcing the Azure Storage v12 Client Libraries](https://techcommunity.microsoft.com/t5/azure-storage/announcing-the-azure-storage-v12-client-libraries/ba-p/1482394).
-# [Python v2.1](#tab/python2)
-
-In the application, you must provide your storage account credentials. You can store this information in environment variables on the local machine running the application. Follow one of the examples below depending on your Operating System to create the environment variables.
-
-In the Azure portal, navigate to your storage account. Select **Access keys** under **Settings** in your storage account. Paste the **Storage account name** and **Key** values into the following commands, replacing the \<youraccountname\> and \<youraccountkey\> placeholders. This command saves the environment variables to the local machine. In Windows, the environment variable isn't available until you reload the **Command Prompt** or shell you're using.
-
-### Linux
-
-```
-export accountname=<youraccountname>
-export accountkey=<youraccountkey>
-```
-
-### Windows
-
-```powershell
-setx accountname "<youraccountname>"
-setx accountkey "<youraccountkey>"
-```
-
-# [Node.js v12 SDK](#tab/nodejs)
+# [Python](#tab/python)
We're currently working to create code snippets reflecting version 12.x of the Azure Storage client libraries. For more information, see [Announcing the Azure Storage v12 Client Libraries](https://techcommunity.microsoft.com/t5/azure-storage/announcing-the-azure-storage-v12-client-libraries/ba-p/1482394).
-# [Node.js v11 SDK](#tab/nodejs11)
-
-To run this sample, you must add your storage account credentials to the `.env.example` file and then rename it to `.env`.
-
-```
-AZURE_STORAGE_ACCOUNT_NAME=<replace with your storage account name>
-AZURE_STORAGE_ACCOUNT_ACCESS_KEY=<replace with your storage account access key>
-```
-
-You can find this information in the Azure portal by navigating to your storage account and selecting **Access keys** in the **Settings** section.
-
-Install the required dependencies by opening a command prompt, navigating to the sample folder, then entering `npm install`.
- ## Run the console application
-# [.NET v12 SDK](#tab/dotnet)
+# [.NET](#tab/dotnet)
In Visual Studio, press **F5** or select **Start** to begin debugging the application. Visual Studio automatically restores missing NuGet packages if package restore is configured. See [Installing and reinstalling packages with package restore](/nuget/consume-packages/package-restore#package-restore-overview) to learn more.
Next, the application enters a loop with a prompt to download the blob, initiall
To exit the loop and clean up resources, press the `Esc` key at the blob download prompt.
-# [.NET v11 SDK](#tab/dotnet11)
-
-In Visual Studio, press **F5** or select **Start** to begin debugging the application. Visual Studio automatically restores missing NuGet packages if package restore is configured, visit [Installing and reinstalling packages with package restore](/nuget/consume-packages/package-restore#package-restore-overview) to learn more.
-
-A console window launches and the application begins running. The application uploads the **HelloWorld.png** image from the solution to the storage account. The application checks to ensure the image has replicated to the secondary RA-GZRS endpoint. It then begins downloading the image up to 999 times. Each read is represented by a **P** or an **S**. Where **P** represents the primary endpoint and **S** represents the secondary endpoint.
-
-![Screenshot of Console application output.](media/storage-create-geo-redundant-storage/figure3.png)
-
-In the sample code, the `RunCircuitBreakerAsync` task in the `Program.cs` file is used to download an image from the storage account using the [DownloadToFileAsync](/dotnet/api/microsoft.azure.storage.blob.cloudblob.downloadtofileasync) method. Prior to the download, an [OperationContext](/dotnet/api/microsoft.azure.cosmos.table.operationcontext) is defined. The operation context defines event handlers that fire when a download completes successfully, or if a download fails and is retrying.
-
-# [Python v12 SDK](#tab/python)
+# [JavaScript](#tab/nodejs)
We're currently working to create code snippets reflecting version 12.x of the Azure Storage client libraries. For more information, see [Announcing the Azure Storage v12 Client Libraries](https://techcommunity.microsoft.com/t5/azure-storage/announcing-the-azure-storage-v12-client-libraries/ba-p/1482394).
-# [Python v2.1](#tab/python2)
-
-To run the application on a terminal or command prompt, go to the **circuitbreaker.py** directory, then enter `python circuitbreaker.py`. The application uploads the **HelloWorld.png** image from the solution to the storage account. The application checks to ensure the image has replicated to the secondary RA-GZRS endpoint. It then begins downloading the image up to 999 times. Each read is represented by a **P** or an **S**. Where **P** represents the primary endpoint and **S** represents the secondary endpoint.
-
-![Console app running](media/storage-create-geo-redundant-storage/figure3.png)
-
-In the sample code, the `run_circuit_breaker` method in the `circuitbreaker.py` file is used to download an image from the storage account using the [get_blob_to_path](/python/api/azure-storage-blob/azure.storage.blob.baseblobservice.baseblobservice#get-blob-to-path-container-name--blob-name--file-path--open-mode--wbsnapshot-none--start-range-none--end-range-none--validate-content-false--progress-callback-none--max-connections-2--lease-id-none--if-modified-since-none--if-unmodified-since-none--if-match-none--if-none-match-none--timeout-none-) method.
-
-The Storage object retry function is set to a linear retry policy. The retry function determines whether to retry a request, and specifies the number of seconds to wait before retrying the request. Set the **retry\_to\_secondary** value to true, if request should be retried to secondary in case the initial request to primary fails. In the sample application, a custom retry policy is defined in the `retry_callback` function of the storage object.
-
-Before the download, the Service object [retry_callback](/python/api/azure-storage-common/azure.storage.common.storageclient.storageclient) and [response_callback](/python/api/azure-storage-common/azure.storage.common.storageclient.storageclient) function is defined. These functions define event handlers that fire when a download completes successfully or if a download fails and is retrying.
-
-# [Node.js v12 SDK](#tab/nodejs)
+# [Python](#tab/python)
We're currently working to create code snippets reflecting version 12.x of the Azure Storage client libraries. For more information, see [Announcing the Azure Storage v12 Client Libraries](https://techcommunity.microsoft.com/t5/azure-storage/announcing-the-azure-storage-v12-client-libraries/ba-p/1482394).
-# [Node.js v11 SDK](#tab/nodejs11)
-
-To run the sample, open a command prompt, navigate to the sample folder, then enter `node index.js`.
-
-The sample creates a container in your Blob storage account, uploads **HelloWorld.png** into the container, then repeatedly checks whether the container and image have replicated to the secondary region. After replication, it prompts you to enter **D** or **Q** (followed by ENTER) to download or quit. Your output should look similar to the following example:
-
-```
-Created container successfully: newcontainer1550799840726
-Uploaded blob: HelloWorld.png
-Checking to see if container and blob have replicated to secondary region.
-[0] Container has not replicated to secondary region yet: newcontainer1550799840726 : ContainerNotFound
-[1] Container has not replicated to secondary region yet: newcontainer1550799840726 : ContainerNotFound
-...
-[31] Container has not replicated to secondary region yet: newcontainer1550799840726 : ContainerNotFound
-[32] Container found, but blob has not replicated to secondary region yet.
-...
-[67] Container found, but blob has not replicated to secondary region yet.
-[68] Blob has replicated to secondary region.
-Ready for blob download. Enter (D) to download or (Q) to quit, followed by ENTER.
-> D
-Attempting to download blob...
-Blob downloaded from primary endpoint.
-> Q
-Exiting...
-Deleted container newcontainer1550799840726
-```
- ## Understand the sample code
-# [.NET v12 SDK](#tab/dotnet)
+# [.NET](#tab/dotnet)
The sample creates a `BlobServiceClient` object configured with retry options and a secondary region endpoint. This configuration allows the application to automatically switch to the secondary region if the request fails on the primary region endpoint.
BlobServiceClient blobServiceClient = new BlobServiceClient(primaryAccountUri, n
When the `GeoRedundantSecondaryUri` property is set in `BlobClientOptions`, retries for GET or HEAD requests will switch to use the secondary endpoint. Subsequent retries will alternate between the primary and secondary endpoint. However, if the status of the response from the secondary Uri is 404, then subsequent retries for the request will no longer use the secondary Uri, as this error code indicates the resource hasn't replicated to the secondary region.
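The referenced snippet isn't reproduced inline here. As a rough sketch of the configuration described above, assuming the Azure.Storage.Blobs and Azure.Identity packages and placeholder URIs and retry values, the setup might look like this:

```csharp
using System;
using Azure.Core;
using Azure.Identity;
using Azure.Storage.Blobs;

// Placeholder URIs; replace <storage-account-name> with your account name.
Uri primaryAccountUri = new Uri("https://<storage-account-name>.blob.core.windows.net/");
Uri secondaryAccountUri = new Uri("https://<storage-account-name>-secondary.blob.core.windows.net/");

// Configure retry behavior and the secondary endpoint used for GET/HEAD retries.
BlobClientOptions options = new BlobClientOptions()
{
    Retry =
    {
        MaxRetries = 3,
        Mode = RetryMode.Exponential,
        Delay = TimeSpan.FromSeconds(0.5),
        MaxDelay = TimeSpan.FromSeconds(2)
    },
    // Read retries alternate to this endpoint; a 404 from it stops further secondary retries.
    GeoRedundantSecondaryUri = secondaryAccountUri
};

BlobServiceClient blobServiceClient = new BlobServiceClient(
    primaryAccountUri,
    new DefaultAzureCredential(),
    options);
```

The retry values shown are illustrative only; tune them for your workload.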
-# [.NET v11 SDK](#tab/dotnet11)
-
-### Retry event handler
-
-The `OperationContextRetrying` event handler is called when the download of the image fails and is set to retry. If the maximum number of retries defined in the application are reached, the [LocationMode](/dotnet/api/microsoft.azure.storage.blob.blobrequestoptions.locationmode) of the request is changed to `SecondaryOnly`. This setting forces the application to attempt to download the image from the secondary endpoint. This configuration reduces the time taken to request the image as the primary endpoint isn't retried indefinitely.
-
-```csharp
-private static void OperationContextRetrying(object sender, RequestEventArgs e)
-{
- retryCount++;
- Console.WriteLine("Retrying event because of failure reading the primary. RetryCount = " + retryCount);
-
- // Check if we have had more than n retries in which case switch to secondary.
- if (retryCount >= retryThreshold)
- {
-
- // Check to see if we can fail over to secondary.
- if (blobClient.DefaultRequestOptions.LocationMode != LocationMode.SecondaryOnly)
- {
- blobClient.DefaultRequestOptions.LocationMode = LocationMode.SecondaryOnly;
- retryCount = 0;
- }
- else
- {
- throw new ApplicationException("Both primary and secondary are unreachable. Check your application's network connection. ");
- }
- }
-}
-```
-
-### Request completed event handler
-
-The `OperationContextRequestCompleted` event handler is called when the download of the image is successful. If the application is using the secondary endpoint, the application continues to use this endpoint up to 20 times. After 20 times, the application sets the [LocationMode](/dotnet/api/microsoft.azure.storage.blob.blobrequestoptions.locationmode) back to `PrimaryThenSecondary` and retries the primary endpoint. If a request is successful, the application continues to read from the primary endpoint.
-
-```csharp
-private static void OperationContextRequestCompleted(object sender, RequestEventArgs e)
-{
- if (blobClient.DefaultRequestOptions.LocationMode == LocationMode.SecondaryOnly)
- {
- // You're reading the secondary. Let it read the secondary [secondaryThreshold] times,
- // then switch back to the primary and see if it's available now.
- secondaryReadCount++;
- if (secondaryReadCount >= secondaryThreshold)
- {
- blobClient.DefaultRequestOptions.LocationMode = LocationMode.PrimaryThenSecondary;
- secondaryReadCount = 0;
- }
- }
-}
-```
-
-# [Python v12 SDK](#tab/python)
+# [JavaScript](#tab/nodejs)
We're currently working to create code snippets reflecting version 12.x of the Azure Storage client libraries. For more information, see [Announcing the Azure Storage v12 Client Libraries](https://techcommunity.microsoft.com/t5/azure-storage/announcing-the-azure-storage-v12-client-libraries/ba-p/1482394).
-# [Python v2.1](#tab/python2)
-
-### Retry event handler
-
-The `retry_callback` event handler is called when the download of the image fails and is set to retry. If the maximum number of retries defined in the application are reached, the [LocationMode](/python/api/azure-storage-common/azure.storage.common.models.locationmode) of the request is changed to `SECONDARY`. This setting forces the application to attempt to download the image from the secondary endpoint. This configuration reduces the time taken to request the image as the primary endpoint isn't retried indefinitely.
-
-```python
-def retry_callback(retry_context):
- global retry_count
- retry_count = retry_context.count
- sys.stdout.write(
- "\nRetrying event because of failure reading the primary. RetryCount= {0}".format(retry_count))
- sys.stdout.flush()
-
- # Check if we have more than n-retries in which case switch to secondary
- if retry_count >= retry_threshold:
-
- # Check to see if we can fail over to secondary.
- if blob_client.location_mode != LocationMode.SECONDARY:
- blob_client.location_mode = LocationMode.SECONDARY
- retry_count = 0
- else:
- raise Exception("Both primary and secondary are unreachable. "
- "Check your application's network connection.")
-```
-
-### Request completed event handler
-
-The `response_callback` event handler is called when the download of the image is successful. If the application is using the secondary endpoint, the application continues to use this endpoint up to 20 times. After 20 times, the application sets the [LocationMode](/python/api/azure-storage-common/azure.storage.common.models.locationmode) back to `PRIMARY` and retries the primary endpoint. If a request is successful, the application continues to read from the primary endpoint.
-
-```python
-def response_callback(response):
- global secondary_read_count
- if blob_client.location_mode == LocationMode.SECONDARY:
-
- # You're reading the secondary. Let it read the secondary [secondaryThreshold] times,
- # then switch back to the primary and see if it is available now.
- secondary_read_count += 1
- if secondary_read_count >= secondary_threshold:
- blob_client.location_mode = LocationMode.PRIMARY
- secondary_read_count = 0
-```
-
-# [Node.js v12 SDK](#tab/nodejs)
+# [Python](#tab/python)
We're currently working to create code snippets reflecting version 12.x of the Azure Storage client libraries. For more information, see [Announcing the Azure Storage v12 Client Libraries](https://techcommunity.microsoft.com/t5/azure-storage/announcing-the-azure-storage-v12-client-libraries/ba-p/1482394).
-# [Node.js v11 SDK](#tab/nodejs11)
-
-With the Node.js V10 SDK, callback handlers are unnecessary. Instead, the sample creates a pipeline configured with retry options and a secondary endpoint. This configuration allows the application to automatically switch to the secondary pipeline if it fails to reach your data through the primary pipeline.
-
-```javascript
-const accountName = process.env.AZURE_STORAGE_ACCOUNT_NAME;
-const storageAccessKey = process.env.AZURE_STORAGE_ACCOUNT_ACCESS_KEY;
-const sharedKeyCredential = new SharedKeyCredential(accountName, storageAccessKey);
-
-const primaryAccountURL = `https://${accountName}.blob.core.windows.net`;
-const secondaryAccountURL = `https://${accountName}-secondary.blob.core.windows.net`;
-
-const pipeline = StorageURL.newPipeline(sharedKeyCredential, {
- retryOptions: {
- maxTries: 3,
- tryTimeoutInMs: 10000,
- retryDelayInMs: 500,
- maxRetryDelayInMs: 1000,
- secondaryHost: secondaryAccountURL
- }
-});
-```
- ## Next steps
Advance to part two of the series to learn how to simulate a failure and force y
> [!div class="nextstepaction"] > [Simulate a failure in reading from the primary region](simulate-primary-region-failure.md)+
+## Resources
+
+For related code samples using deprecated SDKs, see the following resources:
+
+- [.NET version 11.x](blob-v11-samples-dotnet.md#build-a-highly-available-app-with-blob-storage)
+- [JavaScript version 11.x](blob-v11-samples-javascript.md#build-a-highly-available-app-with-blob-storage)
+- [Python version 2.1](blob-v2-samples-python.md#build-a-highly-available-app-with-blob-storage)
storage Classic Account Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/classic-account-migrate.md
To delete disk artifacts from the Azure portal, follow these steps:
:::image type="content" source="media/classic-account-migrate/delete-disk-artifacts-portal.png" alt-text="Screenshot showing how to delete classic disk artifacts in Azure portal." lightbox="media/classic-account-migrate/delete-disk-artifacts-portal.png":::
+For more information about errors that may occur when deleting disk artifacts and how to address them, see [Troubleshoot errors when you delete Azure classic storage accounts, containers, or VHDs](/troubleshoot/azure/virtual-machines/storage-classic-cannot-delete-storage-account-container-vhd).
+ ## See also - [Migrate your classic storage accounts to Azure Resource Manager by August 31, 2024](classic-account-migration-overview.md)
storage Classic Account Migration Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/classic-account-migration-overview.md
Previously updated : 03/27/2023 Last updated : 04/05/2023
Storage accounts created using the classic deployment model will follow the [Mod
> [!WARNING] > If you do not migrate your classic storage accounts to Azure Resource Manager by August 31, 2024, you will permanently lose access to the data in those accounts.
-## What resources are available for this migration?
+## What actions should I take?
+
+To migrate your classic storage accounts, you should:
+
+1. Identify all classic storage accounts in your subscription.
+1. Migrate any classic storage accounts to Azure Resource Manager.
+1. Check your applications and logs to determine whether you are dynamically creating, updating, or deleting classic storage accounts from your code, scripts, or templates. If you are, then you need to update your applications to use Azure Resource Manager accounts instead.
+
+For step-by-step instructions, see [How to migrate your classic storage accounts to Azure Resource Manager](classic-account-migrate.md).
+
+## How to get help
- If you have questions, get answers from community experts in [Microsoft Q&A](/answers/tags/98/azure-storage-accounts). - If your organization or company has partnered with Microsoft or works with Microsoft representatives, such as cloud solution architects (CSAs) or customer success account managers (CSAMs), contact them for additional resources for migration.
Storage accounts created using the classic deployment model will follow the [Mod
1. Under **Problem subtype**, select **Migrate account to new resource group/subscription/region/tenant**. 1. Select **Next**, then follow the instructions to submit your support request.
-## What actions should I take?
-
-To migrate your classic storage accounts, you should:
-
-1. Identify all classic storage accounts in your subscription.
-1. Migrate any classic storage accounts to Azure Resource Manager.
-1. Check your applications and logs to determine whether you are dynamically creating, updating, or deleting classic storage accounts from your code, scripts, or templates. If you are, then you need to update your applications to use Azure Resource Manager accounts instead.
-
-For step-by-step instructions, see [How to migrate your classic storage accounts to Azure Resource Manager](classic-account-migrate.md).
- ## See also - [How to migrate your classic storage accounts to Azure Resource Manager](classic-account-migrate.md)
storage Classic Account Migration Process https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/classic-account-migration-process.md
Previously updated : 03/27/2023 Last updated : 04/05/2023
First, it's helpful to understand the basic architecture of Azure Storage. Azure
During the migration process, Microsoft translates the representation of the storage account resource from the classic deployment model to the Azure Resource Manager deployment model. As a result, you need to use new tools, APIs, and SDKs to manage your storage accounts and related resources after the migration.
-The data plane is unaffected by migration from the classic deployment model to the Azure Resource Manager model. The data in your migrated storage account will be identical to the data in the original storage account.
+The data plane is unaffected by migration from the classic deployment model to the Azure Resource Manager model. Your applications can continue to read and write data from the storage account throughout the migration process.
## The migration experience
Before you start the migration:
- Ensure that the storage accounts that you want to migrate don't use any unsupported features or configurations. Usually the platform detects these issues and generates an error. - Plan your migration during non-business hours to accommodate any unexpected failures that might happen during migration. - Evaluate any Azure role-based access control (Azure RBAC) roles that are configured on the classic storage account, and plan for after the migration is complete.-- If possible, halt write operations to the storage account for the duration of the migration. There are four steps to the migration process, as shown in the following diagram: :::image type="content" source="media/classic-account-migration-process/migration-workflow.png" alt-text="Screenshot showing the account migration workflow."::: 1. **Validate**. During the Validation phase, Azure checks the storage account to ensure that it can be migrated.
-1. **Prepare**. In the Prepare phase, Azure creates a new general-purpose v1 storage account and alerts you to any problems that may have occurred. The new account is created in a new resource group in the same region as your classic account. All of your data has been migrated to the new account.
+1. **Prepare**. In the Prepare phase, Azure creates a new general-purpose v1 storage account and alerts you to any problems that may have occurred. The new account is created in a new resource group in the same region as your classic account.
- At this point your classic storage account still exists and contains all of your data. If there are any problems reported, you can correct them or abort the process.
+ At this point your classic storage account still exists. If there are any problems reported, you can correct them or abort the process.
1. **Check manually**. It's a good idea to make a manual check of the new storage account to make sure that the output is as you expect. 1. **Commit or abort**. If you are satisfied that the migration has been successful, then you can commit the migration. Committing the migration permanently deletes the classic storage account.
The Validation step is the first step in the migration process. The goal of this
The Validation step analyzes the state of resources in the classic deployment model. It checks for failures and unsupported scenarios due to different configurations of the storage account in the classic deployment model.
-The Validation step does not check for virtual machine (VM) disks that may be associated with the storage account. You must check your storage accounts manually to determine whether they support VM disks.
+> [!NOTE]
+> The Validation step does not check for virtual machine (VM) disks that may be associated with the storage account. You must check your storage accounts manually to determine whether they support VM disks.
Keep in mind that it's not possible to check for every constraint that the Azure Resource Manager stack might impose on the storage account during migration. Some constraints are only checked when the resources undergo transformation in the next step of migration (the Prepare step).
The Prepare step is the second step in the migration process. The goal of this s
If the storage account is not capable of migration, Azure stops the migration process and lists the reason why the Prepare step failed.
-If the storage account is capable of migration, Azure blocks management plane operations for the storage account under migration. For example, you cannot regenerate the storage account keys while the Prepare phase is underway. Azure then creates a new resource group as the classic storage account. The name of the new resource group follows the pattern `<classic-account-name>-Migrated`.
+If the storage account is capable of migration, Azure locks management plane operations for the storage account under migration. For example, you cannot regenerate the storage account keys while the Prepare phase is underway. Azure then creates a new resource group for the migrated storage account. The name of the new resource group follows the pattern `<classic-account-name>-Migrated`.
> [!NOTE] > It is not possible to select the name of the resource group that is created for a migrated storage account. After migration is complete, however, you can use the move feature of Azure Resource Manager to move your migrated storage account to a different resource group. For more information, see [Move resources to a new subscription or resource group](../../azure-resource-manager/management/move-resource-group-and-subscription.md).
-Finally, Azure migrates the storage account and all of its data and configurations to a new storage account in Azure Resource Manager in the same region as the classic storage account. At this point your classic storage account still exists and contains all of your data. If there are any problems reported during the Prepare step, you can correct them or abort the process.
+Finally, Azure migrates the storage account and its configuration to a new storage account in Azure Resource Manager in the same region as the classic storage account. At this point your classic storage account still exists. If there are any problems reported during the Prepare step, you can correct them or abort the process.
### Check manually After the Prepare step is complete, both accounts exist in your subscription, so that you can review and compare the classic storage account in the pre-migration state and in Azure Resource Manager. For example, you can examine the new account via the Azure portal to ensure that the storage account's configuration is as expected.
-There is no set window of time before which you need to commit or abort the migration. You can take as much time as you need for the Check phase. However, management plane operations are blocked for the classic storage account until you either abort or commit.
+There is no set time limit for committing or aborting the migration. You can take as much time as you need for the Check phase. However, management plane operations remain locked for the classic storage account until you either abort or commit.
### Abort
After you are satisfied that your classic storage account has been migrated succ
## After the migration
-After the migration is complete, your new storage account is ready for use. You can resume write operations at this point to the storage account.
+After the migration is complete, your new storage account is ready for use.
### Migrated account type
storage Geo Redundant Design Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/geo-redundant-design-legacy.md
# Use geo-redundancy to design highly available applications (.NET v11 SDK)
+> [!NOTE]
+> The samples in this article use the deprecated Azure Storage .NET v11 library. For the latest v12 code and guidance, see [Use geo-redundancy to design highly available applications](geo-redundant-design.md).
+ A common feature of cloud-based infrastructures like Azure Storage is that they provide a highly available and durable platform for hosting data and applications. Developers of cloud-based applications must consider carefully how to leverage this platform to maximize those advantages for their users. Azure Storage offers geo-redundant storage to ensure high availability even in the event of a regional outage. Storage accounts configured for geo-redundant replication are synchronously replicated in the primary region, and then asynchronously replicated to a secondary region that is hundreds of miles away. Azure Storage offers two options for geo-redundant replication. The only difference between these two options is how data is replicated in the primary region:
storage Manage Storage Analytics Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/manage-storage-analytics-logs.md
You can instruct Azure Storage to save diagnostics logs for read, write, and del
For information about how to configure the Azure PowerShell cmdlets to work with your Azure subscription and how to select the default storage account to use, see: [How to install and configure Azure PowerShell](/powershell/azure/).
-### [.NET v12 SDK](#tab/dotnet)
+### [.NET](#tab/dotnet)
:::code language="csharp" source="~/azure-storage-snippets/queues/howto/dotnet/dotnet-v12/Monitoring.cs" id="snippet_EnableDiagnosticLogs":::
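The v12 snippet above is included from an external file. As a minimal sketch of the same idea, assuming the Azure.Storage.Queues package and a placeholder connection string, enabling read, write, and delete logging with a two-day retention might look like the following:

```csharp
using Azure.Storage.Queues;
using Azure.Storage.Queues.Models;

// Placeholder connection string; replace with your own.
QueueServiceClient queueServiceClient = new QueueServiceClient("<connection-string>");

// Read the current service properties, turn on logging, and set a retention policy.
QueueServiceProperties serviceProperties = queueServiceClient.GetProperties().Value;
serviceProperties.Logging.Read = true;
serviceProperties.Logging.Write = true;
serviceProperties.Logging.Delete = true;
serviceProperties.Logging.Version = "1.0";
serviceProperties.Logging.RetentionPolicy = new QueueRetentionPolicy
{
    Enabled = true,
    Days = 2
};

queueServiceClient.SetProperties(serviceProperties);
```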
-### [.NET v11 SDK](#tab/dotnet11)
-
-```csharp
-var storageAccount = CloudStorageAccount.Parse(connStr);
-var queueClient = storageAccount.CreateCloudQueueClient();
-var serviceProperties = queueClient.GetServiceProperties();
-
-serviceProperties.Logging.LoggingOperations = LoggingOperations.All;
-serviceProperties.Logging.RetentionDays = 2;
-
-queueClient.SetServiceProperties(serviceProperties);
-```
- <a id="modify-retention-policy"></a>
Log data can accumulate in your account over time which can increase the cost of
For information about how to configure the Azure PowerShell cmdlets to work with your Azure subscription and how to select the default storage account to use, see: [How to install and configure Azure PowerShell](/powershell/azure/).
-### [.NET v12 SDK](#tab/dotnet)
+### [.NET](#tab/dotnet)
The following example prints to the console the retention period for blob and queue storage services.
The following example changes the retention period to 4 days.
:::code language="csharp" source="~/azure-storage-snippets/queues/howto/dotnet/dotnet-v12/Monitoring.cs" id="snippet_ModifyRetentionPeriod":::
-### [.NET v11 SDK](#tab/dotnet11)
-
-The following example prints to the console the retention period for blob and queue storage services.
-
-```csharp
-var storageAccount = CloudStorageAccount.Parse(connectionString);
-
-var blobClient = storageAccount.CreateCloudBlobClient();
-var queueClient = storageAccount.CreateCloudQueueClient();
-
-var blobserviceProperties = blobClient.GetServiceProperties();
-var queueserviceProperties = queueClient.GetServiceProperties();
-
-Console.WriteLine("Retention period for logs from the blob service is: " +
- blobserviceProperties.Logging.RetentionDays.ToString());
-
-Console.WriteLine("Retention period for logs from the queue service is: " +
- queueserviceProperties.Logging.RetentionDays.ToString());
-```
-
-The following example changes the retention period for logs for the blob and queue storage services to 4 days.
-
-```csharp
-
-blobserviceProperties.Logging.RetentionDays = 4;
-queueserviceProperties.Logging.RetentionDays = 4;
-
-blobClient.SetServiceProperties(blobserviceProperties);
-queueClient.SetServiceProperties(queueserviceProperties);
-```
- ### Verify that log data is being deleted
storage Manage Storage Analytics Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/manage-storage-analytics-metrics.md
You can disable metrics collection and logging by setting **Status** to **Off**.
For information about how to configure the Azure PowerShell cmdlets to work with your Azure subscription and how to select the default storage account to use, see [Install and configure Azure PowerShell](/powershell/azure/).
-### [.NET v12 SDK](#tab/dotnet)
+### [.NET](#tab/dotnet)
:::code language="csharp" source="~/azure-storage-snippets/queues/howto/dotnet/dotnet-v12/Monitoring.cs" id="snippet_EnableDiagnosticLogs":::
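As with the logging example, the v12 snippet is included from an external file. A minimal sketch of enabling hourly metrics at the service level (no per-API detail) with a 10-day retention, assuming the Azure.Storage.Queues package and a placeholder connection string, might look like this:

```csharp
using Azure.Storage.Queues;
using Azure.Storage.Queues.Models;

// Placeholder connection string; replace with your own.
QueueServiceClient queueServiceClient = new QueueServiceClient("<connection-string>");

// Enable hourly metrics; IncludeApis = false collects service-level metrics only.
QueueServiceProperties serviceProperties = queueServiceClient.GetProperties().Value;
serviceProperties.HourMetrics.Enabled = true;
serviceProperties.HourMetrics.IncludeApis = false;
serviceProperties.HourMetrics.Version = "1.0";
serviceProperties.HourMetrics.RetentionPolicy = new QueueRetentionPolicy
{
    Enabled = true,
    Days = 10
};

queueServiceClient.SetProperties(serviceProperties);
```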
For more information about using a .NET language to configure storage metrics, s
For general information about configuring storage metrics by using the REST API, see [Enabling and configuring Storage Analytics](/rest/api/storageservices/Enabling-and-Configuring-Storage-Analytics).
-### [.NET v11 SDK](#tab/dotnet11)
-
-```csharp
-var storageAccount = CloudStorageAccount.Parse(connStr);
-var queueClient = storageAccount.CreateCloudQueueClient();
-var serviceProperties = queueClient.GetServiceProperties();
-
-serviceProperties.HourMetrics.MetricsLevel = MetricsLevel.Service;
-serviceProperties.HourMetrics.RetentionDays = 10;
-
-queueClient.SetServiceProperties(serviceProperties);
-```
-
-For more information about using a .NET language to configure storage metrics, see [Azure Storage client libraries for .NET](/dotnet/api/overview/azure/storage).
-
-For general information about configuring storage metrics by using the REST API, see [Enabling and configuring Storage Analytics](/rest/api/storageservices/Enabling-and-Configuring-Storage-Analytics).
- <a id="view-metrics"></a>
Once you've added charts to your dashboard, you can further customize them as de
- To learn more about Storage Analytics, see [Storage Analytics](storage-analytics.md) for Storage Analytics. - [Configure Storage Analytics logs](manage-storage-analytics-logs.md).-- Learn more about the the metrics schema. See [Storage Analytics metrics table schema](/rest/api/storageservices/storage-analytics-metrics-table-schema).
+- Learn more about the metrics schema. See [Storage Analytics metrics table schema](/rest/api/storageservices/storage-analytics-metrics-table-schema).
storage Migrate Azure Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/migrate-azure-credentials.md
description: Learn to migrate existing applications away from Shared Key authori
Previously updated : 12/07/2022 Last updated : 04/05/2023 # Migrate an application to use passwordless connections with Azure Storage
-Application requests to Azure Storage must be authenticated using either account access keys or passwordless connections. However, you should prioritize passwordless connections in your applications when possible. Traditional authentication methods that use passwords or secret keys create additional security risks and complications. Visit the [passwordless connections for Azure services](/azure/developer/intro/passwordless-overview) hub to learn more about the advantages of moving to passwordless connections.
+Application requests to Azure Storage must be authenticated using either account access keys or passwordless connections. However, you should prioritize passwordless connections in your applications when possible. Traditional authentication methods that use passwords or secret keys create security risks and complications. Visit the [passwordless connections for Azure services](/azure/developer/intro/passwordless-overview) hub to learn more about the advantages of moving to passwordless connections.
The following tutorial explains how to migrate an existing application to connect to Azure Storage to use passwordless connections instead of a key-based solution. These same migration steps should apply whether you're using access keys directly, or through connection strings.
For local development, make sure you're authenticated with the same Azure AD acc
[!INCLUDE [default-azure-credential-sign-in](../../../includes/passwordless/default-azure-credential-sign-in.md)]
-Next you need to update your code to use passwordless connections.
+Next, update your code to use passwordless connections.
## [.NET](#tab/dotnet)
-1. To use `DefaultAzureCredential` in a .NET application, add the **Azure.Identity** NuGet package to your application.
+1. To use `DefaultAzureCredential` in a .NET application, install the `Azure.Identity` package:
```dotnetcli dotnet add package Azure.Identity ```
-1. At the top of your `Program.cs` file, add the following `using` statement:
+1. At the top of your file, add the following code:
```csharp using Azure.Identity; ```
-1. Identify the locations in your code that currently create a `BlobServiceClient` to connect to Azure Storage. This task is often handled in `Program.cs`, potentially as part of your service registration with the .NET dependency injection container. Update your code to match the following example:
+1. Identify the locations in your code that create a `BlobServiceClient` to connect to Azure Storage. Update your code to match the following example:
```csharp
- // TODO: Update <storage-account-name> placeholder to your account name
+ var credential = new DefaultAzureCredential();
+
+ // TODO: Update the <storage-account-name> placeholder.
var blobServiceClient = new BlobServiceClient( new Uri("https://<storage-account-name>.blob.core.windows.net"),
- new DefaultAzureCredential());
+ credential);
```
-1. Make sure to update the storage account name in the URI of your `BlobServiceClient`. You can find the storage account name on the overview page of the Azure portal.
+## [Java](#tab/java)
- :::image type="content" source="../blobs/media/storage-quickstart-blobs-dotnet/storage-account-name.png" alt-text="Screenshot showing how to find the storage account name.":::
+1. To use `DefaultAzureCredential` in a Java application, install the `azure-identity` package via one of the following approaches:
+ 1. [Include the BOM file](/java/api/overview/azure/identity-readme?view=azure-java-stable&preserve-view=true#include-the-bom-file).
+ 1. [Include a direct dependency](/java/api/overview/azure/identity-readme?view=azure-java-stable&preserve-view=true#include-direct-dependency).
+
+1. At the top of your file, add the following code:
+
+ ```java
+ import com.azure.identity.DefaultAzureCredentialBuilder;
+ ```
+
+1. Identify the locations in your code that create a `BlobServiceClient` object to connect to Azure Storage. Update your code to match the following example:
+
+ ```java
+ DefaultAzureCredential credential = new DefaultAzureCredentialBuilder()
+ .build();
+
+ // TODO: Update the <storage-account-name> placeholder.
+ BlobServiceClient blobServiceClient = new BlobServiceClientBuilder()
+ .endpoint("https://<storage-account-name>.blob.core.windows.net")
+ .credential(credential)
+ .buildClient();
+ ```
+
+## [Node.js](#tab/nodejs)
+
+1. To use `DefaultAzureCredential` in a Node.js application, install the `@azure/identity` package:
+
+ ```bash
+ npm install --save @azure/identity
+ ```
+
+1. At the top of your file, add the following code:
+
+ ```nodejs
+ const { DefaultAzureCredential } = require("@azure/identity");
+ ```
+
+1. Identify the locations in your code that create a `BlobServiceClient` object to connect to Azure Storage. Update your code to match the following example:
+
+ ```nodejs
+ const credential = new DefaultAzureCredential();
+
+ // TODO: Update the <storage-account-name> placeholder.
+ const blobServiceClient = new BlobServiceClient(
+ "https://<storage-account-name>.blob.core.windows.net",
+ credential
+ );
+ ```
+
+## [Python](#tab/python)
+1. To use `DefaultAzureCredential` in a Python application, install the `azure-identity` package:
+
+ ```bash
+ pip install azure-identity
+ ```
+
+1. At the top of your file, add the following code:
+
+ ```python
+ from azure.identity import DefaultAzureCredential
+ ```
+
+1. Identify the locations in your code that create a `BlobServiceClient` to connect to Azure Storage. Update your code to match the following example:
+
+ ```python
+ credential = DefaultAzureCredential()
+
+ # TODO: Update the <storage-account-name> placeholder.
+ blob_service_client = BlobServiceClient(
+ account_url = "https://<storage-account-name>.blob.core.windows.net",
+ credential = credential
+ )
+ ```
+4. Make sure to update the storage account name in the URI of your `BlobServiceClient`. You can find the storage account name on the overview page of the Azure portal.
+
+ :::image type="content" source="../blobs/media/storage-quickstart-blobs-dotnet/storage-account-name.png" alt-text="Screenshot showing how to find the storage account name.":::
+ ### Run the app locally After making these code changes, run your application locally. The new configuration should pick up your local credentials, such as the Azure CLI, Visual Studio, or IntelliJ. The roles you assigned to your local dev user in Azure allow your app to connect to the Azure service locally.
Complete the following steps in the Azure portal to associate an identity with y
* Azure Spring Apps * Azure Container Apps * Azure virtual machines
-* Azure Kubernetes Service.
+* Azure Kubernetes Service
1. Navigate to the overview page of your web app. 1. Select **Identity** from the left navigation.
If you connected your services using Service Connector you don't need to complet
### Update the application code
-You need to configure your application code to look for the specific managed identity you created when it is deployed to Azure. In some scenarios, explicitly setting the managed identity for the app also prevents other environment identities from accidentally being detected and used automatically.
-
-## [.NET](#tab/dotnet)
+You need to configure your application code to look for the specific managed identity you created when the app is deployed to Azure. In some scenarios, explicitly setting the managed identity for the app also prevents other environment identities from accidentally being detected and used automatically.
1. On the managed identity overview page, copy the client ID value to your clipboard.
-1. Update the `DefaultAzureCredential` object in the `Program.cs` file of your app to specify this managed identity client ID.
+1. Update the `DefaultAzureCredential` object to specify this managed identity client ID:
+ ## [.NET](#tab/dotnet)
+
```csharp
- // TODO: Update the <your-storage-account-name> and <your-managed-identity-client-id> placeholders
- var blobServiceClient = new BlobServiceClient(
- new Uri("https://<your-storage-account-name>.blob.core.windows.net"),
- new DefaultAzureCredential(
- new DefaultAzureCredentialOptions()
- {
- ManagedIdentityClientId = "<your-managed-identity-client-id>"
- }));
+ // TODO: Update the <managed-identity-client-id> placeholder.
+ var credential = new DefaultAzureCredential(
+ new DefaultAzureCredentialOptions
+ {
+ ManagedIdentityClientId = "<managed-identity-client-id>"
+ });
```
-3. Redeploy your code to Azure after making this change in order for the configuration updates to be applied.
+ ## [Java](#tab/java)
+
+ ```java
+ // TODO: Update the <managed-identity-client-id> placeholder.
+ DefaultAzureCredential credential = new DefaultAzureCredentialBuilder()
+ .managedIdentityClientId("<managed-identity-client-id>")
+ .build();
+ ```
+
+ ## [Node.js](#tab/nodejs)
+
+ ```nodejs
+ // TODO: Update the <managed-identity-client-id> placeholder.
+ const credential = new DefaultAzureCredential({
+ managedIdentityClientId: "<managed-identity-client-id>"
+ });
+ ```
+
+ ## [Python](#tab/python)
+
+ ```python
+ # TODO: Update the <managed-identity-client-id> placeholder.
+ credential = DefaultAzureCredential(
+ managed_identity_client_id = "<managed-identity-client-id>"
+ )
+ ```
-
+
+
+3. Redeploy your code to Azure after making this change in order for the configuration updates to be applied.
### Test the app
storage Storage Account Sas Create Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-sas-create-dotnet.md
This article shows how to use the storage account key to create an account SAS w
## Create an account SAS
-### [.NET v12 SDK](#tab/dotnet)
-
-A account SAS is signed with the account access key. Use the [StorageSharedKeyCredential](/dotnet/api/azure.storage.storagesharedkeycredential) class to create the credential that is used to sign the SAS. Next, create a new [AccountSasBuilder](/dotnet/api/azure.storage.sas.accountsasbuilder) object and call the [ToSasQueryParameters](/dotnet/api/azure.storage.sas.accountsasbuilder.tosasqueryparameters) to get the SAS token string.
+An account SAS is signed with the account access key. Use the [StorageSharedKeyCredential](/dotnet/api/azure.storage.storagesharedkeycredential) class to create the credential that is used to sign the SAS. Next, create a new [AccountSasBuilder](/dotnet/api/azure.storage.sas.accountsasbuilder) object and call the [ToSasQueryParameters](/dotnet/api/azure.storage.sas.accountsasbuilder.tosasqueryparameters) method to get the SAS token string.
:::code language="csharp" source="~/azure-storage-snippets/blobs/howto/dotnet/dotnet-v12/Sas.cs" id="Snippet_GetAccountSASToken":::
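The referenced snippet isn't shown inline. A minimal sketch that mirrors the scenario from the older sample (Blob and File services, service-level resource types, read, write, and list permissions, HTTPS only, 24-hour expiry), assuming the Azure.Storage.Sas namespace from the v12 library, might look like this:

```csharp
using System;
using Azure.Storage;
using Azure.Storage.Sas;

// Placeholder values; replace with your account name and key.
string accountName = "<storage-account>";
string accountKey = "<account-key>";

StorageSharedKeyCredential sharedKeyCredential =
    new StorageSharedKeyCredential(accountName, accountKey);

// Define the account SAS: Blob and File services, service-level resources,
// HTTPS only, valid for 24 hours.
AccountSasBuilder sasBuilder = new AccountSasBuilder()
{
    Services = AccountSasServices.Blobs | AccountSasServices.Files,
    ResourceTypes = AccountSasResourceTypes.Service,
    ExpiresOn = DateTimeOffset.UtcNow.AddHours(24),
    Protocol = SasProtocol.Https
};
sasBuilder.SetPermissions(AccountSasPermissions.Read |
                          AccountSasPermissions.Write |
                          AccountSasPermissions.List);

// Sign the SAS with the shared key credential and get the token string.
string sasToken = sasBuilder.ToSasQueryParameters(sharedKeyCredential).ToString();
```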
-### [.NET v11 SDK](#tab/dotnetv11)
-
-To create an account SAS for a container, call the [CloudStorageAccount.GetSharedAccessSignature](/dotnet/api/microsoft.azure.storage.cloudstorageaccount.getsharedaccesssignature) method.
-
-The following code example creates an account SAS that is valid for the Blob and File services, and gives the client permissions read, write, and list permissions to access service-level APIs. The account SAS restricts the protocol to HTTPS, so the request must be made with HTTPS. Remember to replace placeholder values in angle brackets with your own values:
-
-```csharp
-static string GetAccountSASToken()
-{
- // To create the account SAS, you need to use Shared Key credentials. Modify for your account.
- const string ConnectionString = "DefaultEndpointsProtocol=https;AccountName=<storage-account>;AccountKey=<account-key>";
- CloudStorageAccount storageAccount = CloudStorageAccount.Parse(ConnectionString);
-
- // Create a new access policy for the account.
- SharedAccessAccountPolicy policy = new SharedAccessAccountPolicy()
- {
- Permissions = SharedAccessAccountPermissions.Read |
- SharedAccessAccountPermissions.Write |
- SharedAccessAccountPermissions.List,
- Services = SharedAccessAccountServices.Blob | SharedAccessAccountServices.File,
- ResourceTypes = SharedAccessAccountResourceTypes.Service,
- SharedAccessExpiryTime = DateTime.UtcNow.AddHours(24),
- Protocols = SharedAccessProtocol.HttpsOnly
- };
-
- // Return the SAS token.
- return storageAccount.GetSharedAccessSignature(policy);
-}
-```
--- ## Use an account SAS from a client To use the account SAS to access service-level APIs for the Blob service, construct a Blob service client object using the SAS and the Blob storage endpoint for your storage account.
-### [.NET v12 SDK](#tab/dotnet)
- :::code language="csharp" source="~/azure-storage-snippets/blobs/howto/dotnet/dotnet-v12/Sas.cs" id="Snippet_UseAccountSAS":::
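Again, the published snippet is included from an external file. A rough sketch, assuming the `sasToken` string produced in the previous sketch and a placeholder account name, might look like this:

```csharp
using System;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

// Placeholder account name; append the SAS token as the query string of the service URI.
string accountName = "<storage-account>";
string sasToken = "<sas-token>"; // for example, the token created in the previous sketch
Uri serviceUri = new Uri($"https://{accountName}.blob.core.windows.net?{sasToken}");

// The SAS in the URI authorizes the client, so no separate credential is needed.
BlobServiceClient blobServiceClient = new BlobServiceClient(serviceUri);

// The account SAS permits service-level operations, such as reading service properties.
BlobServiceProperties properties = blobServiceClient.GetProperties().Value;
Console.WriteLine(properties.Logging.Version);
```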
-### [.NET v11 SDK](#tab/dotnetv11)
-
-In this snippet, replace the `<storage-account>` placeholder with the name of your storage account.
-
-```csharp
-static void UseAccountSAS(string sasToken)
-{
- // Create new storage credentials using the SAS token.
- StorageCredentials accountSAS = new StorageCredentials(sasToken);
- // Use these credentials and the account name to create a Blob service client.
- CloudStorageAccount accountWithSAS = new CloudStorageAccount(accountSAS, "<storage-account>", endpointSuffix: null, useHttps: true);
- CloudBlobClient blobClientWithSAS = accountWithSAS.CreateCloudBlobClient();
-
- // Now set the service properties for the Blob client created with the SAS.
- blobClientWithSAS.SetServiceProperties(new ServiceProperties()
- {
- HourMetrics = new MetricsProperties()
- {
- MetricsLevel = MetricsLevel.ServiceAndApi,
- RetentionDays = 7,
- Version = "1.0"
- },
- MinuteMetrics = new MetricsProperties()
- {
- MetricsLevel = MetricsLevel.ServiceAndApi,
- RetentionDays = 7,
- Version = "1.0"
- },
- Logging = new LoggingProperties()
- {
- LoggingOperations = LoggingOperations.All,
- RetentionDays = 14,
- Version = "1.0"
- }
- });
-
- // The permissions granted by the account SAS also permit you to retrieve service properties.
- ServiceProperties serviceProperties = blobClientWithSAS.GetServiceProperties();
- Console.WriteLine(serviceProperties.HourMetrics.MetricsLevel);
- Console.WriteLine(serviceProperties.HourMetrics.RetentionDays);
- Console.WriteLine(serviceProperties.HourMetrics.Version);
-}
-```
--- ## Next steps - [Grant limited access to Azure Storage resources using shared access signatures (SAS)](storage-sas-overview.md) - [Create an account SAS](/rest/api/storageservices/create-account-sas)+
+## Resources
+
+For related code samples using deprecated .NET version 11.x SDKs, see [Code samples using .NET version 11.x](../blobs/blob-v11-samples-dotnet.md#create-an-account-sas).
storage Storage Monitoring Diagnosing Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-monitoring-diagnosing-troubleshooting.md
The "[Appendices]" include information about using other tools such as Wireshark
## <a name="monitoring-your-storage-service"></a>Monitoring your storage service
-If you are familiar with Windows performance monitoring, you can think of Storage Metrics as being an Azure Storage equivalent of Windows Performance Monitor counters. In Storage Metrics, you will find a comprehensive set of metrics (counters in Windows Performance Monitor terminology) such as service availability, total number of requests to service, or percentage of successful requests to service. For a full list of the available metrics, see [Storage Analytics Metrics Table Schema](/rest/api/storageservices/Storage-Analytics-Metrics-Table-Schema). You can specify whether you want the storage service to collect and aggregate metrics every hour or every minute. For more information about how to enable metrics and monitor your storage accounts, see [Enabling storage metrics and viewing metrics data](../blobs/monitor-blob-storage.md).
+If you're familiar with Windows performance monitoring, you can think of Storage Metrics as being an Azure Storage equivalent of Windows Performance Monitor counters. In Storage Metrics, you'll find a comprehensive set of metrics (counters in Windows Performance Monitor terminology) such as service availability, total number of requests to service, or percentage of successful requests to service. For a full list of the available metrics, see [Storage Analytics Metrics Table Schema](/rest/api/storageservices/Storage-Analytics-Metrics-Table-Schema). You can specify whether you want the storage service to collect and aggregate metrics every hour or every minute. For more information about how to enable metrics and monitor your storage accounts, see [Enabling storage metrics and viewing metrics data](../blobs/monitor-blob-storage.md).
You can choose which hourly metrics you want to display in the [Azure portal](https://portal.azure.com) and configure rules that notify administrators by email whenever an hourly metric exceeds a particular threshold. For more information, see [Receive Alert Notifications](../../azure-monitor/alerts/alerts-overview.md).
-We recommend you review [Azure Monitor for Storage](./storage-insights-overview.md?toc=/azure/azure-monitor/toc.json) (preview). It is a feature of Azure Monitor that offers comprehensive monitoring of your Azure Storage accounts by delivering a unified view of your Azure Storage services performance, capacity, and availability. It does not require you to enable or configure anything, and you can immediately view these metrics from the pre-defined interactive charts and other visualizations included.
+We recommend you review [Azure Monitor for Storage](./storage-insights-overview.md?toc=/azure/azure-monitor/toc.json) (preview). It's a feature of Azure Monitor that offers comprehensive monitoring of your Azure Storage accounts by delivering a unified view of your Azure Storage services performance, capacity, and availability. It doesn't require you to enable or configure anything, and you can immediately view these metrics from the pre-defined interactive charts and other visualizations included.
The storage service collects metrics using a best effort, but may not record every storage operation. In the Azure portal, you can view metrics such as availability, total requests, and average latency numbers for a storage account. A notification rule has also been set up to alert an administrator if availability drops below a certain level. From viewing this data, one possible area for investigation is the table service success percentage being below 100% (for more information, see the section "[Metrics show low PercentSuccess or analytics log entries have operations with transaction status of ClientOtherErrors]").
-You should continuously monitor your Azure applications to ensure they are healthy and performing as expected by:
+You should continuously monitor your Azure applications to ensure they're healthy and performing as expected by:
-- Establishing some baseline metrics for application that will enable you to compare current data and identify any significant changes in the behavior of Azure storage and your application. The values of your baseline metrics will, in many cases, be application specific and you should establish them when you are performance testing your application.
+- Establishing some baseline metrics for your application that will enable you to compare current data and identify any significant changes in the behavior of Azure storage and your application. The values of your baseline metrics will, in many cases, be application specific and you should establish them when you're performance testing your application.
- Recording minute metrics and using them to monitor actively for unexpected errors and anomalies such as spikes in error counts or request rates. - Recording hourly metrics and using them to monitor average values such as average error counts and request rates. - Investigating potential issues using diagnostics tools as discussed later in the section "[Diagnosing storage issues]."
For more information about Application Insights for Azure DevOps, see the append
### <a name="monitoring-capacity"></a>Monitoring capacity
-Storage Metrics only stores capacity metrics for the blob service because blobs typically account for the largest proportion of stored data (at the time of writing, it is not possible to use Storage Metrics to monitor the capacity of your tables and queues). You can find this data in the **$MetricsCapacityBlob** table if you have enabled monitoring for the Blob service. Storage Metrics records this data once per day, and you can use the value of the **RowKey** to determine whether the row contains an entity that relates to user data (value **data**) or analytics data (value **analytics**). Each stored entity contains information about the amount of storage used (**Capacity** measured in bytes) and the current number of containers (**ContainerCount**) and blobs (**ObjectCount**) in use in the storage account. For more information about the capacity metrics stored in the **$MetricsCapacityBlob** table, see [Storage Analytics Metrics Table Schema](/rest/api/storageservices/Storage-Analytics-Metrics-Table-Schema).
+Storage Metrics only stores capacity metrics for the blob service because blobs typically account for the largest proportion of stored data (at the time of writing, it's not possible to use Storage Metrics to monitor the capacity of your tables and queues). You can find this data in the **$MetricsCapacityBlob** table if you have enabled monitoring for the Blob service. Storage Metrics records this data once per day, and you can use the value of the **RowKey** to determine whether the row contains an entity that relates to user data (value **data**) or analytics data (value **analytics**). Each stored entity contains information about the amount of storage used (**Capacity** measured in bytes) and the current number of containers (**ContainerCount**) and blobs (**ObjectCount**) in use in the storage account. For more information about the capacity metrics stored in the **$MetricsCapacityBlob** table, see [Storage Analytics Metrics Table Schema](/rest/api/storageservices/Storage-Analytics-Metrics-Table-Schema).
> [!NOTE]
-> You should monitor these values for an early warning that you are approaching the capacity limits of your storage account. In the Azure portal, you can add alert rules to notify you if aggregate storage use exceeds or falls below thresholds that you specify.
+> You should monitor these values for an early warning that you're approaching the capacity limits of your storage account. In the Azure portal, you can add alert rules to notify you if aggregate storage use exceeds or falls below thresholds that you specify.
> >
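The capacity rows described above can also be read programmatically through the Table endpoint. The following sketch is not from the article; it assumes the Azure.Data.Tables package, Shared Key access, and placeholder account values:

```csharp
using System;
using Azure.Data.Tables;

// Placeholder values; replace with your account name and key.
string accountName = "<storage-account>";
string accountKey = "<account-key>";

// Capacity metrics are stored in the $MetricsCapacityBlob table on the Table endpoint.
TableClient metricsTable = new TableClient(
    new Uri($"https://{accountName}.table.core.windows.net"),
    "$MetricsCapacityBlob",
    new TableSharedKeyCredential(accountName, accountKey));

// List the daily capacity entries for user data (RowKey "data") rather than analytics data.
foreach (TableEntity entity in metricsTable.Query<TableEntity>(filter: "RowKey eq 'data'"))
{
    Console.WriteLine($"{entity.PartitionKey}: " +
        $"Capacity={entity.GetInt64("Capacity")} bytes, " +
        $"Containers={entity.GetInt64("ContainerCount")}, " +
        $"Blobs={entity.GetInt64("ObjectCount")}");
}
```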
For help estimating the size of various storage objects such as blobs, see the b
You should monitor the availability of the storage services in your storage account by monitoring the value in the **Availability** column in the hourly or minute metrics tables: **$MetricsHourPrimaryTransactionsBlob**, **$MetricsHourPrimaryTransactionsTable**, **$MetricsHourPrimaryTransactionsQueue**, **$MetricsMinutePrimaryTransactionsBlob**, **$MetricsMinutePrimaryTransactionsTable**, **$MetricsMinutePrimaryTransactionsQueue**, **$MetricsCapacityBlob**. The **Availability** column contains a percentage value that indicates the availability of the service or the API operation represented by the row (the **RowKey** shows if the row contains metrics for the service as a whole or for a specific API operation).
-Any value less than 100% indicates that some storage requests are failing. You can see why they are failing by examining the other columns in the metrics data that show the numbers of requests with different error types such as **ServerTimeoutError**. You should expect to see **Availability** fall temporarily below 100% for reasons such as transient server timeouts while the service moves partitions to better load-balance request; the retry logic in your client application should handle such intermittent conditions. The article [Storage Analytics Logged Operations and Status Messages](/rest/api/storageservices/Storage-Analytics-Logged-Operations-and-Status-Messages) lists the transaction types that Storage Metrics includes in its **Availability** calculation.
+Any value less than 100% indicates that some storage requests are failing. You can see why they're failing by examining the other columns in the metrics data that show the numbers of requests with different error types such as **ServerTimeoutError**. You should expect to see **Availability** fall temporarily below 100% for reasons such as transient server timeouts while the service moves partitions to better load-balance requests; the retry logic in your client application should handle such intermittent conditions. The article [Storage Analytics Logged Operations and Status Messages](/rest/api/storageservices/Storage-Analytics-Logged-Operations-and-Status-Messages) lists the transaction types that Storage Metrics includes in its **Availability** calculation.
In the [Azure portal](https://portal.azure.com), you can add alert rules to notify you if **Availability** for a service falls below a threshold that you specify.
To monitor the performance of the storage services, you can use the following me
- The values in the **TotalIngress** and **TotalEgress** columns show the total amount of data, in bytes, coming in to and going out of your storage service or through a specific API operation type. - The values in the **TotalRequests** column show the total number of requests that the storage service or API operation is receiving. **TotalRequests** is the total number of requests that the storage service receives.
-Typically, you will monitor for unexpected changes in any of these values as an indicator that you have an issue that requires investigation.
+Typically, you'll monitor for unexpected changes in any of these values as an indicator that you have an issue that requires investigation.
In the [Azure portal](https://portal.azure.com), you can add alert rules to notify you if any of the performance metrics for this service fall below or exceed a threshold that you specify.
The "[Troubleshooting guidance]" section of this guide describes some common sto
There are a number of ways that you might become aware of a problem or issue in your application, including: - A major failure that causes the application to crash or to stop working.-- Significant changes from baseline values in the metrics you are monitoring as described in the previous section "[Monitoring your storage service]."-- Reports from users of your application that some particular operation didn't complete as expected or that some feature is not working.
+- Significant changes from baseline values in the metrics you're monitoring as described in the previous section "[Monitoring your storage service]."
+- Reports from users of your application that some particular operation didn't complete as expected or that some feature isn't working.
- Errors generated within your application that appear in log files or through some other notification method. Typically, issues related to Azure storage services fall into one of four broad categories: - Your application has a performance issue, either reported by your users, or revealed by changes in the performance metrics.-- There is a problem with the Azure Storage infrastructure in one or more regions.
+- There's a problem with the Azure Storage infrastructure in one or more regions.
- Your application is encountering an error, either reported by your users, or revealed by an increase in one of the error count metrics you monitor. - During development and test, you may be using the local storage emulator; you may encounter some issues that relate specifically to usage of the storage emulator.
Service health issues are typically outside of your control. The [Azure portal](
### <a name="performance-issues"></a>Performance issues
-The performance of an application can be subjective, especially from a user perspective. Therefore, it is important to have baseline metrics available to help you identify where there might be a performance issue. Many factors might affect the performance of an Azure storage service from the client application perspective. These factors might operate in the storage service, in the client, or in the network infrastructure; therefore it is important to have a strategy for identifying the origin of the performance issue.
+The performance of an application can be subjective, especially from a user perspective. Therefore, it's important to have baseline metrics available to help you identify where there might be a performance issue. Many factors might affect the performance of an Azure storage service from the client application perspective. These factors might operate in the storage service, in the client, or in the network infrastructure; therefore it's important to have a strategy for identifying the origin of the performance issue.
-After you have identified the likely location of the cause of the performance issue from the metrics, you can then use the log files to find detailed information to diagnose and troubleshoot the problem further.
+After you've identified the likely location of the cause of the performance issue from the metrics, you can then use the log files to find detailed information to diagnose and troubleshoot the problem further.
The section "[Troubleshooting guidance]" later in this guide provides more information about some common performance-related issues you may encounter.
End-to-end tracing using a variety of log files is a useful technique for invest
### <a name="correlating-log-data"></a>Correlating log data
-When viewing logs from client applications, network traces, and server-side storage logging it is critical to be able to correlate requests across the different log files. The log files include a number of different fields that are useful as correlation identifiers. The client request ID is the most useful field to use to correlate entries in the different logs. However sometimes, it can be useful to use either the server request ID or timestamps. The following sections provide more details about these options.
+When viewing logs from client applications, network traces, and server-side storage logging, it's critical to be able to correlate requests across the different log files. The log files include a number of different fields that are useful as correlation identifiers. The client request ID is the most useful field to use to correlate entries in the different logs. However, sometimes it can be useful to use either the server request ID or timestamps. The following sections provide more details about these options.
### <a name="client-request-id"></a>Client request ID
The Storage Client Library automatically generates a unique client request ID fo
- In the server-side Storage Logging log, the client request ID appears in the Client request ID column. > [!NOTE]
-> It is possible for multiple requests to share the same client request ID because the client can assign this value (although the Storage Client Library assigns a
+> It's possible for multiple requests to share the same client request ID because the client can assign this value (although the Storage Client Library assigns a
> new value automatically). When the client retries, all attempts share the same client request ID. In the case of a batch sent from the client, the batch has a single client request ID. > >
The storage service automatically generates server request IDs.
> >
-# [.NET v12 SDK](#tab/dotnet)
- The code sample below demonstrates how to use a custom client request ID. :::code language="csharp" source="~/azure-storage-snippets/blobs/howto/dotnet/dotnet-v12/Monitoring.cs" id="Snippet_UseCustomRequestID":::
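For an inline illustration of the same idea, here's a minimal sketch (assuming the Azure.Storage.Blobs v12 package and a `connectionString` variable) of stamping requests with your own client request ID so client-side and server-side log entries are easy to correlate:

```csharp
using System;
using Azure.Core.Pipeline;
using Azure.Storage.Blobs;

// Sketch only: build a recognizable client request ID and apply it to every
// request issued inside the scope.
string clientRequestId = $"contoso-app {Guid.NewGuid()}";

var containerClient = new BlobContainerClient(connectionString, "democontainer");

using (HttpPipeline.CreateClientRequestIdScope(clientRequestId))
{
    // This download carries the custom x-ms-client-request-id header.
    containerClient.GetBlobClient("testImage.jpg").DownloadTo("./testImage.jpg");
}
```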
-# [.NET v11 SDK](#tab/dotnet11)
-
-If the Storage Client Library throws a **StorageException** in the client, the **RequestInformation** property contains a **RequestResult** object that includes a **ServiceRequestID** property. You can also access a **RequestResult** object from an **OperationContext** instance.
-
-The code sample below demonstrates how to set a custom **ClientRequestId** value by attaching an **OperationContext** object the request to the storage service. It also shows how to retrieve the **ServerRequestId** value from the response message.
-
-```csharp
-//Parse the connection string for the storage account.
-const string ConnectionString = "DefaultEndpointsProtocol=https;AccountName=account-name;AccountKey=account-key";
-CloudStorageAccount storageAccount = CloudStorageAccount.Parse(ConnectionString);
-CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
-
-// Create an Operation Context that includes custom ClientRequestId string based on constants defined within the application along with a Guid.
-OperationContext oc = new OperationContext();
-oc.ClientRequestID = String.Format("{0} {1} {2} {3}", HOSTNAME, APPNAME, USERID, Guid.NewGuid().ToString());
-
-try
-{
- CloudBlobContainer container = blobClient.GetContainerReference("democontainer");
- ICloudBlob blob = container.GetBlobReferenceFromServer("testImage.jpg", null, null, oc);
- var downloadToPath = string.Format("./{0}", blob.Name);
- using (var fs = File.OpenWrite(downloadToPath))
- {
- blob.DownloadToStream(fs, null, null, oc);
- Console.WriteLine("\t Blob downloaded to file: {0}", downloadToPath);
- }
-}
-catch (StorageException storageException)
-{
- Console.WriteLine("Storage exception {0} occurred", storageException.Message);
- // Multiple results may exist due to client side retry logic - each retried operation will have a unique ServiceRequestId
- foreach (var result in oc.RequestResults)
- {
- Console.WriteLine("HttpStatus: {0}, ServiceRequestId {1}", result.HttpStatusCode, result.ServiceRequestID);
- }
-}
-```
--- ### <a name="timestamps"></a>Timestamps
-You can also use timestamps to locate related log entries, but be careful of any clock skew between the client and server that may exist. Search plus or minus 15 minutes for matching server-side entries based on the timestamp on the client. Remember that the blob metadata for the blobs containing metrics indicates the time range for the metrics stored in the blob. This time range is useful if you have many metrics blobs for the same minute or hour.
+You can also use timestamps to locate related log entries, but be careful of any clock skew between the client and server that may exist. Search plus or minus 15 minutes for matching server-side entries based on the timestamp on the client. Remember that the blob metadata for the blobs containing metrics indicates the time range for the metrics stored in the blob. This time range is useful if you have many metrics blobs for the same minute or hour.
## <a name="troubleshooting-guidance"></a>Troubleshooting guidance
Does your issue relate to the performance of one of the storage services?
- [Metrics show high AverageE2ELatency and low AverageServerLatency] - [Metrics show low AverageE2ELatency and low AverageServerLatency but the client is experiencing high latency] - [Metrics show high AverageServerLatency]-- [You are experiencing unexpected delays in message delivery on a queue]
+- [You're experiencing unexpected delays in message delivery on a queue]
Does your issue relate to the availability of one of the storage services?
Does your issue relate to the availability of one of the storage services?
[Your issue arises from using the storage emulator for development or test]
-[You are encountering problems installing the Azure SDK for .NET]
+[You're encountering problems installing the Azure SDK for .NET]
[You have a different issue with a storage service]
The storage service only calculates the metric **AverageE2ELatency** for success
Possible reasons for the client responding slowly include having a limited number of available connections or threads, or being low on resources such as CPU, memory or network bandwidth. You may be able to resolve the issue by modifying the client code to be more efficient (for example by using asynchronous calls to the storage service), or by using a larger Virtual Machine (with more cores and more memory).
-For the table and queue services, the Nagle algorithm can also cause high **AverageE2ELatency** as compared to **AverageServerLatency**: for more information, see the post [Nagle's Algorithm is Not Friendly towards Small Requests](/archive/blogs/windowsazurestorage/nagles-algorithm-is-not-friendly-towards-small-requests). You can disable the Nagle algorithm in code by using the **ServicePointManager** class in the **System.Net** namespace. You should do this before you make any calls to the table or queue services in your application since this does not affect connections that are already open. The following example comes from the **Application_Start** method in a worker role.
-
-# [.NET v12 SDK](#tab/dotnet)
+For the table and queue services, the Nagle algorithm can also cause high **AverageE2ELatency** as compared to **AverageServerLatency**: for more information, see the post [Nagle's Algorithm is Not Friendly towards Small Requests](/archive/blogs/windowsazurestorage/nagles-algorithm-is-not-friendly-towards-small-requests). You can disable the Nagle algorithm in code by using the **ServicePointManager** class in the **System.Net** namespace. You should do this before you make any calls to the table or queue services in your application since this doesn't affect connections that are already open. The following example comes from the **Application_Start** method in a worker role.
:::code language="csharp" source="~/azure-storage-snippets/blobs/howto/dotnet/dotnet-v12/Monitoring.cs" id="Snippet_DisableNagle":::
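If you can't use the linked snippet, the following is a minimal sketch of the ServicePointManager approach described above; the storage account name is a placeholder, and this applies to .NET Framework clients where ServicePointManager governs outgoing connections:

```csharp
using System;
using System.Net;

// Sketch only: turn off the Nagle algorithm for the queue and table endpoints
// before the application sends its first request to those services.
ServicePointManager.FindServicePoint(
    new Uri("https://<storage-account>.queue.core.windows.net")).UseNagleAlgorithm = false;
ServicePointManager.FindServicePoint(
    new Uri("https://<storage-account>.table.core.windows.net")).UseNagleAlgorithm = false;
```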
-# [.NET v11 SDK](#tab/dotnet11)
-
-```csharp
-var storageAccount = CloudStorageAccount.Parse(connStr);
-ServicePoint queueServicePoint = ServicePointManager.FindServicePoint(storageAccount.QueueEndpoint);
-queueServicePoint.UseNagleAlgorithm = false;
-```
--- You should check the client-side logs to see how many requests your client application is submitting, and check for general .NET related performance bottlenecks in your client such as CPU, .NET garbage collection, network utilization, or memory. As a starting point for troubleshooting .NET client applications, see [Debugging, Tracing, and Profiling](/dotnet/framework/debug-trace-profile/). #### Investigating network latency issues
One possible reason for the client delaying sending requests is that there are a
Also check whether the client is performing multiple retries, and investigate the reason if it is. To determine whether the client is performing multiple retries, you can: -- Examine the Storage Analytics logs. If multiple retries are happening, you will see multiple operations with the same client request ID but with different server request IDs.
+- Examine the Storage Analytics logs. If multiple retries are happening, you'll see multiple operations with the same client request ID but with different server request IDs.
- Examine the client logs. Verbose logging will indicate that a retry has occurred. - Debug your code, and check the properties of the **OperationContext** object associated with the request. If the operation has retried, the **RequestResults** property will include multiple unique server request IDs. You can also check the start and end times for each request. For more information, see the code sample in the section [Server request ID].
For more information about using Wireshark to troubleshoot network issues, see "
In the case of high **AverageServerLatency** for blob download requests, you should use the Storage Logging logs to see if there are repeated requests for the same blob (or set of blobs). For blob upload requests, you should investigate what block size the client is using (for example, blocks less than 64 K in size can result in overheads unless the reads are also in less than 64 K chunks), and if multiple clients are uploading blocks to the same blob in parallel. You should also check the per-minute metrics for spikes in the number of requests that result in exceeding the per second scalability targets: also see "[Metrics show an increase in PercentTimeoutError]."
-If you are seeing high **AverageServerLatency** for blob download requests when there are repeated requests the same blob or set of blobs, then you should consider caching these blobs using Azure Cache or the Azure Content Delivery Network (CDN). For upload requests, you can improve the throughput by using a larger block size. For queries to tables, it is also possible to implement client-side caching on clients that perform the same query operations and where the data doesn't change frequently.
+If you're seeing high **AverageServerLatency** for blob download requests when there are repeated requests for the same blob or set of blobs, you should consider caching these blobs using Azure Cache or the Azure Content Delivery Network (CDN). For upload requests, you can improve the throughput by using a larger block size. For queries to tables, it's also possible to implement client-side caching on clients that perform the same query operations and where the data doesn't change frequently.
High **AverageServerLatency** values can also be a symptom of poorly designed tables or queries that result in scan operations or that follow the append/prepend anti-pattern. For more information, see "[Metrics show an increase in PercentThrottlingError]".
High **AverageServerLatency** values can also be a symptom of poorly designed ta
> >
-### <a name="you-are-experiencing-unexpected-delays-in-message-delivery"></a>You are experiencing unexpected delays in message delivery on a queue
+### <a name="you-are-experiencing-unexpected-delays-in-message-delivery"></a>You're experiencing unexpected delays in message delivery on a queue
-If you are experiencing a delay between the time an application adds a message to a queue and the time it becomes available to read from the queue, then you should take the following steps to diagnose the issue:
+If you're experiencing a delay between the time an application adds a message to a queue and the time it becomes available to read from the queue, then you should take the following steps to diagnose the issue:
-- Verify the application is successfully adding the messages to the queue. Check that the application is not retrying the **AddMessage** method several times before succeeding. The Storage Client Library logs will show any repeated retries of storage operations.-- Verify there is no clock skew between the worker role that adds the message to the queue and the worker role that reads the message from the queue that makes it appear as if there is a delay in processing.
+- Verify the application is successfully adding the messages to the queue. Check that the application isn't retrying the **AddMessage** method several times before succeeding. The Storage Client Library logs will show any repeated retries of storage operations.
+- Verify there's no clock skew between the worker role that adds the message to the queue and the worker role that reads the message from the queue that makes it appear as if there's a delay in processing.
- Check if the worker role that reads the messages from the queue is failing. If a queue client calls the **GetMessage** method but fails to respond with an acknowledgment, the message will remain invisible on the queue until the **invisibilityTimeout** period expires. At this point, the message becomes available for processing again. - Check if the queue length is growing over time. This can occur if you do not have sufficient workers available to process all of the messages that other workers are placing on the queue. Also check the metrics to see if delete requests are failing and the dequeue count on messages, which might indicate repeated failed attempts to delete the message. - Examine the Storage Logging logs for any queue operations that have higher than expected **E2ELatency** and **ServerLatency** values over a longer period of time than usual.
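To make the visibility-timeout behavior in the list above concrete, here's a minimal sketch using the v12 Azure.Storage.Queues client; the connection string, queue name, and `ProcessOrder` method are placeholders. A message that's received but never deleted reappears once its visibility timeout expires, which looks like a delivery delay and raises the dequeue count.

```csharp
using System;
using Azure;
using Azure.Storage.Queues;
using Azure.Storage.Queues.Models;

// Sketch only: receive a batch, process each message, and delete it promptly so it
// doesn't become visible again after the visibility timeout.
var queueClient = new QueueClient(connectionString, "orders");

Response<QueueMessage[]> received = queueClient.ReceiveMessages(
    maxMessages: 16,
    visibilityTimeout: TimeSpan.FromMinutes(5));

foreach (QueueMessage message in received.Value)
{
    ProcessOrder(message.MessageText);                                 // placeholder application logic
    queueClient.DeleteMessage(message.MessageId, message.PopReceipt);  // acknowledge by deleting
}
```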
If the **PercentThrottlingError** metric show an increase in the percentage of r
- [Transient increase in PercentThrottlingError] - [Permanent increase in PercentThrottlingError error]
-An increase in **PercentThrottlingError** often occurs at the same time as an increase in the number of storage requests, or when you are initially load testing your application. This may also manifest itself in the client as "503 Server Busy" or "500 Operation Timeout" HTTP status messages from storage operations.
+An increase in **PercentThrottlingError** often occurs at the same time as an increase in the number of storage requests, or when you're initially load testing your application. This may also manifest itself in the client as "503 Server Busy" or "500 Operation Timeout" HTTP status messages from storage operations.
#### <a name="transient-increase-in-PercentThrottlingError"></a>Transient increase in PercentThrottlingError
-If you are seeing spikes in the value of **PercentThrottlingError** that coincide with periods of high activity for the application, you implement an exponential (not linear) back-off strategy for retries in your client. Back-off retries reduce the immediate load on the partition and help your application to smooth out spikes in traffic. For more information about how to implement retry policies using the Storage Client Library, see the [Microsoft.Azure.Storage.RetryPolicies namespace](/dotnet/api/microsoft.azure.storage.retrypolicies).
+If you're seeing spikes in the value of **PercentThrottlingError** that coincide with periods of high activity for the application, you should implement an exponential (not linear) back-off strategy for retries in your client. Back-off retries reduce the immediate load on the partition and help your application to smooth out spikes in traffic. For more information about how to implement retry policies using the Storage Client Library, see the [Microsoft.Azure.Storage.RetryPolicies namespace](/dotnet/api/microsoft.azure.storage.retrypolicies).
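If you're on the v12 `Azure.Storage.Blobs` client rather than the `Microsoft.Azure.Storage` library linked above, the equivalent knob is the client options' retry settings. A minimal sketch, with a placeholder account URL:

```csharp
using System;
using Azure.Core;
using Azure.Storage.Blobs;

// Sketch only: exponential back-off spreads retries out instead of immediately
// re-hitting a throttled partition.
var options = new BlobClientOptions();
options.Retry.Mode = RetryMode.Exponential;
options.Retry.MaxRetries = 5;
options.Retry.Delay = TimeSpan.FromSeconds(2);      // initial back-off
options.Retry.MaxDelay = TimeSpan.FromSeconds(32);  // upper bound between attempts

var serviceClient = new BlobServiceClient(
    new Uri("https://<storage-account>.blob.core.windows.net"), options);
```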
> [!NOTE] > You may also see spikes in the value of **PercentThrottlingError** that do not coincide with periods of high activity for the application: the most likely cause here is the storage service moving partitions to improve load balancing.
If you are seeing spikes in the value of **PercentThrottlingError** that coincid
#### <a name="permanent-increase-in-PercentThrottlingError"></a>Permanent increase in PercentThrottlingError error
-If you are seeing a consistently high value for **PercentThrottlingError** following a permanent increase in your transaction volumes, or when you are performing your initial load tests on your application, then you need to evaluate how your application is using storage partitions and whether it is approaching the scalability targets for a storage account. For example, if you are seeing throttling errors on a queue (which counts as a single partition), then you should consider using additional queues to spread the transactions across multiple partitions. If you are seeing throttling errors on a table, you need to consider using a different partitioning scheme to spread your transactions across multiple partitions by using a wider range of partition key values. One common cause of this issue is the prepend/append anti-pattern where you select the date as the partition key and then all data on a particular day is written to one partition: under load, this can result in a write bottleneck. Either consider a different partitioning design or evaluate whether using blob storage might be a better solution. Also check whether throttling is occurring as a result of spikes in your traffic and investigate ways of smoothing your pattern of requests.
+If you're seeing a consistently high value for **PercentThrottlingError** following a permanent increase in your transaction volumes, or when you're performing your initial load tests on your application, then you need to evaluate how your application is using storage partitions and whether it's approaching the scalability targets for a storage account. For example, if you're seeing throttling errors on a queue (which counts as a single partition), then you should consider using additional queues to spread the transactions across multiple partitions. If you're seeing throttling errors on a table, you need to consider using a different partitioning scheme to spread your transactions across multiple partitions by using a wider range of partition key values. One common cause of this issue is the prepend/append anti-pattern where you select the date as the partition key and then all data on a particular day is written to one partition: under load, this can result in a write bottleneck. Either consider a different partitioning design or evaluate whether using blob storage might be a better solution. Also check whether throttling is occurring as a result of spikes in your traffic and investigate ways of smoothing your pattern of requests.
-If you distribute your transactions across multiple partitions, you must still be aware of the scalability limits set for the storage account. For example, if you used ten queues each processing the maximum of 2,000 1KB messages per second, you will be at the overall limit of 20,000 messages per second for the storage account. If you need to process more than 20,000 entities per second, you should consider using multiple storage accounts. You should also bear in mind that the size of your requests and entities has an impact on when the storage service throttles your clients: if you have larger requests and entities, you may be throttled sooner.
+If you distribute your transactions across multiple partitions, you must still be aware of the scalability limits set for the storage account. For example, if you used ten queues each processing the maximum of 2,000 1KB messages per second, you'll be at the overall limit of 20,000 messages per second for the storage account. If you need to process more than 20,000 entities per second, you should consider using multiple storage accounts. You should also bear in mind that the size of your requests and entities has an impact on when the storage service throttles your clients: if you have larger requests and entities, you may be throttled sooner.
Inefficient query design can also cause you to hit the scalability limits for table partitions. For example, a query with a filter that only selects one percent of the entities in a partition but that scans all the entities in a partition will need to access each entity. Every entity read will count towards the total number of transactions in that partition; therefore, you can easily reach the scalability targets.
The **PercentTimeoutError** metric is an aggregation of the following metrics: *
The server timeouts are caused by an error on the server. The client timeouts happen because an operation on the server has exceeded the timeout specified by the client; for example, a client using the Storage Client Library can set a timeout for an operation by using the **ServerTimeout** property of the **QueueRequestOptions** class.
-Server timeouts indicate a problem with the storage service that requires further investigation. You can use metrics to see if you are hitting the scalability limits for the service and to identify any spikes in traffic that might be causing this problem. If the problem is intermittent, it may be due to load-balancing activity in the service. If the problem is persistent and is not caused by your application hitting the scalability limits of the service, you should raise a support issue. For client timeouts, you must decide if the timeout is set to an appropriate value in the client and either change the timeout value set in the client or investigate how you can improve the performance of the operations in the storage service, for example by optimizing your table queries or reducing the size of your messages.
+Server timeouts indicate a problem with the storage service that requires further investigation. You can use metrics to see if you're hitting the scalability limits for the service and to identify any spikes in traffic that might be causing this problem. If the problem is intermittent, it may be due to load-balancing activity in the service. If the problem is persistent and isn't caused by your application hitting the scalability limits of the service, you should raise a support issue. For client timeouts, you must decide if the timeout is set to an appropriate value in the client and either change the timeout value set in the client or investigate how you can improve the performance of the operations in the storage service, for example by optimizing your table queries or reducing the size of your messages.
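With the v12 `Azure.Storage.Queues` client (rather than the `QueueRequestOptions` type mentioned above), the request timeout is a client option. A minimal sketch, with placeholder connection string and queue name:

```csharp
using System;
using Azure.Storage.Queues;

// Sketch only: bound how long the client waits for any single request before it
// gives up and (depending on retry settings) retries.
var options = new QueueClientOptions();
options.Retry.NetworkTimeout = TimeSpan.FromSeconds(30);

var queueClient = new QueueClient(connectionString, "orders", options);
```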
### <a name="metrics-show-an-increase-in-PercentNetworkError"></a>Metrics show an increase in PercentNetworkError
The most common cause of this error is a client disconnecting before a timeout e
### <a name="the-client-is-receiving-403-messages"></a>The client is receiving HTTP 403 (Forbidden) messages
-If your client application is throwing HTTP 403 (Forbidden) errors, a likely cause is that the client is using an expired Shared Access Signature (SAS) when it sends a storage request (although other possible causes include clock skew, invalid keys, and empty headers). If an expired SAS key is the cause, you will not see any entries in the server-side Storage Logging log data. The following table shows a sample from the client-side log generated by the Storage Client Library that illustrates this issue occurring:
+If your client application is throwing HTTP 403 (Forbidden) errors, a likely cause is that the client is using an expired Shared Access Signature (SAS) when it sends a storage request (although other possible causes include clock skew, invalid keys, and empty headers). If an expired SAS key is the cause, you won't see any entries in the server-side Storage Logging log data. The following table shows a sample from the client-side log generated by the Storage Client Library that illustrates this issue occurring:
| Source | Verbosity | Verbosity | Client request ID | Operation text | | | | | | |
If your client application is throwing HTTP 403 (Forbidden) errors, a likely cau
In this scenario, you should investigate why the SAS token is expiring before the client sends the token to the server: -- Typically, you should not set a start time when you create a SAS for a client to use immediately. If there are small clock differences between the host generating the SAS using the current time and the storage service, then it is possible for the storage service to receive a SAS that is not yet valid.
+- Typically, you should not set a start time when you create a SAS for a client to use immediately. If there are small clock differences between the host generating the SAS using the current time and the storage service, then it's possible for the storage service to receive a SAS that isn't yet valid.
- Do not set a very short expiry time on a SAS. Again, small clock differences between the host generating the SAS and the storage service can lead to a SAS apparently expiring earlier than anticipated.-- Does the version parameter in the SAS key (for example **sv=2015-04-05**) match the version of the Storage Client Library you are using? We recommend that you always use the latest version of the [Storage Client Library](https://www.nuget.org/packages/WindowsAzure.Storage/).
+- Does the version parameter in the SAS key (for example **sv=2015-04-05**) match the version of the Storage Client Library you're using? We recommend that you always use the latest version of the [Storage Client Library](https://www.nuget.org/packages/WindowsAzure.Storage/).
- If you regenerate your storage access keys, any existing SAS tokens may be invalidated. This issue may arise if you generate SAS tokens with a long expiry time for client applications to cache.
-If you are using the Storage Client Library to generate SAS tokens, then it is easy to build a valid token. However, if you are using the Storage REST API and constructing the SAS tokens by hand, see [Delegating Access with a Shared Access Signature](/rest/api/storageservices/delegate-access-with-shared-access-signature).
+If you're using the Storage Client Library to generate SAS tokens, then it's easy to build a valid token. However, if you're using the Storage REST API and constructing the SAS tokens by hand, see [Delegating Access with a Shared Access Signature](/rest/api/storageservices/delegate-access-with-shared-access-signature).
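For example, with the v12 `Azure.Storage.Sas` types, a minimal sketch that follows the guidance above (no explicit start time, comfortable expiry) might look like this; the account name, key, container, and blob names are placeholders:

```csharp
using System;
using Azure.Storage;
using Azure.Storage.Sas;

// Sketch only: omit StartsOn so minor clock skew can't make the token "not yet valid",
// and give the token a reasonable lifetime.
var sasBuilder = new BlobSasBuilder
{
    BlobContainerName = "democontainer",
    BlobName = "testImage.jpg",
    Resource = "b",                                   // "b" = blob
    ExpiresOn = DateTimeOffset.UtcNow.AddHours(1)
};
sasBuilder.SetPermissions(BlobSasPermissions.Read);

var credential = new StorageSharedKeyCredential("<account-name>", "<account-key>");
string sasToken = sasBuilder.ToSasQueryParameters(credential).ToString();
```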
### <a name="the-client-is-receiving-404-messages"></a>The client is receiving HTTP 404 (Not found) messages
-If the client application receives an HTTP 404 (Not found) message from the server, this implies that the object the client was attempting to use (such as an entity, table, blob, container, or queue) does not exist in the storage service. There are a number of possible reasons for this, such as:
+If the client application receives an HTTP 404 (Not found) message from the server, this implies that the object the client was attempting to use (such as an entity, table, blob, container, or queue) doesn't exist in the storage service. There are a number of possible reasons for this, such as:
- [The client or another process previously deleted the object] - [A Shared Access Signature (SAS) authorization issue]-- [Client-side JavaScript code does not have permission to access the object]
+- [Client-side JavaScript code doesn't have permission to access the object]
- [Network failure] #### <a name="client-previously-deleted-the-object"></a>The client or another process previously deleted the object
-In scenarios where the client is attempting to read, update, or delete data in a storage service it is usually easy to identify in the server-side logs a previous operation that deleted the object in question from the storage service. Often, the log data shows that another user or process deleted the object. In the server-side Storage Logging log, the operation-type and requested-object-key columns show when a client deleted an object.
+In scenarios where the client is attempting to read, update, or delete data in a storage service it's usually easy to identify in the server-side logs a previous operation that deleted the object in question from the storage service. Often, the log data shows that another user or process deleted the object. In the server-side Storage Logging log, the operation-type and requested-object-key columns show when a client deleted an object.
In the scenario where a client is attempting to insert an object, it may not be immediately obvious why this results in an HTTP 404 (Not found) response given that the client is creating a new object. However, if the client is creating a blob it must be able to find the blob container, if the client is creating a message it must be able to find a queue, and if the client is adding a row it must be able to find the table. You can use the client-side log from the Storage Client Library to gain a more detailed understanding of when the client sends specific requests to the storage service.
-The following client-side log generated by the Storage Client library illustrates the problem when the client cannot find the container for the blob it is creating. This log includes details of the following storage operations:
+The following client-side log generated by the Storage Client library illustrates the problem when the client cannot find the container for the blob it's creating. This log includes details of the following storage operations:
| Request ID | Operation | | | |
In this example, the log shows that the client is interleaving requests from the
#### <a name="SAS-authorization-issue"></a>A Shared Access Signature (SAS) authorization issue
-If the client application attempts to use a SAS key that does not include the necessary permissions for the operation, the storage service returns an HTTP 404 (Not found) message to the client. At the same time, you will also see a non-zero value for **SASAuthorizationError** in the metrics.
+If the client application attempts to use a SAS key that doesn't include the necessary permissions for the operation, the storage service returns an HTTP 404 (Not found) message to the client. At the same time, you'll also see a non-zero value for **SASAuthorizationError** in the metrics.
The following table shows a sample server-side log message from the Storage Logging log file:
The following table shows a sample server-side log message from the Storage Logg
Investigate why your client application is attempting to perform an operation for which it has not been granted permissions.
-#### <a name="JavaScript-code-does-not-have-permission"></a>Client-side JavaScript code does not have permission to access the object
+#### <a name="JavaScript-code-does-not-have-permission"></a>Client-side JavaScript code doesn't have permission to access the object
-If you are using a JavaScript client and the storage service is returning HTTP 404 messages, you check for the following JavaScript errors in the browser:
+If you're using a JavaScript client and the storage service is returning HTTP 404 messages, check for the following JavaScript errors in the browser:
``` SEC7120: Origin http://localhost:56309 not found in Access-Control-Allow-Origin header.
SCRIPT7002: XMLHttpRequest: Network Error 0x80070005, Access is denied.
``` > [!NOTE]
-> You can use the F12 Developer Tools in Internet Explorer to trace the messages exchanged between the browser and the storage service when you are troubleshooting client-side JavaScript issues.
+> You can use the F12 Developer Tools in Internet Explorer to trace the messages exchanged between the browser and the storage service when you're troubleshooting client-side JavaScript issues.
> >
To work around the JavaScript issue, you can configure Cross Origin Resource Sha
The following code sample shows how to configure your blob service to allow JavaScript running in the Contoso domain to access a blob in your blob storage service:
-# [.NET v12 SDK](#tab/dotnet)
- :::code language="csharp" source="~/azure-storage-snippets/blobs/howto/dotnet/dotnet-v12/Monitoring.cs" id="Snippet_ConfigureCORS":::
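If you want the idea inline rather than in the linked snippet, a rough sketch with the v12 client could look like the following; the connection string is a placeholder and the rule values mirror the scenario described above:

```csharp
using System.Collections.Generic;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

// Sketch only: allow pages served from the Contoso domain to GET and PUT blobs.
var serviceClient = new BlobServiceClient(connectionString);

BlobServiceProperties properties = serviceClient.GetProperties();
properties.Cors = new List<BlobCorsRule>
{
    new BlobCorsRule
    {
        AllowedOrigins = "http://www.contoso.com",
        AllowedMethods = "GET,PUT",
        AllowedHeaders = "*",
        ExposedHeaders = "x-ms-*",
        MaxAgeInSeconds = 5
    }
};
serviceClient.SetProperties(properties);
```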
-# [.NET v11 SDK](#tab/dotnet11)
-
-```csharp
-CloudBlobClient client = new CloudBlobClient(blobEndpoint, new StorageCredentials(accountName, accountKey));
-// Set the service properties.
-ServiceProperties sp = client.GetServiceProperties();
-sp.DefaultServiceVersion = "2013-08-15";
-CorsRule cr = new CorsRule();
-cr.AllowedHeaders.Add("*");
-cr.AllowedMethods = CorsHttpMethods.Get | CorsHttpMethods.Put;
-cr.AllowedOrigins.Add("http://www.contoso.com");
-cr.ExposedHeaders.Add("x-ms-*");
-cr.MaxAgeInSeconds = 5;
-sp.Cors.CorsRules.Clear();
-sp.Cors.CorsRules.Add(cr);
-client.SetServiceProperties(sp);
-```
--- #### <a name="network-failure"></a>Network Failure In some circumstances, lost network packets can lead to the storage service returning HTTP 404 messages to the client. For example, when your client application is deleting an entity from the table service you see the client throw a storage exception reporting an "HTTP 404 (Not Found)" status message from the table service. When you investigate the table in the table storage service, you see that the service did delete the entity as requested.
The exception details in the client include the request ID (7e84f12d…) assigne
The server-side log also includes another entry with the same **client-request-id** value (813ea74f…) for a successful delete operation for the same entity, and from the same client. This successful delete operation took place very shortly before the failed delete request.
-The most likely cause of this scenario is that the client sent a delete request for the entity to the table service, which succeeded, but did not receive an acknowledgment from the server (perhaps due to a temporary network issue). The client then automatically retried the operation (using the same **client-request-id**), and this retry failed because the entity had already been deleted.
+The most likely cause of this scenario is that the client sent a delete request for the entity to the table service, which succeeded, but didn't receive an acknowledgment from the server (perhaps due to a temporary network issue). The client then automatically retried the operation (using the same **client-request-id**), and this retry failed because the entity had already been deleted.
If this problem occurs frequently, you should investigate why the client is failing to receive acknowledgments from the table service. If the problem is intermittent, you should trap the "HTTP (404) Not Found" error and log it in the client, but allow the client to continue.
The following table shows an extract from the server-side log for two client ope
| 05:10:13.8987407 |GetContainerProperties |404 |mmcont |bc881924-… | | 05:10:14.2147723 |CreateContainer |409 |mmcont |bc881924-… |
-The code in the client application deletes and then immediately recreates a blob container using the same name: the **CreateIfNotExists** method (Client request ID bc881924-…) eventually fails with the HTTP 409 (Conflict) error. When a client deletes blob containers, tables, or queues there is a brief period before the name becomes available again.
+The code in the client application deletes and then immediately recreates a blob container using the same name: the **CreateIfNotExists** method (Client request ID bc881924-…) eventually fails with the HTTP 409 (Conflict) error. When a client deletes blob containers, tables, or queues there's a brief period before the name becomes available again.
The client application should use unique container names whenever it creates new containers if the delete/recreate pattern is common.
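A minimal sketch of that pattern with the v12 client, using a placeholder connection string:

```csharp
using System;
using Azure.Storage.Blobs;

// Sketch only: give each new container a unique name instead of recreating a
// just-deleted one, which avoids the HTTP 409 (Conflict) during the cleanup window.
var serviceClient = new BlobServiceClient(connectionString);
string containerName = $"democontainer-{Guid.NewGuid():N}";
BlobContainerClient container = serviceClient.CreateBlobContainer(containerName);
```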
The client application should use unique container names whenever it creates new
The **PercentSuccess** metric captures the percent of operations that were successful based on their HTTP Status Code. Operations with status codes of 2XX count as successful, whereas operations with status codes in 3XX, 4XX and 5XX ranges are counted as unsuccessful and lower the **PercentSuccess** metric value. In the server-side storage log files, these operations are recorded with a transaction status of **ClientOtherErrors**.
-It is important to note that these operations have completed successfully and therefore do not affect other metrics such as availability. Some examples of operations that execute successfully but that can result in unsuccessful HTTP status codes include:
+It's important to note that these operations have completed successfully and therefore do not affect other metrics such as availability. Some examples of operations that execute successfully but that can result in unsuccessful HTTP status codes include:
-- **ResourceNotFound** (Not Found 404), for example from a GET request to a blob that does not exist.
+- **ResourceNotFound** (Not Found 404), for example from a GET request to a blob that doesn't exist.
- **ResourceAlreadyExists** (Conflict 409), for example from a **CreateIfNotExist** operation where the resource already exists. - **ConditionNotMet** (Not Modified 304), for example from a conditional operation such as when a client sends an **ETag** value and an HTTP **If-None-Match** header to request an image only if it has been updated since the last operation.
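As a concrete illustration of the first case, the sketch below (v12 client, assuming an existing `containerClient` and a placeholder blob name) shows a request that the service handles correctly but that still returns 404 and is therefore recorded as **ClientOtherErrors**:

```csharp
using System;
using Azure;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

// Sketch only: asking for a blob that doesn't exist is a "successful" service operation
// that returns 404, lowering PercentSuccess without affecting availability.
BlobClient blobClient = containerClient.GetBlobClient("missing-image.jpg");
try
{
    BlobProperties properties = blobClient.GetProperties();
}
catch (RequestFailedException ex) when (ex.Status == 404)
{
    Console.WriteLine($"Not found ({ex.ErrorCode})"); // typically BlobNotFound
}
```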
You can find a list of common REST API error codes that the storage services ret
### <a name="capacity-metrics-show-an-unexpected-increase"></a>Capacity metrics show an unexpected increase in storage capacity usage
-If you see sudden, unexpected changes in capacity usage in your storage account, you can investigate the reasons by first looking at your availability metrics; for example, an increase in the number of failed delete requests might lead to an increase in the amount of blob storage you are using as application-specific cleanup operations you might have expected to be freeing up space may not be working as expected (for example, because the SAS tokens used for freeing up space have expired).
+If you see sudden, unexpected changes in capacity usage in your storage account, you can investigate the reasons by first looking at your availability metrics. For example, an increase in the number of failed delete requests might lead to an increase in the amount of blob storage you're using, because application-specific cleanup operations that you expected to free up space may not be working as expected (for example, because the SAS tokens used for freeing up space have expired).
### <a name="your-issue-arises-from-using-the-storage-emulator"></a>Your issue arises from using the storage emulator for development or test
-You typically use the storage emulator during development and test to avoid the requirement for an Azure storage account. The common issues that can occur when you are using the storage emulator are:
+You typically use the storage emulator during development and test to avoid the requirement for an Azure storage account. The common issues that can occur when you're using the storage emulator are:
-- [Feature "X" is not working in the storage emulator]-- [Error "The value for one of the HTTP headers is not in the correct format" when using the storage emulator]
+- [Feature "X" isn't working in the storage emulator]
+- [Error "The value for one of the HTTP headers isn't in the correct format" when using the storage emulator]
- [Running the storage emulator requires administrative privileges]
-#### <a name="feature-X-is-not-working"></a>Feature "X" is not working in the storage emulator
+#### <a name="feature-X-is-not-working"></a>Feature "X" isn't working in the storage emulator
-The storage emulator does not support all of the features of the Azure storage services such as the file service. For more information, see [Use the Azure Storage Emulator for Development and Testing](storage-use-emulator.md).
+The storage emulator doesn't support all of the features of the Azure storage services such as the file service. For more information, see [Use the Azure Storage Emulator for Development and Testing](storage-use-emulator.md).
-For those features that the storage emulator does not support, use the Azure storage service in the cloud.
+For those features that the storage emulator doesn't support, use the Azure storage service in the cloud.
-#### <a name="error-HTTP-header-not-correct-format"></a>Error "The value for one of the HTTP headers is not in the correct format" when using the storage emulator
+#### <a name="error-HTTP-header-not-correct-format"></a>Error "The value for one of the HTTP headers isn't in the correct format" when using the storage emulator
-You are testing your application that uses the Storage Client Library against the local storage emulator and method calls such as **CreateIfNotExists** fail with the error message "The value for one of the HTTP headers is not in the correct format." This indicates that the version of the storage emulator you are using does not support the version of the storage client library you are using. The Storage Client Library adds the header **x-ms-version** to all the requests it makes. If the storage emulator does not recognize the value in the **x-ms-version** header, it rejects the request.
+You're testing your application that uses the Storage Client Library against the local storage emulator and method calls such as **CreateIfNotExists** fail with the error message "The value for one of the HTTP headers isn't in the correct format." This indicates that the version of the storage emulator you're using doesn't support the version of the storage client library you're using. The Storage Client Library adds the header **x-ms-version** to all the requests it makes. If the storage emulator doesn't recognize the value in the **x-ms-version** header, it rejects the request.
-You can use the Storage Library Client logs to see the value of the **x-ms-version header** it is sending. You can also see the value of the **x-ms-version header** if you use Fiddler to trace the requests from your client application.
+You can use the Storage Client Library logs to see the value of the **x-ms-version** header it's sending. You can also see the value of the **x-ms-version** header if you use Fiddler to trace the requests from your client application.
This scenario typically occurs if you install and use the latest version of the Storage Client Library without updating the storage emulator. You should either install the latest version of the storage emulator, or use cloud storage instead of the emulator for development and test. #### <a name="storage-emulator-requires-administrative-privileges"></a>Running the storage emulator requires administrative privileges
-You are prompted for administrator credentials when you run the storage emulator. This only occurs when you are initializing the storage emulator for the first time. After you have initialized the storage emulator, you do not need administrative privileges to run it again.
+You're prompted for administrator credentials when you run the storage emulator. This only occurs when you're initializing the storage emulator for the first time. After you've initialized the storage emulator, you don't need administrative privileges to run it again.
For more information, see [Use the Azure Storage Emulator for Development and Testing](storage-use-emulator.md). You can also initialize the storage emulator in Visual Studio, which will also require administrative privileges.
-### <a name="you-are-encountering-problems-installing-the-Windows-Azure-SDK"></a>You are encountering problems installing the Azure SDK for .NET
+### <a name="you-are-encountering-problems-installing-the-Windows-Azure-SDK"></a>You're encountering problems installing the Azure SDK for .NET
When you try to install the SDK, it fails trying to install the storage emulator on your local machine. The installation log contains one of the following messages:
The **delete** command removes any old database files from previous installation
### <a name="you-have-a-different-issue-with-a-storage-service"></a>You have a different issue with a storage service
-If the previous troubleshooting sections do not include the issue you are having with a storage service, you should adopt the following approach to diagnosing and troubleshooting your issue.
+If the previous troubleshooting sections don't include the issue you're having with a storage service, you should adopt the following approach to diagnosing and troubleshooting your issue.
-- Check your metrics to see if there is any change from your expected base-line behavior. From the metrics, you may be able to determine whether the issue is transient or permanent, and which storage operations the issue is affecting.
+- Check your metrics to see if there's any change from your expected base-line behavior. From the metrics, you may be able to determine whether the issue is transient or permanent, and which storage operations the issue is affecting.
- You can use the metrics information to help you search your server-side log data for more detailed information about any errors that are occurring. This information may help you troubleshoot and resolve the issue.-- If the information in the server-side logs is not sufficient to troubleshoot the issue successfully, you can use the Storage Client Library client-side logs to investigate the behavior of your client application, and tools such as Fiddler, Wireshark to investigate your network.
+- If the information in the server-side logs isn't sufficient to troubleshoot the issue successfully, you can use the Storage Client Library client-side logs to investigate the behavior of your client application, and tools such as Fiddler, Wireshark to investigate your network.
For more information about using Fiddler, see "[Appendix 1: Using Fiddler to capture HTTP and HTTPS traffic]."
For more information about using Wireshark, see "[Appendix 2: Using Wireshark to
## <a name="appendices"></a>Appendices
-The appendices describe several tools that you may find useful when you are diagnosing and troubleshooting issues with Azure Storage (and other services). These tools are not part of Azure Storage and some are third-party products. As such, the tools discussed in these appendices are not covered by any support agreement you may have with Microsoft Azure or Azure Storage, and therefore as part of your evaluation process you should examine the licensing and support options available from the providers of these tools.
+The appendices describe several tools that you may find useful when you're diagnosing and troubleshooting issues with Azure Storage (and other services). These tools are not part of Azure Storage and some are third-party products. As such, the tools discussed in these appendices are not covered by any support agreement you may have with Microsoft Azure or Azure Storage, and therefore as part of your evaluation process you should examine the licensing and support options available from the providers of these tools.
### <a name="appendix-1"></a>Appendix 1: Using Fiddler to capture HTTP and HTTPS traffic
-[Fiddler](https://www.telerik.com/fiddler) is a useful tool for analyzing the HTTP and HTTPS traffic between your client application and the Azure storage service you are using.
+[Fiddler](https://www.telerik.com/fiddler) is a useful tool for analyzing the HTTP and HTTPS traffic between your client application and the Azure storage service you're using.
> [!NOTE] > Fiddler can decode HTTPS traffic; you should read the Fiddler documentation carefully to understand how it does this, and to understand the security implications.
You can also choose to view the TCP data as the application layer sees it by rig
### <a name="appendix-4"></a>Appendix 4: Using Excel to view metrics and log data
-Many tools enable you to download the Storage Metrics data from Azure table storage in a delimited format that makes it easy to load the data into Excel for viewing and analysis. Storage Logging data from Azure Blob Storage is already in a delimited format that you can load into Excel. However, you will need to add appropriate column headings based in the information at [Storage Analytics Log Format](/rest/api/storageservices/Storage-Analytics-Log-Format) and [Storage Analytics Metrics Table Schema](/rest/api/storageservices/Storage-Analytics-Metrics-Table-Schema).
+Many tools enable you to download the Storage Metrics data from Azure table storage in a delimited format that makes it easy to load the data into Excel for viewing and analysis. Storage Logging data from Azure Blob Storage is already in a delimited format that you can load into Excel. However, you'll need to add appropriate column headings based on the information at [Storage Analytics Log Format](/rest/api/storageservices/Storage-Analytics-Log-Format) and [Storage Analytics Metrics Table Schema](/rest/api/storageservices/Storage-Analytics-Metrics-Table-Schema).
To import your Storage Logging data into Excel after you download it from blob storage:
For more information about analytics in Azure Storage, see these resources:
[Metrics show high AverageE2ELatency and low AverageServerLatency]: #metrics-show-high-AverageE2ELatency-and-low-AverageServerLatency [Metrics show low AverageE2ELatency and low AverageServerLatency but the client is experiencing high latency]: #metrics-show-low-AverageE2ELatency-and-low-AverageServerLatency [Metrics show high AverageServerLatency]: #metrics-show-high-AverageServerLatency
-[You are experiencing unexpected delays in message delivery on a queue]: #you-are-experiencing-unexpected-delays-in-message-delivery
+[You're experiencing unexpected delays in message delivery on a queue]: #you-are-experiencing-unexpected-delays-in-message-delivery
[Metrics show an increase in PercentThrottlingError]: #metrics-show-an-increase-in-PercentThrottlingError [Transient increase in PercentThrottlingError]: #transient-increase-in-PercentThrottlingError
For more information about analytics in Azure Storage, see these resources:
[The client is receiving HTTP 404 (Not found) messages]: #the-client-is-receiving-404-messages [The client or another process previously deleted the object]: #client-previously-deleted-the-object [A Shared Access Signature (SAS) authorization issue]: #SAS-authorization-issue
-[Client-side JavaScript code does not have permission to access the object]: #JavaScript-code-does-not-have-permission
+[Client-side JavaScript code doesn't have permission to access the object]: #JavaScript-code-does-not-have-permission
[Network failure]: #network-failure [The client is receiving HTTP 409 (Conflict) messages]: #the-client-is-receiving-409-messages
For more information about analytics in Azure Storage, see these resources:
[Feature "X" is not working in the storage emulator]: #feature-X-is-not-working [Error "The value for one of the HTTP headers is not in the correct format" when using the storage emulator]: #error-HTTP-header-not-correct-format [Running the storage emulator requires administrative privileges]: #storage-emulator-requires-administrative-privileges
-[You are encountering problems installing the Azure SDK for .NET]: #you-are-encountering-problems-installing-the-Windows-Azure-SDK
+[You're encountering problems installing the Azure SDK for .NET]: #you-are-encountering-problems-installing-the-Windows-Azure-SDK
[You have a different issue with a storage service]: #you-have-a-different-issue-with-a-storage-service [Appendices]: #appendices
storage Storage Stored Access Policy Define Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-stored-access-policy-define-dotnet.md
The underlying REST operation to create a stored access policy is [Set Container
The following code examples create a stored access policy on a container. You can use the access policy to specify constraints for a service SAS on the container or its blobs.
-# [.NET v12 SDK](#tab/dotnet)
- To create a stored access policy on a container with version 12 of the .NET client library for Azure Storage, call one of the following methods: - [BlobContainerClient.SetAccessPolicy](/dotnet/api/azure.storage.blobs.blobcontainerclient.setaccesspolicy)
async static Task CreateStoredAccessPolicyAsync(string containerName)
} ```
-# [.NET v11 SDK](#tab/dotnet11)
-
-To create a stored access policy on a container with version 12 of the .NET client library for Azure Storage, call one of the following methods:
--- [CloudBlobContainer.SetPermissions](/dotnet/api/microsoft.azure.storage.blob.cloudblobcontainer.setpermissions)-- [CloudBlobContainer.SetPermissionsAsync](/dotnet/api/microsoft.azure.storage.blob.cloudblobcontainer.setpermissionsasync)-
-The following example creates a stored access policy that is in effect for one day and that grants read, write, and list permissions:
-
-```csharp
-private static async Task CreateStoredAccessPolicyAsync(CloudBlobContainer container, string policyName)
-{
- // Create a new stored access policy and define its constraints.
- // The access policy provides create, write, read, list, and delete permissions.
- SharedAccessBlobPolicy sharedPolicy = new SharedAccessBlobPolicy()
- {
- // When the start time for the SAS is omitted, the start time is assumed to be the time when Azure Storage receives the request.
- SharedAccessExpiryTime = DateTime.UtcNow.AddHours(24),
- Permissions = SharedAccessBlobPermissions.Read | SharedAccessBlobPermissions.List |
- SharedAccessBlobPermissions.Write
- };
-
- // Get the container's existing permissions.
- BlobContainerPermissions permissions = await container.GetPermissionsAsync();
-
- // Add the new policy to the container's permissions, and set the container's permissions.
- permissions.SharedAccessPolicies.Add(policyName, sharedPolicy);
- await container.SetPermissionsAsync(permissions);
-}
-```
--- ## See also - [Grant limited access to Azure Storage resources using shared access signatures (SAS)](storage-sas-overview.md) - [Create a stored access policy](/rest/api/storageservices/define-stored-access-policy) - [Configure Azure Storage connection strings](storage-configure-connection-string.md)+
+## Resources
+
+For related code samples using deprecated .NET version 11.x SDKs, see [Code samples using .NET version 11.x](../blobs/blob-v11-samples-dotnet.md#create-a-stored-access-policy).
storage Transport Layer Security Configure Client Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/transport-layer-security-configure-client-version.md
$ctx = $storageAccount.Context
New-AzStorageContainer -Name "sample-container" -Context $ctx ```
-# [.NET v12 SDK](#tab/dotnet)
+# [.NET](#tab/dotnet)
The following sample shows how to enable TLS 1.2 in a .NET client using version 12 of the Azure Storage client library: :::code language="csharp" source="~/azure-storage-snippets/blobs/howto/dotnet/dotnet-v12/Networking.cs" id="Snippet_ConfigureTls12":::
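As a rough inline sketch of the same idea (endpoint and credentials are placeholders), you can pin the transport that the v12 client uses to TLS 1.2:

```csharp
using System;
using System.Net.Http;
using System.Security.Authentication;
using Azure.Core.Pipeline;
using Azure.Storage;
using Azure.Storage.Blobs;

// Sketch only: give the client an HTTP transport that only negotiates TLS 1.2.
var handler = new HttpClientHandler
{
    SslProtocols = SslProtocols.Tls12
};

var options = new BlobClientOptions
{
    Transport = new HttpClientTransport(handler)
};

var serviceClient = new BlobServiceClient(
    new Uri("https://<storage-account>.blob.core.windows.net"),
    new StorageSharedKeyCredential("<account-name>", "<account-key>"),
    options);
```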
-# [.NET v11 SDK](#tab/dotnet11)
-
-The following sample shows how to enable TLS 1.2 in a .NET client using version 11 of the Azure Storage client library:
-
-```csharp
-static void EnableTls12()
-{
- // Enable TLS 1.2 before connecting to Azure Storage
- System.Net.ServicePointManager.SecurityProtocol = System.Net.SecurityProtocolType.Tls12;
-
- // Add your connection string here.
- string connectionString = "";
-
- // Connect to Azure Storage and create a new container.
- CloudStorageAccount storageAccount = CloudStorageAccount.Parse(connectionString);
- CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
-
- CloudBlobContainer container = blobClient.GetContainerReference("sample-container");
- container.CreateIfNotExists();
-}
-```
- ## Verify the TLS version used by a client
storage File Sync Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-release-notes.md
The following Azure File Sync agent versions are supported:
| Milestone | Agent version number | Release date | Status | |-|-|--||
-| V16.0 Release - [KB5013877](https://support.microsoft.com/topic/ffdc8fe2-c653-43c8-8b47-0865267fd520)| 16.0.0.0 | January 30, 2023 | Supported - Flighting |
+| V16.0 Release - [KB5013877](https://support.microsoft.com/topic/ffdc8fe2-c653-43c8-8b47-0865267fd520)| 16.0.0.0 | January 30, 2023 | Supported |
| V15.2 Release - [KB5013875](https://support.microsoft.com/topic/9159eee2-3d16-4523-ade4-1bac78469280)| 15.2.0.0 | November 21, 2022 | Supported | | V15.1 Release - [KB5003883](https://support.microsoft.com/topic/45761295-d49a-431e-98ec-4fb3329b0544)| 15.1.0.0 | September 19, 2022 | Supported | | V15 Release - [KB5003882](https://support.microsoft.com/topic/2f93053f-869b-4782-a832-e3c772a64a2d)| 15.0.0.0 | March 30, 2022 | Supported |
storage Files Troubleshoot Linux Nfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-troubleshoot-linux-nfs.md
Even if you correctly disable idmapping, it can be automatically re-enabled in s
Make sure you've disabled idmapping and that nothing is re-enabling it. Then perform the following steps: - Unmount the share-- Disable idmapping with `# echo Y > /sys/module/nfs/parameters/nfs4_disable_idmapping`
+- Disable idmapping with
+```bash
+echo Y | sudo tee /sys/module/nfs/parameters/nfs4_disable_idmapping
+```
- Mount the share back - If running rsync, run rsync with the "ΓÇönumeric-ids" argument from a directory that doesn't have a bad dir/file name.
Disable **secure transfer required** in your storage account's configuration bla
:::image type="content" source="media/storage-files-how-to-mount-nfs-shares/disable-secure-transfer.png" alt-text="Screenshot of storage account configuration blade, disabling secure transfer required.":::
-### Cause 3: nfs-common package isn't installed
-Before running the `mount` command, install the nfs-common package.
+### Cause 3: nfs-utils, nfs-client, or nfs-common package isn't installed
+Before running the `mount` command, install the nfs-utils, nfs-client, or nfs-common package for your distribution.
+
+To check if the NFS package is installed, run:
+
+# [RHEL](#tab/RHEL)
+
+The same commands in this section apply to CentOS and Oracle Linux.
+
+```bash
+sudo rpm -qa | grep nfs-utils
+```
+# [SLES](#tab/SLES)
-To check if the NFS package is installed, run: `rpm qa | grep nfs-utils`
+```bash
+sudo rpm -qa | grep nfs-client
+```
+# [Ubuntu](#tab/Ubuntu)
+
+The same commands in this section apply to Debian.
+
+```bash
+sudo dpkg -l | grep nfs-common
+```
+ #### Solution If the package isn't installed, install the package using your distro-specific command.
-##### Ubuntu or Debian
+# [RHEL](#tab/RHEL)
+
+The same commands in this section apply to CentOS and Oracle Linux.
+OS Version 7.X
+
+```bash
+sudo yum install nfs-utils
```
-sudo apt update
-sudo apt install nfs-common
+OS Version 8.X or 9.X
+
+```bash
+sudo dnf install nfs-utils
```
-##### Fedora, Red Hat Enterprise Linux 8+, CentOS 8+
+# [SLES](#tab/SLES)
-Use the dnf package
+```bash
+sudo zypper install nfs-client
+```
-Older versions of Red Hat Enterprise Linux and CentOS use the yum package
+# [Ubuntu](#tab/Ubuntu)
-##### openSUSE
+The same commands in this section apply to Debian.
-Use the zypper package
+```bash
+sudo apt update
+sudo apt install nfs-common
+```
+ ### Cause 4: Firewall blocking port 2049
The NFS protocol communicates to its server over port 2049. Make sure that this
#### Solution
-Verify that port 2049 is open on your client by running the following command: `telnet <storageaccountnamehere>.file.core.windows.net 2049`. If the port isn't open, open it.
+Verify that port 2049 is open on your client by running the following command. If the port isn't open, open it.
+
+```bash
+sudo nc -zv <storageaccountnamehere>.file.core.windows.net 2049
+```
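If the check fails because a local firewall is filtering outbound traffic, open the port with your distribution's firewall tooling. As a minimal sketch, assuming a client that uses `ufw`:

```bash
# Allow outbound NFS traffic (TCP port 2049) through ufw
sudo ufw allow out 2049/tcp
```

On hosts managed by a different firewall (for example, firewalld or network-level rules), apply the equivalent rule there.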
## ls hangs for large directory enumeration on some kernels
storage Files Troubleshoot Linux Smb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-troubleshoot-linux-smb.md
Upgrade the Linux kernel to the following versions that have a fix for this problem:
### Cause By default, mounting Azure file shares on Linux by using SMB doesn't enable support for symbolic links (symlinks). You might see an error like this:
+```bash
+sudo ln -s linked -n t
```
-ln -s linked -n t
+```output
ln: failed to create symbolic link 't': Operation not supported ```
The Linux SMB client doesn't support creating Windows-style symbolic links over
To use symlinks, add the following to the end of your SMB mount command:
-```
+```bash
,mfsymlinks ``` So the command looks something like:
-```
+```bash
sudo mount -t cifs //<storage-account-name>.file.core.windows.net/<share-name> <mount-point> -o vers=<smb-version>,username=<storage-account-name>,password=<storage-account-key>,dir_mode=0777,file_mode=0777,serverino,mfsymlinks ```
sudo mount -t cifs $smbPath $mntPath -o vers=3.0,username=$storageAccountName,pa
File I/Os on the mounted filesystem start giving "Host is down" or "Permission denied" errors. Linux dmesg logs on the client show repeated errors like:
-```
+```output
Status code returned 0xc000006d STATUS_LOGON_FAILURE cifs_setup_session: 2 callbacks suppressed CIFS VFS: \\contoso.file.core.windows.net Send error in SessSetup = -13
storage Files Troubleshoot Smb Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-troubleshoot-smb-connectivity.md
Verify virtual network and firewall rules are configured properly on the storage
### Cause
-Some Linux distributions don't yet support encryption features in SMB 3.x. Users might receive a "115" error message if they try to mount Azure Files by using SMB 3.x because of a missing feature. SMB 3.x with full encryption is supported only when you're using Ubuntu 16.04 or later.
+Some Linux distributions don't yet support encryption features in SMB 3.x. Users might receive a "115" error message if they try to mount Azure Files by using SMB 3.x because of a missing feature. SMB 3.x with full encryption is supported only on the latest versions of Linux distributions.
### Solution
If you still need help, [contact support](https://portal.azure.com/?#blade/Micro
- [Troubleshoot Azure Files authentication and authorization (SMB)](files-troubleshoot-smb-authentication.md) - [Troubleshoot Azure Files general SMB issues on Linux](files-troubleshoot-linux-smb.md) - [Troubleshoot Azure Files general NFS issues on Linux](files-troubleshoot-linux-nfs.md)-- [Troubleshoot Azure File Sync issues](../file-sync/file-sync-troubleshoot.md)
+- [Troubleshoot Azure File Sync issues](../file-sync/file-sync-troubleshoot.md)
virtual-machines Custom Script Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/custom-script-windows.md
Set-AzVMExtension -ResourceGroupName <resourceGroupName> `
### Running a custom script more than once by using the CLI
-The Custom Script Extension handler prevents rerunning a script if the *exact* same settings have been passed. This behavior prevents accidental rerunning, which might cause unexpected behaviors if the script isn't idempotent. To confirm whether the handler blocked the rerunning, look at *C:\WindowsAzure\Logs\Plugins\Microsoft.Compute.CustomScriptExtension\<HandlerVersion>\CustomScriptHandler.log*. Searching for a warning like this one:
+The Custom Script Extension handler prevents rerunning a script if the *exact* same settings have been passed. This behavior prevents accidental rerunning, which might cause unexpected behaviors if the script isn't idempotent. To confirm whether the handler blocked the rerunning, look at `C:\WindowsAzure\Logs\Plugins\Microsoft.Compute.CustomScriptExtension\<HandlerVersion>\CustomScriptHandler.log`. Search for a warning like this one:
```output Current sequence number, <SequenceNumber>, is not greater than the sequence number
virtual-machines How To Configure Lvm Raid On Crypt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/how-to-configure-lvm-raid-on-crypt.md
Previously updated : 03/17/2020+ Last updated : 04/06/2023
virtual-machines Image Builder Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/image-builder-troubleshoot.md
There are cases where [Managed Service Identities (MSI)](./image-builder-permiss
#### Solution
-Use Azure CLI to reset identity on the image template. Ensure you [update](/azure/update-azure-cli) Azure CLI to the 2.45.0 version or later.
+Use the Azure CLI to reset the identity on the image template. Ensure you [update](/cli/azure/update-azure-cli) the Azure CLI to version 2.45.0 or later.
Remove the managed identity from the target image builder template
virtual-machines Multiple Nics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/multiple-nics.md
Title: Create a Linux VM in Azure with multiple NICs description: Learn how to create a Linux VM with multiple NICs attached to it using the Azure CLI or Resource Manager templates.-+ Previously updated : 06/07/2018- Last updated : 04/06/2023++ # How to create a Linux virtual machine in Azure with multiple network interface cards
virtual-machines Proximity Placement Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/proximity-placement-groups.md
Previously updated : 3/8/2021 Last updated : 4/6/2023
az ppg create \
## List proximity placement groups
-You can list all of your proximity placement groups using [az ppg list](/cli/azure/ppg#az-ppg-list).
+You can list all of your proximity placement groups using [`az ppg list`](/cli/azure/ppg#az-ppg-list).
```azurecli-interactive az ppg list -o table ``` ## Show proximity placement group
-You can see the proximity placement group details and resources using [az ppg show](/cli/azure/ppg#az-ppg-show)
+You can see the proximity placement group details and resources using [`az ppg show`](/cli/azure/ppg#az-ppg-show).
```azurecli-interactive az ppg show --name myPPG --resource-group myPPGGroup
az vm create \
-l eastus ```
-You can see the VM in the proximity placement group using [az ppg show](/cli/azure/ppg#az-ppg-show).
+You can see the VM in the proximity placement group using [`az ppg show`](/cli/azure/ppg#az-ppg-show).
```azurecli-interactive az ppg show --name myppg --resource-group myppggroup --query "virtualMachines" ``` ## Availability Sets
-You can also create an availability set in your proximity placement group. Use the same `--ppg` parameter with [az vm availability-set create](/cli/azure/vm/availability-set#az-vm-availability-set-create) to create an availability set and all of the VMs in the availability set will also be created in the same proximity placement group.
+You can also create an availability set in your proximity placement group. Use the same `--ppg` parameter with [az vm availability-set create](/cli/azure/vm/availability-set#az-vm-availability-set-create) to add all VMs in the availability set to the same proximity placement group.
## Scale sets
-You can also create a scale set in your proximity placement group. Use the same `--ppg` parameter with [az vmss create](/cli/azure/vmss#az-vmss-create) to create a scale set and all of the instances will be created in the same proximity placement group.
+You can also create a scale set in your proximity placement group. Use the same `--ppg` parameter with [`az vmss create`](/cli/azure/vmss#az-vmss-create) to create a scale set, and all of the instances will be created in the same proximity placement group.
## Next steps
virtual-machines Static Dns Name Resolution For Linux On Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/static-dns-name-resolution-for-linux-on-azure.md
Title: Use internal DNS for VM name resolution with the Azure CLI description: How to create virtual network interface cards and use internal DNS for VM name resolution on Azure with the Azure CLI.-+ Previously updated : 02/16/2017- Last updated : 04/06/2023++ # Create virtual network interface cards and use internal DNS for VM name resolution on Azure
az vm create \
## Detailed walkthrough
-A full continuous integration and continuous deployment (CiCd) infrastructure on Azure requires certain servers to be static or long-lived servers. It is recommended that Azure assets like the virtual networks and Network Security Groups are static and long lived resources that are rarely deployed. Once a virtual network has been deployed, it can be reused by new deployments without any adverse affects to the infrastructure. You can later add a Git repository server or a Jenkins automation server delivers CiCd to this virtual network for your development or test environments.
+A full continuous integration and continuous deployment (CI/CD) infrastructure on Azure requires certain servers to be static or long-lived servers. It's recommended that Azure assets like the virtual networks and Network Security Groups are static, long-lived resources that are rarely redeployed. Once a virtual network has been deployed, it can be reused in new deployments without any adverse effects to the infrastructure. You can later add a Git repository server or a Jenkins automation server that delivers CI/CD to this virtual network for your development or test environments.
-Internal DNS names are only resolvable inside an Azure virtual network. Because the DNS names are internal, they are not resolvable to the outside internet, providing additional security to the infrastructure.
+Internal DNS names are only resolvable inside an Azure virtual network. Because the DNS names are internal, they aren't resolvable to the outside internet, providing extra security to the infrastructure.
In the following examples, replace example parameter names with your own values. Example parameter names include `myResourceGroup`, `myNic`, and `myVM`.
az network vnet subnet update \
## Create the virtual network interface card and static DNS names
-Azure is very flexible, but to use DNS names for VM name resolution, you need to create virtual network interface cards (vNics) that include a DNS label. vNics are important as you can reuse them by connecting them to different VMs over the infrastructure lifecycle. This approach keeps the vNic as a static resource while the VMs can be temporary. By using DNS labeling on the vNic, we are able to enable simple name resolution from other VMs in the VNet. Using resolvable names enables other VMs to access the automation server by the DNS name `Jenkins` or the Git server as `gitrepo`.
+To use DNS names for VM name resolution, you need to create virtual network interface cards (vNics) that include a DNS label. vNics are important as you can reuse them by connecting them to different VMs over the infrastructure lifecycle. This approach keeps the vNic as a static resource while the VMs can be temporary. By using DNS labeling on the vNic, you enable simple name resolution from other VMs in the VNet. Using resolvable names enables other VMs to access the automation server by the DNS name `Jenkins` or the Git server as `gitrepo`.
Create the vNic with [az network nic create](/cli/azure/network/nic). The following example creates a vNic named `myNic`, connects it to the virtual network named `myVnet`, and creates an internal DNS name record called `jenkins`:
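A minimal sketch of that command follows; `myResourceGroup`, `myVnet`, and `myNic` follow the example names above, the subnet name `mySubnet` is an assumption, and `--internal-dns-name` supplies the DNS label:

```azurecli
az network nic create \
    --resource-group myResourceGroup \
    --vnet-name myVnet \
    --subnet mySubnet \
    --name myNic \
    --internal-dns-name jenkins
```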
virtual-machines Put Calls Create Or Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/put-calls-create-or-update.md
- Title: PUT calls for create or update operations
-description: PUT calls for create or update operations on compute resources
----- Previously updated : 08/4/2020---
-# PUT calls for creation or updates on compute resources
-
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
--
-`Microsoft.Compute` resources do not support the conventional definition of *HTTP PUT* semantics. Instead, these resources use PATCH semantics for both the PUT and PATCH verbs.
-
-**Create** operations apply default values when appropriate. However, resource **updates** done through PUT or PATCH, do not apply any default properties. **Update** operations apply apply strict PATCH semantics.
-
-For example, the disk `caching` property of a virtual machine defaults to `ReadWrite` if the resource is an OS disk.
-
-```json
- "storageProfile": {
- "osDisk": {
- "name": "myVMosdisk",
- "image": {
- "uri": "http://{existing-storage-account-name}.blob.core.windows.net/{existing-container-name}/{existing-generalized-os-image-blob-name}.vhd"
- },
- "osType": "Windows",
- "createOption": "FromImage",
- "caching": "ReadWrite",
- "vhd": {
- "uri": "http://{existing-storage-account-name}.blob.core.windows.net/{existing-container-name}/myDisk.vhd"
- }
- }
- },
-```
-
-However, for **update** operations when a property is left out or a *null* value is passed, it will remain unchanged and there no defaulting values.
-
-This is important when sending update operations to a resource with the intention of removing an association. If that resource is a `Microsoft.Compute` resource, the corresponding property you want to remove needs to be explicitly called out and a value assigned. To achieve this, users can pass an empty string such as **" "**. This will instruct the platform to remove that association.
-
-> [!IMPORTANT]
-> There is no support for "patching" an array element. Instead, the client has to do a PUT or PATCH request with the entire contents of the updated array. For example, to detach a data disk from a VM, do a GET request to get the current VM model, remove the disk to be detached from `properties.storageProfile.dataDisks`, and do a PUT request with the updated VM entity.
-
-## Examples
-
-### Correct payload to remove a Proximity Placement Groups association
-
-`
-{ "location": "westus", "properties": { "platformFaultDomainCount": 2, "platformUpdateDomainCount": 20, "proximityPlacementGroup": "" } }
-`
-
-### Incorrect payloads to remove a Proximity Placement Groups association
-
-`
-{ "location": "westus", "properties": { "platformFaultDomainCount": 2, "platformUpdateDomainCount": 20, "proximityPlacementGroup": null } }
-`
-
-`
-{ "location": "westus", "properties": { "platformFaultDomainCount": 2, "platformUpdateDomainCount": 20 } }
-`
-
-## Next Steps
-Learn more about Create or Update calls for [Virtual Machines](/rest/api/compute/virtualmachines/createorupdate) and [Virtual Machine Scale Sets](/rest/api/compute/virtualmachinescalesets/createorupdate)
virtual-network Virtual Network Powershell Sample Ipv6 Dual Stack https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/scripts/virtual-network-powershell-sample-ipv6-dual-stack.md
Title: Azure PowerShell script sample - Configure IPv6 endpoints description: Configure IPv6 endpoints in virtual network with an Azure PowerShell script and find links to command-specific documentation to help with the PowerShell sample.- - -- Previously updated : 07/15/2019+ Last updated : 04/05/2023 # Configure IPv6 endpoints in virtual network with Azure PowerShell script sample
-This article shows you how to deploy a dual stack (IPv4 + IPv6) application in Azure that includes a dual stack virtual network with a dual stack subnet, a load balancer with dual (IPv4 + IPv6) front-end configurations, VMs with NICs that have a dual IP configuration, dual network security group rules, and dual public IPs.
-
-You can execute the script from the Azure [Cloud Shell](https://shell.azure.com/powershell), or from a local PowerShell installation. If you use PowerShell locally, this script requires the Azure Az PowerShell module version 1.0.0 or later. To find the installed version, run `Get-Module -ListAvailable Az`. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
-
+This article shows you how to deploy a dual stack (IPv4 + IPv6) application in Azure that includes a dual stack virtual network with a dual stack subnet. A load balancer with dual (IPv4 + IPv6) front-end configurations, VMs with NICs that have a dual IP configuration, dual network security group rules, and dual public IPs are also deployed.
## Prerequisites
-Before you deploy a dual stack application in Azure, you must configure your subscription only once for this preview feature using the following Azure PowerShell:
-Register as follows:
-```azurepowershell
-Register-AzProviderFeature -FeatureName AllowIPv6VirtualNetwork -ProviderNamespace Microsoft.Network
-Register-AzProviderFeature -FeatureName AllowIPv6CAOnStandardLB -ProviderNamespace Microsoft.Network
-```
-It takes up to 30 minutes for feature registration to complete. You can check your registration status by running the following Azure PowerShell command:
-Check on the registration as follows:
-```azurepowershell
-Get-AzProviderFeature -FeatureName AllowIPv6VirtualNetwork -ProviderNamespace Microsoft.Network
-Get-AzProviderFeature -FeatureName AllowIPv6CAOnStandardLB -ProviderNamespace Microsoft.Network
-```
-After the registration is complete, run the following command:
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-```azurepowershell
-Register-AzResourceProvider -ProviderNamespace Microsoft.Network
-```
+- Azure PowerShell installed locally or Azure Cloud Shell.
-## Sample script
+- Sign in to Azure PowerShell and ensure you've selected the subscription with which you want to use this feature. For more information, see [Sign in with Azure PowerShell](/powershell/azure/authenticate-azureps).
+- Ensure your `Az.Network` module is 4.3.0 or later. To verify the installed module, use the command `Get-InstalledModule -Name Az.Network`. If the module requires an update, run `Update-Module -Name Az.Network`.
+If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+
+## Sample script
```azurepowershell # Dual-Stack VNET with 2 VMs.ps1
$vnet = New-AzVirtualNetwork `
-NetworkSecurityGroupId $nsg.Id ` -IpConfiguration $Ip4Config,$Ip6Config -- # Create virtual machines $cred = get-credential -Message "DUAL STACK VNET SAMPLE: Please enter the Administrator credential to log into the VMs"
$vmName= "dsVM2"
$VMconfig2 = New-AzVMConfig -VMName $vmName -VMSize $vmsize -AvailabilitySetId $avset.Id 3> $null | Set-AzVMOperatingSystem -Windows -ComputerName $vmName -Credential $cred -ProvisionVMAgent 3> $null | Set-AzVMSourceImage -PublisherName $ImagePublisher -Offer $imageOffer -Skus $imageSKU -Version "latest" 3> $null | Set-AzVMOSDisk -Name "$vmName.vhd" -CreateOption fromImage 3> $null | Add-AzVMNetworkInterface -Id $NIC_2.Id 3> $null $VM2 = New-AzVM -ResourceGroupName $rg.ResourceGroupName -Location $rg.Location -VM $VMconfig2 - #End Of Script ```
This script uses the following commands to create a resource group, virtual mach
| [New-AzVirtualNetwork](/powershell/module/az.network/new-azvirtualnetwork) | Creates an Azure virtual network and subnet. | | [New-AzPublicIpAddress](/powershell/module/az.network/new-azpublicipaddress) | Creates a public IP address with a static IP address and an associated DNS name. | | [New-AzLoadBalancer](/powershell/module/az.network/new-azloadbalancer) | Creates an Azure load balancer. |
-| [New-AzLoadBalancerProbeConfig](/powershell/module/az.network/new-azloadbalancerprobeconfig) | Creates a load balancer probe. A load balancer probe is used to monitor each VM in the load balancer set. If any VM becomes inaccessible, traffic is not routed to the VM. |
-| [New-AzLoadBalancerRuleConfig](/powershell/module/az.network/new-azloadbalancerruleconfig) | Creates a load balancer rule. In this sample, a rule is created for port 80. As HTTP traffic arrives at the load balancer, it is routed to port 80 one of the VMs in the load balancer set. |
+| [New-AzLoadBalancerProbeConfig](/powershell/module/az.network/new-azloadbalancerprobeconfig) | Creates a load balancer probe. A load balancer probe is used to monitor each VM in the load balancer set. If any VM becomes inaccessible, traffic isn't routed to the VM. |
+| [New-AzLoadBalancerRuleConfig](/powershell/module/az.network/new-azloadbalancerruleconfig) | Creates a load balancer rule. In this sample, a rule is created for port 80. As HTTP traffic arrives at the load balancer, it's routed to port 80 on one of the VMs in the load balancer set. |
| [New-AzNetworkSecurityGroup](/powershell/module/az.network/new-aznetworksecuritygroup) | Creates a network security group (NSG), which is a security boundary between the internet and the virtual machine. | | [New-AzNetworkSecurityRuleConfig](/powershell/module/az.network/new-aznetworksecurityruleconfig) | Creates an NSG rule to allow inbound traffic. In this sample, port 22 is opened for SSH traffic. | | [New-AzNetworkInterface](/powershell/module/az.network/new-aznetworkinterface) | Creates a virtual network card and attaches it to the virtual network, subnet, and NSG. |
-| [New-AzAvailabilitySet](/powershell/module/az.compute/new-azavailabilityset) | Creates an availability set. Availability sets ensure application uptime by spreading the virtual machines across physical resources such that if failure occurs, the entire set is not affected. |
+| [New-AzAvailabilitySet](/powershell/module/az.compute/new-azavailabilityset) | Creates an availability set. Availability sets ensure application uptime by spreading the virtual machines across physical resources such that if failure occurs, the entire set isn't affected. |
| [New-AzVMConfig](/powershell/module/az.compute/new-azvmconfig) | Creates a VM configuration. This configuration includes information such as VM name, operating system, and administrative credentials. The configuration is used during VM creation. | | [New-AzVM](/powershell/module/az.compute/new-azvm) | Creates the virtual machine and connects it to the network card, virtual network, subnet, and NSG. This command also specifies the virtual machine image to be used and administrative credentials. | | [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) | Deletes a resource group including all nested resources. |
This script uses the following commands to create a resource group, virtual mach
For more information on the Azure PowerShell, see [Azure PowerShell documentation](/powershell/azure/).
-Additional networking PowerShell script samples can be found in the [Azure Networking Overview documentation](../powershell-samples.md?toc=%2fazure%2fnetworking%2ftoc.json).
+More networking PowerShell script samples can be found in the [Azure Networking Overview documentation](../powershell-samples.md?toc=%2fazure%2fnetworking%2ftoc.json).
web-application-firewall Configure Waf Custom Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/configure-waf-custom-rules.md
$wafConfig = New-AzApplicationGatewayWebApplicationFirewallConfiguration -Enable
# Create a User-Agent header custom rule $variable = New-AzApplicationGatewayFirewallMatchVariable -VariableName RequestHeaders -Selector User-Agent $condition = New-AzApplicationGatewayFirewallCondition -MatchVariable $variable -Operator Contains -MatchValue "evilbot" -Transform Lowercase -NegationCondition $False
-$rule = New-AzApplicationGatewayFirewallCustomRule -Name blockEvilBot -Priority 2 -RuleType MatchRule -MatchCondition $condition -Action Block
+$rule = New-AzApplicationGatewayFirewallCustomRule -Name blockEvilBot -Priority 2 -RuleType MatchRule -MatchCondition $condition -Action Block -State Enabled
# Create a geo-match custom rule $var2 = New-AzApplicationGatewayFirewallMatchVariable -VariableName RemoteAddr $condition2 = New-AzApplicationGatewayFirewallCondition -MatchVariable $var2 -Operator GeoMatch -MatchValue "US" -NegationCondition $False
-$rule2 = New-AzApplicationGatewayFirewallCustomRule -Name allowUS -Priority 14 -RuleType MatchRule -MatchCondition $condition2 -Action Allow
+$rule2 = New-AzApplicationGatewayFirewallCustomRule -Name allowUS -Priority 14 -RuleType MatchRule -MatchCondition $condition2 -Action Allow -State Enabled
# Create a firewall policy $wafPolicy = New-AzApplicationGatewayFirewallPolicy -Name wafpolicyNew -ResourceGroup $rgname -Location $location -CustomRule $rule,$rule2
web-application-firewall Create Custom Waf Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/create-custom-waf-rules.md
Previously updated : 08/22/2022 Last updated : 04/06/2023
The JSON snippets shown in this article are derived from a [ApplicationGatewayWe
## Example 1
-You know there's a bot named *evilbot* that you want to block from crawling your website. In this case, youΓÇÖll block on the User-Agent *evilbot* in the request headers.
+You know there's a bot named *evilbot* that you want to block from crawling your website. In this case, you block on the User-Agent *evilbot* in the request headers.
Logic: p
$rule = New-AzApplicationGatewayFirewallCustomRule `
-Priority 2 ` -RuleType MatchRule ` -MatchCondition $condition `
- -Action Block
+ -Action Block `
+ -State Enabled
```
-And here is the corresponding JSON:
+And here's the corresponding JSON:
```json {
And here is the corresponding JSON:
"priority": 2, "ruleType": "MatchRule", "action": "Block",
+ "state": "Enabled",
"matchConditions": [ { "matchVariables": [
$rule = New-AzApplicationGatewayFirewallCustomRule `
-Priority 2 ` -RuleType MatchRule ` -MatchCondition $condition `
- -Action Block
+ -Action Block `
+ -State Enabled
``` And the corresponding JSON:
And the corresponding JSON:
"priority": 2, "ruleType": "MatchRule", "action": "Block",
+ "state": "Enabled",
"matchConditions": [ { "matchVariables": [
And the corresponding JSON:
## Example 2
-You want to allow traffic only from the US using the GeoMatch operator and still have the managed rules apply:
+You want to allow traffic only from the United States using the GeoMatch operator and still have the managed rules apply:
```azurepowershell $variable = New-AzApplicationGatewayFirewallMatchVariable `
$rule = New-AzApplicationGatewayFirewallCustomRule `
-Priority 2 ` -RuleType MatchRule ` -MatchCondition $condition `
- -Action Block
+ -Action Block `
+ -State Enabled
``` And the corresponding JSON:
And the corresponding JSON:
"priority": 2, "ruleType": "MatchRule", "action": "Block",
+ "state": "Enabled",
"matchConditions": [ { "matchVariables": [
And the corresponding JSON:
You want to block all requests from IP addresses in the range 198.168.5.0/24.
-In this example, you'll block all traffic that comes from an IP addresses range. The name of the rule is *myrule1* and the priority is set to 10.
+In this example, you block all traffic that comes from an IP address range. The name of the rule is *myrule1* and the priority is set to 10.
Logic: p
$rule = New-AzApplicationGatewayFirewallCustomRule `
-Priority 10 ` -RuleType MatchRule ` -MatchCondition $condition1 `
- -Action Block
+ -Action Block `
+ -State Enabled
``` Here's the corresponding JSON:
Here's the corresponding JSON:
"priority": 10, "ruleType": "MatchRule", "action": "Block",
+ "state": "Enabled",
"matchConditions": [ { "matchVariables": [
Corresponding CRS rule:
## Example 4
-For this example, you want to block User-Agent *evilbot*, and traffic in the range 192.168.5.0/24. To accomplish this, you can create two separate match conditions, and put them both in the same rule. This ensures that if both *evilbot* in the User-Agent header **and** IP addresses from the range 192.168.5.0/24 are matched, then the request is blocked.
+For this example, you want to block User-Agent *evilbot*, and traffic in the range 192.168.5.0/24. To accomplish this action, you can create two separate match conditions, and put them both in the same rule. This configuration ensures that if both *evilbot* in the User-Agent header **and** IP addresses from the range 192.168.5.0/24 are matched, then the request is blocked.
Logic: p **and** q
$condition2 = New-AzApplicationGatewayFirewallCondition `
-Priority 10 ` -RuleType MatchRule ` -MatchCondition $condition1, $condition2 `
- -Action Block
+ -Action Block `
+ -State Enabled
``` Here's the corresponding JSON:
Here's the corresponding JSON:
"priority": 10, "ruleType": "MatchRule", "action": "Block",
+ "state": "Enabled",
"matchConditions": [ { "matchVariables": [
$rule1 = New-AzApplicationGatewayFirewallCustomRule `
-Priority 10 ` -RuleType MatchRule ` -MatchCondition $condition1 `
- -Action Block
+ -Action Block `
+ -State Enabled
$rule2 = New-AzApplicationGatewayFirewallCustomRule ` -Name myrule2 ` -Priority 20 ` -RuleType MatchRule ` -MatchCondition $condition2 `
- -Action Block
+ -Action Block `
+ -State Enabled
``` And the corresponding JSON:
And the corresponding JSON:
"priority": 10, "ruleType": "MatchRule", "action": "Block",
+ "state": "Enabled",
"matchConditions": [ { "matchVariables": [
And the corresponding JSON:
"priority": 20, "ruleType": "MatchRule", "action": "Block",
+ "state": "Enabled",
"matchConditions": [ { "matchVariables": [
$rule = New-AzApplicationGatewayFirewallCustomRule `
-Priority 2 ` -RuleType MatchRule ` -MatchCondition $condition `
- -Action Block
+ -Action Block `
+ -State Enabled
``` Corresponding JSON:
Corresponding JSON:
"priority": 2, "ruleType": "MatchRule", "action": "Block",
+ "state": "Enabled",
"matchConditions": [ { "matchVariables": [
Corresponding JSON:
## Example 7
-It is not uncommon to see Azure Front Door deployed in front of Application Gateway. In order to make sure the traffic received by Application Gateway comes from the Front Door deployment, the best practice is to check if the `X-Azure-FDID` header contains the expected unique value. For more information on this, please see [How to lock down the access to my backend to only Azure Front Door](../../frontdoor/front-door-faq.yml#how-do-i-lock-down-the-access-to-my-backend-to-only-azure-front-door-)
+Azure Front Door is often deployed in front of Application Gateway. To make sure the traffic received by Application Gateway comes from the Front Door deployment, the best practice is to check whether the `X-Azure-FDID` header contains the expected unique value. For more information on securing access to your application using Azure Front Door, see [How to lock down the access to my backend to only Azure Front Door](../../frontdoor/front-door-faq.yml#how-do-i-lock-down-the-access-to-my-backend-to-only-azure-front-door-).
Logic: **not** p
$rule = New-AzApplicationGatewayFirewallCustomRule `
-Priority 2 ` -RuleType MatchRule ` -MatchCondition $condition `
- -Action Block
+ -Action Block `
+ -State Enabled
```
-And here is the corresponding JSON:
+And here's the corresponding JSON:
```json {
And here is the corresponding JSON:
"priority": 2, "ruleType": "MatchRule", "action": "Block",
+ "state": "Enabled",
"matchConditions": [ { "matchVariables": [
web-application-firewall Custom Waf Rules Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/custom-waf-rules-overview.md
The Azure Application Gateway Web Application Firewall (WAF) v2 comes with a pre-configured, platform-managed ruleset that offers protection from many different types of attacks. These attacks include cross site scripting, SQL injection, and others. If you're a WAF admin, you may want to write your own rules to augment the core rule set (CRS) rules. Your custom rules can either block, allow, or log requested traffic based on matching criteria. If the WAF policy is set to detection mode, and a custom block rule is triggered, the request is logged and no blocking action is taken.
-Custom rules allow you to create your own rules that are evaluated for each request that passes through the WAF. These rules hold a higher priority than the rest of the rules in the managed rule sets. The custom rules contain a rule name, rule priority, and an array of matching conditions. If these conditions are met, an action is taken (to allow, block, or log). If a custom rule is triggered, and an allow or block action is taken, no further custom or managed rules are evaluated.
+Custom rules allow you to create your own rules that are evaluated for each request that passes through the WAF. These rules hold a higher priority than the rest of the rules in the managed rule sets. The custom rules contain a rule name, rule priority, and an array of matching conditions. If these conditions are met, an action is taken (to allow, block, or log). If a custom rule is triggered, and an allow or block action is taken, no further custom or managed rules are evaluated. Custom rules can be enabled/disabled on demand.
-For example, you can block all requests from an IP address in the range 192.168.5.0/24. In this rule, the operator is *IPMatch*, the matchValues is the IP address range (192.168.5.0/24), and the action is to block the traffic. You also set the rule's name and priority.
+For example, you can block all requests from an IP address in the range 192.168.5.0/24. In this rule, the operator is *IPMatch*, the matchValues is the IP address range (192.168.5.0/24), and the action is to block the traffic. You also set the rule's name, priority, and enabled/disabled state.
Custom rules support using compounding logic to make more advanced rules that address your security needs. For example, ((Condition 1 **and** Condition 2) **or** Condition 3). This means that if Condition 1 **and** Condition 2 are met, **or** if Condition 3 is met, the WAF should take the action specified in the custom rule.
$AllowRule = New-AzApplicationGatewayFirewallCustomRule `
-Priority 2 ` -RuleType MatchRule ` -MatchCondition $condition `
- -Action Allow
+ -Action Allow `
+ -State Enabled
$BlockRule = New-AzApplicationGatewayFirewallCustomRule ` -Name example2 ` -Priority 2 ` -RuleType MatchRule ` -MatchCondition $condition `
- -Action Block
+ -Action Block `
+ -State Enabled
``` The previous `$BlockRule` maps to the following custom rule in Azure Resource
The previous `$BlockRule` maps to the following custom rule in Azure Resource Ma
"priority": 2, "ruleType": "MatchRule", "action": "Block",
+ "state": "Enabled",
"matchConditions": [ { "matchVariables": [
This custom rule contains a name, priority, an action, and the array of matching
The name of the rule. It appears in the logs.
+### Enable rule [optional]
+
+Turn this rule on/off. Custom rules are enabled by default.
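As a sketch of how this looks in Azure PowerShell, the cmdlet used in the earlier examples accepts the `-State` parameter; passing `Disabled` (assumed here as the counterpart of the `Enabled` value shown above) creates the rule without turning it on, and `$condition` is reused from those examples:

```azurepowershell
# Create the custom rule, but leave it disabled until it's needed
$rule = New-AzApplicationGatewayFirewallCustomRule `
   -Name blockEvilBot `
   -Priority 2 `
   -RuleType MatchRule `
   -MatchCondition $condition `
   -Action Block `
   -State Disabled
```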
+ ### Priority [required] - Determines the rule evaluation order. The lower the value, the earlier the evaluation of the rule. The allowable range is from 1-100.