Updates from: 08/09/2023 01:16:58
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-domain-services Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/policy-reference.md
Title: Built-in policy definitions for Azure Active Directory Domain Services description: Lists Azure Policy built-in policy definitions for Azure Active Directory Domain Services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/03/2023 Last updated : 08/08/2023
active-directory Concept System Preferred Multifactor Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-system-preferred-multifactor-authentication.md
# Customer intent: As an identity administrator, I want to encourage users to use the Microsoft Authenticator app in Azure AD to improve and secure user sign-in events. + # System-preferred multifactor authentication - Authentication methods policy System-preferred multifactor authentication (MFA) prompts users to sign in by using the most secure method they registered. Administrators can enable system-preferred MFA to improve sign-in security and discourage less secure sign-in methods like SMS.
When a user signs in, the authentication process checks which authentication met
1. [Temporary Access Pass](howto-authentication-temporary-access-pass.md) 1. [FIDO2 security key](concept-authentication-passwordless.md#fido2-security-keys)
-1. [Microsoft Authenticator push notifications](concept-authentication-authenticator-app.md)
+1. [Microsoft Authenticator notifications](concept-authentication-authenticator-app.md)
1. [Time-based one-time password (TOTP)](concept-authentication-oath-tokens.md)<sup>1</sup> 1. [Telephony](concept-authentication-phone-options.md)<sup>2</sup> 1. [Certificate-based authentication](concept-certificate-based-authentication.md)
The system-preferred MFA also applies for users who are enabled for MFA in the l
* [Authentication methods in Azure Active Directory](concept-authentication-authenticator-app.md) * [How to run a registration campaign to set up Microsoft Authenticator](how-to-mfa-registration-campaign.md)++
active-directory Howto Mfaserver Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfaserver-deploy.md
Previously updated : 08/04/2023 Last updated : 08/08/2023
Make sure the server that you're using for Azure Multi-Factor Authentication mee
| Azure Multi-Factor Authentication Server Requirements | Description | |: |: | | Hardware |<li>200 MB of hard disk space</li><li>x32 or x64 capable processor</li><li>1 GB or greater RAM</li> |
-| Software |<li>Windows Server 2019<sup>1</sup></li><li>Windows Server 2016</li><li>Windows Server 2012 R2</li><li>Windows Server 2012</li><li>Windows Server 2008/R2 (with [ESU](/lifecycle/faq/extended-security-updates) only)</li><li>Windows 10</li><li>Windows 8.1, all editions</li><li>Windows 8, all editions</li><li>Windows 7, all editions (with [ESU](/lifecycle/faq/extended-security-updates) only)</li><li>Microsoft .NET 4.0 Framework</li><li>IIS 7.0 or greater if installing the user portal or web service SDK</li> |
+| Software |<li>Windows Server 2022<sup>1</sup><li>Windows Server 2019<sup>1</sup></li><li>Windows Server 2016</li><li>Windows Server 2012 R2</li><li>Windows Server 2012</li><li>Windows Server 2008/R2 (with [ESU](/lifecycle/faq/extended-security-updates) only)</li><li>Windows 10</li><li>Windows 8.1, all editions</li><li>Windows 8, all editions</li><li>Windows 7, all editions (with [ESU](/lifecycle/faq/extended-security-updates) only)</li><li>Microsoft .NET 4.0 Framework</li><li>IIS 7.0 or greater if installing the user portal or web service SDK</li> |
| Permissions | Domain Administrator or Enterprise Administrator account to register with Active Directory |
-<sup>1</sup>If Azure MFA Server fails to activate on an Azure VM that runs Windows Server 2019, try using another version of Windows Server.
+<sup>1</sup>If Azure MFA Server fails to activate on an Azure VM that runs Windows Server 2019 or later, try using an earlier version of Windows Server.
### Azure MFA Server Components
active-directory Access Token Claims Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/access-token-claims-reference.md
Access tokens are [JSON web tokens (JWT)](https://wikipedia.org/wiki/JSON_Web_To
- **Payload** - Contains all of the important data about the user or application that's attempting to call the service. - **Signature** - Is the raw material used to validate the token.
-Each piece is separated by a period (`.`) and separately Base64 encoded.
+Each piece is separated by a period (`.`) and separately Base 64 encoded.
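To make the three-part structure concrete, here's a minimal Python sketch (not part of the original article) that splits a compact JWT and Base64URL-decodes the header and payload for inspection. The token variable is a placeholder, and signature verification is deliberately omitted; a real API should validate the signature with a JWT library.

```python
import base64
import json

def decode_jwt_segments(token: str) -> dict:
    """Split a compact JWT into header, payload, and signature segments.

    Only the header and payload are Base64URL-decoded for inspection;
    the signature still needs to be verified with the issuer's signing key.
    """
    header_b64, payload_b64, signature_b64 = token.split(".")

    def b64url_decode(segment: str) -> bytes:
        # Base64URL segments omit padding, so restore it before decoding.
        padding = "=" * (-len(segment) % 4)
        return base64.urlsafe_b64decode(segment + padding)

    return {
        "header": json.loads(b64url_decode(header_b64)),
        "payload": json.loads(b64url_decode(payload_b64)),
        "signature": signature_b64,  # left encoded; used only during validation
    }

# Example usage (placeholder token string, not a real credential):
# parts = decode_jwt_segments(access_token)
# print(parts["header"]["alg"], parts["payload"].get("aud"))
```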
Claims are present only if a value exists to fill them. An application shouldn't take a dependency on a claim being present. Examples include `pwd_exp` (not every tenant requires passwords to expire) and `family_name` ([client credential](v2-oauth2-client-creds-grant-flow.md) flows are on behalf of applications that don't have names). Claims used for access token validation are always present.
The Microsoft identity platform uses some claims to help secure tokens for reuse
| Claim | Format | Description | Authorization considerations | |-|--|-||
+| `acrs` | JSON array of strings | Indicates the Auth Context IDs of the operations that the bearer is eligible to perform. Auth Context IDs can be used to trigger a demand for step-up authentication from within your application and services. Often used along with the `xms_cc` claim. |
| `aud` | String, an Application ID URI or GUID | Identifies the intended audience of the token. In v2.0 tokens, this value is always the client ID of the API. In v1.0 tokens, it can be the client ID or the resource URI used in the request. The value can depend on how the client requested the token. | This value must be validated; reject the token if the value doesn't match the intended audience. | | `iss` | String, a security token service (STS) URI | Identifies the STS that constructs and returns the token, and the Azure AD tenant of the authenticated user. If the token issued is a v2.0 token (see the `ver` claim), the URI ends in `/v2.0`. The GUID that indicates that the user is a consumer user from a Microsoft account is `9188040d-6c67-4c5b-b112-36a304b66dad`. | The application can use the GUID portion of the claim to restrict the set of tenants that can sign in to the application, if applicable. | |`idp`| String, usually an STS URI | Records the identity provider that authenticated the subject of the token. This value is identical to the value of the Issuer claim unless the user account isn't in the same tenant as the issuer, such as guests. Use the value of `iss` if the claim isn't present. For personal accounts being used in an organizational context (for instance, a personal account invited to an Azure AD tenant), the `idp` claim may be 'live.com' or an STS URI containing the Microsoft account tenant `9188040d-6c67-4c5b-b112-36a304b66dad`. | |
The Microsoft identity platform uses some claims to help secure tokens for reuse
| `uti` | String | Token identifier claim, equivalent to `jti` in the JWT specification. Unique, per-token identifier that is case-sensitive. | | | `rh` | Opaque String | An internal claim used by Azure to revalidate tokens. Resources shouldn't use this claim. | | | `ver` | String, either `1.0` or `2.0` | Indicates the version of the access token. | |
-| `xms_cc` | JSON array of strings | Indicates whether the client application that acquired the token is capable of handling claims challenges. This claim is commonly used in Conditional Access and Continuous Access Evaluation scenarios. The resource server that the token is issued for controls the presence of the claim in it. For example, a service application. For more information, see [Claims challenges, claims requests and client capabilities](claims-challenge.md?tabs=dotnet). Resource servers should check this claim in access tokens received from client applications. If this claim is present, resource servers can respond back with a claims challenge. The claims challenge requests more claims in a new access token to authorize access to a protected resource. |
+| `xms_cc` | JSON array of strings | Indicates whether the client application that acquired the token is capable of handling claims challenges. It's often used along with claim `acrs`. This claim is commonly used in Conditional Access and Continuous Access Evaluation scenarios. The resource server or service application that the token is issued for controls the presence of this claim in a token. A value of `cp1` in the access token is the authoritative way to identify that a client application is capable of handling a claims challenge. For more information, see [Claims challenges, claims requests and client capabilities](claims-challenge.md?tabs=dotnet). |
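As a rough illustration of how a resource server might act on the `acrs` and `xms_cc` claims together, here's a hedged Python sketch. It assumes the access token has already been validated and decoded into a dictionary, and the challenge shape shown (a 401 with an `insufficient_claims` error and a Base64-encoded claims blob in `WWW-Authenticate`) is a simplified approximation of the claims-challenge flow described in the linked article; the auth context ID `c1` is only an example value.

```python
import base64
import json

def build_claims_challenge(required_acr: str) -> str:
    """Build a WWW-Authenticate value asking the client to come back with
    a token that satisfies the given authentication context (step-up)."""
    claims = {"access_token": {"acrs": {"essential": True, "value": required_acr}}}
    encoded = base64.b64encode(json.dumps(claims).encode()).decode()
    return f'Bearer error="insufficient_claims", claims="{encoded}"'

def authorize_sensitive_operation(token_claims: dict, required_acr: str = "c1"):
    """Decide whether to serve the request or challenge for step-up auth.

    `token_claims` is the already-validated, decoded access token payload.
    """
    if required_acr in token_claims.get("acrs", []):
        return 200, None  # bearer already satisfied the auth context

    # Only clients that declared the cp1 capability can handle a challenge.
    if "cp1" in token_claims.get("xms_cc", []):
        return 401, {"WWW-Authenticate": build_claims_challenge(required_acr)}

    # Otherwise fail without a challenge the client can't process.
    return 403, None
```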
### Groups overage claim
active-directory Access Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/access-tokens.md
Not all applications should validate tokens. Only in specific scenarios should a
- Web APIs must validate access tokens sent to them by a client. They must only accept tokens containing one of their AppId URIs as the `aud` claim. - Web apps must validate ID tokens sent to them by using the user's browser in the hybrid flow, before allowing access to a user's data or establishing a session.
-If none of the above scenarios apply, there's no need to validate the token, and this may present a security and reliability risk when basing decisions on the validity of the token. Public clients like native, desktop or single-page applications don't benefit from validating ID tokens because the application communicates directly with the IDP where SSL protection ensures the ID tokens are valid. They shouldn't validate the access tokens, as they are for the web API to validate, not the client.
+If none of the previously described scenarios apply, there's no need to validate the token. Public clients like native, desktop or single-page applications don't benefit from validating ID tokens because the application communicates directly with the IDP where SSL protection ensures the ID tokens are valid. They shouldn't validate the access tokens, as they are for the web API to validate, not the client.
APIs and web applications must only validate tokens that have an `aud` claim that matches the application. Other resources may have custom token validation rules. For example, you can't validate tokens for Microsoft Graph according to these rules due to their proprietary format. Validating and accepting tokens meant for another resource is an example of the [confused deputy](https://cwe.mitre.org/data/definitions/441.html) problem.
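For a web API, the `aud` check described above typically happens inside a standard JWT library rather than hand-written code. The following is a minimal sketch using the third-party PyJWT package (an assumption; the article doesn't prescribe a library), with placeholder tenant, audience, and key-endpoint values.

```python
import jwt  # PyJWT
from jwt import PyJWKClient

TENANT_ID = "<example-tenant-id>"                  # placeholder
EXPECTED_AUDIENCE = "api://<your-api-client-id>"   # placeholder App ID URI or client ID

# Client that fetches and caches the tenant's signing keys.
jwks_client = PyJWKClient(
    f"https://login.microsoftonline.com/{TENANT_ID}/discovery/v2.0/keys"
)

def validate_access_token(token: str) -> dict:
    """Validate signature, audience, and issuer; return the claims on success."""
    signing_key = jwks_client.get_signing_key_from_jwt(token)
    return jwt.decode(
        token,
        signing_key.key,
        algorithms=["RS256"],
        audience=EXPECTED_AUDIENCE,
        issuer=f"https://login.microsoftonline.com/{TENANT_ID}/v2.0",
    )
```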
The Azure AD middleware has built-in capabilities for validating access tokens,
- When your web app/API is validating a v1.0 token (`ver` claim ="1.0"), it needs to read the OpenID Connect metadata document from the v1.0 endpoint (`https://login.microsoftonline.com/{example-tenant-id}/.well-known/openid-configuration`), even if the authority configured for your web API is a v2.0 authority. - When your web app/API is validating a v2.0 token (`ver` claim ="2.0"), it needs to read the OpenID Connect metadata document from the v2.0 endpoint (`https://login.microsoftonline.com/{example-tenant-id}/v2.0/.well-known/openid-configuration`), even if the authority configured for your web API is a v1.0 authority.
-The examples below suppose that your application is validating a v2.0 access token (and therefore reference the v2.0 versions of the OIDC metadata documents and keys). Just remove the "/v2.0" in the URL if you validate v1.0 tokens.
+The following examples assume that your application is validating a v2.0 access token (and therefore reference the v2.0 versions of the OIDC metadata documents and keys). Remove "/v2.0" from the URL if you validate v1.0 tokens.
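As an illustration of reading the metadata document, here's a hedged sketch that picks the v1.0 or v2.0 endpoint based on the token's `ver` claim and pulls out the `issuer` and `jwks_uri` values. The tenant value is a placeholder and `requests` is an assumed dependency.

```python
import requests

def get_oidc_metadata(tenant: str, token_version: str) -> dict:
    """Fetch the OpenID Connect metadata that matches the token version."""
    base = f"https://login.microsoftonline.com/{tenant}"
    if token_version == "2.0":
        url = f"{base}/v2.0/.well-known/openid-configuration"
    else:  # "1.0" tokens use the v1.0 metadata document
        url = f"{base}/.well-known/openid-configuration"
    metadata = requests.get(url, timeout=10).json()
    return {"issuer": metadata["issuer"], "jwks_uri": metadata["jwks_uri"]}

# Example: metadata for v2.0 tokens issued to a specific tenant (placeholder ID).
# info = get_oidc_metadata("{example-tenant-id}", "2.0")
```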
### Validate the signature
Web apps validating ID tokens, and web APIs validating access tokens need to val
#### Single tenant applications
-[OpenID Connect Core](https://openid.net/specs/openid-connect-core-1_0.html#IDTokenValidation) says "The Issuer Identifier \[...\] MUST exactly match the value of the `iss` (issuer) Claim." For applications that use a tenant-specific metadata endpoint (like [https://login.microsoftonline.com/{example-tenant-id}/v2.0/.well-known/openid-configuration](https://login.microsoftonline.com/{example-tenant-id}/v2.0/.well-known/openid-configuration) or [https://login.microsoftonline.com/contoso.onmicrosoft.com/v2.0/.well-known/openid-configuration](https://login.microsoftonline.com/contoso.onmicrosoft.com/v2.0/.well-known/openid-configuration)), this is all that is needed.
+[OpenID Connect Core](https://openid.net/specs/openid-connect-core-1_0.html#IDTokenValidation) says "The Issuer Identifier \[...\] MUST exactly match the value of the `iss` (issuer) Claim." For applications that use a tenant-specific metadata endpoint, such as `https://login.microsoftonline.com/{example-tenant-id}/v2.0/.well-known/openid-configuration` or `https://login.microsoftonline.com/contoso.onmicrosoft.com/v2.0/.well-known/openid-configuration`, this check is all that's needed.
Single tenant applications are applications that support: - Accounts in one organizational directory (**example-tenant-id** only): `https://login.microsoftonline.com/{example-tenant-id}`
Azure AD also supports multi-tenant applications. These applications support:
- Accounts in any organizational directory (any Azure AD directory): `https://login.microsoftonline.com/organizations` - Accounts in any organizational directory (any Azure AD directory) and personal Microsoft accounts (for example, Skype, XBox): `https://login.microsoftonline.com/common`
-For these applications, Azure AD exposes tenant-independent versions of the OIDC document at [https://login.microsoftonline.com/common/v2.0/.well-known/openid-configuration](https://login.microsoftonline.com/common/v2.0/.well-known/openid-configuration) and [https://login.microsoftonline.com/organizations/v2.0/.well-known/openid-configuration](https://login.microsoftonline.com/organizations/v2.0/.well-known/openid-configuration) respectively. These endpoints return an issuer value, which is a template parametrized by the `tenantid`: `https://login.microsoftonline.com/{tenantid}/v2.0`. Applications may use these tenant-independent endpoints to validate tokens from every tenant with the following stipulations:
+For these applications, Azure AD exposes tenant-independent versions of the OIDC document at `https://login.microsoftonline.com/common/v2.0/.well-known/openid-configuration` and `https://login.microsoftonline.com/organizations/v2.0/.well-known/openid-configuration` respectively. These endpoints return an issuer value, which is a template parametrized by the `tenantid`: `https://login.microsoftonline.com/{tenantid}/v2.0`. Applications may use these tenant-independent endpoints to validate tokens from every tenant with the following stipulations:
+ - Validate the signing key issuer
- Instead of expecting the issuer claim in the token to exactly match the issuer value from metadata, the application should replace the `{tenantid}` value in the issuer metadata with the tenant ID that is the target of the current request, and then check the exact match (`tid` claim of the token).
+ - Validate that the `tid` claim is a GUID and the `iss` claim is of the form `https://login.microsoftonline.com/{tid}/v2.0` where `{tid}` is the exact `tid` claim. This validation ties the tenant back to the issuer and back to the scope of the signing key, creating a chain of trust.
- Use the `tid` claim when locating data associated with the subject of the claim. In other words, the `tid` claim must be part of the key used to access the user's data. ### Validate the signing key issuer
As discussed, from the OpenID Connect document, your application accesses the ke
"jwks_uri": "https://login.microsoftonline.com/{example-tenant-id}/discovery/v2.0/keys", ```
-As above, `{example-tenant-id}` can be replaced by a GUID, a domain name, or **common**, **organizations** and **consumers**
+The `{example-tenant-id}` value can be replaced by a GUID, a domain name, or one of **common**, **organizations**, and **consumers**.
The "keys" documents exposed by Azure AD v2.0 contains, for each key, the issuer that uses this signing key. See, for instance, the
-tenant-independent "common" key endpoint [https://login.microsoftonline.com/common/discovery/v2.0/keys](https://login.microsoftonline.com/common/discovery/v2.0/keys) returns a document like:
+tenant-independent "common" key endpoint `https://login.microsoftonline.com/common/discovery/v2.0/keys` returns a document like:
```json {
Here's some pseudo code that recapitulates how to validate issuer and signing ke
if (issUri.Segments[1] != token["tid"]) throw validationException; ```
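The same issuer and signing-key check could be expressed as the following hedged Python sketch. It mirrors the pseudo code above rather than any official SDK, and it assumes the token claims and the key's metadata entry have already been parsed into dictionaries.

```python
import re

GUID_RE = re.compile(r"^[0-9a-fA-F]{8}-([0-9a-fA-F]{4}-){3}[0-9a-fA-F]{12}$")

def validate_issuer_and_key(token_claims: dict, key_metadata: dict) -> None:
    """Tie the token's tenant to its issuer and to the signing key's issuer."""
    tid = token_claims.get("tid", "")
    iss = token_claims.get("iss", "")

    if not GUID_RE.match(tid):
        raise ValueError("tid claim is not a GUID")

    # The issuer must be the tenant-specific form of the templated value.
    if iss != f"https://login.microsoftonline.com/{tid}/v2.0":
        raise ValueError("iss claim does not match the expected tenant issuer")

    # The signing key's issuer (from the keys document) must match as well,
    # after substituting the token's tenant into the {tenantid} template.
    key_issuer = key_metadata.get("issuer", "").replace("{tenantid}", tid)
    if key_issuer != iss:
        raise ValueError("signing key issuer does not match the token issuer")
```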
-## Token revocation
-
-Refresh tokens are invalidated or revoked at any time, for different reasons. The reasons fall into the categories of timeouts and revocations.
-
-### Token timeouts
-
-Organizations can use [token lifetime configuration](configurable-token-lifetimes.md) to alter the lifetime of refresh tokens Some tokens can go without use. For example, the user doesn't open the application for three months and then the token expires. Applications can encounter scenarios where the login server rejects a refresh token due to its age.
--- MaxInactiveTime: Specifies the amount of time that a token can be inactive.-- MaxSessionAge: If MaxAgeSessionMultiFactor or MaxAgeSessionSingleFactor is set to something other than their default (Until-revoked), the user must reauthenticate after the time set in the MaxAgeSession*. Examples:
- - The tenant has a MaxInactiveTime of five days, and the user went on vacation for a week, and so Azure AD hasn't seen a new token request from the user in seven days. The next time the user requests a new token, they'll find their refresh token has been revoked, and they must enter their credentials again.
- - A sensitive application has a MaxAgeSessionSingleFactor of one day. If a user logs in on Monday, and on Tuesday (after 25 hours have elapsed), they must reauthenticate.
-
-### Token revocations
-
-The server possibly revokes refresh tokens due to a change in credentials, or due to use or administrative action. Refresh tokens are in the classes of confidential clients and public clients.
-
-| Change | Password-based cookie | Password-based token | Non-password-based cookie | Non-password-based token | Confidential client token |
-|--|--|-||--||
-| Password expires | Stays alive | Stays alive | Stays alive | Stays alive | Stays alive |
-| Password changed by user | Revoked | Revoked | Stays alive | Stays alive | Stays alive |
-| User does SSPR | Revoked | Revoked | Stays alive | Stays alive | Stays alive |
-| Admin resets password | Revoked | Revoked | Stays alive | Stays alive | Stays alive |
-| User or admin revokes the refresh tokens by using [PowerShell](/powershell/module/microsoft.graph.beta.users.actions/invoke-mgbetainvalidateuserrefreshtoken?view=graph-powershell-beta&preserve-view=true) | Revoked | Revoked | Revoked | Revoked | Revoked |
-| [Single sign-out](v2-protocols-oidc.md#single-sign-out) on web | Revoked | Stays alive | Revoked | Stays alive | Stays alive |
-
-#### Non-password-based
-
-A *non-password-based* login is one where the user didn't type in a password to get it. Examples of non-password-based login include:
--- Using your face with Windows Hello-- FIDO2 key-- SMS-- Voice-- PIN- ## See also - [Access token claims reference](access-token-claims-reference.md)-- [Primary Refresh Tokens](../devices/concept-primary-refresh-token.md) - [Secure applications and APIs by validating claims](claims-validation.md)
active-directory Optional Claims Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/optional-claims-reference.md
The following table lists the v1.0 and v2.0 optional claim set.
| Name | Description | Token Type | User Type | Notes | ||-||--|-| | `acct` | Users account status in tenant | JWT, SAML | | If the user is a member of the tenant, the value is `0`. If they're a guest, the value is `1`. |
+| `acrs` | Auth Context IDs | JWT | Azure AD | Indicates the Auth Context IDs of the operations that the bearer is eligible to perform. Auth Context IDs can be used to trigger a demand for step-up authentication from within your application and services. Often used along with the `xms_cc` claim. |
| `auth_time` | Time when the user last authenticated. | JWT | | | | `ctry` | User's country/region | JWT | | This claim is returned if it's present and the value of the field is a standard two-letter country/region code, such as FR, JP, SZ, and so on. |
-| `email` | The reported email address for this user | JWT, SAML | MSA, Azure AD | This value is included by default if the user is a guest in the tenant. For managed users (the users inside the tenant), it must be requested through this optional claim or, on v2.0 only, with the OpenID scope. This value isn't guaranteed to be correct, and is mutable over time - never use it for authorization or to save data for a user. For more information, see [Validate the user has permission to access this data](access-tokens.md). If you are using the email claim for authorization, we recommend [performing a migration to move to a more secure claim](./migrate-off-email-claim-authorization.md). If you require an addressable email address in your app, request this data from the user directly, using this claim as a suggestion or prefill in your UX. |
-| `fwd` | IP address | JWT | | Adds the original IPv4 address of the requesting client (when inside a VNET). |
+| `email` | The reported email address for this user | JWT, SAML | MSA, Azure AD | This value is included by default if the user is a guest in the tenant. For managed users (the users inside the tenant), it must be requested through this optional claim or, on v2.0 only, with the OpenID scope. This value isn't guaranteed to be correct, and is mutable over time - never use it for authorization or to save data for a user. For more information, see [Validate the user has permission to access this data](access-tokens.md). If you're using the email claim for authorization, we recommend [performing a migration to move to a more secure claim](./migrate-off-email-claim-authorization.md). If you require an addressable email address in your app, request this data from the user directly, using this claim as a suggestion or prefill in your UX. |
+| `fwd` | IP address | JWT | | Adds the original address of the requesting client (when inside a VNET). |
| `groups` | Optional formatting for group claims | JWT, SAML | | The `groups` claim is used with the GroupMembershipClaims setting in the [application manifest](reference-app-manifest.md), which must be set as well. | | `idtyp` | Token type | JWT access tokens | Special: only in app-only access tokens | The value is `app` when the token is an app-only token. This claim is the most accurate way for an API to determine if a token is an app token or an app+user token. |
-| `login_hint` | Login hint | JWT | MSA, Azure AD | An opaque, reliable login hint claim that's base64 encoded. Don't modify this value. This claim is the best value to use for the `login_hint` OAuth parameter in all flows to get SSO. It can be passed between applications to help them silently SSO as well - application A can sign in a user, read the `login_hint` claim, and then send the claim and the current tenant context to application B in the query string or fragment when the user selects on a link that takes them to application B. To avoid race conditions and reliability issues, the `login_hint` claim *doesn't* include the current tenant for the user, and defaults to the user's home tenant when used. In a guest scenario where the user is from another tenant, a tenant identifier must be provided in the sign-in request. and pass the same to apps you partner with. This claim is intended for use with your SDK's existing `login_hint` functionality, however that it exposed. |
+| `login_hint` | Login hint | JWT | MSA, Azure AD | An opaque, reliable login hint claim that's base 64 encoded. Don't modify this value. This claim is the best value to use for the `login_hint` OAuth parameter in all flows to get SSO. It can be passed between applications to help them silently SSO as well - application A can sign in a user, read the `login_hint` claim, and then send the claim and the current tenant context to application B in the query string or fragment when the user selects a link that takes them to application B. To avoid race conditions and reliability issues, the `login_hint` claim *doesn't* include the current tenant for the user, and defaults to the user's home tenant when used. In a guest scenario where the user is from another tenant, a tenant identifier must be provided in the sign-in request and passed along to the apps you partner with. This claim is intended for use with your SDK's existing `login_hint` functionality, however that functionality is exposed. |
| `sid` | Session ID, used for per-session user sign out | JWT | Personal and Azure AD accounts. | | | `tenant_ctry` | Resource tenant's country/region | JWT | | Same as `ctry` except set at a tenant level by an admin. Must also be a standard two-letter value. | | `tenant_region_scope` | Region of the resource tenant | JWT | | |
The following table lists the v1.0 and v2.0 optional claim set.
| `verified_primary_email` | Sourced from the user's PrimaryAuthoritativeEmail | JWT | | | | `verified_secondary_email` | Sourced from the user's SecondaryAuthoritativeEmail | JWT | | | | `vnet` | VNET specifier information. | JWT | | |
-| `xms_cc` | Client Capabilities | JWT | Azure AD | Indicates whether the client application that acquired the token is capable of handling claims challenges. Service applications (resource servers) can make use of this claim to authorize access to protected resources. This claim is commonly used in Conditional Access and Continuous Access Evaluation scenarios. The service application that issues the token controls the presence of the claim in it. This optional claim should be configured as part of the service app's registration. For more information, see [Claims challenges, claims requests and client capabilities](claims-challenge.md?tabs=dotnet). |
-| `xms_edov` | Boolean value indicating if the user's email domain owner has been verified. | JWT | | An email is considered to be domain verified if: the domain belongs to the tenant where the user account resides and the tenant admin has done verification of the domain, the email is from a Microsoft account (MSA), the email is from a Google account, or the email was used for authentication using the one-time passcode (OTP) flow. It should also be noted the Facebook and SAML/WS-Fed accounts **do not** have verified domains.|
+| `xms_cc` | Client Capabilities | JWT | Azure AD | Indicates whether the client application that acquired the token is capable of handling claims challenges. It's often used along with claim `acrs`. This claim is commonly used in Conditional Access and Continuous Access Evaluation scenarios. The resource server or service application that the token is issued for controls the presence of this claim in a token. A value of `cp1` in the access token is the authoritative way to identify that a client application is capable of handling a claims challenge. For more information, see [Claims challenges, claims requests and client capabilities](claims-challenge.md?tabs=dotnet). |
+| `xms_edov` | Boolean value indicating whether the user's email domain owner has been verified. | JWT | | An email is considered domain verified if any of the following is true: the domain belongs to the tenant where the user account resides and the tenant admin has verified the domain; the email is from a Microsoft account (MSA); the email is from a Google account; or the email was used for authentication using the one-time passcode (OTP) flow. Note that Facebook and SAML/WS-Fed accounts **do not** have verified domains. |
| `xms_pdl` | Preferred data location | JWT | | For Multi-Geo tenants, the preferred data location is the three-letter code showing the geographic region the user is in. For more information, see the [Azure AD Connect documentation about preferred data location](../hybrid/how-to-connect-sync-feature-preferreddatalocation.md). | | `xms_pl` | User preferred language | JWT | | The user's preferred language, if set. Sourced from their home tenant, in guest access scenarios. Formatted LL-CC ("en-us"). | | `xms_tpl` | Tenant preferred language| JWT | | The resource tenant's preferred language, if set. Formatted LL ("en"). |
active-directory Scenario Daemon Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-daemon-call-api.md
data = requests.get(endpoint, headers=http_headers, stream=False).json()
# [.NET low level](#tab/dotnet)
active-directory Scenario Desktop Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-desktop-call-api.md
Now that you have a token, you can call a protected web API.
# [.NET](#tab/dotnet) # [Java](#tab/java)
active-directory Scenario Mobile Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-mobile-call-api.md
task.resume()
### Xamarin ## Make several API requests
active-directory Scenario Web Api Call Api App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-api-call-api-app-configuration.md
Microsoft recommends that you use the [Microsoft.Identity.Web](https://www.nuget
## Client secrets or client certificates ## Program.cs
The following image shows the possibilities of *Microsoft.Identity.Web* and the
## Client secrets or client certificates ## Modify *Startup.Auth.cs*
active-directory Scenario Web App Call Api App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-app-call-api-app-configuration.md
Select the tab for the platform you're interested in:
## Client secrets or client certificates ## Startup.cs
The following image shows the various possibilities of *Microsoft.Identity.Web*
## Client secrets or client certificates ## Startup.Auth.cs
active-directory Users Bulk Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/users-bulk-restore.md
Next, you can check to see that the users you restored exist in the Azure AD org
Run the following command: ``` PowerShell
-Get-AzureADUser -Filter "UserType eq 'Member'"
+Get-MgUser -Filter "UserType eq 'Member'"
``` You should see that the users that you restored are listed.
active-directory Lifecycle Workflow Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/lifecycle-workflow-templates.md
The **Real-time employee change** template is designed to configure tasks that a
:::image type="content" source="media/lifecycle-workflow-templates/on-demand-change-template.png" alt-text="Screenshot of a Lifecycle Workflow real time employee change template.":::
-The default specific parameters for the **Real-time employee termination** template are as follows:
+The default specific parameters for the **Real-time employee change** template are as follows:
|Parameter |Description |Customizable | ||||
active-directory Groups View Assignments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/groups-view-assignments.md
Previously updated : 02/04/2022 Last updated : 08/08/2023
Get-AzureADMSGroup -SearchString "Contoso_Helpdesk_Administrators"
### View role assignment to a group ```powershell
-Get-AzureADMSRoleAssignment -Filter "principalId eq '<object id of group>"
+Get-AzureADMSRoleAssignment -Filter "principalId eq '<object id of group>'"
``` ## Microsoft Graph API
active-directory Permissions Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/permissions-reference.md
This article lists the Azure AD built-in roles you can assign to allow managemen
> | [User Administrator](#user-administrator) | Can manage all aspects of users and groups, including resetting passwords for limited admins. | fe930be7-5e62-47db-91af-98c3a49a38b1 | > | [Virtual Visits Administrator](#virtual-visits-administrator) | Manage and share Virtual Visits information and metrics from admin centers or the Virtual Visits app. | e300d9e7-4a2b-4295-9eff-f1c78b36cc98 | > | [Viva Goals Administrator](#viva-goals-administrator) | Manage and configure all aspects of Microsoft Viva Goals. | 92b086b3-e367-4ef2-b869-1de128fb986e |
+> | [Viva Pulse Administrator](#viva-pulse-administrator) | Can manage all settings for Microsoft Viva Pulse app. | 87761b17-1ed2-4af3-9acd-92a150038160 |
> | [Windows 365 Administrator](#windows-365-administrator) | Can provision and manage all aspects of Cloud PCs. | 11451d60-acb2-45eb-a7d6-43d0f0125c13 | > | [Windows Update Deployment Administrator](#windows-update-deployment-administrator) | Can create and manage all aspects of Windows Update deployments through the Windows Update for Business deployment service. | 32696413-001a-46ae-978c-ce0f6b3620d2 | > | [Yammer Administrator](#yammer-administrator) | Manage all aspects of the Yammer service. | 810a2642-a034-447f-a5e8-41beaa378541 |
For more information, see [Roles and permissions in Viva Goals](/viva/goals/role
> | microsoft.office365.supportTickets/allEntities/allTasks | Create and manage Microsoft 365 service requests | > | microsoft.office365.webPortal/allEntities/standard/read | Read basic properties on all resources in the Microsoft 365 admin center |
+## Viva Pulse Administrator
+
+Assign the Viva Pulse Administrator role to users who need to do the following tasks:
+
+- Read and configure all settings of Viva Pulse
+- Read basic properties on all resources in the Microsoft 365 admin center
+- Read and configure Azure Service Health
+- Create and manage Azure support tickets
+- Read messages in Message Center in the Microsoft 365 admin center, excluding security messages
+- Read usage reports in the Microsoft 365 admin center
+
+For more information, see [Assign a Viva Pulse admin in the Microsoft 365 admin center](/viva/pulse/setup-admin-access/assign-a-viva-pulse-admin-in-m365-admin-center).
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | | |
+> | microsoft.office365.messageCenter/messages/read | Read messages in Message Center in the Microsoft 365 admin center, excluding security messages |
+> | microsoft.office365.serviceHealth/allEntities/allTasks | Read and configure Service Health in the Microsoft 365 admin center |
+> | microsoft.office365.supportTickets/allEntities/allTasks | Create and manage Microsoft 365 service requests |
+> | microsoft.office365.usageReports/allEntities/allProperties/read | Read Office 365 usage reports |
+> | microsoft.office365.webPortal/allEntities/standard/read | Read basic properties on all resources in the Microsoft 365 admin center |
+> | microsoft.viva.pulse/allEntities/allProperties/allTasks | Manage all aspects of Microsoft Viva Pulse |
+ ## Windows 365 Administrator Users with this role have global permissions on Windows 365 resources, when the service is present. Additionally, this role contains the ability to manage users and devices in order to associate policy, as well as create and manage groups.
ai-services Image Tags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/containers/image-tags.md
monikerRange: '<=doc-intel-3.1.0'
+# Document Intelligence container tags
+
+<!-- markdownlint-disable MD051 -->
+## Microsoft container registry (MCR)
+Document Intelligence container images can be found within the [**Microsoft Artifact Registry** (also known as Microsoft Container Registry (MCR))](https://mcr.microsoft.com/catalog?search=document%20intelligence), the primary registry for all Microsoft published container images.
-# Document Intelligence container tags
+
+The following containers support Document Intelligence v3.0 models and features:
+
+| Container name | Image |
+|||
+|[**Document Intelligence Studio**](https://mcr.microsoft.com/product/azure-cognitive-services/form-recognizer/studio/tags)| `mcr.microsoft.com/azure-cognitive-services/form-recognizer/studio:latest`|
+| [**Business Card 3.0**](https://mcr.microsoft.com/product/azure-cognitive-services/form-recognizer/businesscard-3.0/tags) | `mcr.microsoft.com/azure-cognitive-services/form-recognizer/businesscard-3.0:latest` |
+| [**Custom Template 3.0**](https://mcr.microsoft.com/product/azure-cognitive-services/form-recognizer/custom-template-3.0/tags) | `mcr.microsoft.com/azure-cognitive-services/form-recognizer/custom-template-3.0:latest` |
+| [**Document 3.0**](https://mcr.microsoft.com/product/azure-cognitive-services/form-recognizer/document-3.0/tags)| `mcr.microsoft.com/azure-cognitive-services/form-recognizer/document-3.0:latest`|
+| [**ID Document 3.0**](https://mcr.microsoft.com/product/azure-cognitive-services/form-recognizer/id-document-3.0/tags) | `mcr.microsoft.com/azure-cognitive-services/form-recognizer/id-document-3.0:latest` |
+| [**Invoice 3.0**](https://mcr.microsoft.com/product/azure-cognitive-services/form-recognizer/invoice-3.0/tags) |`mcr.microsoft.com/azure-cognitive-services/form-recognizer/invoice-3.0:latest`|
+| [**Layout 3.0**](https://mcr.microsoft.com/product/azure-cognitive-services/form-recognizer/layout/tags) |`mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout-3.0:latest`|
+| [**Read 3.0**](https://mcr.microsoft.com/product/azure-cognitive-services/form-recognizer/read-3.0/tags) |`mcr.microsoft.com/azure-cognitive-services/form-recognizer/read-3.0:latest`|
+| [**Receipt 3.0**](https://mcr.microsoft.com/product/azure-cognitive-services/form-recognizer/receipt-3.0/tags) |`mcr.microsoft.com/azure-cognitive-services/form-recognizer/receipt-3.0:latest`|
+
-**This article applies to:** ![Document Intelligence v2.1 checkmark](../media/yes-icon.png) **Document Intelligence v2.1**.
+
+> [!IMPORTANT]
+>
+> Document Intelligence v3.0 containers are now generally available. If you are getting started with containers, consider using the v3 containers.
+The following containers:
## Feature containers
Document Intelligence containers support the following features:
| **Custom API** | mcr.microsoft.com/azure-cognitive-services/form-recognizer/custom-api | | **Custom Supervised** | mcr.microsoft.com/azure-cognitive-services/form-recognizer/custom-supervised |
-## Microsoft container registry (MCR)
-
-Document Intelligence container images can be found within the [**Microsoft Container Registry Catalog**](https://mcr.microsoft.com/v2/_catalog) listing, the primary registry for all Microsoft Published Docker images:
-
- :::image type="content" source="../media/containers/microsoft-container-registry-catalog.png" alt-text="Screenshot of the Microsoft Container Registry (MCR) catalog list.":::
-
-## Document Intelligence tags
-
-The following tags are available for Document Intelligence:
-
-### [Latest version](#tab/current)
-
-Release notes for `v2.1`:
-
-| Container | Tags | Retrieve image |
-||:||
-| **Layout**| &bullet; `latest` </br> &bullet; `2.1-preview`| `docker pull mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout`|
-| **Business Card** | &bullet; `latest` </br> &bullet; `2.1-preview` |`docker pull mcr.microsoft.com/azure-cognitive-services/form-recognizer/businesscard` |
-| **ID Document** | &bullet; `latest` </br> &bullet; `2.1-preview`| `docker pull mcr.microsoft.com/azure-cognitive-services/form-recognizer/id-document`|
-| **Receipt**| &bullet; `latest` </br> &bullet; `2.1-preview`| `docker pull mcr.microsoft.com/azure-cognitive-services/form-recognizer/receipt` |
-| **Invoice**| &bullet; `latest` </br> &bullet; `2.1-preview`|`docker pull mcr.microsoft.com/azure-cognitive-services/form-recognizer/invoice` |
-| **Custom API** | &bullet; `latest` </br> &bullet; `2.1-preview`| `docker pull mcr.microsoft.com/azure-cognitive-services/form-recognizer/custom-api`|
-| **Custom Supervised**| &bullet; `latest` </br> &bullet; `2.1-preview`|`docker pull mcr.microsoft.com/azure-cognitive-services/form-recognizer/custom-supervised` |
-
-### [Previous versions](#tab/previous)
- > [!IMPORTANT]
-> The Document Intelligence v1.0 container has been retired.
+> The Document Intelligence v1.0 container is retired.
## Next steps > [!div class="nextstepaction"] > [Install and run Document Intelligence containers](install-run.md)
->
-
-* [Azure container instance recipe](../../../ai-services/containers/azure-container-instance-recipe.md)
ai-services Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/models.md
The DALL-E models, currently in preview, generate images from text prompts that
> Due to high demand: > > - South Central US is temporarily unavailable for creating new resources and deployments.
-> - In East US and France Central, customers with existing deployments of GPT-4 can create additional deployments of GPT-4 version 0613. For customers new to GPT-4 on Azure OpenAI, please use one of the other available regions.
### GPT-4 models
These models can only be used with the Chat Completion API.
| `gpt-4-32k` <sup>1</sup><sup>3</sup> (0613) | Australia East, Canada East, East US, East US 2, France Central, Japan East, UK South | N/A | 32,768 | September 2021 | <sup>1</sup> The model is [only available by request](https://aka.ms/oai/get-gpt4).<br>
-<sup>2</sup> Version `0314` of gpt-4 and gpt-4-32k will be retired no earlier than July 5, 2024. See [model updates](#model-updates) for model upgrade behavior.<br>
-<sup>3</sup> We are adding regions and rolling out availability to customers gradually to ensure a smooth experience.
+<sup>2</sup> Version `0314` of gpt-4 and gpt-4-32k will be retired no earlier than July 5, 2024. See [model updates](#model-updates) for model upgrade behavior.<br>
+<sup>3</sup> We are rolling out availability of new regions to customers gradually to ensure a smooth experience. In East US and France Central, customers with existing deployments of GPT-4 can create additional deployments of GPT-4 version 0613. For customers new to GPT-4 on Azure OpenAI, please use one of the other available regions.
### GPT-3.5 models
ai-services Use Your Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/use-your-data.md
Previously updated : 07/12/2023 Last updated : 08/08/2023 recommendations: false
Avoid asking long questions and break them down into multiple questions if possi
* If you have documents in multiple languages, we recommend building a new index for each language and connecting them separately to Azure OpenAI.
-### Using the web app
+### Deploying the model
-You can use the available web app to interact with your model using a graphical user interface, which you can deploy using either [Azure OpenAI studio](../use-your-data-quickstart.md?pivots=programming-language-studio#deploy-a-web-app) or a [manual deployment](https://github.com/microsoft/sample-app-aoai-chatGPT).
+After you connect Azure OpenAI to your data, you can deploy it using the **Deploy to** button in Azure OpenAI studio.
++
+#### Using Power Virtual Agents
+
+You can deploy your model to [Power Virtual Agents](/power-virtual-agents/fundamentals-what-is-power-virtual-agents) directly from Azure OpenAI studio, enabling you to bring conversational experiences to Microsoft Teams, websites, Power Platform solutions, Dynamics 365, and other [Azure Bot Service channels](/power-virtual-agents/publication-connect-bot-to-azure-bot-service-channels). Power Virtual Agents acts as a conversational and generative AI platform, making the process of creating, publishing, and deploying a bot to any number of channels simple and accessible.
+
+While Power Virtual Agents has features that leverage Azure OpenAI such as [generative answers](/power-virtual-agents/nlu-boost-conversations), deploying a model grounded on your data lets you create a chatbot that will respond using your data, and connect it to the Power Platform. For more information, see [Use a connection to Azure OpenAI on your data](/power-virtual-agents/nlu-generative-answers-azure-openai).
+
+> [!VIDEO https://www.microsoft.com/videoplayer/embed/RW18YwQ]
++
+#### Using the web app
+
+You can also use the available standalone web app to interact with your model using a graphical user interface, which you can deploy using either Azure OpenAI studio or a [manual deployment](https://github.com/microsoft/sample-app-aoai-chatGPT).
![A screenshot of the web app interface.](../media/use-your-data/web-app.png)
When customizing the app, we recommend:
- Pulling changes from the `main` branch for the web app's source code frequently to ensure you have the latest bug fixes and improvements.
+#### Important considerations
+
+- Publishing creates an Azure App Service in your subscription. It may incur costs depending on the
+[pricing plan](https://azure.microsoft.com/pricing/details/app-service/windows/) you select. When you're done with your app, you can delete it from the Azure portal.
+- You can [customize](../concepts/use-your-data.md#using-the-web-app) the frontend and backend logic of the web app.
+- By default, the app will only be accessible to you. To add authentication (for example, restrict access to the app to members of your Azure tenant):
+
+ 1. Go to the [Azure portal](https://portal.azure.com/#home) and search for the app name you specified during publishing. Select the web app, and go to the **Authentication** tab on the left navigation menu. Then select **Add an identity provider**.
+
+ :::image type="content" source="../media/quickstarts/web-app-authentication.png" alt-text="Screenshot of the authentication page in the Azure portal." lightbox="../media/quickstarts/web-app-authentication.png":::
+
+ 1. Select Microsoft as the identity provider. The default settings on this page will restrict the app to your tenant only, so you don't need to change anything else here. Then select **Add**
+
 Now users will be asked to sign in with their Azure Active Directory account to be able to access your app. You can follow a similar process to add another identity provider if you prefer. The app doesn't use the user's login information in any way other than verifying they're a member of your tenant.
++ ### Using the API Consider setting the following parameters even if they are optional for using the API.
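To make the API option more concrete, here's a hedged Python sketch of calling the chat completions extensions endpoint with an Azure Cognitive Search data source. The endpoint path, `api-version`, and parameter names reflect the preview API of the time and should be treated as assumptions; the resource, deployment, key, and index values are placeholders.

```python
import os
import requests

# Placeholder configuration; replace with your own resource values.
AOAI_ENDPOINT = "https://<your-resource>.openai.azure.com"
DEPLOYMENT = "<your-deployment-name>"
API_VERSION = "2023-06-01-preview"  # assumed preview version for the extensions API

def chat_with_your_data(question: str) -> str:
    """Send a question to the on-your-data extensions endpoint (sketch)."""
    url = (
        f"{AOAI_ENDPOINT}/openai/deployments/{DEPLOYMENT}"
        f"/extensions/chat/completions?api-version={API_VERSION}"
    )
    body = {
        "temperature": 0,      # keep answers grounded in the retrieved content
        "max_tokens": 800,
        "messages": [{"role": "user", "content": question}],
        "dataSources": [
            {
                "type": "AzureCognitiveSearch",
                "parameters": {
                    "endpoint": "https://<your-search-resource>.search.windows.net",
                    "key": os.environ["SEARCH_ADMIN_KEY"],
                    "indexName": "<your-index-name>",
                },
            }
        ],
    }
    headers = {"api-key": os.environ["AOAI_API_KEY"], "Content-Type": "application/json"}
    response = requests.post(url, headers=headers, json=body, timeout=60)
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]
```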
ai-services Use Your Data Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/use-your-data-quickstart.md
Previously updated : 07/12/2023 Last updated : 08/08/2023 recommendations: false zone_pivot_groups: openai-use-your-data
ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/whats-new.md
Previously updated : 07/27/2023 Last updated : 08/08/2023 recommendations: false keywords: # What's new in Azure OpenAI Service
+## August 2023
+
+- You can now deploy Azure OpenAI on your data to [Power Virtual Agents](/azure/ai-services/openai/concepts/use-your-data#deploying-the-model).
+ ## July 2023 ### Support for function calling
ai-services Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/policy-reference.md
Title: Built-in policy definitions for Azure AI services description: Lists Azure Policy built-in policy definitions for Azure AI services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/03/2023 Last updated : 08/08/2023
aks App Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/app-routing.md
Title: Use the application routing add-on with Azure Kubernetes Service (AKS) clusters (preview)
+ Title: Azure Kubernetes Service (AKS) ingress with the application routing add-on (preview)
description: Use the application routing add-on to securely access applications deployed on Azure Kubernetes Service (AKS). -+ Previously updated : 05/04/2023- Last updated : 08/07/2023+
-# Use the application routing add-on with Azure Kubernetes Service (AKS) clusters (preview)
+# Azure Kubernetes Service (AKS) ingress with the application routing add-on (preview)
The application routing add-on configures an [ingress controller](https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/) in your Azure Kubernetes Service (AKS) cluster with SSL termination through certificates stored in Azure Key Vault. It can optionally integrate with Open Service Mesh (OSM) for end-to-end encryption of inter-cluster communication using mutual TLS (mTLS). When you deploy ingresses, the add-on creates publicly accessible DNS names for endpoints on an Azure DNS zone.
aks Concepts Clusters Workloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-clusters-workloads.md
Like StatefulSets, a DaemonSet is defined as part of a YAML definition using `ki
For more information, see [Kubernetes DaemonSets][kubernetes-daemonset]. > [!NOTE]
-> If using the [Virtual Nodes add-on](virtual-nodes-cli.md#enable-virtual-nodes-addon), DaemonSets will not create pods on the virtual node.
+> If using the [Virtual Nodes add-on](virtual-nodes-cli.md#enable-the-virtual-nodes-addon), DaemonSets will not create pods on the virtual node.
## Namespaces
aks Ingress Basic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/ingress-basic.md
Title: Create an ingress controller in Azure Kubernetes Service (AKS)
+ Title: Create an unmanaged ingress controller
+ description: Learn how to create and configure an ingress controller in an Azure Kubernetes Service (AKS) cluster. Previously updated : 02/23/2023 Last updated : 08/07/2023
-# Create an ingress controller in Azure Kubernetes Service (AKS)
+# Create an unmanaged ingress controller
An ingress controller is a piece of software that provides reverse proxy, configurable traffic routing, and TLS termination for Kubernetes services. Kubernetes ingress resources are used to configure the ingress rules and routes for individual Kubernetes services. When you use an ingress controller and ingress rules, a single IP address can be used to route traffic to multiple services in a Kubernetes cluster.
aks Istio Deploy Ingress https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/istio-deploy-ingress.md
Title: Deploy external or internal ingresses for Istio service mesh add-on for Azure Kubernetes Service (preview)
+ Title: Azure Kubernetes Service (AKS) external or internal ingresses for Istio service mesh add-on (preview)
description: Deploy external or internal ingresses for Istio service mesh add-on for Azure Kubernetes Service (preview)-- Previously updated : 04/09/2023-++++ Last updated : 08/07/2023+
-# Deploy external or internal ingresses for Istio service mesh add-on for Azure Kubernetes Service (preview)
+# Azure Kubernetes Service (AKS) external or internal ingresses for Istio service mesh add-on deployment (preview)
This article shows you how to deploy external or internal ingresses for Istio service mesh add-on for Azure Kubernetes Service (AKS) cluster.
aks Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/policy-reference.md
Title: Built-in policy definitions for Azure Kubernetes Service description: Lists Azure Policy built-in policy definitions for Azure Kubernetes Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/03/2023 Last updated : 08/08/2023
aks Virtual Nodes Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/virtual-nodes-cli.md
Title: Create virtual nodes using Azure CLI
+ Title: Create virtual nodes in Azure Kubernetes Service (AKS) using Azure CLI
-description: Learn how to use the Azure CLI to create an Azure Kubernetes Services (AKS) cluster that uses virtual nodes to run pods.
+description: Learn how to use Azure CLI to create an Azure Kubernetes Services (AKS) cluster that uses virtual nodes to run pods.
Previously updated : 06/25/2022+ Last updated : 08/28/2023
-# Create and configure an Azure Kubernetes Services (AKS) cluster to use virtual nodes using the Azure CLI
+# Create and configure an Azure Kubernetes Services (AKS) cluster to use virtual nodes using Azure CLI
-This article shows you how to use the Azure CLI to create and configure the virtual network resources and AKS cluster, then enable virtual nodes.
+Virtual nodes enable network communication between pods that run in Azure Container Instances (ACI) and AKS clusters. To provide this communication, you create a virtual network subnet and assign delegated permissions. Virtual nodes only work with AKS clusters created using *advanced* networking (Azure CNI). By default, AKS clusters are created with *basic* networking (kubenet).
+This article shows you how to use the Azure CLI to create and configure virtual network resources and an AKS cluster enabled with virtual nodes.
## Before you begin
-Virtual nodes enable network communication between pods that run in Azure Container Instances (ACI) and the AKS cluster. To provide this communication, a virtual network subnet is created and delegated permissions are assigned. Virtual nodes only work with AKS clusters created using *advanced* networking (Azure CNI). By default, AKS clusters are created with *basic* networking (kubenet). This article shows you how to create a virtual network and subnets, then deploy an AKS cluster that uses advanced networking.
- > [!IMPORTANT] > Before using virtual nodes with AKS, review both the [limitations of AKS virtual nodes][virtual-nodes-aks] and the [virtual networking limitations of ACI][virtual-nodes-networking-aci]. These limitations affect the location, networking configuration, and other configuration details of both your AKS cluster and the virtual nodes.
-If you have not previously used ACI, register the service provider with your subscription. You can check the status of the ACI provider registration using the [az provider list][az-provider-list] command, as shown in the following example:
+* You need the ACI service provider registered with your subscription. You can check the status of the ACI provider registration using the [`az provider list`][az-provider-list] command.
-```azurecli-interactive
-az provider list --query "[?contains(namespace,'Microsoft.ContainerInstance')]" -o table
-```
+ ```azurecli-interactive
+ az provider list --query "[?contains(namespace,'Microsoft.ContainerInstance')]" -o table
+ ```
-The *Microsoft.ContainerInstance* provider should report as *Registered*, as shown in the following example output:
+ The *Microsoft.ContainerInstance* provider should report as *Registered*, as shown in the following example output:
-```output
-Namespace RegistrationState RegistrationPolicy
- - --
-Microsoft.ContainerInstance Registered RegistrationRequired
-```
+ ```output
+ Namespace RegistrationState RegistrationPolicy
+ - --
+ Microsoft.ContainerInstance Registered RegistrationRequired
+ ```
-If the provider shows as *NotRegistered*, register the provider using the [az provider register][az-provider-register] as shown in the following example:
+ If the provider shows as *NotRegistered*, register the provider using the [`az provider register`][az-provider-register].
-```azurecli-interactive
-az provider register --namespace Microsoft.ContainerInstance
-```
+ ```azurecli-interactive
+ az provider register --namespace Microsoft.ContainerInstance
+ ```
-## Launch Azure Cloud Shell
+* If using Azure CLI, this article requires Azure CLI version 2.0.49 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI]( /cli/azure/install-azure-cli). You can also use [Azure Cloud Shell](#launch-azure-cloud-shell).
-The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account.
+### Launch Azure Cloud Shell
-To open the Cloud Shell, select **Try it** from the upper right corner of a code block. You can also launch Cloud Shell in a separate browser tab by going to [https://shell.azure.com/bash](https://shell.azure.com/bash). Select **Copy** to copy the blocks of code, paste it into the Cloud Shell, and press enter to run it.
+The Azure Cloud Shell is a free interactive shell you can use to run the steps in this article. It has common Azure tools preinstalled and configured.
-If you prefer to install and use the CLI locally, this article requires Azure CLI version 2.0.49 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI]( /cli/azure/install-azure-cli).
+To open the Cloud Shell, select **Try it** from the upper right corner of a code block. You can also launch Cloud Shell in a separate browser tab by going to [https://shell.azure.com/bash](https://shell.azure.com/bash). Select **Copy** to copy the blocks of code, paste it into the Cloud Shell, and press enter to run it.
## Create a resource group
-An Azure resource group is a logical group in which Azure resources are deployed and managed. Create a resource group with the [az group create][az-group-create] command. The following example creates a resource group named *myResourceGroup* in the *westus* location.
+An Azure resource group is a logical group in which Azure resources are deployed and managed.
+
+* Create a resource group using the [`az group create`][az-group-create] command.
-```azurecli-interactive
-az group create --name myResourceGroup --location westus
-```
+ ```azurecli-interactive
+ az group create --name myResourceGroup --location eastus
+ ```
## Create a virtual network > [!IMPORTANT]
-> Virtual node requires a custom virtual network and associated subnet. It can't be associated with the same virtual network the AKS cluster is deployed to.
+> Virtual node requires a custom virtual network and associated subnet. It can't be associated with the same virtual network as the AKS cluster.
-Create a virtual network using the [az network vnet create][az-network-vnet-create] command. The following example creates a virtual network name *myVnet* with an address prefix of *10.0.0.0/8*, and a subnet named *myAKSSubnet*. The address prefix of this subnet defaults to *10.240.0.0/16*:
+1. Create a virtual network using the [`az network vnet create`][az-network-vnet-create] command. The following example creates a virtual network named *myVnet* with an address prefix of *10.0.0.0/8* and a subnet named *myAKSSubnet*. The address prefix of this subnet defaults to *10.240.0.0/16*.
-```azurecli-interactive
-az network vnet create \
- --resource-group myResourceGroup \
- --name myVnet \
- --address-prefixes 10.0.0.0/8 \
- --subnet-name myAKSSubnet \
- --subnet-prefix 10.240.0.0/16
-```
+ ```azurecli-interactive
+ az network vnet create \
+ --resource-group myResourceGroup \
+ --name myVnet \
+ --address-prefixes 10.0.0.0/8 \
+ --subnet-name myAKSSubnet \
+ --subnet-prefix 10.240.0.0/16
+ ```
-Now create an additional subnet for virtual nodes using the [az network vnet subnet create][az-network-vnet-subnet-create] command. The following example creates a subnet named *myVirtualNodeSubnet* with the address prefix of *10.241.0.0/16*.
+2. Create an extra subnet for the virtual nodes using the [`az network vnet subnet create`][az-network-vnet-subnet-create] command. The following example creates a subnet named *myVirtualNodeSubnet* with an address prefix of *10.241.0.0/16*.
-```azurecli-interactive
-az network vnet subnet create \
- --resource-group myResourceGroup \
- --vnet-name myVnet \
- --name myVirtualNodeSubnet \
- --address-prefixes 10.241.0.0/16
-```
+ ```azurecli-interactive
+ az network vnet subnet create \
+ --resource-group myResourceGroup \
+ --vnet-name myVnet \
+ --name myVirtualNodeSubnet \
+ --address-prefixes 10.241.0.0/16
+ ```
## Create an AKS cluster with managed identity
-Instead of using a system-assigned identity, you can also use a user-assigned identity. For more information, see [Use managed identities](use-managed-identity.md).
+1. Get the subnet ID using the [`az network vnet subnet show`][az-network-vnet-subnet-show] command.
-You deploy an AKS cluster into the AKS subnet created in a previous step. Get the ID of this subnet using [az network vnet subnet show][az-network-vnet-subnet-show]:
+ ```azurecli-interactive
+ az network vnet subnet show --resource-group myResourceGroup --vnet-name myVnet --name myAKSSubnet --query id -o tsv
+ ```
-```azurecli-interactive
-az network vnet subnet show --resource-group myResourceGroup --vnet-name myVnet --name myAKSSubnet --query id -o tsv
-```
+2. Create an AKS cluster using the [`az aks create`][az-aks-create] command and replace `<subnetId>` with the ID obtained in the previous step. The following example creates a cluster named *myAKSCluster* with five nodes.
-Use the [az aks create][az-aks-create] command to create an AKS cluster. The following example creates a cluster named *myAKSCluster* with one node. Replace `<subnetId>` with the ID obtained in the previous step.
+ ```azurecli-interactive
+ az aks create \
+ --resource-group myResourceGroup \
+ --name myAKSCluster \
+ --node-count 5 \
+ --network-plugin azure \
+ --vnet-subnet-id <subnetId>
+ ```
-```azurecli-interactive
-az aks create \
- --resource-group myResourceGroup \
- --name myAKSCluster \
- --node-count 1 \
- --network-plugin azure \
- --vnet-subnet-id <subnetId> \
-```
+ After several minutes, the command completes and returns JSON-formatted information about the cluster.
-After several minutes, the command completes and returns JSON-formatted information about the cluster.
+For more information on managed identities, see [Use managed identities](use-managed-identity.md).
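+If you prefer a user-assigned identity over the default system-assigned one, a minimal sketch is shown below. It assumes you've already created a user-assigned identity and substitutes its resource ID for `<identityResourceId>`; the other values match the earlier example.
+
+```azurecli-interactive
+az aks create \
+  --resource-group myResourceGroup \
+  --name myAKSCluster \
+  --node-count 5 \
+  --network-plugin azure \
+  --vnet-subnet-id <subnetId> \
+  --enable-managed-identity \
+  --assign-identity <identityResourceId>
+```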
-## Enable virtual nodes addon
+## Enable the virtual nodes addon
-To enable virtual nodes, now use the [az aks enable-addons][az-aks-enable-addons] command. The following example uses the subnet named *myVirtualNodeSubnet* created in a previous step:
+* Enable virtual nodes using the [`az aks enable-addons`][az-aks-enable-addons] command. The following example uses the subnet named *myVirtualNodeSubnet* created in a previous step.
-```azurecli-interactive
-az aks enable-addons \
- --resource-group myResourceGroup \
- --name myAKSCluster \
- --addons virtual-node \
- --subnet-name myVirtualNodeSubnet
-```
+ ```azurecli-interactive
+ az aks enable-addons \
+ --resource-group myResourceGroup \
+ --name myAKSCluster \
+ --addons virtual-node \
+ --subnet-name myVirtualNodeSubnet
+ ```
## Connect to the cluster
-To configure `kubectl` to connect to your Kubernetes cluster, use the [az aks get-credentials][az-aks-get-credentials] command. This step downloads credentials and configures the Kubernetes CLI to use them.
+1. Configure `kubectl` to connect to your Kubernetes cluster using the [`az aks get-credentials`][az-aks-get-credentials] command. This step downloads credentials and configures the Kubernetes CLI to use them.
-```azurecli-interactive
-az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
-```
+ ```azurecli-interactive
+ az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
+ ```
-To verify the connection to your cluster, use the [kubectl get][kubectl-get] command to return a list of the cluster nodes.
+2. Verify the connection to your cluster using the [`kubectl get`][kubectl-get] command, which returns a list of the cluster nodes.
-```console
-kubectl get nodes
-```
+ ```console
+ kubectl get nodes
+ ```
-The following example output shows the single VM node created and then the virtual node for Linux, *virtual-node-aci-linux*:
+ The following example output shows the single VM node created and the virtual node for Linux, *virtual-node-aci-linux*:
-```output
-NAME STATUS ROLES AGE VERSION
-virtual-node-aci-linux Ready agent 28m v1.11.2
-aks-agentpool-14693408-0 Ready agent 32m v1.11.2
-```
+ ```output
+ NAME STATUS ROLES AGE VERSION
+ virtual-node-aci-linux Ready agent 28m v1.11.2
+ aks-agentpool-14693408-0 Ready agent 32m v1.11.2
+ ```
## Deploy a sample app
-Create a file named `virtual-node.yaml` and copy in the following YAML. To schedule the container on the node, a [nodeSelector][node-selector] and [toleration][toleration] are defined.
-
-```yaml
-apiVersion: apps/v1
-kind: Deployment
-metadata:
- name: aci-helloworld
-spec:
- replicas: 1
- selector:
- matchLabels:
- app: aci-helloworld
- template:
+1. Create a file named `virtual-node.yaml` and copy in the following YAML. The YAML schedules the container on the node by defining a [nodeSelector][node-selector] and [toleration][toleration].
+
+ ```yaml
+ apiVersion: apps/v1
+ kind: Deployment
metadata:
- labels:
- app: aci-helloworld
+ name: aci-helloworld
spec:
- containers:
- - name: aci-helloworld
- image: mcr.microsoft.com/azuredocs/aci-helloworld
- ports:
- - containerPort: 80
- nodeSelector:
- kubernetes.io/role: agent
- kubernetes.io/os: linux
- type: virtual-kubelet
- tolerations:
- - key: virtual-kubelet.io/provider
- operator: Exists
- - key: azure.com/aci
- effect: NoSchedule
-```
-
-Run the application with the [kubectl apply][kubectl-apply] command.
-
-```console
-kubectl apply -f virtual-node.yaml
-```
-
-Use the [kubectl get pods][kubectl-get] command with the `-o wide` argument to output a list of pods and the scheduled node. Notice that the `aci-helloworld` pod has been scheduled on the `virtual-node-aci-linux` node.
-
-```console
-kubectl get pods -o wide
-```
-
-```output
-NAME READY STATUS RESTARTS AGE IP NODE
-aci-helloworld-9b55975f-bnmfl 1/1 Running 0 4m 10.241.0.4 virtual-node-aci-linux
-```
-
-The pod is assigned an internal IP address from the Azure virtual network subnet delegated for use with virtual nodes.
+ replicas: 1
+ selector:
+ matchLabels:
+ app: aci-helloworld
+ template:
+ metadata:
+ labels:
+ app: aci-helloworld
+ spec:
+ containers:
+ - name: aci-helloworld
+ image: mcr.microsoft.com/azuredocs/aci-helloworld
+ ports:
+ - containerPort: 80
+ nodeSelector:
+ kubernetes.io/role: agent
+ beta.kubernetes.io/os: linux
+ type: virtual-kubelet
+ tolerations:
+ - key: virtual-kubelet.io/provider
+ operator: Exists
+ - key: azure.com/aci
+ effect: NoSchedule
+ ```
+
+2. Run the application using the [`kubectl apply`][kubectl-apply] command.
+
+ ```console
+ kubectl apply -f virtual-node.yaml
+ ```
+
+3. Get a list of pods and the scheduled node using the [`kubectl get pods`][kubectl-get] command with the `-o wide` argument.
+
+ ```console
+ kubectl get pods -o wide
+ ```
+
+ The pod is scheduled on the virtual node *virtual-node-aci-linux*, as shown in the following example output:
+
+ ```output
+ NAME READY STATUS RESTARTS AGE IP NODE
+ aci-helloworld-9b55975f-bnmfl 1/1 Running 0 4m 10.241.0.4 virtual-node-aci-linux
+ ```
+
+ The pod is assigned an internal IP address from the Azure virtual network subnet delegated for use with virtual nodes.
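+ If you want to confirm the delegation to Azure Container Instances that the virtual nodes add-on requires on the subnet, one option (assuming the resource names used earlier in this article) is to query the subnet:
+
+ ```azurecli-interactive
+ az network vnet subnet show \
+   --resource-group myResourceGroup \
+   --vnet-name myVnet \
+   --name myVirtualNodeSubnet \
+   --query "delegations[].serviceName" -o tsv
+ ```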
> [!NOTE]
-> If you use images stored in Azure Container Registry, [configure and use a Kubernetes secret][acr-aks-secrets]. A current limitation of virtual nodes is that you can't use integrated Azure AD service principal authentication. If you don't use a secret, pods scheduled on virtual nodes fail to start and report the error `HTTP response status code 400 error code "InaccessibleImage"`.
+> If you use images stored in Azure Container Registry, [configure and use a Kubernetes secret][acr-aks-secrets]. A current limitation of virtual nodes is you can't use integrated Azure AD service principal authentication. If you don't use a secret, pods scheduled on virtual nodes fail to start and report the error `HTTP response status code 400 error code "InaccessibleImage"`.
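+A minimal sketch of that approach, assuming a registry named *myregistry* and a service principal you've created for registry access (the secret name `acr-secret` is a placeholder):
+
+```console
+kubectl create secret docker-registry acr-secret \
+  --docker-server=myregistry.azurecr.io \
+  --docker-username=<service-principal-app-id> \
+  --docker-password=<service-principal-password>
+```
+
+Reference the secret from your deployment's pod spec through `imagePullSecrets` so pods scheduled on the virtual node can pull the image.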
## Test the virtual node pod
-To test the pod running on the virtual node, browse to the demo application with a web client. As the pod is assigned an internal IP address, you can quickly test this connectivity from another pod on the AKS cluster. Create a test pod and attach a terminal session to it:
+1. To test the pod running on the virtual node, browse to the demo application with a web client. Because the pod is assigned an internal IP address, you can quickly test this connectivity from another pod on the AKS cluster.
+2. Create a test pod and attach a terminal session to it using the following `kubectl run -it` command.
-```console
-kubectl run -it --rm testvk --image=mcr.microsoft.com/dotnet/runtime-deps:6.0
-```
+ ```console
+ kubectl run -it --rm testvk --image=mcr.microsoft.com/dotnet/runtime-deps:6.0
+ ```
-Install `curl` in the pod using `apt-get`:
+3. Install `curl` in the pod using `apt-get`.
-```console
-apt-get update && apt-get install -y curl
-```
+ ```console
+ apt-get update && apt-get install -y curl
+ ```
-Now access the address of your pod using `curl`, such as *http://10.241.0.4*. Provide your own internal IP address shown in the previous `kubectl get pods` command:
+4. Access the address of your pod using `curl`, such as *http://10.241.0.4*. Provide your own internal IP address shown in the previous `kubectl get pods` command.
-```console
-curl -L http://10.241.0.4
-```
+ ```console
+ curl -L http://10.241.0.4
+ ```
-The demo application is displayed, as shown in the following condensed example output:
+ The demo application is displayed, as shown in the following condensed example output:
-```output
-<html>
-<head>
- <title>Welcome to Azure Container Instances!</title>
-</head>
-[...]
-```
+ ```output
+ <html>
+ <head>
+ <title>Welcome to Azure Container Instances!</title>
+ </head>
+ [...]
+ ```
-Close the terminal session to your test pod with `exit`. When your session is ended, the pod is the deleted.
+5. Close the terminal session to your test pod with `exit`. When your session ends, the pod is deleted.
## Remove virtual nodes
-If you no longer wish to use virtual nodes, you can disable them using the [az aks disable-addons][az aks disable-addons] command.
-
-If necessary, go to [https://shell.azure.com](https://shell.azure.com) to open Azure Cloud Shell in your browser.
-
-First, delete the `aci-helloworld` pod running on the virtual node:
+1. Delete the `aci-helloworld` pod running on the virtual node using the `kubectl delete` command.
-```console
-kubectl delete -f virtual-node.yaml
-```
+ ```console
+ kubectl delete -f virtual-node.yaml
+ ```
-The following example command disables the Linux virtual nodes:
+2. Disable the virtual nodes using the [`az aks disable-addons`][az aks disable-addons] command.
-```azurecli-interactive
-az aks disable-addons --resource-group myResourceGroup --name myAKSCluster --addons virtual-node
-```
+ ```azurecli-interactive
+ az aks disable-addons --resource-group myResourceGroup --name myAKSCluster --addons virtual-node
+ ```
-Now, remove the virtual network resources and resource group:
+3. Remove the virtual network resources and resource group using the following commands.
-```azurecli-interactive
-# Change the name of your resource group, cluster and network resources as needed
-RES_GROUP=myResourceGroup
-AKS_CLUSTER=myAKScluster
-AKS_VNET=myVnet
-AKS_SUBNET=myVirtualNodeSubnet
+ ```azurecli-interactive
+ # Change the name of your resource group, cluster and network resources as needed
+ RES_GROUP=myResourceGroup
+ AKS_CLUSTER=myAKScluster
+ AKS_VNET=myVnet
+ AKS_SUBNET=myVirtualNodeSubnet
-# Get AKS node resource group
-NODE_RES_GROUP=$(az aks show --resource-group $RES_GROUP --name $AKS_CLUSTER --query nodeResourceGroup --output tsv)
+ # Get AKS node resource group
+ NODE_RES_GROUP=$(az aks show --resource-group $RES_GROUP --name $AKS_CLUSTER --query nodeResourceGroup --output tsv)
-# Get network profile ID
-NETWORK_PROFILE_ID=$(az network profile list --resource-group $NODE_RES_GROUP --query "[0].id" --output tsv)
+ # Get network profile ID
+ NETWORK_PROFILE_ID=$(az network profile list --resource-group $NODE_RES_GROUP --query "[0].id" --output tsv)
-# Delete the network profile
-az network profile delete --id $NETWORK_PROFILE_ID -y
+ # Delete the network profile
+ az network profile delete --id $NETWORK_PROFILE_ID -y
-# Grab the service association link ID
-SAL_ID=$(az network vnet subnet show --resource-group $RES_GROUP --vnet-name $AKS_VNET --name $AKS_SUBNET --query id --output tsv)/providers/Microsoft.ContainerInstance/serviceAssociationLinks/default
+ # Grab the service association link ID
+ SAL_ID=$(az network vnet subnet show --resource-group $RES_GROUP --vnet-name $AKS_VNET --name $AKS_SUBNET --query id --output tsv)/providers/Microsoft.ContainerInstance/serviceAssociationLinks/default
-# Delete the service association link for the subnet
-az resource delete --ids $SAL_ID --api-version 2021-10-01
+ # Delete the service association link for the subnet
+ az resource delete --ids $SAL_ID --api-version 2021-10-01
-# Delete the subnet delegation to Azure Container Instances
-az network vnet subnet update --resource-group $RES_GROUP --vnet-name $AKS_VNET --name $AKS_SUBNET --remove delegations
-```
+ # Delete the subnet delegation to Azure Container Instances
+ az network vnet subnet update --resource-group $RES_GROUP --vnet-name $AKS_VNET --name $AKS_SUBNET --remove delegations
+ ```
## Next steps
-In this article, a pod was scheduled on the virtual node and assigned a private, internal IP address. You could instead create a service deployment and route traffic to your pod through a load balancer or ingress controller. For more information, see [Create a basic ingress controller in AKS][aks-basic-ingress].
+In this article, you scheduled a pod on the virtual node and assigned a private internal IP address. You could instead create a service deployment and route traffic to your pod through a load balancer or ingress controller. For more information, see [Create a basic ingress controller in AKS][aks-basic-ingress].
Virtual nodes are often one component of a scaling solution in AKS. For more information on scaling solutions, see the following articles: -- [Use the Kubernetes horizontal pod autoscaler][aks-hpa]-- [Use the Kubernetes cluster autoscaler][aks-cluster-autoscaler]-- [Check out the Autoscale sample for Virtual Nodes][virtual-node-autoscale]-- [Read more about the Virtual Kubelet open source library][virtual-kubelet-repo]
+* [Use the Kubernetes horizontal pod autoscaler][aks-hpa]
+* [Use the Kubernetes cluster autoscaler][aks-cluster-autoscaler]
+* [Check out the Autoscale sample for Virtual Nodes][virtual-node-autoscale]
+* [Read more about the Virtual Kubelet open source library][virtual-kubelet-repo]
<!-- LINKS - external --> [kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get [kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply [node-selector]:https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ [toleration]: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
-[aks-github]: https://github.com/azure/aks/issues
[virtual-node-autoscale]: https://github.com/Azure-Samples/virtual-node-autoscale [virtual-kubelet-repo]: https://github.com/virtual-kubelet/virtual-kubelet [acr-aks-secrets]: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/ <!-- LINKS - internal -->
-[azure-cli-install]: /cli/azure/install-azure-cli
[az-group-create]: /cli/azure/group#az_group_create [az-network-vnet-create]: /cli/azure/network/vnet#az_network_vnet_create [az-network-vnet-subnet-create]: /cli/azure/network/vnet/subnet#az_network_vnet_subnet_create
-[az-ad-sp-create-for-rbac]: /cli/azure/ad/sp#az_ad_sp_create_for_rbac
-[az-network-vnet-show]: /cli/azure/network/vnet#az_network_vnet_show
-[az-role-assignment-create]: /cli/azure/role/assignment#az_role_assignment_create
[az-network-vnet-subnet-show]: /cli/azure/network/vnet/subnet#az_network_vnet_subnet_show [az-aks-create]: /cli/azure/aks#az_aks_create [az-aks-enable-addons]: /cli/azure/aks#az_aks_enable_addons
-[az-extension-add]: /cli/azure/extension#az_extension_add
[az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials [az aks disable-addons]: /cli/azure/aks#az_aks_disable_addons [aks-hpa]: tutorial-kubernetes-scale.md
aks Windows Aks Partner Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/windows-aks-partner-solutions.md
+
+ Title: Windows AKS Partner Solutions
+
+description: Find partner-tested solutions that enable you to build, test, deploy, manage and monitor your Windows-based apps on Windows containers on AKS.
+ Last updated : 08/04/2023++
+# Windows AKS Partner Solutions
+
+Microsoft has collaborated with partners to ensure that you can build, test, deploy, configure, and monitor your applications optimally with Windows containers on AKS.
+
+Our third-party partners featured below have published introductory guides to help you start using their solutions with your applications running on Windows containers on AKS.
+
+| Solutions | Partners |
+|--|--|
+| DevOps | [GitLab](#gitlab) <br> [CircleCI](#circleci) |
+| Networking | [NGINX](#f5-nginx) <br> [Calico](#calico) |
+| Observability | [Datadog](#datadog) <br> [New Relic](#new-relic) |
+| Security | [Prisma](#prisma) |
+| Storage | [NetApp](#netapp) |
+| Config Management | [Chef](#chef) |
++
+## DevOps
+
+DevOps streamlines the delivery process, improves collaboration across teams, and enhances software quality, ensuring swift, reliable, and continuous deployment of your Windows-based applications.
+
+### GitLab
+
+The GitLab DevSecOps Platform supports the Microsoft development ecosystem with performance, accessibility testing, SAST, DAST and Fuzzing security scanning, dependency scanning, SBOM, license management and more.
+
+As an extensible platform, GitLab also allows you to plug in your own tooling for any stage. GitLab's integration with Azure Kubernetes Service (AKS) enables full DevSecOps workflows for Windows and Linux container workloads using either Push CD or GitOps Pull CD with flux manifests. Using Cloud Native Buildpacks, GitLab Auto DevOps can build, test, and autodeploy OSS .NET projects.
+
+To learn more, see our [joint blog](https://techcommunity.microsoft.com/t5/containers/using-gitlab-to-build-and-deploy-windows-containers-on-azure/ba-p/3889929).
+
+### CircleCI
+
+CircleCI's integration with Azure Kubernetes Service (AKS) allows you to automate, build, validate, and ship containerized Windows applications, ensuring faster and more reliable software deployment. You can easily integrate your pipeline with AKS using CircleCI orbs, which are prepackaged snippets of YAML configuration.
+
+Follow this [tutorial](https://techcommunity.microsoft.com/t5/containers/continuous-deployment-of-windows-containers-with-circleci-and/ba-p/3841220) to learn how to set up a CI/CD pipeline to build a Dockerized ASP.NET application and deploy it to an AKS cluster.
+
+## Networking
+
+Ensure efficient traffic management, enhanced security, and optimal network performance with these solutions to achieve smooth application connectivity and communication.
+
+### F5 NGINX
+
+NGINX Ingress Controller deployed in AKS, on-premises, and in the cloud implements unified Kubernetes-native API gateways, load balancers, and Ingress controllers to reduce complexity, increase uptime, and provide in-depth insights into app health and performance for containerized Windows workloads.
+
+Running at the edge of a Kubernetes cluster, NGINX Ingress Controller ensures holistic app security with user and service identities, authorization, access control, encrypted communications, and additional NGINX App Protect modules for Layer 7 WAF and DoS app protection.
+
+Learn how to manage connectivity to your Windows applications running on Windows nodes in a mixed-node AKS cluster with NGINX Ingress controller in this [blog](https://techcommunity.microsoft.com/t5/containers/improving-customer-experiences-with-f5-nginx-and-windows-on/ba-p/3820344).
+
+### Calico
+
+Tigera provides an active security platform with full-stack observability for containerized workloads and Microsoft AKS as a fully managed SaaS (Calico Cloud) or a self-managed service (Calico Enterprise). The platform prevents, detects, troubleshoots, and automatically mitigates exposure risks of security breaches for workloads in Microsoft AKS.
+
+Its open-source offering, Calico Open Source, is the most widely adopted container networking and security solution. It specifies security and observability as code to ensure consistent enforcement of security policies, which enables DevOps, platform, and security teams to protect workloads, detect threats, achieve continuous compliance, and troubleshoot service issues in real-time.
+
+To learn more, see [this blog post](https://techcommunity.microsoft.com/t5/containers/securing-windows-workloads-on-azure-kubernetes-service-with/ba-p/3815429).
+
+## Observability
+
+Observability provides deep insights into your systems, enabling rapid issue detection and resolution to enhance your application's reliability and performance.
+
+### Datadog
+
+Datadog is the essential monitoring and security platform for cloud applications. We bring together end-to-end traces, metrics, and logs to make your applications, infrastructure, and third-party services entirely observable. Partner with Datadog for Windows on AKS environments to streamline monitoring, proactively resolve issues, and optimize application performance and availability.
+
+Get started by following the recommendations in our [joint blog](https://techcommunity.microsoft.com/t5/containers/gain-full-observability-into-windows-containers-on-azure/ba-p/3853603).
+
+### New Relic
+
+New Relic's Azure Kubernetes integration is a powerful solution that seamlessly connects New Relic's monitoring and observability capabilities with Azure Kubernetes Service (AKS). By deploying the New Relic Kubernetes integration, users gain deep insights into their AKS clusters' performance, health, and resource utilization. This integration allows users to efficiently manage and troubleshoot containerized applications, optimize resource allocation, and proactively identify and resolve issues in their AKS environments. With New Relic's comprehensive monitoring and analysis tools, businesses can ensure the smooth operation and optimal performance of their Kubernetes workloads on Azure.
+
+Check this [blog](https://techcommunity.microsoft.com/t5/containers/persistent-storage-for-windows-containers-on-azure-kubernetes/ba-p/3836781) for detailed information.
+
+## Security
+
+Ensure the integrity and confidentiality of applications, thereby fostering trust and compliance across your infrastructure.
+
+### Prisma
+
+Prisma Cloud is a comprehensive Cloud-Native Application Protection Platform (CNAPP) tailor-made to help secure Windows containers on Azure Kubernetes Service (AKS). Gain continuous, real-time visibility and control over Windows container environments, including vulnerability and compliance management, identities and permissions, and AI-assisted runtime defense. Integrated container scanning across the pipeline and in Azure Container Registry ensures security throughout the entire application lifecycle.
+
+See [our guidance](https://techcommunity.microsoft.com/t5/containers/unlocking-new-possibilities-with-prisma-cloud-and-windows/ba-p/3866485) for more details.
+
+## Storage
+
+Storage enables standardized and seamless storage interactions, ensuring high application performance and data consistency.
+
+### NetApp
+
+Astra Control provides application data management for stateful workloads on Azure Kubernetes Service (AKS). Discover your apps and define protection policies that automatically back up workloads offsite. Protect, clone, and move applications across Kubernetes environments with ease.
+
+Follow the steps provided in [this blog](https://techcommunity.microsoft.com/t5/containers/persistent-storage-for-windows-containers-on-azure-kubernetes/ba-p/3836781) post to dynamically provision SMB volumes for Windows AKS workloads.
+
+## Config management
+
+Automate and standardize the system settings across your environments to enhance efficiency, reduce errors, and ensure system stability and compliance.
+
+### Chef
+
+Chef provides visibility and threat detection from build to runtime that monitors, audits, and remediates the security of your Azure cloud services and Kubernetes and Windows container assets. Chef provides comprehensive visibility and continuous compliance into your cloud security posture and helps limit the risk of misconfigurations in cloud-native environments by providing best practices based on CIS, STIG, SOC2, PCI-DSS and other benchmarks. This is part of a broader compliance offering that supports on-premises or hybrid cloud environments including applications deployed on the edge.
+
+To learn more about Chef's capabilities, check out the comprehensive 'how-to' blog post here: [Securing Your Windows Environments Running on Azure Kubernetes Service with Chef](https://techcommunity.microsoft.com/t5/containers/securing-your-windows-environments-running-on-azure-kubernetes/ba-p/3821830).
api-management Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/policy-reference.md
Title: Built-in policy definitions for Azure API Management description: Lists Azure Policy built-in policy definitions for Azure API Management. These built-in policy definitions provide approaches to managing your Azure resources. Previously updated : 08/03/2023 Last updated : 08/08/2023
app-service Configure Common https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-common.md
$webapp=Get-AzWebApp -ResourceGroupName <group-name> -Name <app-name>
# Copy connection strings to a new hashtable $connStrings = @{} ForEach ($item in $webapp.SiteConfig.ConnectionStrings) {
-$connStrings[$item.Name] = @{value=$item.Value; type=item.Type}
+ $connStrings[$item.Name] = @{value=$item.ConnectionString; type=$item.Type.ToString()}
} # Add or edit one or more connection strings
app-service Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/policy-reference.md
Title: Built-in policy definitions for Azure App Service description: Lists Azure Policy built-in policy definitions for Azure App Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/03/2023 Last updated : 08/08/2023
application-gateway Application Gateway For Containers Components https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/application-gateway-for-containers-components.md
Previously updated : 07/24/2023 Last updated : 08/08/2023
Application Gateway for Containers inserts three additional headers to all reque
**x-forwarded-proto** returns the protocol received by Application Gateway for Containers from the client. The value is either http or https.
-**X-request-id** is a unique guid generated by Application Gateway for Containers for each client request and presented in the forwarded request to the backend target. The guid consists of 32 alphanumeric characters, separated by dashes (for example: d23387ab-e629-458a-9c93-6108d374bc75). This guid can be used to correlate a request received by Application Gateway for Containers and initiated to a backend target as defined in access logs.
+**x-request-id** is a unique guid generated by Application Gateway for Containers for each client request and presented in the forwarded request to the backend target. The guid consists of 32 alphanumeric characters, separated by dashes (for example: d23387ab-e629-458a-9c93-6108d374bc75). This guid can be used to correlate a request received by Application Gateway for Containers and initiated to a backend target as defined in access logs.
attestation Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/policy-reference.md
Title: Built-in policy definitions for Azure Attestation description: Lists Azure Policy built-in policy definitions for Azure Attestation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/03/2023 Last updated : 08/08/2023
automation Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/policy-reference.md
Title: Built-in policy definitions for Azure Automation description: Lists Azure Policy built-in policy definitions for Azure Automation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/03/2023 Last updated : 08/08/2023
azure-app-configuration Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/policy-reference.md
Title: Built-in policy definitions for Azure App Configuration description: Lists Azure Policy built-in policy definitions for Azure App Configuration. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/03/2023 Last updated : 08/08/2023
azure-arc Backup Controller Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/backup-controller-database.md
Title: Backup controller database
-description: Explains how to backup the controller database for Azure Arc-enabled data services
+ Title: Back up controller database
+description: Explains how to back up the controller database for Azure Arc-enabled data services
Last updated 04/26/2023
-# Backup controller database
+# Back up and recover controller database
-When you deploy Azure Arc data services, the Azure Arc Data Controller is one of the most critical components of the deployment. The data controller:
+When you deploy Azure Arc data services, the Azure Arc Data Controller is one of the most critical components deployed. The data controller's functions include:
-- Provisions and deprovisions resources-- Orchestrates most of the activities for Azure Arc-enabled SQL Managed Instance-- Captures the billing and usage information of each Arc SQL managed instance.
+- Provisioning, deprovisioning, and updating resources
+- Orchestrating most of the activities for Azure Arc-enabled SQL Managed Instance, such as upgrades and scale out
+- Capturing the billing and usage information of each Arc-enabled SQL managed instance
-All information such as inventory of all the Arc SQL managed instances, billing, usage and the current state of all these SQL managed instances is stored in a database called `controller` under the SQL Server instance that is deployed into the `controldb-0` pod.
+To perform these functions, the data controller needs to store an inventory of all the current Arc-enabled SQL managed instances, billing, usage, and the current state of those instances. All this data is stored in a database called `controller` within the SQL Server instance that is deployed into the `controldb-0` pod.
This article explains how to back up the controller database.
-Following steps are needed in order to back up the `controller` database:
+## Back up data controller database
-1. Retrieve the credentials for the secret
-1. Decode the base64 encoded credentials
-1. Use the decoded credentials to connect to the SQL instance hosting the controller database, and issue the `BACKUP` command
+The data controller database `controller` is automatically backed up every 5 minutes once backups are enabled. To enable backups:
-## Retrieve the credentials for the secret
+- Create a `backups-controldb` `PersistentVolumeClaim` with a storage class that supports `ReadWriteMany` access:
-`controller-db-rw-secret` is the secret that holds the credentials for the `controldb-rw-user` user account that can be used to connect to the SQL instance.
-Run the following command to retrieve the secret contents:
-
-```azurecli
-kubectl get secret controller-db-rw-secret --namespace [namespace] -o yaml
+```yaml
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+ name: backups-controldb
+ namespace: <namespace>
+spec:
+ accessModes:
+ - ReadWriteMany
+ resources:
+ requests:
+ storage: 15Gi
+ storageClassName: <storage-class>
```
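+A quick way to apply the claim (a sketch, assuming you saved the YAML above to a file named `backups-controldb.yaml`):
+
+```console
+kubectl apply -f backups-controldb.yaml
+```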
-For example:
+- Edit the `DataController` custom resource spec to include a `backups` storage definition:
-```azurecli
-kubectl get secret controller-db-rw-secret --namespace arcdataservices -o yaml
+```yaml
+storage:
+ backups:
+ accessMode: ReadWriteMany
+ className: <storage-class>
+ size: 15Gi
+ data:
+ accessMode: ReadWriteOnce
+ className: managed-premium
+ size: 15Gi
+ logs:
+ accessMode: ReadWriteOnce
+ className: managed-premium
+ size: 10Gi
```
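+One way to make that edit is with `kubectl edit` (a sketch; the first command lists the data controller custom resources so you can confirm the resource name to edit):
+
+```console
+kubectl get datacontrollers -n <namespace>
+kubectl edit datacontroller <name of data controller> -n <namespace>
+```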
-## Decode the base64 encoded credentials
+The `.bak` files for the `controller` database are stored on the `backups` volume of the `controldb` pod at `/var/opt/backups/mssql`.
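+To see the backups that have been taken, one option is to list that directory from the pod (a sketch, assuming the pod and container names used elsewhere in this article):
+
+```console
+kubectl exec controldb-0 -n <namespace> -c mssql-server -- ls -l /var/opt/backups/mssql
+```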
-The contents of the yaml file of the secret `controller-db-rw-secret` contain a `password` and `username`. You can use any base64 decoder tool to decode the contents of the `password`.
+## Recover controller database
-## Back up the database
+There are two types of recovery possible:
-With the decoded credentials, run the following command to issue a T-SQL `BACKUP` command to back up the controller database.
+1. The `controller` database is corrupted and you just need to restore the database.
+1. The entire storage that contains the `controller` data and log files is corrupted or lost, and you need to recover onto new storage.
-```azurecli
-kubectl exec controldb-0 -n contosons -c mssql-server -- /opt/mssql-tools/bin/sqlcmd -S localhost -U controldb-rw-user -P "<password>" -Q "BACKUP DATABASE [controller] TO DISK = N'/var/opt/controller.bak' WITH NOFORMAT, NOINIT, NAME = N'Controldb-Full Database Backup', SKIP, NOREWIND, NOUNLOAD, STATS = 10, CHECKSUM"
-```
+### Corrupted controller database scenario
+
+In this scenario, all the pods are up and running and you can connect to the `controldb` SQL Server, but the `controller` database is corrupted. You just need to restore the database from a backup.
+
+Follow these steps to restore the controller database from a backup, if the SQL Server is still up and running on the `controldb` pod, and you are able to connect to it:
+
+1. Verify connectivity to the SQL Server pod hosting the `controller` database.
+
+ - First, retrieve the credentials for the secret. `controller-system-secret` is the secret that holds the credentials for the `system` user account that can be used to connect to the SQL instance.
+ Run the following command to retrieve the secret contents:
+
+ ```console
+ kubectl get secret controller-system-secret --namespace [namespace] -o yaml
+ ```
+
+ For example:
+
+ ```console
+ kubectl get secret controller-system-secret --namespace arcdataservices -o yaml
+ ```
+
+ - Decode the base64 encoded credentials. The contents of the yaml file of the secret `controller-system-secret` contain a `password` and `username`. You can use any base64 decoder tool to decode the contents of the `password`.
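+ One way to do the decoding directly from the shell (a sketch):
+
+ ```console
+ kubectl get secret controller-system-secret --namespace <namespace> -o jsonpath='{.data.password}' | base64 -d
+ ```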
+ - Verify connectivity: With the decoded credentials, run a command such as `SELECT @@SERVERNAME` to verify connectivity to the SQL Server.
+
+ ```powershell
+ kubectl exec controldb-0 -n <namespace> -c mssql-server -- /opt/mssql-tools/bin/sqlcmd -S localhost -U system -P "<password>" -Q "SELECT @@SERVERNAME"
+ ```
+
+ ```powershell
+ kubectl exec controldb-0 -n contosons -c mssql-server -- /opt/mssql-tools/bin/sqlcmd -S localhost -U system -P "<password>" -Q "SELECT @@SERVERNAME"
+ ```
+
+1. Scale the controller ReplicaSet down to 0 replicas as follows:
+
+ ```console
+ kubectl scale --replicas=0 rs/control -n <namespace>
+ ```
+
+ For example:
+
+ ```console
+ kubectl scale --replicas=0 rs/control -n arcdataservices
+ ```
+
+1. Connect to the `controldb` SQL Server as `system` as described in step 1.
+
+1. Delete the corrupted controller database using T-SQL:
+
+ ```sql
+ DROP DATABASE controller
+ ```
+
+1. Restore the `controller` database from backup after the corrupted database is dropped. For example:
+
+ ```sql
+ RESTORE DATABASE [controller] FROM DISK = '/var/opt/backups/mssql/<controller backup file>.bak'
+ WITH MOVE 'controller' TO '/var/opt/mssql/data/controller.mdf'
+ ,MOVE 'controller_log' TO '/var/opt/mssql/data/controller_log.ldf'
+ ,RECOVERY;
+ GO
+ ```
+
+1. Scale the controller ReplicaSet back up to 1 replica.
+
+ ```console
+ kubectl scale --replicas=1 rs/control -n <namespace>
+ ```
+
+ For example:
+
+ ```console
+ kubectl scale --replicas=1 rs/control -n arcdataservices
+ ```
+
+### Corrupted storage scenario
+
+In this scenario, the storage hosting the data controller data and log files is corrupted, new storage has been provisioned, and you need to restore the controller database onto it.
+
+Follow these steps to restore the controller database from a backup with new storage for the `controldb` StatefulSet:
+
+1. Ensure that you have a backup of the last known good state of the `controller` database.
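+ One way to copy the most recent backup off the pod before you proceed (a sketch; the backup file name is a placeholder you should replace with an actual file from `/var/opt/backups/mssql`):
+
+ ```console
+ kubectl cp <namespace>/controldb-0:/var/opt/backups/mssql/<controller backup file>.bak ./controller.bak -c mssql-server
+ ```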
+
+2. Scale the controller ReplicaSet down to 0 replicas as follows:
+
+ ```console
+ kubectl scale --replicas=0 rs/control -n <namespace>
+ ```
+
+ For example:
+
+ ```console
+ kubectl scale --replicas=0 rs/control -n arcdataservices
+ ```
+3. Scale the `controldb` StatefulSet down to 0 replicas, as follows:
+
+ ```console
+ kubectl scale --replicas=0 sts/controldb -n <namespace>
+ ```
+
+ For example:
+
+ ```console
+ kubectl scale --replicas=0 sts/controldb -n arcdataservices
+ ```
+
+4. Create a kubernetes secret named `controller-sa-secret` with the following YAML:
+
+ ```yml
+ apiVersion: v1
+ kind: Secret
+ metadata:
+ name: controller-sa-secret
+ namespace: <namespace>
+ type: Opaque
+ data:
+ password: <base64 encoded password>
+ ```
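+ One way to produce the base64-encoded value on a Linux shell (a sketch; substitute your own password):
+
+ ```console
+ echo -n '<password>' | base64
+ ```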
+
+5. Edit the `controldb` StatefulSet to include a `controller-sa-secret` volume and corresponding volume mount (`/var/run/secrets/mounts/credentials/mssql-sa-password`) in the `mssql-server` container, by using `kubectl edit sts controldb -n <namespace>` command.
+
+6. Create new data (`data-controldb`) and logs (`logs-controldb`) persistent volume claims for the `controldb` pod as follows:
+
+ ```yml
+ apiVersion: v1
+ kind: PersistentVolumeClaim
+ metadata:
+ name: data-controldb
+ namespace: <namespace>
+ spec:
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 15Gi
+ storageClassName: <storage class>
+
+ ---
+
+ apiVersion: v1
+ kind: PersistentVolumeClaim
+ metadata:
+ name: logs-controldb
+ namespace: <namespace>
+ spec:
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 10Gi
+ storageClassName: <storage class>
+ ```
+
+7. Scale the `controldb` StatefulSet back to 1 replica using:
+
+ ```console
+ kubectl scale --replicas=1 sts/controldb -n <namespace>
+ ```
+
+8. Connect to the `controldb` SQL server as `sa` using the password in the `controller-sa-secret` secret created earlier.
+
+9. Create a `system` login with sysadmin role using the password in the `controller-system-secret` kubernetes secret as follows:
+
+ ```sql
+ CREATE LOGIN [system] WITH PASSWORD = '<password-from-secret>'
+ ALTER SERVER ROLE sysadmin ADD MEMBER [system]
+ ```
+
+10. Restore the backup using the `RESTORE` command as follows:
+
+ ```sql
+ RESTORE DATABASE [controller] FROM DISK = N'/var/opt/backups/mssql/<controller backup file>.bak' WITH FILE = 1
+ ```
+
+11. Create a `controldb-rw-user` login using the password in the `controller-db-rw-secret` secret (`CREATE LOGIN [controldb-rw-user] WITH PASSWORD = '<password-from-secret>'`), and associate it with the existing `controldb-rw-user` user in the `controller` database (`ALTER USER [controldb-rw-user] WITH LOGIN = [controldb-rw-user]`).
+
+12. Disable the `sa` login using T-SQL: `ALTER LOGIN [sa] DISABLE`.
+
+13. Edit the `controldb` StatefulSet to remove the `controller-sa-secret` volume and corresponding volume mount.
-Once the backup is created, you can move the `controller.bak` file to a remote storage for any recovery purposes.
+14. Delete the `controller-sa-secret` secret.
-> [!TIP]
-> Back up the controller database before and after any custom resource changes such as creating or deleting an Arc-enabled SQL Managed Instance.
+15. Scale the controller ReplicaSet back up to 1 replica using the `kubectl scale` command.
## Next steps
-[Azure Data Studio dashboards](azure-data-studio-dashboards.md)
+[Azure Data Studio dashboards](azure-data-studio-dashboards.md)
azure-arc Managed Instance Disaster Recovery Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/managed-instance-disaster-recovery-cli.md
+
+ Title: Configure failover group - CLI
+description: Describes how to configure disaster recovery with a failover group for Azure Arc-enabled SQL Managed Instance with the CLI
+++++++ Last updated : 08/02/2023+++
+# Configure failover group - CLI
+
+This article explains how to configure disaster recovery for Azure Arc-enabled SQL Managed Instance with the CLI. Before you proceed, review the information and prerequisites in [Azure Arc-enabled SQL Managed Instance - disaster recovery](managed-instance-disaster-recovery.md).
++
+## Configure Azure failover group - direct mode
+
+Follow the steps below if the Azure Arc data services are deployed in `directly` connected mode.
+
+Once the prerequisites are met, run the below command to set up Azure failover group between the two Azure Arc-enabled SQL managed instances:
+
+```azurecli
+az sql instance-failover-group-arc create --name <name of failover group> --mi <primary SQL MI> --partner-mi <Partner MI> --resource-group <name of RG> --partner-resource-group <name of partner MI RG>
+```
+
+Example:
+
+```azurecli
+az sql instance-failover-group-arc create --name sql-fog --mi sql1 --partner-mi sql2 --resource-group rg-name --partner-resource-group rg-name
+```
+
+The above command:
+
+- Creates the required custom resources on both primary and secondary sites
+- Copies the mirroring certificates and configures the failover group between the instances
+
+## Configure Azure failover group - indirect mode
+
+Follow the steps below if Azure Arc data services are deployed in `indirectly` connected mode.
+
+1. Provision the managed instance in the primary site.
+
+ ```azurecli
+ az sql mi-arc create --name <primaryinstance> --tier bc --replicas 3 --k8s-namespace <namespace> --use-k8s
+ ```
+
+2. Switch context to the secondary cluster by running ```kubectl config use-context <secondarycluster>``` and provision the managed instance in the secondary site that will be the disaster recovery instance. At this point, the system databases are not part of the contained availability group.
+
+ > [!NOTE]
+ > It is important to specify `--license-type DisasterRecovery` **during** the Azure Arc-enabled SQL MI creation. This will allow the DR instance to be seeded from the primary instance in the primary data center. Updating this property post deployment will not have the same effect.
+
+ ```azurecli
+ az sql mi-arc create --name <secondaryinstance> --tier bc --replicas 3 --license-type DisasterRecovery --k8s-namespace <namespace> --use-k8s
+ ```
+
+3. Mirroring certificates - The binary data inside the Mirroring Certificate property of the Azure Arc-enabled SQL MI is needed for the Instance Failover Group CR (Custom Resource) creation.
+
+ This can be achieved in a few ways:
+
+ (a) If using `az` CLI, generate the mirroring certificate file first, and then point to that file while configuring the Instance Failover Group so the binary data is read from the file and copied over into the CR. The cert files are not needed after failover group creation.
+
+ (b) If using `kubectl`, directly copy and paste the binary data from the Azure Arc-enabled SQL MI CR into the yaml file that will be used to create the Instance Failover Group.
++
+ Using (a) above:
+
+ Create the mirroring certificate file for primary instance:
+ ```azurecli
+ az sql mi-arc get-mirroring-cert --name <primaryinstance> --cert-file </path/name>.pem --k8s-namespace <namespace> --use-k8s
+ ```
+
+ Example:
+ ```azurecli
+ az sql mi-arc get-mirroring-cert --name sqlprimary --cert-file $HOME/sqlcerts/sqlprimary.pem --k8s-namespace my-namespace --use-k8s
+ ```
+
+ Connect to the secondary cluster and create the mirroring certificate file for secondary instance:
+
+ ```azurecli
+ az sql mi-arc get-mirroring-cert --name <secondaryinstance> --cert-file </path/name>.pem --k8s-namespace <namespace> --use-k8s
+ ```
+
+ Example:
+
+ ```azurecli
+ az sql mi-arc get-mirroring-cert --name sqlsecondary --cert-file $HOME/sqlcerts/sqlsecondary.pem --k8s-namespace my-namespace --use-k8s
+ ```
+
+ Once the mirroring certificate files are created, copy the certificate from the secondary instance to a shared/local path on the primary instance cluster and vice-versa.
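+ How you copy the files depends on your environment. One option, assuming you have SSH access from the machine where you generated each certificate to a machine with access to the other cluster (the user names, hosts, and destination paths are placeholders):
+
+ ```console
+ scp $HOME/sqlcerts/sqlsecondary.pem <user>@<primary-site-host>:~/sqlcerts/
+ scp $HOME/sqlcerts/sqlprimary.pem <user>@<secondary-site-host>:~/sqlcerts/
+ ```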
+
+4. Create the failover group resource on both sites.
++
+ > [!NOTE]
+ > Ensure the SQL instances have different names for both primary and secondary sites, and the `shared-name` value should be identical on both sites.
+
+ ```azurecli
+ az sql instance-failover-group-arc create --shared-name <name of failover group> --name <name for primary failover group resource> --mi <local SQL managed instance name> --role primary --partner-mi <partner SQL managed instance name> --partner-mirroring-url tcp://<secondary IP> --partner-mirroring-cert-file <secondary.pem> --k8s-namespace <namespace> --use-k8s
+ ```
+
+ Example:
+ ```azurecli
+ az sql instance-failover-group-arc create --shared-name myfog --name primarycr --mi sqlinstance1 --role primary --partner-mi sqlinstance2 --partner-mirroring-url tcp://10.20.5.20:970 --partner-mirroring-cert-file $HOME/sqlcerts/sqlinstance2.pem --k8s-namespace my-namespace --use-k8s
+ ```
+
+ On the secondary instance, run the following command to set up the failover group custom resource. The `--partner-mirroring-cert-file` in this case should point to a path that has the mirroring certificate file generated from the primary instance as described in 3(a) above.
+
+ ```azurecli
+ az sql instance-failover-group-arc create --shared-name <name of failover group> --name <name for secondary failover group resource> --mi <local SQL managed instance name> --role secondary --partner-mi <partner SQL managed instance name> --partner-mirroring-url tcp://<primary IP> --partner-mirroring-cert-file <primary.pem> --k8s-namespace <namespace> --use-k8s
+ ```
+
+ Example:
+ ```azurecli
+ az sql instance-failover-group-arc create --shared-name myfog --name secondarycr --mi sqlinstance2 --role secondary --partner-mi sqlinstance1 --partner-mirroring-url tcp://10.10.5.20:970 --partner-mirroring-cert-file $HOME/sqlcerts/sqlinstance1.pem --k8s-namespace my-namespace --use-k8s
+ ```
+
+## Retrieve Azure failover group health state
+
+Information about the failover group such as primary role, secondary role, and the current health status can be viewed on the custom resource on either primary or secondary site.
+
+Run the below command on primary and/or the secondary site to list the failover groups custom resource:
+
+```azurecli
+kubectl get fog -n <namespace>
+```
+
+Describe the custom resource to retrieve the failover group status, as follows:
+
+```azurecli
+kubectl describe fog <failover group cr name> -n <namespace>
+```
+
+## Failover group operations
+
+Once the failover group is set up between the managed instances, different failover operations can be performed depending on the circumstances.
+
+Possible failover scenarios are:
+
+- The Azure Arc-enabled SQL managed instances at both sites are in healthy state and a failover needs to be performed:
+ + perform a manual failover from primary to secondary without data loss by setting `role=secondary` on the primary SQL MI.
+
+- Primary site is unhealthy/unreachable and a failover needs to be performed:
+
+ + the primary Azure Arc-enabled SQL managed instance is down/unhealthy/unreachable
+ + the secondary Azure Arc-enabled SQL managed instance needs to be force-promoted to primary with potential data loss
+ + when the original primary Azure Arc-enabled SQL managed instance comes back online, it will report as `Primary` role and unhealthy state and needs to be forced into a `secondary` role so it can join the failover group and data can be synchronized.
+
+
+## Manual failover (without data loss)
+
+Use `az sql instance-failover-group-arc update ...` command group to initiate a failover from primary to secondary. Any pending transactions on the geo-primary instance are replicated over to the geo-secondary instance before the failover.
+
+### Directly connected mode
+Run the following command to initiate a manual failover, in `direct` connected mode using ARM APIs:
+
+```azurecli
+az sql instance-failover-group-arc update --name <shared name of failover group> --mi <primary Azure Arc-enabled SQL MI> --role secondary --resource-group <resource group>
+```
+Example:
+
+```azurecli
+az sql instance-failover-group-arc update --name myfog --mi sqlmi1 --role secondary --resource-group myresourcegroup
+```
+### Indirectly connected mode
+Run the following command to initiate a manual failover, in `indirect` connected mode using kubernetes APIs:
+
+```azurecli
+az sql instance-failover-group-arc update --name <name of failover group resource> --role secondary --k8s-namespace <namespace> --use-k8s
+```
+
+Example:
+
+```azurecli
+az sql instance-failover-group-arc update --name myfog --role secondary --k8s-namespace my-namespace --use-k8s
+```
+
+## Forced failover with data loss
+
+If the geo-primary instance becomes unavailable, you can run the following commands on the geo-secondary DR instance to promote it to primary with a forced failover, which incurs potential data loss.
+
+On the geo-secondary DR instance, run the following command to promote it to primary role, with data loss.
+
+> [!NOTE]
+> If the `--partner-sync-mode` was configured as `sync`, it needs to be reset to `async` when the secondary is promoted to primary.
+
+### Directly connected mode
+```azurecli
+az sql instance-failover-group-arc update --name <shared name of failover group> --mi <secondary Azure Arc-enabled SQL MI> --role force-primary-allow-data-loss --resource-group <resource group> --partner-sync-mode async
+```
+Example:
+
+```azurecli
+az sql instance-failover-group-arc update --name myfog --mi sqlmi2 --role force-primary-allow-data-loss --resource-group myresourcegroup --partner-sync-mode async
+```
+
+### Indirectly connected mode
+```azurecli
+az sql instance-failover-group-arc update --k8s-namespace my-namespace --name secondarycr --use-k8s --role force-primary-allow-data-loss --partner-sync-mode async
+```
+
+When the geo-primary Azure Arc-enabled SQL MI instance becomes available, run the below command to bring it into the failover group and synchronize the data:
+
+### Directly connected mode
+```azurecli
+az sql instance-failover-group-arc update --name <shared name of failover group> --mi <old primary Azure Arc-enabled SQL MI> --role force-secondary --resource-group <resource group>
+```
+
+### Indirectly connected mode
+```azurecli
+az sql instance-failover-group-arc update --k8s-namespace my-namespace --name secondarycr --use-k8s --role force-secondary
+```
+Optionally, the `--partner-sync-mode` can be configured back to `sync` mode if desired.
+
+## Post failover operations
+Once you perform a failover from primary site to secondary site, either with or without data loss, you may need to do the following:
+- Update the connection string for your applications to connect to the newly promoted primary Arc SQL managed instance
+- If you plan to continue running the production workload off of the secondary site, update the `--license-type` to either `BasePrice` or `LicenseIncluded` to initiate billing for the vCores consumed.
+
+## Next steps
+
+- [Overview: Azure Arc-enabled SQL Managed Instance business continuity](managed-instance-business-continuity-overview.md)
+- [Configure failover group - portal](managed-instance-disaster-recovery-portal.md)
azure-arc Managed Instance Disaster Recovery Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/managed-instance-disaster-recovery-portal.md
+
+ Title: Disaster recovery - Azure Arc-enabled SQL Managed Instance - portal
+description: Describes how to configure disaster recovery for Azure Arc-enabled SQL Managed Instance in the portal
++++++ Last updated : 08/02/2023+++
+# Configure failover group - portal
+
+This article explains how to configure disaster recovery for Azure Arc-enabled SQL Managed Instance with Azure portal. Before you proceed, review the information and prerequisites in [Azure Arc-enabled SQL Managed Instance - disaster recovery](managed-instance-disaster-recovery.md).
++
+To configure disaster recovery through Azure portal, the Azure Arc-enabled data service requires direct connectivity to Azure.
+
+## Configure Azure failover group
+
+1. In the portal, go to your primary Azure Arc-enabled SQL managed instance.
+1. Under **Data Management**, select **Failover Groups**.
+
+ Azure portal presents **Create instance failover group**.
+
+ :::image type="content" source="media/managed-instance-disaster-recovery-portal/create-failover-group.png" alt-text="Screenshot of the Azure portal create instance failover group control.":::
+
+1. Provide the information to define the failover group.
+
+ * **Primary mirroring URL**: The mirroring endpoint for the failover group instance.
+ * **Resource group**: The resource group for the failover group instance.
+ * **Secondary managed instance**: The Azure SQL Managed Instance at the DR location.
+ * **Synchronization mode**: Select either *Sync* for synchronous mode, or *Async* for asynchronous mode.
+ * **Instance failover group name**: The name of the failover group.
+
+1. Select **Create**.
+
+Azure portal begins to provision the instance failover group.
+
+## View failover group
+
+After the failover group is provisioned, you can view it in Azure portal.
++
+## Fail over
+
+In the disaster recovery configuration, only one of the instances in the failover group is primary. You can fail over from the portal to migrate the primary role to the other instance in your failover group. To fail over:
+
+1. In the portal, locate your managed instance.
+1. Under **Data Management**, select **Failover Groups**.
+1. Select **Failover**.
+
+Monitor failover progress in Azure portal.
+
+## Set synchronization mode
+
+To set the synchronization mode:
+
+1. From **Failover Groups**, select **Edit configuration**.
+
+ Azure portal shows an **Edit Configuration** control.
+
+ :::image type="content" source="media/managed-instance-disaster-recovery-portal/edit-synchronization.png" alt-text="Screenshot of the Edit Configuration control.":::
+
+1. Under **Edit configuration**, select your desired mode, and select **Apply**.
+
+## Delete failover group
+
+1. From **Failover Groups**, select **Delete Failover Group**.
+
+ Azure portal asks you to confirm your choice to delete the failover group.
+
+1. Select **Delete failover group** to proceed, or select **Cancel** to keep the failover group.
++
+## Next steps
+
+- [Overview: Azure Arc-enabled SQL Managed Instance business continuity](managed-instance-business-continuity-overview.md)
+- [Configure failover group - CLI](managed-instance-disaster-recovery-cli.md)
azure-arc Managed Instance Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/managed-instance-disaster-recovery.md
description: Describes disaster recovery for Azure Arc-enabled SQL Managed Insta
- Previously updated : 04/04/2023 Last updated : 08/02/2023 # Azure Arc-enabled SQL Managed Instance - disaster recovery
-To configure disaster recovery in Azure Arc-enabled SQL Managed Instance, set up Azure failover groups.
+To configure disaster recovery in Azure Arc-enabled SQL Managed Instance, set up Azure failover groups. This article explains failover groups.
## Background
Azure failover groups use the same distributed availability groups technology th
> - The Azure Arc-enabled SQL Managed Instances in both the geo-primary and geo-secondary sites need to be identical in terms of their compute and capacity, as well as the service tiers they're deployed in. > - Distributed availability groups can be set up for either General Purpose or Business Critical service tiers.
-## Prerequisites
-
-The following prerequisites must be met before setting up failover groups between two Azure Arc-enabled SQL managed instances:
--- An Azure Arc data controller and an Arc enabled SQL managed instance provisioned at the primary site with `--license-type` as one of `BasePrice` or `LicenseIncluded`. -- An Azure Arc data controller and an Arc enabled SQL managed instance provisioned at the secondary site with identical configuration as the primary in terms of:
- - CPU
- - Memory
- - Storage
- - Service tier
- - Collation
- - Other instance settings
-- The instance at the secondary site requires `--license-type` as `DisasterRecovery`. This instance needs to be new, without any user objects. -
-> [!NOTE]
-> - It is important to specify the `--license-type` **during** the Azure Arc-enabled SQL MI creation. This will allow the DR instance to be seeded from the primary instance in the primary data center. Updating this property post deployment will not have the same effect.
-
-## Deployment process
-
-To set up an Azure failover group between two Azure Arc-enabled SQL managed instances, complete the following steps:
-
-1. Create custom resource for distributed availability group at the primary site
-1. Create custom resource for distributed availability group at the secondary site
-1. Copy the binary data from the mirroring certificates
-1. Set up the distributed availability group between the primary and secondary sites
- either in `sync` mode or `async` mode
-
-The following image shows a properly configured distributed availability group:
-
-![Diagram showing a properly configured distributed availability group](.\media\business-continuity\distributed-availability-group.png)
-
-## Synchronization modes
-
-Failover groups in Azure Arc data services support two synchronization modes - `sync` and `async`. The synchronization mode directly impacts how the data is synchronized between the Azure Arc-enabled SQL managed instances, and potentially the performance on the primary managed instance.
-
-If primary and secondary sites are within a few miles of each other, use `sync` mode. Otherwise use `async` mode to avoid any performance impact on the primary site.
-
-## Configure Azure failover group - direct mode
-
-Follow the steps below if the Azure Arc data services are deployed in `directly` connected mode.
-
-Once the prerequisites are met, run the below command to set up Azure failover group between the two Azure Arc-enabled SQL managed instances:
-
-```azurecli
-az sql instance-failover-group-arc create --name <name of failover group> --mi <primary SQL MI> --partner-mi <Partner MI> --resource-group <name of RG> --partner-resource-group <name of partner MI RG>
-```
-
-Example:
-
-```azurecli
-az sql instance-failover-group-arc create --name sql-fog --mi sql1 --partner-mi sql2 --resource-group rg-name --partner-resource-group rg-name
-```
-
-The above command:
-
-1. Creates the required custom resources on both primary and secondary sites
-1. Copies the mirroring certificates and configures the failover group between the instances
-
-## Configure Azure failover group - indirect mode
-
-Follow the steps below if Azure Arc data services are deployed in `indirectly` connected mode.
-
-1. Provision the managed instance in the primary site.
-
- ```azurecli
- az sql mi-arc create --name <primaryinstance> --tier bc --replicas 3 --k8s-namespace <namespace> --use-k8s
- ```
-
-2. Switch context to the secondary cluster by running ```kubectl config use-context <secondarycluster>``` and provision the managed instance in the secondary site that will be the disaster recovery instance. At this point, the system databases are not part of the contained availability group.
-
- > [!NOTE]
- > It is important to specify `--license-type DisasterRecovery` **during** the Azure Arc-enabled SQL MI creation. This will allow the DR instance to be seeded from the primary instance in the primary data center. Updating this property post deployment will not have the same effect.
-
- ```azurecli
- az sql mi-arc create --name <secondaryinstance> --tier bc --replicas 3 --license-type DisasterRecovery --k8s-namespace <namespace> --use-k8s
- ```
-
-3. Mirroring certificates - The binary data inside the Mirroring Certificate property of the Azure Arc-enabled SQL MI is needed for the Instance Failover Group CR (Custom Resource) creation.
-
- This can be achieved in a few ways:
-
- (a) If using `az` CLI, generate the mirroring certificate file first, and then point to that file while configuring the Instance Failover Group so the binary data is read from the file and copied over into the CR. The cert files are not needed after failover group creation.
-
- (b) If using `kubectl`, directly copy and paste the binary data from the Azure Arc-enabled SQL MI CR into the yaml file that will be used to create the Instance Failover Group.
--
- Using (a) above:
-
- Create the mirroring certificate file for primary instance:
- ```azurecli
- az sql mi-arc get-mirroring-cert --name <primaryinstance> --cert-file </path/name>.pem --k8s-namespace <namespace> --use-k8s
- ```
-
- Example:
- ```azurecli
- az sql mi-arc get-mirroring-cert --name sqlprimary --cert-file $HOME/sqlcerts/sqlprimary.pem --k8s-namespace my-namespace --use-k8s
- ```
-
- Connect to the secondary cluster and create the mirroring certificate file for secondary instance:
-
- ```azurecli
- az sql mi-arc get-mirroring-cert --name <secondaryinstance> --cert-file </path/name>.pem --k8s-namespace <namespace> --use-k8s
- ```
-
- Example:
-
- ```azurecli
- az sql mi-arc get-mirroring-cert --name sqlsecondary --cert-file $HOME/sqlcerts/sqlsecondary.pem --k8s-namespace my-namespace --use-k8s
- ```
-
- Once the mirroring certificate files are created, copy the certificate from the secondary instance to a shared/local path on the primary instance cluster and vice-versa.
-
-4. Create the failover group resource on both sites.
--
- > [!NOTE]
- > Ensure the SQL instances have different names for both primary and secondary sites, and the `shared-name` value should be identical on both sites.
-
- ```azurecli
- az sql instance-failover-group-arc create --shared-name <name of failover group> --name <name for primary failover group resource> --mi <local SQL managed instance name> --role primary --partner-mi <partner SQL managed instance name> --partner-mirroring-url tcp://<secondary IP> --partner-mirroring-cert-file <secondary.pem> --k8s-namespace <namespace> --use-k8s
- ```
-
- Example:
- ```azurecli
- az sql instance-failover-group-arc create --shared-name myfog --name primarycr --mi sqlinstance1 --role primary --partner-mi sqlinstance2 --partner-mirroring-url tcp://10.20.5.20:970 --partner-mirroring-cert-file $HOME/sqlcerts/sqlinstance2.pem --k8s-namespace my-namespace --use-k8s
- ```
-
- On the secondary instance, run the following command to set up the failover group custom resource. The `--partner-mirroring-cert-file` in this case should point to a path that has the mirroring certificate file generated from the primary instance as described in 3(a) above.
-
- ```azurecli
- az sql instance-failover-group-arc create --shared-name <name of failover group> --name <name for secondary failover group resource> --mi <local SQL managed instance name> --role secondary --partner-mi <partner SQL managed instance name> --partner-mirroring-url tcp://<primary IP> --partner-mirroring-cert-file <primary.pem> --k8s-namespace <namespace> --use-k8s
- ```
-
- Example:
- ```azurecli
- az sql instance-failover-group-arc create --shared-name myfog --name secondarycr --mi sqlinstance2 --role secondary --partner-mi sqlinstance1 --partner-mirroring-url tcp://10.10.5.20:970 --partner-mirroring-cert-file $HOME/sqlcerts/sqlinstance1.pem --k8s-namespace my-namespace --use-k8s
- ```
-
-## Retrieve Azure failover group health state
-
-Information about the failover group such as primary role, secondary role, and the current health status can be viewed on the custom resource on either primary or secondary site.
-
-Run the below command on primary and/or the secondary site to list the failover groups custom resource:
-
-```azurecli
-kubectl get fog -n <namespace>
-```
-
-Describe the custom resource to retrieve the failover group status, as follows:
-
-```azurecli
-kubectl describe fog <failover group cr name> -n <namespace>
-```
-
-## Failover group operations
-
-Once the failover group is set up between the managed instances, different failover operations can be performed depending on the circumstances.
-
-Possible failover scenarios are:
--- The Azure Arc-enabled SQL managed instances at both sites are in healthy state and a failover needs to be performed:
- + perform a manual failover from primary to secondary without data loss by setting `role=secondary` on the primary SQL MI.
-
-- Primary site is unhealthy/unreachable and a failover needs to be performed:
-
- + the primary Azure Arc-enabled SQL managed instance is down/unhealthy/unreachable
- + the secondary Azure Arc-enabled SQL managed instance needs to be force-promoted to primary with potential data loss
- + when the original primary Azure Arc-enabled SQL managed instance comes back online, it will report as `Primary` role and unhealthy state and needs to be forced into a `secondary` role so it can join the failover group and data can be synchronized.
-
-
-## Manual failover (without data loss)
-
-Use `az sql instance-failover-group-arc update ...` command group to initiate a failover from primary to secondary. Any pending transactions on the geo-primary instance are replicated over to the geo-secondary instance before the failover.
-
-### Directly connected mode
-Run the following command to initiate a manual failover, in `direct` connected mode using ARM APIs:
-
-```azurecli
-az sql instance-failover-group-arc update --name <shared name of failover group> --mi <primary Azure Arc-enabled SQL MI> --role secondary --resource-group <resource group>
-```
-Example:
-
-```azurecli
-az sql instance-failover-group-arc update --name myfog --mi sqlmi1 --role secondary --resource-group myresourcegroup
-```
-### Indirectly connected mode
-Run the following command to initiate a manual failover, in `indirect` connected mode using kubernetes APIs:
-
-```azurecli
-az sql instance-failover-group-arc update --name <name of failover group resource> --role secondary --k8s-namespace <namespace> --use-k8s
-```
-
-Example:
-
-```azurecli
-az sql instance-failover-group-arc update --name myfog --role secondary --k8s-namespace my-namespace --use-k8s
-```
-
-## Forced failover with data loss
-
-In the circumstance when the geo-primary instance becomes unavailable, the following commands can be run on the geo-secondary DR instance to promote to primary with a forced failover incurring potential data loss.
-
-On the geo-secondary DR instance, run the following command to promote it to primary role, with data loss.
-
-> [!NOTE]
-> If the `--partner-sync-mode` was configured as `sync`, it needs to be reset to `async` when the secondary is promoted to primary.
-
-### Directly connected mode
-```azurecli
-az sql instance-failover-group-arc update --name <shared name of failover group> --mi <secondary Azure Arc-enabled SQL MI> --role force-primary-allow-data-loss --resource-group <resource group> --partner-sync-mode async
-```
-Example:
-
-```azurecli
-az sql instance-failover-group-arc update --name myfog --mi sqlmi2 --role force-primary-allow-data-loss --resource-group myresourcegroup --partner-sync-mode async
-```
-
-### Indirectly connected mode
-```azurecli
-az sql instance-failover-group-arc update --k8s-namespace my-namespace --name secondarycr --use-k8s --role force-primary-allow-data-loss --partner-sync-mode async
-```
-
-When the geo-primary Azure Arc-enabled SQL MI instance becomes available, run the below command to bring it into the failover group and synchronize the data:
-
-### Directly connected mode
-```azurecli
-az sql instance-failover-group-arc update --name <shared name of failover group> --mi <old primary Azure Arc-enabled SQL MI> --role force-secondary --resource-group <resource group>
-```
-
-### Indirectly connected mode
-```azurecli
-az sql instance-failover-group-arc update --k8s-namespace my-namespace --name secondarycr --use-k8s --role force-secondary
-```
-Optionally, the `--partner-sync-mode` can be configured back to `sync` mode if desired.
-
-## Post failover operations
-Once you perform a failover from primary site to secondary site, either with or without data loss, you may need to do the following:
-- Update the connection string for your applications to connect to the newly promoted primary Arc SQL managed instance-- If you plan to continue running the production workload off of the secondary site, update the `--license-type` to either `BasePrice` or `LicenseIncluded` to initiate billing for the vCores consumed.
+You can configure failover groups with the CLI or in the Azure portal. For prerequisites and instructions, see the respective content below:
+- [Configure failover group - portal](managed-instance-disaster-recovery-portal.md)
+- [Configure failover group - CLI](managed-instance-disaster-recovery-cli.md)
## Next steps
-[Overview: Azure Arc-enabled SQL Managed Instance business continuity](managed-instance-business-continuity-overview.md)
+- [Overview: Azure Arc-enabled SQL Managed Instance business continuity](managed-instance-business-continuity-overview.md)
azure-arc Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/release-notes.md
Previously updated : 05/09/2023 Last updated : 08/10/2023 #Customer intent: As a data professional, I want to understand why my solutions would benefit from running with Azure Arc-enabled data services so that I can leverage the capability of the feature.
This article highlights capabilities, features, and enhancements recently released or improved for Azure Arc-enabled data services.
+## August 8, 2023
+
+### Image tag
+
+`v1.22.0_2023-08-08`
+
+For complete release version information, review [Version log](version-log.md#august-8-2023).
+
+### Release notes
+
+- Support for configuring and managing Azure failover groups between two Azure Arc-enabled SQL managed instances by using the Azure portal. For details, review [Configure failover group - portal](managed-instance-disaster-recovery-portal.md).
+- Upgraded OpenSearch and OpenSearch Dashboards from 2.7.0 to 2.8.0.
+- Added improvements and examples to [Back up and recover controller database](backup-controller-database.md).
+ ## July 11, 2023 ### Image tag
azure-arc Version Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/version-log.md
This article identifies the component versions with each release of Azure Arc-enabled data services.
+## August 8, 2023
+
+The August 2023 preview release is now available.
+
+|Component|Value|
+|--|--|
+|Container images tag |`v1.22.0_2023-08-08`|
+|**CRD names and version:**| |
+|`activedirectoryconnectors.arcdata.microsoft.com`| v1beta1, v1beta2, v1, v2|
+|`datacontrollers.arcdata.microsoft.com`| v1beta1, v1 through v5|
+|`exporttasks.tasks.arcdata.microsoft.com`| v1beta1, v1, v2|
+|`failovergroups.sql.arcdata.microsoft.com`| v1beta1, v1beta2, v1, v2|
+|`kafkas.arcdata.microsoft.com`| v1beta1 through v1beta4|
+|`monitors.arcdata.microsoft.com`| v1beta1, v1, v3|
+|`postgresqls.arcdata.microsoft.com`| v1beta1 through v1beta6|
+|`postgresqlrestoretasks.tasks.postgresql.arcdata.microsoft.com`| v1beta1|
+|`sqlmanagedinstances.sql.arcdata.microsoft.com`| v1beta1, v1 through v13|
+|`sqlmanagedinstancemonitoringprofiles.arcdata.microsoft.com`| v1beta1, v1beta2|
+|`sqlmanagedinstancereprovisionreplicatasks.tasks.sql.arcdata.microsoft.com`| v1beta1|
+|`sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.com`| v1beta1, v1|
+|`telemetrycollectors.arcdata.microsoft.com`| v1beta1 through v1beta5|
+|`telemetryrouters.arcdata.microsoft.com`| v1beta1 through v1beta5|
+|Azure Resource Manager (ARM) API version|2023-01-15-preview|
+|`arcdata` Azure CLI extension version|1.5.4 ([Download](https://aka.ms/az-cli-arcdata-ext))|
+|Arc-enabled Kubernetes helm chart extension version|1.22.0|
+|Azure Arc Extension for Azure Data Studio<br/>`arc`<br/>`azcli`|<br/>1.8.0 ([Download](https://aka.ms/ads-arcdata-ext))</br>1.8.0 ([Download](https://aka.ms/ads-azcli-ext))|
+|SQL Database version | 957 |
## July 11, 2023
azure-arc Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/policy-reference.md
Title: Built-in policy definitions for Azure Arc-enabled Kubernetes description: Lists Azure Policy built-in policy definitions for Azure Arc-enabled Kubernetes. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/03/2023 Last updated : 08/08/2023 #
azure-arc Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/policy-reference.md
Title: Built-in policy definitions for Azure Arc-enabled servers description: Lists Azure Policy built-in policy definitions for Azure Arc-enabled servers (preview). These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/03/2023 Last updated : 08/08/2023
azure-cache-for-redis Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/policy-reference.md
Title: Built-in policy definitions for Azure Cache for Redis description: Lists Azure Policy built-in policy definitions for Azure Cache for Redis. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/03/2023 Last updated : 08/08/2023
azure-functions Dotnet Isolated Process Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/dotnet-isolated-process-guide.md
The following example injects a singleton service dependency:
This code requires `using Microsoft.Extensions.DependencyInjection;`. To learn more, see [Dependency injection in ASP.NET Core](/aspnet/core/fundamentals/dependency-injection?view=aspnetcore-5.0&preserve-view=true).
+#### Register Azure clients
+
+Dependency injection can be used to interact with other Azure services. You can inject clients from the [Azure SDK for .NET](/dotnet/azure/sdk/azure-sdk-for-dotnet) using the [Microsoft.Extensions.Azure](https://www.nuget.org/packages/Microsoft.Extensions.Azure) package. After installing the package, [register the clients](/dotnet/azure/sdk/dependency-injection#register-clients) by calling `AddAzureClients()` on the service collection in `Program.cs`. The following example configures a [named client](/dotnet/azure/sdk/dependency-injection#configure-multiple-service-clients-with-different-names) for Azure Blobs:
+
+```csharp
+using Microsoft.Extensions.Azure;
+using Microsoft.Extensions.Hosting;
+
+var host = new HostBuilder()
+ .ConfigureFunctionsWorkerDefaults()
+ .ConfigureServices((hostContext, services) =>
+ {
+ services.AddAzureClients(clientBuilder =>
+ {
+ clientBuilder.AddBlobServiceClient(hostContext.Configuration.GetSection("MyStorageConnection"))
+ .WithName("copierOutputBlob");
+ });
+ })
+ .Build();
+
+host.Run();
+```
+
+The following example shows how to use this registration and [SDK types](#sdk-types) to copy blob contents as a stream from one container to another by using an injected client:
+
+```csharp
+using Azure.Storage.Blobs;
+using Microsoft.Azure.Functions.Worker;
+using Microsoft.Extensions.Azure;
+using Microsoft.Extensions.Logging;
+
+namespace MyFunctionApp
+{
+ public class BlobCopier
+ {
+ private readonly ILogger<BlobCopier> _logger;
+ private readonly BlobContainerClient _copyContainerClient;
+
+ public BlobCopier(ILogger<BlobCopier> logger, IAzureClientFactory<BlobServiceClient> blobClientFactory)
+ {
+ _logger = logger;
+ _copyContainerClient = blobClientFactory.CreateClient("copierOutputBlob").GetBlobContainerClient("samples-workitems-copy");
+ _copyContainerClient.CreateIfNotExists();
+ }
+
+ [Function("BlobCopier")]
+ public async Task Run([BlobTrigger("samples-workitems/{name}", Connection = "MyStorageConnection")] Stream myBlob, string name)
+ {
+ await _copyContainerClient.UploadBlobAsync(name, myBlob);
+ _logger.LogInformation($"Blob {name} copied!");
+ }
+
+ }
+}
+```
+
+The [ILogger&lt;T&gt;] in this example was also obtained through dependency injection. It is registered automatically. To learn more about configuration options for logging, see [Logging](#logging).
+
+> [!TIP]
+> The example used a literal string for the name of the client in both `Program.cs` and the function. Consider instead using a shared constant string defined on the function class. For example, you could add `public const string CopyStorageClientName = nameof(_copyContainerClient);` and then reference `BlobCopier.CopyStorageClientName` in both locations. You could similarly define the configuration section name with the function rather than in `Program.cs`.
+ ### Middleware .NET isolated also supports middleware registration, again by using a model similar to what exists in ASP.NET. This model gives you the ability to inject logic into the invocation pipeline, and before and after functions execute.
Each trigger and binding extension also has its own minimum version requirement,
[eventhub-sdk-types]: ./functions-bindings-event-hubs.md?tabs=isolated-process%2Cextensionv5&pivots=programming-language-csharp#binding-types [servicebus-sdk-types]: ./functions-bindings-service-bus.md?tabs=isolated-process%2Cextensionv5&pivots=programming-language-csharp#binding-types
-<sup>1</sup> For output scenarios in which you would use an SDK type, you should create and work with SDK clients directly instead of using an output binding.
+<sup>1</sup> For output scenarios in which you would use an SDK type, you should create and work with SDK clients directly instead of using an output binding. See [Register Azure clients](#register-azure-clients) for an example of how to do this with dependency injection.
<sup>2</sup> The Service Bus trigger does not yet support message settlement scenarios for the isolated model.
Because your isolated worker process app runs outside the Functions runtime, you
[FunctionContext]: /dotnet/api/microsoft.azure.functions.worker.functioncontext?view=azure-dotnet&preserve-view=true [ILogger]: /dotnet/api/microsoft.extensions.logging.ilogger [ILogger&lt;T&gt;]: /dotnet/api/microsoft.extensions.logging.ilogger-1
+[ILoggerFactory]: /dotnet/api/microsoft.extensions.logging.iloggerfactory
[GetLogger]: /dotnet/api/microsoft.azure.functions.worker.functioncontextloggerextensions.getlogger [GetLogger&lt;T&gt;]: /dotnet/api/microsoft.azure.functions.worker.functioncontextloggerextensions.getlogger#microsoft-azure-functions-worker-functioncontextloggerextensions-getlogger-1 [HttpRequestData]: /dotnet/api/microsoft.azure.functions.worker.http.httprequestdata?view=azure-dotnet&preserve-view=true
azure-functions Functions App Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-app-settings.md
Connection string for storage account where the function app code and configurat
||| |WEBSITE_CONTENTAZUREFILECONNECTIONSTRING|`DefaultEndpointsProtocol=https;AccountName=...`|
-This setting is required for Consumption plan apps on Windows and for Premium plan apps on both Windows and Linux. It's not required for Dedicated plan apps, which aren't dynamically scaled by Functions.
+This setting is required for Consumption plan apps on Windows and for Elastic Premium plan apps on both Windows and Linux. It's not required for Dedicated plan apps, which aren't dynamically scaled by Functions.
Changing or removing this setting may cause your function app to not start. To learn more, see [this troubleshooting article](functions-recover-storage-account.md#storage-account-application-settings-were-deleted).
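+
+As a quick check, you can confirm whether this setting is present on an existing app with a query along these lines; the app and resource group names are placeholders:
+
+```azurecli
+# Sketch: list only the WEBSITE_CONTENTAZUREFILECONNECTIONSTRING entry, if it exists (placeholder names).
+az functionapp config appsettings list --name <APP_NAME> --resource-group <RESOURCE_GROUP> --query "[?name=='WEBSITE_CONTENTAZUREFILECONNECTIONSTRING']"
+```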
azure-functions Storage Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/storage-considerations.md
Creating your function app resources using methods other than the Azure CLI requ
## Create an app without Azure Files
-Azure Files is set up by default for Premium and non-Linux Consumption plans to serve as a shared file system in high-scale scenarios. The file system is used by the platform for some features such as log streaming, but it primarily ensures consistency of the deployed function payload. When an app is [deployed using an external package URL](./run-functions-from-deployment-package.md), the app content is served from a separate read-only file system. This means that you can create your function app without Azure Files. If you create your function app with Azure Files, a writeable file system is still provided. However, this file system may not be available for all function app instances.
+Azure Files is set up by default for Elastic Premium and non-Linux Consumption plans to serve as a shared file system in high-scale scenarios. The file system is used by the platform for some features such as log streaming, but it primarily ensures consistency of the deployed function payload. When an app is [deployed using an external package URL](./run-functions-from-deployment-package.md), the app content is served from a separate read-only file system. This means that you can create your function app without Azure Files. If you create your function app with Azure Files, a writeable file system is still provided. However, this file system may not be available for all function app instances.
When Azure Files isn't used, you must meet the following requirements:
When Azure Files isn't used, you must meet the following requirements:
If the above are properly accounted for, you may create the app without Azure Files. Create the function app without specifying the `WEBSITE_CONTENTAZUREFILECONNECTIONSTRING` and `WEBSITE_CONTENTSHARE` application settings. You can avoid these settings by generating an ARM template for a standard deployment, removing the two settings, and then deploying the template.
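+
+After you generate a template and remove the two settings, the deployment step itself might look like the following sketch; the template file name and resource group are placeholders:
+
+```azurecli
+# Sketch: deploy an edited ARM template that omits WEBSITE_CONTENTAZUREFILECONNECTIONSTRING and WEBSITE_CONTENTSHARE (placeholder names).
+az deployment group create --resource-group <RESOURCE_GROUP> --template-file function-app-template.json
+```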
-Because Functions use Azure Files during parts of the dynamic scale-out process, scaling could be limited when running without Azure Files on Consumption and Premium plans.
+Because Functions use Azure Files during parts of the dynamic scale-out process, scaling could be limited when running without Azure Files on Consumption and Elastic Premium plans.
## Mount file shares
azure-maps Drawing Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/drawing-requirements.md
For easy reference, here are some terms and definitions that are important as yo
| Xref | A file in AutoCAD DWG file format, attached to the primary drawing as an external reference. | | Level | An area of a building at a set elevation. For example, the floor of a building. | |Feature| An instance of an object produced from the Conversion service that combines a geometry with metadata information. |
-|Feature classes| A common blueprint for features. For example, a *unit* is a feature class, and an *office* is a feature. |
+|Feature classes| A common blueprint for features. For example, a _unit_ is a feature class, and an _office_ is a feature. |
## Drawing package structure
The [Conversion service] does the following on each DWG file:
- Openings - Walls - Vertical penetrations-- Produces a *Facility* feature.
+- Produces a _Facility_ feature.
- Produces a minimal set of default Category features referenced by other features: - room - structure
The following sections describe the requirements for each layer.
### Exterior layer
-The DWG file for each level must contain a layer to define that level's perimeter. This layer is referred to as the *exterior* layer. For example, if a facility contains two levels, then it needs to have two DWG files, with an exterior layer for each file.
+The DWG file for each level must contain a layer to define that level's perimeter. This layer is referred to as the _exterior_ layer. For example, if a facility contains two levels, then it needs to have two DWG files, with an exterior layer for each file.
No matter how many entity drawings are in the exterior layer, the [resulting facility dataset](tutorial-creator-feature-stateset.md) contains only one level feature for each DWG file. Additionally:
For easy reference, here are some terms and definitions that are important as yo
A drawing package is a ZIP archive that contains the following files: - DWG files in AutoCAD DWG file format.-- A *manifest.json* file that describes the DWG files in the drawing package.
+- A _manifest.json_ file that describes the DWG files in the drawing package.
The drawing package must be compressed into a single archive file, with the .zip extension. The DWG files can be organized in any way inside the drawing package, but the manifest file must be in the root directory. The next sections explain the conversion process and requirements for both the DWG and manifest files, and the content of these files. To view a sample package, you can download the [sample drawing package v2].
Each DWG file must adhere to these requirements:
- The DWG file can't contain features from multiple facilities. - The DWG file can't contain features from multiple levels. For example, a facility with three levels has three DWG files in the drawing package.-- All data of a single level must be contained in a single DWG file. Any external references (*xrefs*) must be bound to the parent drawing.
+- All data of a single level must be contained in a single DWG file. Any external references (_xrefs_) must be bound to the parent drawing.
- The DWG file must define layer(s) representing the boundary of that level. - The DWG must reference the same measurement system and unit of measurement as other DWG files in the drawing package. - The DWG file must be aligned when stacked on top of another level from the same facility.
You can see an example of the Level perimeter layer as the `GROS$` layer in the
The drawing package must contain a manifest file at the root level and the file must be named **manifest.json**. It describes the DWG files allowing the  [Conversion service] to parse their content. Only the files identified by the manifest are used. Files that are in the drawing package, but aren't properly listed in the manifest, are ignored.
-The file paths in the buildingLevels object of the manifest file must be relative to the root of the drawing package. The DWG file name must exactly match the name of the facility level. For example, a DWG file for the "Basement" level is *Basement.dwg*. A DWG file for level 2 is named as *level_2.dwg*. Filenames can't contain spaces, you can use an underscore to replace any spaces.
+The file paths in the buildingLevels object of the manifest file must be relative to the root of the drawing package. The DWG file name must exactly match the name of the facility level. For example, a DWG file for the "Basement" level is _Basement.dwg_. A DWG file for level 2 is named as _level_2.dwg_. Filenames can't contain spaces, you can use an underscore to replace any spaces.
Although there are requirements when you use the manifest objects, not all objects are required. The following table shows the required and optional objects for the 2023-03-01-preview [Conversion service].
Although there are requirements when you use the manifest objects, not all objec
| Property | Type | Required | Description  | |-|-|-|--|
-| `version` | number | TRUE | Manifest schema version. Currently version 2.0 |
+| `version` | string | TRUE | Manifest schema version. Currently version "2.0" |
|`buildingLevels`| [BuildingLevels] object  | TRUE | Specifies the levels of the facility and the files containing the design of the levels. | |`featureClasses`|Array of [featureClass] objects| TRUE | List of feature class objects that define how layers are read from the DWG drawing file.| | `georeference` |[Georeference] object | FALSE | Contains numerical geographic information for the facility drawing.     |
The next sections detail the requirements for each object.
||-|-|| | `dwgLayers`| Array of strings| TRUE| The name of each layer that defines the feature class. Each entity on the specified layer is converted to an instance of the feature class. The `dwgLayer` name that a feature is converted from ends up as a property of that feature. | | `featureClassName` | String | TRUE | The name of the feature class. Typical examples include room, workspace or wall.|
-|`featureClassProperties`| Array of [featureClassProperty] objects | TRUE | Specifies text layers in the DWG file associated to the feature as a property. For example, a label that falls inside the bounds of a space, such as a room number.|
+|`featureClassProperties`| Array of [featureClassProperty] objects | FALSE | Specifies text layers in the DWG file associated to the feature as a property. For example, a label that falls inside the bounds of a space, such as a room number.|
#### featureClassProperty
azure-maps Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/glossary.md
The following list describes common words used with the Azure Maps services.
<a name="advanced-routing"></a> **Advanced routing**: A collection of services that perform advance operations using road routing data; such as, calculating reachable ranges (isochrones), distance matrices, and batch route requests.
-<a name="aerial-imagery"></a> **Aerial imagery**: See [Satellite imagery](#satellite-imagery).
+<a name="aerial-imagery"></a> **Aerial imagery**: See [Satellite imagery].
<a name="along-a-route-search"></a> **Along a route search**: A spatial query that looks for data within a specified detour time or distance from a route path. <a name="altitude"></a> **Altitude**: The height or vertical elevation of a point above a reference surface. Altitude measurements are based on a given reference datum, such as mean sea level. See also elevation.
-<a name="ambiguous"></a> **Ambiguous**: A state of uncertainty in data classification that exists when an object may appropriately be assigned two or more values for a given attribute. For example, when geocoding "CA", two ambiguous results are returned: "Canada" and "California". "CA" is a country/region and a state code, for "Canada" and "California", respectively.
+<a name="ambiguous"></a> **Ambiguous**: A state of uncertainty in data classification that exists when an object may appropriately be assigned two or more values for a given attribute. For example, when geocoding "CA", two ambiguous results are returned: "Canada" and "California". "CA" is a country/region and a state code, for "Canada" and "California", respectively.
<a name="annotation"></a> **Annotation**: Text or graphics displayed on the map to provide information to the user. Annotation may identify or describe a specific map entity, provide general information about an area on the map, or supply information about the map itself.
The following list describes common words used with the Azure Maps services.
<a name="application-programming-interface-api"></a> **Application Programming Interface (API)**: A specification that allows developers to create applications.
-<a name="api-key"></a> **API key**: See [Shared key authentication](#shared-key-authentication).
+<a name="api-key"></a> **API key**: See [Shared key authentication].
<a name="area-of-interest-aoi"></a> **Area of Interest (AOI)**: The extent used to define a focus area for either a map or a database production.
The following list describes common words used with the Azure Maps services.
<a name="asynchronous-request"></a> **Asynchronous request**: An HTTP request that opens a connection and makes a request to the server that returns an identifier for the asynchronous request, then closes the connection. The server continues to process the request and the user can check the status using the identifier. When the request is finished processing, the user can then download the response. This type of request is used for long running processes.
-<a name="autocomplete"></a> **Autocomplete**: A feature in an application that predicts the rest of a word a user is typing.
+<a name="autocomplete"></a> **Autocomplete**: A feature in an application that predicts the rest of a word a user is typing.
<a name="autosuggest"></a> **Autosuggest**: A feature in an application that predicts logical possibilities for what the user is typing. <a name="azure-location-based-services-lbs"></a> **Azure Location Based Services (LBS)**: The former name of Azure Maps when it was in preview.
-<a name="azure-active-directory"></a> **Azure Active Directory (Azure AD)**: Azure AD is Microsoft's cloud-based identity and access management service. Azure Maps Azure AD integration is currently available in preview for all Azure Maps APIs. Azure AD supports Azure role-based access control (Azure RBAC) to allow fine-grained access to Azure Maps resources. To learn more about Azure Maps Azure AD integration, see [Azure Maps and Azure AD](azure-maps-authentication.md) and [Manage authentication in Azure Maps](how-to-manage-authentication.md).
+<a name="azure-active-directory"></a> **Azure Active Directory (Azure AD)**: Azure AD is Microsoft's cloud-based identity and access management service. Azure Maps Azure AD integration is currently available in preview for all Azure Maps APIs. Azure AD supports Azure role-based access control (Azure RBAC) to allow fine-grained access to Azure Maps resources. To learn more about Azure Maps Azure AD integration, see [Azure Maps and Azure AD] and [Manage authentication in Azure Maps].
-<a name="azure-maps-key"></a> **Azure Maps key**: See [Shared key authentication](#shared-key-authentication).
+<a name="azure-maps-key"></a> **Azure Maps key**: See [Shared key authentication].
## B
The following list describes common words used with the Azure Maps services.
<a name="batch-request"></a> **Batch request**: The process of combining multiple requests into a single request.
-<a name="bearing"></a> **Bearing**: The horizontal direction of a point in relation to another point. This is expressed as an angle relative to north, from 0-degrees to 360 degrees in a clockwise direction.
+<a name="bearing"></a> **Bearing**: The horizontal direction of a point in relation to another point. This is expressed as an angle relative to north, from 0-degrees to 360 degrees in a clockwise direction.
<a name="boundary"></a> **Boundary**: A line or polygon separating adjacent political entities, such as countries/regions, districts, and properties. A boundary is a line that may or may not follow physical features, such as rivers, mountains, or walls.
-<a name="bounds"></a> **Bounds**: See [Bounding box](#bounding-box).
+<a name="bounds"></a> **Bounds**: See [Bounding box].
-<a name="bounding-box"></a> **Bounding box**: A set of coordinates used to represent a rectangular area on the map.
+<a name="bounding-box"></a> **Bounding box**: A set of coordinates used to represent a rectangular area on the map.
## C
-<a name="cadastre"></a> **Cadastre**: A record of registered land and properties. See also [Parcel](#parcel).
+<a name="cadastre"></a> **Cadastre**: A record of registered land and properties. See also [Parcel].
<a name="camera"></a> **Camera**: In the context of an interactive map control, a camera defines the maps field of view. The viewport of the camera is determined based on several map parameters: center, zoom level, pitch, bearing.
The following list describes common words used with the Azure Maps services.
<a name="concave-hull"></a> **Concave hull**: A shape that represents a possible concave geometry that encloses all shapes in the specified data set. The generated shape is similar to wrapping the data with plastic wrap and then heating it, thus causing large spans between points to cave in towards other data points.
-<a name="consumption-model"></a> **Consumption model**: Information that defines the rate at which a vehicle consumes fuel or electricity. Also see the [consumption model documentation](consumption-model.md).
+<a name="consumption-model"></a> **Consumption model**: Information that defines the rate at which a vehicle consumes fuel or electricity. Also see the [consumption model documentation].
<a name="control"></a> **Control**: A self-contained or reusable component consisting of a graphical user interface that defines a set of behaviors for the interface. For example, a map control, is generally the portion of the user interface that loads an interactive map.
The following list describes common words used with the Azure Maps services.
<a name="dijkstra's-algorithm"></a> **Dijkstra's algorithm**: An algorithm that examines the connectivity of a network to find the shortest path between two points.
-<a name="distance-matrix"></a> **Distance matrix**: A matrix that contains travel time and distance information between a set of origins and destinations.
+<a name="distance-matrix"></a> **Distance matrix**: A matrix that contains travel time and distance information between a set of origins and destinations.
## E <a name="elevation"></a> **Elevation**: The vertical distance of a point or an object above or below a reference surface or datum. Generally, the reference surface is mean sea level. Elevation generally refers to the vertical height of land.
-<a name="envelope"></a> **Envelope**: See [Bounding box](#bounding-box).
+<a name="envelope"></a> **Envelope**: See [Bounding box].
<a name="extended-postal-code"></a> **Extended postal code**: A postal code that may include more information. For example, in the USA, zip codes have five digits. But, an extended zip code, known as zip+4, includes four more digits. These digits are used to identify a geographic segment within the five-digit delivery area, such as a city block, a group of apartments, or a post office box. Knowing the geographic segment aids in efficient mail sorting and delivery.
-<a name="extent"></a> **Extent**: See [Bounding box](#bounding-box).
+<a name="extent"></a> **Extent**: See [Bounding box].
## F
-<a name="federated-authentication"></a> **Federated authentication**: An authentication method that allows a single logon/authentication mechanism to be used across multiple web and mobile apps.
+<a name="federated-authentication"></a> **Federated authentication**: An authentication method that allows a single logon/authentication mechanism to be used across multiple web and mobile apps.
<a name="feature"></a> **Feature**: An object that combines a geometry with metadata information.
The following list describes common words used with the Azure Maps services.
<a name="geofence"></a> **Geofence**: A defined geographical region that can be used to trigger events when a device enters or exists the region.
-<a name="geojson"></a> **GeoJSON**: Is a common JSON-based file format used for storing geographical vector data such as points, lines, and polygons. For more information Azure Maps use of an extended version of GeoJSON, see [Extended geojson](extend-geojson.md).
+<a name="geojson"></a> **GeoJSON**: Is a common JSON-based file format used for storing geographical vector data such as points, lines, and polygons. For more information Azure Maps use of an extended version of GeoJSON, see [Extended geojson].
<a name="geometry"></a> **Geometry**: Represents a spatial object such as a point, line, or polygon.
The following list describes common words used with the Azure Maps services.
<a name="hd-maps"></a> **HD maps** (High Definition Maps): consists of high fidelity road network information such as lane markings, signage, and direction lights required for autonomous driving.
-<a name="heading"></a> **Heading**: The direction something is pointing or facing. See also [Bearing](#heading).
+<a name="heading"></a> **Heading**: The direction something is pointing or facing. See also [Bearing].
<a name="heatmap"></a> **Heatmap**: A data visualization in which a range of colors represent the density of points in a particular area. See also Thematic map.
The following list describes common words used with the Azure Maps services.
<a name="iana"></a> **IANA**: An acronym for the Internet Assigned Numbers Authority. A nonprofit group that oversees global IP address allocation.
-<a name="isochrone"></a> **Isochrone**: An isochrone defines the area in which someone can travel within a specified time for a mode of transportation in any direction from a given location. See also [Reachable Range](#reachable-range).
+<a name="isochrone"></a> **Isochrone**: An isochrone defines the area in which someone can travel within a specified time for a mode of transportation in any direction from a given location. See also [Reachable Range].
-<a name="isodistance"></a> **Isodistance**: Given a location, an isochrone defines the area in which someone can travel within a specified distance for a mode of transportation in any direction. See also [Reachable Range](#reachable-range).
+<a name="isodistance"></a> **Isodistance**: Given a location, an isochrone defines the area in which someone can travel within a specified distance for a mode of transportation in any direction. See also [Reachable Range].
## K
-<a name="kml"></a> **KML**: Also known as Keyhole Markup Language, is a common XML file format for storing geographic vector data such as points, lines, and polygons.
+<a name="kml"></a> **KML**: Also known as Keyhole Markup Language, is a common XML file format for storing geographic vector data such as points, lines, and polygons.
## L
The following list describes common words used with the Azure Maps services.
## M
-<a name="map-tile"></a> **Map Tile**: A rectangular image that represents a partition of a map canvas. For more information, see the [Zoom levels and tile grid documentation](zoom-levels-and-tile-grid.md).
+<a name="map-tile"></a> **Map Tile**: A rectangular image that represents a partition of a map canvas. For more information, see [Zoom levels and tile grid].
<a name="marker"></a> **Marker**: Also called a pin or pushpin, is an icon that represents a point location on a map.
-<a name="mercator-projection"></a> **Mercator projection**: A cylindrical map projection that became the standard map projection for nautical purposes because of its ability to represent lines of constant course, known as rhumb lines, as straight segments that conserve the angles with the meridians. All flat map projections distort the shapes or sizes of the map when compared to the true layout of the Earth's surface. The Mercator projection exaggerates areas far from the equator, such that smaller areas appear larger on the map as you approach the poles.
+<a name="mercator-projection"></a> **Mercator projection**: A cylindrical map projection that became the standard map projection for nautical purposes because of its ability to represent lines of constant course, known as rhumb lines, as straight segments that conserve the angles with the meridians. All flat map projections distort the shapes or sizes of the map when compared to the true layout of the Earth's surface. The Mercator projection exaggerates areas far from the equator, such that smaller areas appear larger on the map as you approach the poles.
<a name="multilinestring"></a> **MultiLineString**: A geometry that represents a collection of LineString objects.
The following list describes common words used with the Azure Maps services.
<a name="position"></a> **Position**: The longitude, latitude, and altitude (x,y,z coordinates) of a point.
-<a name="post-code"></a> **Post code**: See [Postal code](#postal-code).
+<a name="post-code"></a> **Post code**: See [Postal code].
<a name="postal-code"></a> **Postal code**: A series of letters or numbers, or both, in a specific format. The postal-code is used by the postal service of a country/region to divide geographic areas into zones in order to simplify delivery of mail.
-<a name="primary-key"></a> **Primary key**: The first of two subscription keys provided for Azure Maps shared key authentication. See [Shared key authentication](#shared-key-authentication).
+<a name="primary-key"></a> **Primary key**: The first of two subscription keys provided for Azure Maps shared key authentication. See [Shared key authentication].
<a name="prime-meridian"></a> **Prime meridian**: A line of longitude that represents 0-degrees longitude. Generally, longitude values decrease when traveling in a westerly direction until 180 degrees and increase when traveling in easterly directions to -180-degrees.
The following list describes common words used with the Azure Maps services.
## Q
-<a name="quadkey"></a> **`Quadkey`**: A base-4 address index for a tile within a `quadtree` tiling system. For more information, see [Zoom levels and tile grid](zoom-levels-and-tile-grid.md).
+<a name="quadkey"></a> **`Quadkey`**: A base-4 address index for a tile within a `quadtree` tiling system. For more information, see [Zoom levels and tile grid].
-<a name="quadtree"></a> **`Quadtree`**: A data structure in which each node has exactly four children. The tiling system used in Azure Maps uses a 'quadtree' structure such that as a user zooms in one level, each map tile breaks up into four subtiles. For more information, see [Zoom levels and tile grid](zoom-levels-and-tile-grid.md).
+<a name="quadtree"></a> **`Quadtree`**: A data structure in which each node has exactly four children. The tiling system used in Azure Maps uses a 'quadtree' structure such that as a user zooms in one level, each map tile breaks up into four subtiles. For more information, see [Zoom levels and tile grid].
<a name="queries-per-second-qps"></a> **Queries Per Second (QPS)**: The number of queries or requests that can be made to a service or platform within one second.
The following list describes common words used with the Azure Maps services.
<a name="raster-layer"></a> **Raster layer**: A tile layer that consists of raster images.
-<a name="reachable-range"></a> **Reachable range**: A reachable range defines the area in which someone can travel within a specified time or distance, for a mode of transportation to travel, in any direction from a location. See also [Isochrone](#isochrone) and [Isodistance](#isodistance).
+<a name="reachable-range"></a> **Reachable range**: A reachable range defines the area in which someone can travel within a specified time or distance, for a mode of transportation to travel, in any direction from a location. See also [Isochrone] and [Isodistance].
<a name="remote-sensing"></a> **Remote sensing**: The process of collecting and interpreting sensor data from a distance.
The following list describes common words used with the Azure Maps services.
<a name="reverse-geocode"></a> **Reverse geocode**: The process of taking a coordinate and determining the address it represents on a map.
-<a name="reproject"></a> **Reproject**: See [Transformation](#transformation).
+<a name="reproject"></a> **Reproject**: See [Transformation].
<a name="rest-service"></a> **REST service**: Acronym for Representational State Transfer. An architecture for exchanging information between peers in a decentralized, distributed environment. REST allows programs on different computers to communicate independently of an operating system or platform. A service can send a Hypertext Transfer Protocol (HTTP) request to a uniform resource locator (URL) and get back data. <a name="route"></a> **Route**: A path between two or more locations, which may also include additional information such as instructions for waypoints along the route.
-<a name="requests-per-second-rps"></a> **Requests Per Second (RPS)**: See [Queries Per Second (QPS)](#queries-per-second-qps).
+<a name="requests-per-second-rps"></a> **Requests Per Second (RPS)**: See [Queries Per Second (QPS)].
<a name="rss"></a> **RSS**: Acronym for Really Simple Syndication, Resource Description Framework (RDF) Site Summary, or Rich Site Summary, depending on the source. A simple, structured XML format for sharing content among different Web sites. RSS documents include key metadata elements such as author, date, title, a brief description, and a hypertext link. This information helps a user (or an RSS publisher service) decide what materials are worth further investigation.
The following list describes common words used with the Azure Maps services.
<a name="satellite-imagery"></a> **Satellite imagery**: Imagery captured by planes and satellites pointing straight down.
-<a name="secondary-key"></a> **Secondary key**: The second of two subscriptions keys provided for Azure Maps shared key authentication. See [Shared key authentication](#shared-key-authentication).
+<a name="secondary-key"></a> **Secondary key**: The second of two subscriptions keys provided for Azure Maps shared key authentication. See [Shared key authentication].
-<a name="shapefile-shp"></a> **Shapefile (SHP)**: Or *ESRI Shapefile*, is a vector data storage format for storing the location, shape, and attributes of geographic features. A shapefile is stored in a set of related files.
+<a name="shapefile-shp"></a> **Shapefile (SHP)**: Or _ESRI Shapefile_, is a vector data storage format for storing the location, shape, and attributes of geographic features. A shapefile is stored in a set of related files.
-<a name="shared-key-authentication"></a> **Shared key authentication**: Shared Key authentication relies on passing Azure Maps account generated keys with each request to Azure Maps. These keys are often referred to as subscription keys. It's recommended that keys are regularly regenerated for security. Two keys are provided so that you can maintain connections using one key while regenerating the other. When you regenerate your keys, you must update any applications that access this account to use the new keys. To learn more about Azure Maps authentication, see [Azure Maps and Azure AD](azure-maps-authentication.md) and [Manage authentication in Azure Maps](how-to-manage-authentication.md).
+<a name="shared-key-authentication"></a> **Shared key authentication**: Shared Key authentication relies on passing Azure Maps account generated keys with each request to Azure Maps. These keys are often referred to as subscription keys. It's recommended that keys are regularly regenerated for security. Two keys are provided so that you can maintain connections using one key while regenerating the other. When you regenerate your keys, you must update any applications that access this account to use the new keys. To learn more about Azure Maps authentication, see [Azure Maps and Azure AD] and [Manage authentication in Azure Maps].
<a name="software-development-kit-sdk"></a> **Software development kit (SDK)**: A collection of documentation, sample code, and sample apps to help a developer use an API to build apps.
-<a name="spherical-mercator-projection"></a> **Spherical Mercator projection**: See [Web Mercator](#web-mercator).
+<a name="spherical-mercator-projection"></a> **Spherical Mercator projection**: See [Web Mercator].
<a name="spatial-query"></a> **Spatial query**: A request made to a service that performs a spatial operation. Such as a radial search, or along a route search.
-<a name="spatial-reference"></a> **Spatial reference**: A coordinate-based local, regional, or global system used to precisely locate geographical entities. It defines the coordinate system used to relate map coordinates to locations in the real world. Spatial references ensure spatial data from different layers, or sources, can be integrated for accurate viewing or analysis. Azure Maps uses the [EPSG:3857](https://epsg.io/3857) coordinate reference system and WGS 84 for input geometry data.
+<a name="spatial-reference"></a> **Spatial reference**: A coordinate-based local, regional, or global system used to precisely locate geographical entities. It defines the coordinate system used to relate map coordinates to locations in the real world. Spatial references ensure spatial data from different layers, or sources, can be integrated for accurate viewing or analysis. Azure Maps uses the [EPSG:3857] coordinate reference system and WGS 84 for input geometry data.
-<a name="sql-spatial"></a> **SQL spatial**: Refers to the spatial functionality built into SQL Azure and SQL Server 2008 and above. This spatial functionality is also available as a .NET library that can be used independently of SQL Server. For more information, see [Spatial Data (SQL Server)](/sql/relational-databases/spatial/spatial-data-sql-server).
+<a name="sql-spatial"></a> **SQL spatial**: Refers to the spatial functionality built into SQL Azure and SQL Server 2008 and above. This spatial functionality is also available as a .NET library that can be used independently of SQL Server. For more information, see [Spatial Data (SQL Server)].
-<a name="subscription-key"></a> **Subscription key**: See [Shared key authentication](#shared-key-authentication).
+<a name="subscription-key"></a> **Subscription key**: See [Shared key authentication].
<a name="synchronous-request"></a> **Synchronous request**: An HTTP request opens a connection and waits for a response. Browsers limit the number of concurrent HTTP requests that can be made from a page. If multiple long running synchronous requests are made at the same time, then this limit can be reached. Requests are delayed until one of the other requests has completed.
The following list describes common words used with the Azure Maps services.
- One transaction is created for every 15 map or traffic tiles requested.
- One transaction is created for each API call to one of the services in Azure Maps. Searching and routing are examples of Azure Maps services.
-<a name="transformation"></a> **Transformation**: The process of converting data between different geographic coordinate systems. You may, for example, have some data that was captured in the United Kingdom and based on the OSGB 1936 geographic coordinate system. Azure Maps uses the [EPSG:3857](https://epsg.io/3857) coordinate reference system variant of WGS84. As such to display the data correctly, it needs to have its coordinates transformed from one system to another.
+<a name="transformation"></a> **Transformation**: The process of converting data between different geographic coordinate systems. You may, for example, have some data that was captured in the United Kingdom and based on the OSGB 1936 geographic coordinate system. Azure Maps uses the [EPSG:3857] coordinate reference system variant of WGS84. As such to display the data correctly, it needs to have its coordinates transformed from one system to another.
<a name="traveling-salesmen-problem-tsp"></a> **Traveling Salesmen Problem (TSP)**: A Hamiltonian circuit problem in which a salesperson must find the most efficient way to visit a series of stops, then return to the starting location.
The following list describes common words used with the Azure Maps services.
<a name="vector-data"></a> **Vector data**: Coordinate based data that is represented as points, lines, or polygons.
-<a name="vector-tile"></a> **Vector tile**: An open data specification for storing geospatial vector data using the same tile system as the map control. See also [Tile layer](#tile-layer).
+<a name="vector-tile"></a> **Vector tile**: An open data specification for storing geospatial vector data using the same tile system as the map control. See also [Tile layer].
<a name="vehicle-routing-problem-vrp"></a> **Vehicle Routing Problem (VRP)**: A class of problems, in which a set of ordered routes for a fleet of vehicles is calculated while taking into consideration as set of constraints. These constraints may include delivery time windows, multiple route capacities, and travel duration constraints.
-<a name="voronoi-diagram"></a> **Voronoi diagram**: A partition of space into areas, or cells, that surrounds a set of geometric objects, usually point features. These cells, or polygons, must satisfy the criteria for Delaunay triangles. All locations within an area are closer to the object it surrounds than to any other object in the set. Voronoi diagrams are often used to delineate areas of influence around geographic features.
+<a name="voronoi-diagram"></a> **Voronoi diagram**: A partition of space into areas, or cells, that surrounds a set of geometric objects, usually point features. These cells, or polygons, must satisfy the criteria for Delaunay triangles. All locations within an area are closer to the object it surrounds than to any other object in the set. Voronoi diagrams are often used to delineate areas of influence around geographic features.
## W <a name="waypoint"></a> **Waypoint**: A waypoint is a specified geographical location defined by longitude and latitude that is used for navigational purposes. Often used to represent a point in which someone navigates a route through.
-<a name="waypoint-optimization"></a> **Waypoint optimization**: The process of reordering a set of waypoints to minimize the travel time or distance required to pass through all provided waypoints. Depending on the complexity of the optimization, this optimization is often referred to as the [Traveling Salesmen Problem](#traveling-salesmen-problem-tsp) or [Vehicle Routing Problem](#vehicle-routing-problem-vrp).
+<a name="waypoint-optimization"></a> **Waypoint optimization**: The process of reordering a set of waypoints to minimize the travel time or distance required to pass through all provided waypoints. Depending on the complexity of the optimization, this optimization is often referred to as the [Traveling Salesmen Problem] or [Vehicle Routing Problem].
<a name="web-map-service-wms"></a> **Web Map Service (WMS)**: WMS is an Open Geographic Consortium (OGC) standard that defines image-based map services. WMS services provide map images for specific areas within a map on demand. Images include prerendered symbology and may be rendered in one of several named styles if defined by the service. <a name="web-mercator"></a> **Web Mercator**: Also known as Spherical Mercator projection. It's a slight variant of the Mercator projection, one used primarily in Web-based mapping programs. It uses the same formulas as the standard Mercator projection as used for small-scale maps. However, the Web Mercator uses the spherical formulas at all scales, but large-scale Mercator maps normally use the ellipsoidal form of the projection. The discrepancy is imperceptible at the global scale, but it causes maps of local areas to deviate slightly from true ellipsoidal Mercator maps, at the same scale.
-<a name="wgs84"></a> **WGS84**: A set of constants used to relate spatial coordinates to locations on the surface of the map. The WGS84 datum is the standard one used by most online mapping providers and GPS devices. Azure Maps uses the [EPSG:3857](https://epsg.io/3857) coordinate reference system variant of WGS84.
+<a name="wgs84"></a> **WGS84**: A set of constants used to relate spatial coordinates to locations on the surface of the map. The WGS84 datum is the standard one used by most online mapping providers and GPS devices. Azure Maps uses the [EPSG:3857] coordinate reference system variant of WGS84.
## Z
-<a name="z-coordinate"></a> **Z-coordinate**: See [Altitude](#altitude).
-
-<a name="zip-code"></a> **Zip code**: See [Postal code](#postal-code).
-
-<a name="Zoom level"></a> **Zoom level**: Specifies the level of detail and how much of the map is visible. When zoomed all the way to level 0, the full world map is often visible. But, the map shows limited details such as country/region names, borders, and ocean names. When zoomed in closer to level 17, the map displays an area of a few city blocks with detailed road information. In Azure Maps, the highest zoom level is 22. For more information, see the [Zoom levels and tile grid](zoom-levels-and-tile-grid.md) documentation.
+<a name="z-coordinate"></a> **Z-coordinate**: See [Altitude].
+
+<a name="zip-code"></a> **Zip code**: See [Postal code].
+
+<a name="Zoom level"></a> **Zoom level**: Specifies the level of detail and how much of the map is visible. When zoomed all the way to level 0, the full world map is often visible. But, the map shows limited details such as country/region names, borders, and ocean names. When zoomed in closer to level 17, the map displays an area of a few city blocks with detailed road information. In Azure Maps, the highest zoom level is 22. For more information, see [Zoom levels and tile grid].
+
+[Satellite imagery]: #satellite-imagery
+[Shared key authentication]: #shared-key-authentication
+[Azure Maps and Azure AD]: azure-maps-authentication.md
+[Manage authentication in Azure Maps]: how-to-manage-authentication.md
+[Bounding box]: #bounding-box
+[Parcel]: #parcel
+[consumption model documentation]: consumption-model.md
+[Extended geojson]: extend-geojson.md
+[Bearing]: #bearing
+[Reachable Range]: #reachable-range
+[Zoom levels and tile grid]: zoom-levels-and-tile-grid.md
+[Postal code]: #postal-code
+[Isochrone]: #isochrone
+[Isodistance]: #isodistance
+[Transformation]: #transformation
+[Queries Per Second (QPS)]: #queries-per-second-qps
+[EPSG:3857]: https://epsg.io/3857
+[Spatial Data (SQL Server)]: /sql/relational-databases/spatial/spatial-data-sql-server
+[Tile layer]: #tile-layer
+[Traveling Salesmen Problem]: #traveling-salesmen-problem-tsp
+[Vehicle Routing Problem]: #vehicle-routing-problem-vrp
+[Altitude]: #altitude
+[Web Mercator]: #web-mercator
azure-maps How To Add Shapes To Android Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-add-shapes-to-android-map.md
This article shows you how to render the areas of `Polygon` and `MultiPolygon` f
## Prerequisites
-Be sure to complete the steps in the [Quickstart: Create an Android app](quick-android-map.md) document. Code blocks in this article can be inserted into the maps `onReady` event handler.
+Be sure to complete the steps in the [Quickstart: Create an Android app] document. Code blocks in this article can be inserted into the map's `onReady` event handler.
## Use a polygon layer
The following image is a screenshot of the above code rendering a polygon with a
See the following articles for more code samples to add to your maps: > [!div class="nextstepaction"]
-> [Create a data source](create-data-source-android-sdk.md)
+> [Create a data source]
> [!div class="nextstepaction"]
-> [Use data-driven style expressions](data-driven-style-expressions-android-sdk.md)
+> [Use data-driven style expressions]
> [!div class="nextstepaction"]
-> [Add a line layer](android-map-add-line-layer.md)
+> [Add a line layer]
> [!div class="nextstepaction"]
-> [Add a polygon extrusion layer](map-extruded-polygon-android.md)
+> [Add a polygon extrusion layer]
+
+[Add a line layer]: android-map-add-line-layer.md
+[Add a polygon extrusion layer]: map-extruded-polygon-android.md
+[Create a data source]: create-data-source-android-sdk.md
+[Quickstart: Create an Android app]: quick-android-map.md
+[Use data-driven style expressions]: data-driven-style-expressions-android-sdk.md
azure-maps How To Add Symbol To Android Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-add-symbol-to-android-map.md
This article shows you how to render point data from a data source as a symbol l
## Prerequisites
-Be sure to complete the steps in the [Quickstart: Create an Android app](quick-android-map.md) document. Code blocks in this article can be inserted into the maps `onReady` event handler.
+Be sure to complete the steps in the [Quickstart: Create an Android app] document. Code blocks in this article can be inserted into the map's `onReady` event handler.
## Add a symbol layer
There are three different types of point data that can be added to the map:
- GeoJSON MultiPoint geometry - This object contains the coordinates of multiple points and nothing else. Pass an array of points into the `MultiPoint` class to create these objects. - GeoJSON Feature - This object consists of any GeoJSON geometry and a set of properties that contain metadata associated to the geometry.
-For more information, see the [Create a data source](create-data-source-android-sdk.md) document on creating and adding data to the map.
+For more information, see the [Create a data source] document on creating and adding data to the map.
The following code sample creates a GeoJSON Point geometry, passes it into a GeoJSON Feature, and adds a `title` value to its properties. The `title` property is displayed as text above the symbol icon on the map.
The following code is a modified version of the default marker vector XML that y
See the following articles for more code samples to add to your maps: > [!div class="nextstepaction"]
-> [Create a data source](create-data-source-android-sdk.md)
+> [Create a data source]
> [!div class="nextstepaction"]
-> [Cluster point data](clustering-point-data-android-sdk.md)
+> [Cluster point data]
> [!div class="nextstepaction"]
-> [Add a bubble layer](map-add-bubble-layer-android.md)
+> [Add a bubble layer]
> [!div class="nextstepaction"]
-> [Use data-driven style expressions](data-driven-style-expressions-android-sdk.md)
+> [Use data-driven style expressions]
> [!div class="nextstepaction"]
-> [Display feature information](display-feature-information-android.md)
+> [Display feature information]
+
+[Add a bubble layer]: map-add-bubble-layer-android.md
+[Cluster point data]: clustering-point-data-android-sdk.md
+[Create a data source]: create-data-source-android-sdk.md
+[Display feature information]: display-feature-information-android.md
+[Quickstart: Create an Android app]: quick-android-map.md
+[Use data-driven style expressions]: data-driven-style-expressions-android-sdk.md
azure-maps How To Add Tile Layer Android Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-add-tile-layer-android-map.md
zone_pivot_groups: azure-maps-android
# Add a tile layer to a map (Android SDK)
-This article shows you how to render a tile layer on a map using the Azure Maps Android SDK. Tile layers allow you to superimpose images on top of Azure Maps base map tiles. More information on Azure Maps tiling system can be found in the [Zoom levels and tile grid](zoom-levels-and-tile-grid.md) documentation.
+This article shows you how to render a tile layer on a map using the Azure Maps Android SDK. Tile layers allow you to superimpose images on top of Azure Maps base map tiles. For more information on the Azure Maps tiling system, see the [Zoom levels and tile grid] documentation.
A tile layer loads tiles from a server. These images can be prerendered and stored like any other image on a server, using a naming convention that the tile layer understands. Or, these images can be rendered with a dynamic service that generates the images in near real time. There are three different tile service naming conventions supported by the Azure Maps TileLayer class:
* X, Y, Zoom notation - Based on the zoom level, x is the column and y is the row position of the tile in the tile grid.
* Quadkey notation - Combines the x, y, and zoom information into a single string value that is a unique identifier for a tile (see the sketch after this list).
-* Bounding Box - Bounding box coordinates can be used to specify an image in the format `{west},{south},{east},{north}`, which is commonly used by [web-mapping Services (WMS)](https://www.opengeospatial.org/standards/wms).
+* Bounding Box - Bounding box coordinates can be used to specify an image in the format `{west},{south},{east},{north}`, which is commonly used by [web-mapping Services (WMS)].
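For example, the quadkey for a tile can be derived from its x, y, zoom address by interleaving the bits of x and y; the following sketch shows the well-known conversion (illustrative, not an Azure Maps SDK call):

```python
# Sketch: convert an x, y, zoom tile address to its quadkey string.
def tile_to_quadkey(x, y, zoom):
    quadkey = []
    for i in range(zoom, 0, -1):
        digit = 0
        mask = 1 << (i - 1)
        if x & mask:
            digit += 1
        if y & mask:
            digit += 2
        quadkey.append(str(digit))
    return "".join(quadkey)

print(tile_to_quadkey(x=35210, y=21493, zoom=16))  # hypothetical tile address
```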
> [!TIP] > A TileLayer is a great way to visualize large data sets on the map. Not only can a tile layer be generated from an image, but vector data can also be rendered as a tile layer. By rendering vector data as a tile layer, the map control only needs to load the tiles, which can be much smaller in file size than the vector data they represent. This technique is used by many who need to render millions of rows of data on the map.
The tile URL passed into a Tile layer must be an http/https URL to a TileJSON re
## Prerequisites
-To complete the process in this article, you need to install [Azure Maps Android SDK](how-to-use-android-map-control-library.md) to load a map.
+To complete the process in this article, you need to install [Azure Maps Android SDK] to load a map.
## Add a tile layer to the map
-This sample shows how to create a tile layer that points to a set of tiles. This sample uses the "x, y, zoom" tiling system. The source of this tile layer is the [OpenSeaMap project](https://openseamap.org/index.php), which contains crowd sourced nautical charts. Often when viewing tile layers it's desirable to be able to clearly see the labels of cities on the map. This behavior can be achieved by inserting the tile layer below the map label layers.
+This sample shows how to create a tile layer that points to a set of tiles. This sample uses the "x, y, zoom" tiling system. The source of this tile layer is the [OpenSeaMap project], which contains crowd sourced nautical charts. Often when viewing tile layers it's desirable to be able to clearly see the labels of cities on the map. This behavior can be achieved by inserting the tile layer below the map label layers.
::: zone pivot="programming-language-java-android"
map.layers.add(layer, "labels")
::: zone-end
-The following screenshot shows the above code overlaying a web-mapping service of geological data from the [U.S. Geological Survey (USGS)](https://mrdata.usgs.gov/) on top of a map, below the labels.
+The following screenshot shows the above code overlaying a web-mapping service of geological data from the [U.S. Geological Survey (USGS)] on top of a map, below the labels.
![Android map displaying WMS tile layer](media/how-to-add-tile-layer-android-map/android-tile-layer-wms.jpg)
The following screenshot shows the above code overlaying a web-mapping tile serv
See the following article to learn more about ways to overlay imagery on a map. > [!div class="nextstepaction"]
-> [Image layer](map-add-image-layer-android.md)
+> [Image layer]
+
+[Azure Maps Android SDK]: how-to-use-android-map-control-library.md
+[Image layer]: map-add-image-layer-android.md
+[OpenSeaMap project]: https://openseamap.org/index.php
+[U.S. Geological Survey (USGS)]: https://mrdata.usgs.gov
+[web-mapping Services (WMS)]: https://www.opengeospatial.org/standards/wms
+[Zoom levels and tile grid]: zoom-levels-and-tile-grid.md
azure-monitor Convert Classic Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/convert-classic-resource.md
Workspace-based Application Insights resources allow you to take advantage of th
When you migrate to a workspace-based resource, no data is transferred from your classic resource's storage to the new workspace-based storage. Choosing to migrate changes the location where new data is written to a Log Analytics workspace while preserving access to your classic resource data.
-Your classic resource data persists and is subject to the retention settings on your classic Application Insights resource. All new data ingested post migration is subject to the [retention settings](../logs/data-retention-archive.md) of the associated Log Analytics workspace, which also supports [different retention settings by data type](../logs/data-retention-archive.md#set-retention-and-archive-policy-by-table).
+Your classic resource data persists and is subject to the retention settings on your classic Application Insights resource. All new data ingested post migration is subject to the [retention settings](../logs/data-retention-archive.md) of the associated Log Analytics workspace, which also supports [different retention settings by data type](../logs/data-retention-archive.md#configure-retention-and-archive-at-the-table-level).
*The migration process is permanent and can't be reversed.* After you migrate a resource to workspace-based Application Insights, it will always be a workspace-based resource. After you migrate, you can change the target workspace as often as needed.
If you don't need to migrate an existing resource, and instead want to create a
- Check your current retention settings under **Settings** > **Usage and estimated costs** > **Data Retention** for your Log Analytics workspace. This setting affects how long any new ingested data is stored after you migrate your Application Insights resource. > [!NOTE]
- > - If you currently store Application Insights data for longer than the default 90 days and want to retain this longer retention period after migration, adjust your [workspace retention settings](../logs/data-retention-archive.md?tabs=portal-1%2cportal-2#set-retention-and-archive-policy-by-table).
+ > - If you currently store Application Insights data for longer than the default 90 days and want to retain this longer retention period after migration, adjust your [workspace retention settings](../logs/data-retention-archive.md?tabs=portal-1%2cportal-2#configure-retention-and-archive-at-the-table-level).
> - If you've selected data retention longer than 90 days on data ingested into the classic Application Insights resource prior to migration, data retention continues to be billed through that Application Insights resource until the data exceeds the retention period. > - If the retention setting for your Application Insights instance under **Configure** > **Usage and estimated costs** > **Data Retention** is enabled, use that setting to control the retention days for the telemetry data still saved in your classic resource's storage.
For the full Azure CLI documentation for this command, see the [Azure CLI docume
### Azure PowerShell
-Starting with version 8.0 or higher of [Azure PowerShell](https://learn.microsoft.com/powershell/azure/what-is-azure-powershell), you can use the `Update-AzApplicationInsights` PowerShell command to migrate a classic Application Insights resource to workspace based.
+Starting with version 8.0 or higher of [Azure PowerShell](/powershell/azure/what-is-azure-powershell), you can use the `Update-AzApplicationInsights` PowerShell command to migrate a classic Application Insights resource to workspace based.
-To use this cmdlet, you need to specify the name and resource group of the Application Insights resource that you want to update. Use the `IngestionMode` and `WorkspaceResoruceId` parameters to migrate your classic instance to workspace-based. For more information on the parameters and syntax of this cmdlet, see [Update-AzApplicationInsights](https://learn.microsoft.com/powershell/module/az.applicationinsights/update-azapplicationinsights).
+To use this cmdlet, you need to specify the name and resource group of the Application Insights resource that you want to update. Use the `IngestionMode` and `WorkspaceResoruceId` parameters to migrate your classic instance to workspace-based. For more information on the parameters and syntax of this cmdlet, see [Update-AzApplicationInsights](/powershell/module/az.applicationinsights/update-azapplicationinsights).
#### Example
Yes, they continue to work.
No. Migration doesn't affect existing API access to data. After migration, you can access data directly from a workspace by using a [slightly different schema](#workspace-based-resource-changes).
-### Is there be any impact on Live Metrics or other monitoring experiences?
+### Is there any impact on Live Metrics or other monitoring experiences?
No. There's no impact to [Live Metrics](live-stream.md#live-metrics-monitor-and-diagnose-with-1-second-latency) or other monitoring experiences.
azure-monitor Data Retention Privacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-retention-privacy.md
You'll need to write a [telemetry processor plug-in](./api-filtering-sampling.md
## How long is the data kept?
-Raw data points (that is, items that you can query in Analytics and inspect in Search) are kept for up to 730 days. You can [select a retention duration](../logs/data-retention-archive.md#set-retention-and-archive-policy-by-table) of 30, 60, 90, 120, 180, 270, 365, 550, or 730 days. If you need to keep data longer than 730 days, you can use [diagnostic settings](../essentials/diagnostic-settings.md#diagnostic-settings-in-azure-monitor).
+Raw data points (that is, items that you can query in Analytics and inspect in Search) are kept for up to 730 days. You can [select a retention duration](../logs/data-retention-archive.md#configure-retention-and-archive-at-the-table-level) of 30, 60, 90, 120, 180, 270, 365, 550, or 730 days. If you need to keep data longer than 730 days, you can use [diagnostic settings](../essentials/diagnostic-settings.md#diagnostic-settings-in-azure-monitor).
Data kept longer than 90 days incurs extra charges. For more information about Application Insights pricing, see the [Azure Monitor pricing page](https://azure.microsoft.com/pricing/details/monitor/).
azure-monitor Opentelemetry Add Modify https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-add-modify.md
Title: Add and modify Azure Monitor OpenTelemetry for .NET, Java, Node.js, and Python applications
-description: This article provides guidance on how to add and modify OpenTelemetry for applications using Azure Monitor.
+ Title: Add, modify, and filter Azure Monitor OpenTelemetry for .NET, Java, Node.js, and Python applications
+description: This article provides guidance on how to add, modify, and filter OpenTelemetry for applications using Azure Monitor.
Last updated 06/22/2023 ms.devlang: csharp, javascript, typescript, python
-# Add and modify OpenTelemetry
+# Add, modify, and filter OpenTelemetry
-This article provides guidance on how to add and modify OpenTelemetry for applications using [Azure Monitor Application Insights](app-insights-overview.md#application-insights-overview).
+This article provides guidance on how to add, modify, and filter OpenTelemetry for applications using [Azure Monitor Application Insights](app-insights-overview.md#application-insights-overview).
To learn more about OpenTelemetry concepts, see the [OpenTelemetry overview](opentelemetry-overview.md) or [OpenTelemetry FAQ](/azure/azure-monitor/faq#opentelemetry).
logger.warning("WARNING: Warning log with properties", extra={"key1": "value1"})
-### Filter telemetry
+## Filter telemetry
You might use the following ways to filter out telemetry before it leaves your application.
-#### [ASP.NET Core](#tab/aspnetcore)
+### [ASP.NET Core](#tab/aspnetcore)
1. Many instrumentation libraries provide a filter option. For guidance, see the readme files of individual instrumentation libraries: - [ASP.NET Core](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.14/src/OpenTelemetry.Instrumentation.AspNetCore/README.md#filter)
You might use the following ways to filter out telemetry before it leaves your a
1. If a particular source isn't explicitly added by using `AddSource("ActivitySourceName")`, then none of the activities created by using that source are exported.
-#### [.NET](#tab/net)
+### [.NET](#tab/net)
1. Many instrumentation libraries provide a filter option. For guidance, see the readme files of individual instrumentation libraries: - [ASP.NET](https://github.com/open-telemetry/opentelemetry-dotnet-contrib/blob/Instrumentation.AspNet-1.0.0-rc9.8/src/OpenTelemetry.Instrumentation.AspNet/README.md#filter)
You might use the following ways to filter out telemetry before it leaves your a
1. If a particular source isn't explicitly added by using `AddSource("ActivitySourceName")`, then none of the activities created by using that source are exported.
-#### [Java](#tab/java)
+### [Java](#tab/java)
See [sampling overrides](java-standalone-config.md#sampling-overrides-preview) and [telemetry processors](java-standalone-telemetry-processors.md).
-#### [Node.js](#tab/nodejs)
+### [Node.js](#tab/nodejs)
1. Exclude the URL option provided by many HTTP instrumentation libraries.
Use the add [custom property example](#add-a-custom-property-to-a-span), but rep
} ```
-#### [Python](#tab/python)
+### [Python](#tab/python)
1. Exclude the URL with the `OTEL_PYTHON_EXCLUDED_URLS` environment variable: ```
Use the add [custom property example](#add-a-custom-property-to-a-span), but rep
<!-- For more information, see [GitHub Repo](link). -->
-### Get the trace ID or span ID
+## Get the trace ID or span ID
You might want to get the trace ID or span ID. If you have logs sent to a destination other than Application Insights, consider adding the trace ID or span ID. Doing so enables better correlation when debugging and diagnosing issues.
-#### [ASP.NET Core](#tab/aspnetcore)
+### [ASP.NET Core](#tab/aspnetcore)
> [!NOTE] > The `Activity` and `ActivitySource` classes from the `System.Diagnostics` namespace represent the OpenTelemetry concepts of `Span` and `Tracer`, respectively. That's because parts of the OpenTelemetry tracing API are incorporated directly into the .NET runtime. To learn more, see [Introduction to OpenTelemetry .NET Tracing API](https://github.com/open-telemetry/opentelemetry-dotnet/blob/main/src/OpenTelemetry.Api/README.md#introduction-to-opentelemetry-net-tracing-api).
string traceId = activity?.TraceId.ToHexString();
string spanId = activity?.SpanId.ToHexString(); ```
-#### [.NET](#tab/net)
+### [.NET](#tab/net)
> [!NOTE] > The `Activity` and `ActivitySource` classes from the `System.Diagnostics` namespace represent the OpenTelemetry concepts of `Span` and `Tracer`, respectively. That's because parts of the OpenTelemetry tracing API are incorporated directly into the .NET runtime. To learn more, see [Introduction to OpenTelemetry .NET Tracing API](https://github.com/open-telemetry/opentelemetry-dotnet/blob/main/src/OpenTelemetry.Api/README.md#introduction-to-opentelemetry-net-tracing-api).
string traceId = activity?.TraceId.ToHexString();
string spanId = activity?.SpanId.ToHexString(); ```
-#### [Java](#tab/java)
+### [Java](#tab/java)
You can use `opentelemetry-api` to get the trace ID or span ID.
You can use `opentelemetry-api` to get the trace ID or span ID.
String spanId = span.getSpanContext().getSpanId(); ```
-#### [Node.js](#tab/nodejs)
+### [Node.js](#tab/nodejs)
Get the request trace ID and the span ID in your code:
Get the request trace ID and the span ID in your code:
let traceId = trace.getActiveSpan().spanContext().traceId; ```
-#### [Python](#tab/python)
+### [Python](#tab/python)
Get the request trace ID and the span ID in your code:
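The article's Python sample isn't reproduced in this digest; as a minimal sketch of the idea, assuming the `opentelemetry-api` package is installed, the current IDs can be read like this:

```python
# Sketch: read the current trace ID and span ID with the OpenTelemetry Python API.
from opentelemetry import trace

span = trace.get_current_span()
ctx = span.get_span_context()

# IDs are integers; format them as the usual hex strings for logging and correlation.
trace_id = format(ctx.trace_id, "032x")
span_id = format(ctx.span_id, "016x")
print(trace_id, span_id)
```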
azure-monitor Sdk Connection String https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sdk-connection-string.md
A connection string consists of a list of settings represented as key-value pair
#### Syntax - `InstrumentationKey` (for example, 00000000-0000-0000-0000-000000000000).
- This is a *required* field.
+ `InstrumentationKey` is a *required* field.
- `Authorization` (for example, ikey). This setting is optional because today we only support ikey authorization. - `EndpointSuffix` (for example, applicationinsights.azure.cn).
- Setting the endpoint suffix will instruct the SDK on which Azure cloud to connect to. The SDK will assemble the rest of the endpoint for individual services.
+ Setting the endpoint suffix tells the SDK which Azure cloud to connect to. The SDK assembles the rest of the endpoint for individual services.
- Explicit endpoints. Any service can be explicitly overridden in the connection string: - `IngestionEndpoint` (for example, `https://dc.applicationinsights.azure.com`)
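As a rough illustration of how these key-value pairs fit together, the following sketch parses a connection string and expands an `EndpointSuffix` into per-service endpoints; the parsing helper and the `dc`/`live` host prefixes are illustrative assumptions, not the SDK's internal logic:

```python
# Sketch: parse a connection string and assemble endpoints from an EndpointSuffix.
# The "dc" and "live" host prefixes below are illustrative assumptions.
def parse_connection_string(cs):
    pairs = (kv.split("=", 1) for kv in cs.strip().strip(";").split(";"))
    return {k.strip(): v.strip() for k, v in pairs}

settings = parse_connection_string(
    "InstrumentationKey=00000000-0000-0000-0000-000000000000;EndpointSuffix=ai.contoso.com;"
)

suffix = settings.get("EndpointSuffix")
if suffix and "IngestionEndpoint" not in settings:
    settings["IngestionEndpoint"] = f"https://dc.{suffix}"   # assumed prefix
    settings["LiveEndpoint"] = f"https://live.{suffix}"      # assumed prefix

print(settings)
```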
Here are some examples of connection strings.
`InstrumentationKey=00000000-0000-0000-0000-000000000000;EndpointSuffix=ai.contoso.com;`
-In this example, the connection string specifies the endpoint suffix and the SDK will construct service endpoints:
+In this example, the connection string specifies the endpoint suffix and the SDK constructs service endpoints:
- Authorization scheme defaults to "ikey" - Instrumentation key: 00000000-0000-0000-0000-000000000000
In this example, the connection string specifies the endpoint suffix and the SDK
`InstrumentationKey=00000000-0000-0000-0000-000000000000;IngestionEndpoint=https://custom.com:111/;LiveEndpoint=https://custom.com:222/;ProfilerEndpoint=https://custom.com:333/;SnapshotEndpoint=https://custom.com:444/;`
-In this example, the connection string specifies explicit overrides for every service. The SDK will use the exact endpoints provided without modification:
+In this example, the connection string specifies explicit overrides for every service. The SDK uses the exact endpoints provided without modification:
- Authorization scheme defaults to "ikey" - Instrumentation key: 00000000-0000-0000-0000-000000000000
Connection string: `APPLICATIONINSIGHTS_CONNECTION_STRING`
builder.Services.AddApplicationInsightsTelemetry(options: options); ```
-> [!NOTE]
-> When deploying applications to Azure in production scenarios, consider placing connection strings or other configuration secrets in secure locations such as App Service configuration settings or Azure Key Vault. Avoid including secrets in your application code or checking them into source control where they might be exposed or misused. The preceding code example will also work if the connection string is stored in App Service configuration settings. Learn more about [configuring App Service settings](/azure/app-service/configure-common).
- # [.NET Framework](#tab/dotnet-framework) Set the property [TelemetryConfiguration.ConnectionString](https://github.com/microsoft/ApplicationInsights-dotnet/blob/add45ceed35a817dc7202ec07d3df1672d1f610d/BASE/src/Microsoft.ApplicationInsights/Extensibility/TelemetryConfiguration.cs#L271-L274) or [ApplicationInsightsServiceOptions.ConnectionString](https://github.com/microsoft/ApplicationInsights-dotnet/blob/81288f26921df1e8e713d31e7e9c2187ac9e6590/NETCORE/src/Shared/Extensions/ApplicationInsightsServiceOptions.cs#L66-L69).
azure-monitor Best Practices Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-cost.md
This article describes [Cost optimization](/azure/architecture/framework/cost/)
| Recommendation | Benefit | |:|:|
-| Change to Workspace-based Application Insights | Ensure that your Application Insights resources are [Workspace-based](app/create-workspace-resource.md) so that they can leverage new cost savings tools such as [Basic Logs](logs/basic-logs-configure.md), [Commitment Tiers](logs/cost-logs.md#commitment-tiers), [Retention by data type and Data Archive](logs/data-retention-archive.md#set-retention-and-archive-policy-by-table). |
+| Change to Workspace-based Application Insights | Ensure that your Application Insights resources are [Workspace-based](app/create-workspace-resource.md) so that they can leverage new cost savings tools such as [Basic Logs](logs/basic-logs-configure.md), [Commitment Tiers](logs/cost-logs.md#commitment-tiers), [Retention by data type and Data Archive](logs/data-retention-archive.md#configure-retention-and-archive-at-the-table-level). |
| Use sampling to tune the amount of data collected. | [Sampling](app/sampling.md) is the primary tool you can use to tune the amount of data collected by Application Insights. Use sampling to reduce the amount of telemetry that's sent from your applications with minimal distortion of metrics. | | Limit the number of Ajax calls. | [Limit the number of Ajax calls](app/javascript.md#configuration) that can be reported in every page view or disable Ajax reporting. If you disable Ajax calls, you'll be disabling [JavaScript correlation](app/javascript.md#enable-distributed-tracing) too. | | Disable unneeded modules. | [Edit ApplicationInsights.config](app/configuration-with-applicationinsights-config.md) to turn off collection modules that you don't need. For example, you might decide that performance counters or dependency data aren't required. |
azure-monitor Container Insights Log Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-log-query.md
The required tables for this chart include KubeNodeInventory, KubePodInventory,
For details on resource logs for AKS clusters, see [Collect control plane logs](../../aks/monitor-aks.md#resource-logs). - ## Prometheus metrics
-The following example is a Prometheus metrics query showing disk reads per second per disk per node.
-
-```
-InsightsMetrics
-| where Namespace == 'container.azm.ms/diskio'
-| where TimeGenerated > ago(1h)
-| where Name == 'reads'
-| extend Tags = todynamic(Tags)
-| extend HostName = tostring(Tags.hostName), Device = Tags.name
-| extend NodeDisk = strcat(Device, "/", HostName)
-| order by NodeDisk asc, TimeGenerated asc
-| serialize
-| extend PrevVal = iif(prev(NodeDisk) != NodeDisk, 0.0, prev(Val)), PrevTimeGenerated = iif(prev(NodeDisk) != NodeDisk, datetime(null), prev(TimeGenerated))
-| where isnotnull(PrevTimeGenerated) and PrevTimeGenerated != TimeGenerated
-| extend Rate = iif(PrevVal > Val, Val / (datetime_diff('Second', TimeGenerated, PrevTimeGenerated) * 1), iif(PrevVal == Val, 0.0, (Val - PrevVal) / (datetime_diff('Second', TimeGenerated, PrevTimeGenerated) * 1)))
-| where isnotnull(Rate)
-| project TimeGenerated, NodeDisk, Rate
-| render timechart
-
-```
+The following examples require the configuration described in [Send Prometheus metrics to Log Analytics workspace with Container insights](container-insights-prometheus-logs.md).
To view Prometheus metrics scraped by Azure Monitor and filtered by namespace, specify *"prometheus"*. Here's a sample query to view Prometheus metrics from the `default` Kubernetes namespace.
InsightsMetrics
The output will show results similar to the following example.
-![Screenshot that shows the log query results of data ingestion volume.](./media/container-insights-prometheus/log-query-example-usage-03.png)
+![Screenshot that shows the log query results of data ingestion volume.](media/container-insights-log-query/log-query-example-usage-03.png)
To estimate the size in GB of each metric for a month, and to understand whether the volume of data ingested in the workspace is high, use the following query.
InsightsMetrics
The output will show results similar to the following example.
-![Screenshot that shows log query results of data ingestion volume.](./media/container-insights-prometheus/log-query-example-usage-02.png)
+![Screenshot that shows log query results of data ingestion volume.](./media/container-insights-log-query/log-query-example-usage-02.png)
## Configuration or scraping errors
azure-monitor Container Insights Prometheus Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-prometheus-logs.md
+
+ Title: Collect Prometheus metrics with Container insights
+description: Describes different methods for configuring the Container insights agent to scrape Prometheus metrics from your Kubernetes cluster.
++ Last updated : 03/01/2023+++
+# Send Prometheus metrics to Log Analytics workspace with Container insights
+This article describes how to send Prometheus metrics from your Kubernetes cluster monitored by Container insights to a Log Analytics workspace. Before you perform this configuration, you should first ensure that you're [scraping Prometheus metrics from your cluster using Azure Monitor managed service for Prometheus](), which is the recommended method for monitoring your clusters. Use the configuration described in this article only if you also want to send this same data to a Log Analytics workspace where you can analyze it using [log queries](../logs/log-query-overview.md) and [log alerts](../alerts/alerts-log-query.md).
+
+This configuration uses the *monitoring addon* for the Azure Monitor agent, which is the same one used by Container insights to send data to a Log Analytics workspace. It requires exposing the Prometheus metrics endpoint through your exporters or pods and then configuring the monitoring addon for the Azure Monitor agent used by Container insights, as shown in the following diagram.
++++
+## Prometheus scraping settings (for metrics stored as logs)
+
+Active scraping of metrics from Prometheus is performed from one of the two perspectives below, and metrics are sent to the configured Log Analytics workspace:
+
+- **Cluster-wide**: Defined in the ConfigMap section *[prometheus_data_collection_settings.cluster]*.
+- **Node-wide**: Defined in the ConfigMap section *[prometheus_data_collection_settings.node]*.
+
+| Endpoint | Scope | Example |
+|-|-||
+| Pod annotation | Cluster-wide | `prometheus.io/scrape: "true"` <br>`prometheus.io/path: "/mymetrics"` <br>`prometheus.io/port: "8000"` <br>`prometheus.io/scheme: "http"` |
+| Kubernetes service | Cluster-wide | `http://my-service-dns.my-namespace:9100/metrics` <br>`http://metrics-server.kube-system.svc.cluster.local/metrics` |
+| URL/endpoint | Per-node and/or cluster-wide | `http://myurl:9101/metrics` |
+
+When a URL is specified, Container insights only scrapes the endpoint. When a Kubernetes service is specified, the service name is resolved with the cluster DNS server to get the IP address. Then the resolved service is scraped.
+
+|Scope | Key | Data type | Value | Description |
+||--|--|-|-|
+| Cluster-wide | | | | Specify any one of the following three methods to scrape endpoints for metrics. |
+| | `urls` | String | Comma-separated array | HTTP endpoint (either IP address or valid URL path specified). For example: `urls=[$NODE_IP/metrics]`. ($NODE_IP is a specific Container insights parameter and can be used instead of a node IP address. Must be all uppercase.) |
+| | `kubernetes_services` | String | Comma-separated array | An array of Kubernetes services to scrape metrics from kube-state-metrics. Fully qualified domain names must be used here. For example, `kubernetes_services = ["http://metrics-server.kube-system.svc.cluster.local/metrics", "http://my-service-dns.my-namespace.svc.cluster.local:9100/metrics"]`|
+| | `monitor_kubernetes_pods` | Boolean | true or false | When set to `true` in the cluster-wide settings, the Container insights agent scrapes Kubernetes pods across the entire cluster for the following Prometheus annotations:<br> `prometheus.io/scrape:`<br> `prometheus.io/scheme:`<br> `prometheus.io/path:`<br> `prometheus.io/port:` |
+| | `prometheus.io/scrape` | Boolean | true or false | Enables scraping of the pod, and `monitor_kubernetes_pods` must be set to `true`. |
+| | `prometheus.io/scheme` | String | http | Defaults to scraping over HTTP. |
+| | `prometheus.io/path` | String | Comma-separated array | The HTTP resource path from which to fetch metrics. If the metrics path isn't `/metrics`, define it with this annotation. |
+| | `prometheus.io/port` | String | 9102 | Specify a port to scrape from. If the port isn't set, it defaults to 9102. |
+| | `monitor_kubernetes_pods_namespaces` | String | Comma-separated array | An allowlist of namespaces to scrape metrics from Kubernetes pods.<br> For example, `monitor_kubernetes_pods_namespaces = ["default1", "default2", "default3"]` |
+| Node-wide | `urls` | String | Comma-separated array | HTTP endpoint (either IP address or valid URL path specified). For example: `urls=[$NODE_IP/metrics]`. ($NODE_IP is a specific Container insights parameter and can be used instead of a node IP address. Must be all uppercase.) |
+| Node-wide or cluster-wide | `interval` | String | 60s | The collection interval default is one minute (60 seconds). You can modify the collection for either the *[prometheus_data_collection_settings.node]* and/or *[prometheus_data_collection_settings.cluster]* to time units such as s, m, and h. |
+| Node-wide or cluster-wide | `fieldpass`<br> `fielddrop`| String | Comma-separated array | You can specify certain metrics to be collected or not from the endpoint by setting the allow (`fieldpass`) and disallow (`fielddrop`) listing. You must set the allowlist first. |
+
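For reference, a workload that exposes a metrics endpoint matching the example annotations might look like the following sketch; the `prometheus_client` package, metric name, and port are illustrative assumptions (any exporter that serves the annotated path and port works):

```python
# Sketch: expose a Prometheus /metrics endpoint that pod annotations could point at.
# Assumes the prometheus_client package; port 8000 matches the example prometheus.io/port: "8000".
import random
import time

from prometheus_client import Counter, start_http_server

REQUESTS = Counter("myapp_requests_total", "Total requests handled by the app")

if __name__ == "__main__":
    start_http_server(8000)   # serves metrics at http://<pod-ip>:8000/metrics
    while True:
        REQUESTS.inc()
        time.sleep(random.uniform(0.1, 1.0))
```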
+## Configure ConfigMaps to specify Prometheus scrape configuration (for metrics stored as logs)
+Perform the following steps to configure your ConfigMap configuration file for your cluster. The ConfigMap is a global list and only one ConfigMap can be applied to the agent. You can't have another ConfigMap overruling the collections.
+++
+1. [Download](https://aka.ms/container-azm-ms-agentconfig) the template ConfigMap YAML file and save it as *container-azm-ms-agentconfig.yaml*. If you've already deployed a ConfigMap to your cluster and you want to update it with a newer configuration, you can edit the ConfigMap file you've previously used.
+1. Edit the ConfigMap YAML file with your customizations to scrape Prometheus metrics.
++
+ ### [Cluster-wide](#tab/cluster-wide)
+
+ To collect Kubernetes services cluster-wide, configure the ConfigMap file by using the following example:
+
+ ```
+    prometheus-data-collection-settings: |-
+      # Custom Prometheus metrics data collection settings
+      [prometheus_data_collection_settings.cluster]
+      interval = "1m" ## Valid time units are s, m, h.
+      fieldpass = ["metric_to_pass1", "metric_to_pass12"] ## specify metrics to pass through
+      fielddrop = ["metric_to_drop"] ## specify metrics to drop from collecting
+      kubernetes_services = ["http://my-service-dns.my-namespace:9102/metrics"]
+ ```
+
+ ### [Specific URL](#tab/url)
+
+ To configure scraping of Prometheus metrics from a specific URL across the cluster, configure the ConfigMap file by using the following example:
+
+ ```
+    prometheus-data-collection-settings: |-
+      # Custom Prometheus metrics data collection settings
+      [prometheus_data_collection_settings.cluster]
+      interval = "1m" ## Valid time units are s, m, h.
+      fieldpass = ["metric_to_pass1", "metric_to_pass12"] ## specify metrics to pass through
+      fielddrop = ["metric_to_drop"] ## specify metrics to drop from collecting
+      urls = ["http://myurl:9101/metrics"] ## An array of urls to scrape metrics from
+ ```
+
+ ### [DaemonSet](#tab/deamonset)
+
+ To configure scraping of Prometheus metrics from an agent's DaemonSet for every individual node in the cluster, configure the following example in the ConfigMap:
+
+ ```
+    prometheus-data-collection-settings: |-
+      # Custom Prometheus metrics data collection settings
+      [prometheus_data_collection_settings.node]
+      interval = "1m" ## Valid time units are s, m, h.
+      urls = ["http://$NODE_IP:9103/metrics"]
+      fieldpass = ["metric_to_pass1", "metric_to_pass2"]
+      fielddrop = ["metric_to_drop"]
+ ```
+
+ `$NODE_IP` is a specific Container insights parameter and can be used instead of a node IP address. It must be all uppercase.
+
+ ### [Pod annotation](#tab/pod)
+
+ To configure scraping of Prometheus metrics by specifying a pod annotation:
+
+ 1. In the ConfigMap, specify the following configuration:
+
+ ```
+       prometheus-data-collection-settings: |-
+         # Custom Prometheus metrics data collection settings
+         [prometheus_data_collection_settings.cluster]
+         interval = "1m" ## Valid time units are s, m, h
+         monitor_kubernetes_pods = true
+ ```
+
+ 2. Specify the following configuration for pod annotations:
+
+ ```
+       - prometheus.io/scrape:"true" #Enable scraping for this pod
+       - prometheus.io/scheme:"http" #If the metrics endpoint is secured then you will need to set this to `https`, if not default 'http'
+       - prometheus.io/path:"/mymetrics" #If the metrics path is not /metrics, define it with this annotation.
+       - prometheus.io/port:"8000" #If port is not 9102 use this annotation
+ ```
+
+    If you want to restrict monitoring to specific namespaces for pods that have annotations, for example, to only include pods dedicated to production workloads, set `monitor_kubernetes_pods` to `true` in the ConfigMap. Then add the namespace filter `monitor_kubernetes_pods_namespaces` to specify the namespaces to scrape from. An example is `monitor_kubernetes_pods_namespaces = ["default1", "default2", "default3"]`.
+
+2. Run the following kubectl command: `kubectl apply -f <configmap_yaml_file.yaml>`.
+
+ Example: `kubectl apply -f container-azm-ms-agentconfig.yaml`.
+
+The configuration change can take a few minutes to finish before taking effect. All ama-logs pods in the cluster will restart. When the restarts are finished, a message appears that's similar to the following and includes the result `configmap "container-azm-ms-agentconfig" created`.
++
+## Verify configuration
+
+To verify the configuration was successfully applied to a cluster, use the following command to review the logs from an agent pod: `kubectl logs ama-logs-fdf58 -n=kube-system`.
++
+If there are configuration errors from the Azure Monitor Agent pods, the output shows errors similar to the following example:
+
+```
+***************Start Config Processing********************
+config::unsupported/missing config schema version - 'v21' , using defaults
+```
+
+Errors related to applying configuration changes are also available for review. The following options are available to perform additional troubleshooting of configuration changes and scraping of Prometheus metrics:
+
+- From agent pod logs, using the same `kubectl logs` command.
+
+- From Live Data. Live Data logs show errors similar to the following example:
+
+ ```
+ 2019-07-08T18:55:00Z E! [inputs.prometheus]: Error in plugin: error making HTTP request to http://invalidurl:1010/metrics: Get http://invalidurl:1010/metrics: dial tcp: lookup invalidurl on 10.0.0.10:53: no such host
+ ```
+
+- From the **KubeMonAgentEvents** table in your Log Analytics workspace. Data is sent every hour with *Warning* severity for scrape errors and *Error* severity for configuration errors. If there are no errors, the entry in the table has data with severity *Info*, which reports no errors. The **Tags** property contains more information about the pod and container ID on which the error occurred and also the first occurrence, last occurrence, and count in the last hour.
+- For Azure Red Hat OpenShift v3.x and v4.x, check the Azure Monitor Agent logs by searching the **ContainerLog** table to verify if log collection of openshift-azure-logging is enabled.
+
+Errors prevent Azure Monitor Agent from parsing the file, causing it to restart and use the default configuration. After you correct the errors in the ConfigMap on clusters other than Azure Red Hat OpenShift v3.x, save the YAML file and apply the updated ConfigMap by running the command `kubectl apply -f <configmap_yaml_file.yaml>`.
+
+For Azure Red Hat OpenShift v3.x, edit and save the updated ConfigMaps by running the command `oc edit configmaps container-azm-ms-agentconfig -n openshift-azure-logging`.
+
+## Query Prometheus metrics data
+
+To view Prometheus metrics scraped by Azure Monitor and any configuration/scraping errors reported by the agent, review [Query Prometheus metrics data](container-insights-log-query.md#prometheus-metrics).
+
+## View Prometheus metrics in Grafana
+
+Container insights supports viewing metrics stored in your Log Analytics workspace in Grafana dashboards. We've provided a template that you can download from Grafana's [dashboard repository](https://grafana.com/grafana/dashboards?dataSource=grafana-azure-monitor-datasource&category=docker). Use the template to get started and reference it to help you learn how to query other data from your monitored clusters to visualize in custom Grafana dashboards.
++
+## Next steps
+
+- [See the default configuration for Prometheus metrics](../essentials/prometheus-metrics-scrape-default.md).
+- [Customize Prometheus metric scraping for the cluster](../essentials/prometheus-metrics-scrape-configuration.md).
azure-monitor Integrate Keda https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/integrate-keda.md
+
+ Title: Integrate KEDA with your Azure Kubernetes Service cluster
+description: How to integrate KEDA with your Azure Kubernetes Service cluster.
+++++ Last updated : 05/31/2023
+
++
+# Integrate KEDA with your Azure Kubernetes Service cluster
+
+KEDA is a Kubernetes-based Event Driven Autoscaler. KEDA lets you drive the scaling of any container in Kubernetes based on the load to be processed, by querying metrics from systems such as Prometheus. Integrate KEDA with your Azure Kubernetes Service (AKS) cluster to scale your workloads based on Prometheus metrics from your Azure Monitor workspace.
+
+To integrate KEDA into your Azure Kubernetes Service, you have to deploy and configure a workload identity or pod identity on your cluster. The identity allows KEDA to authenticate with Azure and retrieve metrics for scaling from your Monitor workspace.
+
+This article walks you through the steps to integrate KEDA into your AKS cluster using a workload identity.
+
+> [!NOTE]
+> We recommend using Azure Active Directory workload identity. This authentication method replaces pod-managed identity (preview), which integrates with the Kubernetes native capabilities to federate with any external identity providers on behalf of the application.
+>
+> The open source Azure AD pod-managed identity (preview) in Azure Kubernetes Service has been deprecated as of 10/24/2022, and the project will be archived in Sept. 2023. For more information, see the deprecation notice. The AKS Managed add-on begins deprecation in Sept. 2023.
+>
+> Azure Managed Prometheus support starts from KEDA v2.10. If you have an older version of KEDA installed, you must upgrade in order to work with Azure Managed Prometheus.
+
+## Prerequisites
+++ Azure Kubernetes Service (AKS) cluster++ Prometheus sending metrics to an Azure Monitor workspace. For more information, see [Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md).++
+## Set up a workload identity
+
+1. Start by setting up some environment variables. Change the values to suit your AKS cluster.
+
+ ```bash
+ export RESOURCE_GROUP="rg-keda-integration"
+ export LOCATION="eastus"
+ export SUBSCRIPTION="$(az account show --query id --output tsv)"
+ export USER_ASSIGNED_IDENTITY_NAME="keda-int-identity"
+ export FEDERATED_IDENTITY_CREDENTIAL_NAME="kedaFedIdentity"
+ export SERVICE_ACCOUNT_NAMESPACE="keda"
+ export SERVICE_ACCOUNT_NAME="keda-operator"
+ export AKS_CLUSTER_NAME="aks-cluster-name"
+ ```
+
+    + `SERVICE_ACCOUNT_NAME` - The Kubernetes service account that KEDA uses. KEDA must use the same service account that the federated credential is created for. This can be any user-defined name.
+    + `AKS_CLUSTER_NAME` - The name of the AKS cluster where you want to deploy KEDA.
+    + `SERVICE_ACCOUNT_NAMESPACE` - The namespace for KEDA and the service account. Both must be in the same namespace.
+    + `USER_ASSIGNED_IDENTITY_NAME` - The name of the Azure Active Directory identity that's created for KEDA.
+    + `FEDERATED_IDENTITY_CREDENTIAL_NAME` - The name of the credential that's created for KEDA to use to authenticate with Azure.
+
+1. If your AKS cluster wasn't created with workload identity or the OIDC issuer enabled, you need to enable them. If you aren't sure, you can run the following commands to check whether they're enabled.
+
+ ```azurecli
+ az aks show --resource-group $RESOURCE_GROUP --name $AKS_CLUSTER_NAME --query oidcIssuerProfile
+ az aks show --resource-group $RESOURCE_GROUP --name $AKS_CLUSTER_NAME --query securityProfile.workloadIdentity
+ ```
+
+ To enable workload identity and oidc-issuer, run the following command.
+
+ ```azurecli
+    az aks update -g $RESOURCE_GROUP -n $AKS_CLUSTER_NAME --enable-managed-identity --enable-oidc-issuer --enable-workload-identity
+ ```
+
+1. Store the OIDC issuer url in an environment variable to be used later.
+
+ ```bash
+ export AKS_OIDC_ISSUER="$(az aks show -n $AKS_CLUSTER_NAME -g $RESOURCE_GROUP --query "oidcIssuerProfile.issuerUrl" -otsv)"
+ ```
+
+1. Create a user assigned identity for KEDA. This identity is used by KEDA to authenticate with Azure Monitor.
+
+ ```azurecli
+ az identity create --name $USER_ASSIGNED_IDENTITY_NAME --resource-group $RESOURCE_GROUP --location $LOCATION --subscription $SUBSCRIPTION
+ ```
+
+ The output will be similar to the following:
+
+ ```json
+ {
+ "clientId": "abcd1234-abcd-abcd-abcd-9876543210ab",
+        "id": "/subscriptions/abcdef01-2345-6789-0abc-def012345678/resourcegroups/rg-keda-integration/providers/Microsoft.ManagedIdentity/userAssignedIdentities/keda-int-identity",
+ "location": "eastus",
+ "name": "keda-int-identity",
+ "principalId": "12345678-abcd-abcd-abcd-1234567890ab",
+ "resourceGroup": "rg-keda-integration",
+ "systemData": null,
+ "tags": {},
+ "tenantId": "1234abcd-9876-9876-9876-abcdef012345",
+ "type": "Microsoft.ManagedIdentity/userAssignedIdentities"
+ }
+ ```
+
+1. Store the `clientId` and `tenantId` in environment variables to use later.
+ ```bash
+ export USER_ASSIGNED_CLIENT_ID="$(az identity show --resource-group $RESOURCE_GROUP --name $USER_ASSIGNED_IDENTITY_NAME --query 'clientId' -otsv)"
+ export TENANT_ID="$(az identity show --resource-group $RESOURCE_GROUP --name $USER_ASSIGNED_IDENTITY_NAME --query 'tenantId' -otsv)"
+ ```
+
+1. Assign the *Monitoring Data Reader* role to the identity for your Azure Monitor workspace. This role allows the identity to read metrics from your workspace. Replace the *Azure Monitor Workspace resource group* and *Azure Monitor Workspace name* with the resource group and name of the Azure Monitor workspace which is configured to collect metrics from the AKS cluster.
+
+ ```azurecli
+ az role assignment create \
+ --assignee $USER_ASSIGNED_CLIENT_ID \
+ --role "Monitoring Data Reader" \
+ --scope /subscriptions/$SUBSCRIPTION/resourceGroups/<Azure Monitor Workspace resource group>/providers/microsoft.monitor/accounts/<Azure monitor workspace name>
+ ```
+
+
+1. Create the KEDA namespace, and then create the Kubernetes service account. This service account is used by KEDA to authenticate with Azure.
+
+ ```azurecli
+
+ az aks get-credentials -n $AKS_CLUSTER_NAME -g $RESOURCE_GROUP
+
+ kubectl create namespace keda
+
+ cat <<EOF | kubectl apply -f -
+ apiVersion: v1
+ kind: ServiceAccount
+ metadata:
+ annotations:
+ azure.workload.identity/client-id: $USER_ASSIGNED_CLIENT_ID
+ name: $SERVICE_ACCOUNT_NAME
+ namespace: $SERVICE_ACCOUNT_NAMESPACE
+ EOF
+ ```
+
+1. Check your service account by running the following command:
+ ```bash
+ kubectl describe serviceaccount $SERVICE_ACCOUNT_NAME -n keda
+ ```
+
+1. Establish a federated credential between the service account and the user assigned identity. The federated credential allows the service account to use the user assigned identity to authenticate with Azure.
+
+ ```azurecli
+ az identity federated-credential create --name $FEDERATED_IDENTITY_CREDENTIAL_NAME --identity-name $USER_ASSIGNED_IDENTITY_NAME --resource-group $RESOURCE_GROUP --issuer $AKS_OIDC_ISSUER --subject system:serviceaccount:$SERVICE_ACCOUNT_NAMESPACE:$SERVICE_ACCOUNT_NAME --audience api://AzureADTokenExchange
+ ```
+
+ > [!Note]
+ > It takes a few seconds for the federated identity credential to be propagated after being initially added. If a token request is made immediately after adding the federated identity credential, it might lead to failure for a couple of minutes as the cache is populated in the directory with old data. To avoid this issue, you can add a slight delay after adding the federated identity credential.
+
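+For example, a quick way to confirm that the credential exists before you deploy KEDA is to pause briefly and then read it back. This is a minimal sketch; the 60-second pause is only an arbitrary buffer for propagation.
+
+```bash
+# Give the federated identity credential time to propagate
+sleep 60
+
+# Confirm the credential exists; the subject should match the KEDA service account
+az identity federated-credential show \
+  --name $FEDERATED_IDENTITY_CREDENTIAL_NAME \
+  --identity-name $USER_ASSIGNED_IDENTITY_NAME \
+  --resource-group $RESOURCE_GROUP
+```
+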
+## Deploy KEDA
+
+KEDA can be deployed by using YAML manifests, Helm charts, or Operator Hub. This article uses Helm charts. For more information on deploying KEDA, see [Deploying KEDA](https://keda.sh/docs/2.10/deploy/).
+
+Add the KEDA Helm repository:
+
+```bash
+helm repo add kedacore https://kedacore.github.io/charts
+helm repo update
+```
+
+Deploy KEDA using the following command:
+
+```bash
+helm install keda kedacore/keda --namespace keda \
+--set podIdentity.azureWorkload.enabled=true \
+--set podIdentity.azureWorkload.clientId=$USER_ASSIGNED_CLIENT_ID \
+--set podIdentity.azureWorkload.tenantId=$TENANT_ID
+```
+
+Check your deployment by running the following command.
+```bash
+kubectl get pods -n keda
+```
+The output will be similar to the following:
+
+```bash
+NAME READY STATUS RESTARTS AGE
+keda-admission-webhooks-ffcb8f688-kqlxp 1/1 Running 0 4m
+keda-operator-5d9f7d975-mgv7r 1/1 Running 1 (4m ago) 4m
+keda-operator-metrics-apiserver-7dc6f59678-745nz 1/1 Running 0 4m
+```
+
+## Scalers
+
+Scalers define how and when KEDA should scale a deployment. KEDA supports a variety of scalers. For more information, see [Scalers](https://keda.sh/docs/2.10/scalers/prometheus/). Azure Managed Prometheus uses the existing Prometheus scaler to retrieve Prometheus metrics from your Azure Monitor workspace. The following YAML example uses the Prometheus scaler with Azure Managed Prometheus.
+
+```yml
+apiVersion: keda.sh/v1alpha1
+kind: TriggerAuthentication
+metadata:
+ name: azure-managed-prometheus-trigger-auth
+spec:
+ podIdentity:
+ provider: azure-workload | azure # use "azure" for pod identity and "azure-workload" for workload identity
+ identityId: <identity-id> # Optional. Default: Identity linked with the label set when installing KEDA.
+---
+apiVersion: keda.sh/v1alpha1
+kind: ScaledObject
+metadata:
+ name: azure-managed-prometheus-scaler
+spec:
+ scaleTargetRef:
+ name: deployment-name-to-be-scaled
+ minReplicaCount: 1
+ maxReplicaCount: 20
+ triggers:
+ - type: prometheus
+ metadata:
+ serverAddress: https://test-azure-monitor-workspace-name-1234.eastus.prometheus.monitor.azure.com
+ metricName: http_requests_total
+ query: sum(rate(http_requests_total{deployment="my-deployment"}[2m])) # Note: query must return a vector/scalar single element response
+ threshold: '100.50'
+ activationThreshold: '5.5'
+ authenticationRef:
+ name: azure-managed-prometheus-trigger-auth
+```
+
++ `serverAddress` is the Query endpoint of your Azure Monitor workspace. For more information, see [Query Prometheus metrics using the API and PromQL](../essentials/prometheus-api-promql.md#query-endpoint).
++ `metricName` is the name of the metric you want to scale on.
++ `query` is the query used to retrieve the metric.
++ `threshold` is the value at which the deployment scales.
++ Set the `podIdentity.provider` according to the type of identity you're using.
+
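+To try the example, you might save the two manifests to a file and apply them with kubectl. This is a minimal sketch; the file name below is only an illustration, and the object name comes from the example above.
+
+```bash
+# Apply the TriggerAuthentication and ScaledObject from the example
+# (saved here as azure-managed-prometheus-scaler.yaml, a hypothetical file name)
+kubectl apply -f azure-managed-prometheus-scaler.yaml
+
+# Verify that KEDA accepted the scaled object and can evaluate the Prometheus query
+kubectl get scaledobject azure-managed-prometheus-scaler
+kubectl describe scaledobject azure-managed-prometheus-scaler
+```
+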
+## Troubleshooting
+
+The following section provides troubleshooting tips for common issues.
+
+### Federated credentials
+
+Federated credentials can take up to 10 minutes to propagate. If you're having issues with KEDA authenticating with Azure, try the following steps.
+
+The following log excerpt shows an error with the federated credentials.
+
+```
+kubectl logs -n keda keda-operator-5d9f7d975-mgv7r
+
+{
+ \"error\": \"unauthorized_client\",\n \"error_description\": \"AADSTS70021: No matching federated identity record found for presented assertion.
+Assertion Issuer: 'https://eastus.oic.prod-aks.azure.com/abcdef01-2345-6789-0abc-def012345678/12345678-abcd-abcd-abcd-1234567890ab/'.
+Assertion Subject: 'system:serviceaccount:keda:keda-operator'.
+Assertion Audience: 'api://AzureADTokenExchange'. https://docs.microsoft.com/azure/active-directory/develop/workload-identity-federation
+Trace ID: 12dd9ea0-3a65-408f-a41f-5d0403a25100\\r\\nCorrelation ID: 8a2dce68-17f1-4f11-bed2-4bcf9577f2af\\r\\nTimestamp: 2023-05-30 11:11:53Z\",
+\"error_codes\": [\n 70021\n ],\n \"timestamp\": \"2023-05-30 11:11:53Z\",
+\"trace_id\": \"12345678-3a65-408f-a41f-5d0403a25100\",
+\"correlation_id\": \"12345678-17f1-4f11-bed2-4bcf9577f2af\",
+\"error_uri\": \"https://login.microsoftonline.com/error?code=70021\"\n}
+\n--\n"}
+```
+
+Check the values used to create the ServiceAccount and the credentials created with `az identity federated-credential create` and ensure the `subject` value matches the `system:serviceaccount` value.
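+
+One way to compare the two sides is to list the federated credentials on the identity and inspect the service account. This is a minimal sketch, assuming the environment variables from the setup steps are still set.
+
+```bash
+# List the federated credentials on the user-assigned identity; the subject should be
+# system:serviceaccount:<namespace>:<service account name>
+az identity federated-credential list \
+  --identity-name $USER_ASSIGNED_IDENTITY_NAME \
+  --resource-group $RESOURCE_GROUP \
+  -o table
+
+# Inspect the service account that KEDA uses, including its client ID annotation
+kubectl get serviceaccount $SERVICE_ACCOUNT_NAME -n $SERVICE_ACCOUNT_NAMESPACE -o yaml
+```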
+
+### Azure Monitor workspace permissions
+
+If you're having issues with KEDA authenticating with Azure, check the permissions for the Azure Monitor workspace.
+The following log excerpt shows that the identity doesn't have read permissions for the Azure Monitor workspace.
+
+```
+kubectl logs -n keda keda-operator-5d9f7d975-mgv7r
+
+2023-05-30T11:15:45Z ERROR scale_handler error getting metric for scaler
+{"scaledObject.Namespace": "default", "scaledObject.Name": "azure-managed-prometheus-scaler", "scaler": "prometheusScaler",
+"error": "prometheus query api returned error. status: 403 response: {\"status\":\"error\",
+\"errorType\":\"Forbidden\",\"error\":\"User \\u0027abc123ab-1234-1234-abcd-abcdef123456
+\\u0027 does not have access to perform any of the following actions
+\\u0027microsoft.monitor/accounts/data/metrics/read, microsoft.monitor/accounts/data/metrics/read
+\\u0027 on resource \\u0027/subscriptions/abcdef01-2345-6789-0abc-def012345678/resourcegroups/rg-azmon-ws-01/providers/microsoft.monitor/accounts/azmon-ws-01\\u0027. RequestId: 123456c427f348258f3e5aeeefef834a\"}"}
+```
+
+Ensure the identity has the `Monitoring Data Reader` role on the Azure Monitor workspace.
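+
+You can also confirm the role assignment from the CLI. This is a minimal sketch, assuming the variables from the setup steps; replace the workspace resource group and name placeholders with your own values.
+
+```azurecli
+az role assignment list \
+  --assignee $USER_ASSIGNED_CLIENT_ID \
+  --role "Monitoring Data Reader" \
+  --scope /subscriptions/$SUBSCRIPTION/resourceGroups/<Azure Monitor Workspace resource group>/providers/microsoft.monitor/accounts/<Azure Monitor workspace name> \
+  -o table
+```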
azure-monitor Prometheus Authorization Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/prometheus-authorization-proxy.md
+
+ Title: Azure Active Directory authorization proxy
+description: Azure Active Directory authorization proxy
+++ Last updated : 07/10/2022++
+# Azure Active Directory authorization proxy
+The Azure Active Directory authorization proxy is a reverse proxy that authenticates requests by using Azure Active Directory. You can use it to authenticate requests to any service that supports Azure Active Directory authentication, including Azure Monitor managed service for Prometheus.
++
+## Prerequisites
+
++ An Azure Monitor workspace. If you don't have a workspace, create one using the [Azure portal](../essentials/azure-monitor-workspace-manage.md#create-an-azure-monitor-workspace).
++ Prometheus installed on your cluster.
+
+> [!NOTE]
+> The remote write example in this article uses Prometheus remote write to write data to Azure Monitor. Onboarding your AKS cluster to Prometheus automatically installs Prometheus on your cluster and sends data to your workspace.
+## Deployment
+
+The proxy can be deployed with custom templates by using the release image, or as a Helm chart. Both deployments contain the same customizable parameters. These parameters are described in the [Parameters](#parameters) table.
+
+For more information, see the [Azure Active Directory authentication proxy](https://github.com/Azure/aad-auth-proxy) project.
+
+The following examples show how to deploy the proxy for remote write and for querying data from Azure Monitor.
+
+## [Remote write example](#tab/remote-write-example)
+
+> [!NOTE]
+> This example shows how to use the proxy to authenticate remote write requests to Azure Monitor managed service for Prometheus. Prometheus remote write has a dedicated sidecar for remote writing, which is the recommended method for implementing remote write.
+
+Before deploying the proxy, find your managed identity and assign it the `Monitoring Metrics Publisher` role for the Azure Monitor workspace's data collection rule.
+
+1. Find the `clientId` for the managed identity for your AKS cluster. The managed identity is used to authenticate to the Azure Monitor workspace. The managed identity is created when the AKS cluster is created.
+ ```azurecli
+ # Get the identity client_id
+ az aks show -g <AKS-CLUSTER-RESOURCE-GROUP> -n <AKS-CLUSTER-NAME> --query "identityProfile"
+ ```
+
+ The output has the following format:
+ ```bash
+ {
+ "kubeletidentity": {
+ "clientId": "abcd1234-1243-abcd-9876-1234abcd5678",
+ "objectId": "12345678-abcd-abcd-abcd-1234567890ab",
+ "resourceId": "/subscriptions/def0123-1243-abcd-9876-1234abcd5678/resourcegroups/MC_rg-proxytest-01_proxytest-01_eastus/providers/Microsoft.ManagedIdentity/userAssignedIdentities/proxytest-01-agentpool"
+        }
+    }
+    ```
+
+1. Find your Azure Monitor workspace's data collection rule (DCR) ID.
+    The rule name is the same as the workspace name.
+ The resource group name for your data collection rule follows the format: `MA_<workspace-name>_<REGION>_managed`, for example `MA_amw-proxytest_eastus_managed`. Use the following command to find the data collection rule ID:
+
+ ```azurecli
+ az monitor data-collection rule show --name <dcr-name> --resource-group <resource-group-name> --query "id"
+ ```
+1. Alternatively you can find your DCR ID and Metrics ingestion endpoint using the Azure portal on the Azure Monitor workspace Overview page.
+
+ Select the **Data collection rule** on the workspace Overview tab, then select **JSON view** to see the **Resource ID**.
+
+
+ :::image type="content" source="./media/prometheus-authorization-proxy/workspace-overview.png" lightbox="./media/prometheus-authorization-proxy/workspace-overview.png" alt-text="A screenshot showing the overview page for an Azure Monitor workspace.":::
+
+1. Assign the `Monitoring Metrics Publisher` role to the managed identity's `clientId` so that it can write to the Azure Monitor workspace data collection rule.
+
+ ```azurecli
+    az role assignment create \
+    --assignee <clientid> \
+    --role "Monitoring Metrics Publisher" \
+ --scope <workspace-dcr-id>
+ ```
+
+ For example:
+
+ ```bash
+ az role assignment create \
+ --assignee abcd1234-1243-abcd-9876-1234abcd5678 \
+ --role "Monitoring Metrics Publisher" \
+ --scope /subscriptions/ef0123-1243-abcd-9876-1234abcd5678/resourceGroups/MA_amw-proxytest_eastus_managed/providers/Microsoft.Insights/dataCollectionRules/amw-proxytest
+ ```
+
+1. Use the following YAML file to deploy the proxy for remote write. Modify the following parameters:
+
+ + `TARGET_HOST` - The target host where you want to forward the request to. To send data to an Azure Monitor workspace, use the hostname part of the `Metrics ingestion endpoint` from the workspaces Overview page. For example, `http://amw-proxytest-abcd.eastus-1.metrics.ingest.monitor.azure.com`
+    + `AAD_CLIENT_ID` - The `clientId` of the managed identity that was assigned the `Monitoring Metrics Publisher` role.
+    + `AUDIENCE` - For ingesting metrics to an Azure Monitor workspace, set `AUDIENCE` to `https://monitor.azure.com/.default`.
+ + Remove `OTEL_GRPC_ENDPOINT` and `OTEL_SERVICE_NAME` if you aren't using OpenTelemetry.
+
+ For more information about the parameters, see the [Parameters](#parameters) table.
+
+ proxy-ingestion.yaml
+
+ ```yml
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+ labels:
+ app: azuremonitor-ingestion
+ name: azuremonitor-ingestion
+ namespace: observability
+ spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: azuremonitor-ingestion
+ template:
+ metadata:
+ labels:
+ app: azuremonitor-ingestion
+ name: azuremonitor-ingestion
+ spec:
+ containers:
+ - name: aad-auth-proxy
+ image: mcr.microsoft.com/azuremonitor/auth-proxy/prod/aad-auth-proxy/images/aad-auth-proxy:0.1.0-main-05-24-2023-b911fe1c
+ imagePullPolicy: Always
+ ports:
+ - name: auth-port
+ containerPort: 8081
+ env:
+ - name: AUDIENCE
+ value: https://monitor.azure.com/.default
+ - name: TARGET_HOST
+ value: http://<workspace-endpoint-hostname>
+ - name: LISTENING_PORT
+ value: "8081"
+ - name: IDENTITY_TYPE
+ value: userAssigned
+ - name: AAD_CLIENT_ID
+ value: <clientId>
+ - name: AAD_TOKEN_REFRESH_INTERVAL_IN_PERCENTAGE
+ value: "10"
+ - name: OTEL_GRPC_ENDPOINT
+ value: <YOUR-OTEL-GRPC-ENDPOINT> # "otel-collector.observability.svc.cluster.local:4317"
+ - name: OTEL_SERVICE_NAME
+            value: <YOUR-SERVICE-NAME>
+ livenessProbe:
+ httpGet:
+ path: /health
+ port: auth-port
+ initialDelaySeconds: 5
+ timeoutSeconds: 5
+ readinessProbe:
+ httpGet:
+ path: /ready
+ port: auth-port
+ initialDelaySeconds: 5
+ timeoutSeconds: 5
+    ---
+ apiVersion: v1
+ kind: Service
+ metadata:
+ name: azuremonitor-ingestion
+ namespace: observability
+ spec:
+ ports:
+ - port: 80
+ targetPort: 8081
+ selector:
+ app: azuremonitor-ingestion
+ ```
++
+1. Deploy the proxy by using the following commands:
+ ```bash
+ # create the namespace if it doesn't already exist
+ kubectl create namespace observability
+
+ kubectl apply -f proxy-ingestion.yaml -n observability
+ ```
+
+1. Alternatively, you can deploy the proxy by using Helm:
+
+ ```bash
+ helm install aad-auth-proxy oci://mcr.microsoft.com/azuremonitor/auth-proxy/prod/aad-auth-proxy/helmchart/aad-auth-proxy \
+ --version 0.1.0-main-05-24-2023-b911fe1c \
+ -n observability \
+ --set targetHost=https://proxy-test-abc123.eastus-1.metrics.ingest.monitor.azure.com \
+ --set identityType=userAssigned \
+    --set aadClientId=abcd1234-1243-abcd-9876-1234abcd5678 \
+ --set audience=https://monitor.azure.com/.default
+ ```
+
+1. Configure the remote write URL.
+    The URL hostname is made up of the ingestion service name and namespace in the following format: `<ingestion service name>.<namespace>.svc.cluster.local`. In this example, the host is `azuremonitor-ingestion.observability.svc.cluster.local`.
+    Configure the URL path by using the path from the `Metrics ingestion endpoint` on the Azure Monitor workspace Overview page. For example, `dataCollectionRules/dcr-abc123d987e654f3210abc1def234567/streams/Microsoft-PrometheusMetrics/api/v1/write?api-version=2021-11-01-preview`.
+
+ ```yml
+ prometheus:
+ prometheusSpec:
+ externalLabels:
+ cluster: <cluster name to be used in the workspace>
+ ## https://prometheus.io/docs/prometheus/latest/configuration/configuration/#remote_write
+ ##
+ remoteWrite:
+ - url: "http://azuremonitor-ingestion.observability.svc.cluster.local/dataCollectionRules/dcr-abc123d987e654f3210abc1def234567/streams/Microsoft-PrometheusMetrics/api/v1/write?api-version=2021-11-01-preview"
+ ```
+
+1. Apply the remote write configuration.
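+
+    How you apply it depends on how Prometheus is deployed. For example, if Prometheus was installed with the kube-prometheus-stack Helm chart (an assumption; the release name, namespace, and values file name below are only placeholders), you might pass the snippet above as a values file:
+
+    ```bash
+    # Assumes the remote write snippet above was saved as remote-write-values.yaml
+    helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
+    helm repo update
+    helm upgrade --install prometheus prometheus-community/kube-prometheus-stack \
+      --namespace monitoring --create-namespace \
+      -f remote-write-values.yaml
+    ```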
+
+> [!NOTE]
+> For the latest proxy image version, see the [release notes](https://github.com/Azure/aad-auth-proxy/blob/main/RELEASENOTES.md).
+
+### Check that the proxy is ingesting data
+
+Check that the proxy is successfully ingesting metrics by checking the pod's logs, or by querying the Azure Monitor workspace.
+
+Check the pod's logs by running the following commands:
+```bash
+# Get the azuremonitor-ingestion pod ID
+kubectl get pods -A | grep azuremonitor-ingestion
+
+# Using the returned pod ID, get the logs
+kubectl logs --namespace observability <pod ID> --tail=10
+```
+
+Successfully ingesting metrics produces a log with `StatusCode=200` similar to the following:
+
+```
+time="2023-05-16T08:47:27Z" level=info msg="Successfully sent request, returning response back." ContentLength=0 Request="https://amw-proxytest-05-t16w.eastus-1.metrics.ingest.monitor.azure.com/dataCollectionRules/dcr-688b6ed1f2244e098a88e32dde18b4f6/streams/Microsoft-PrometheusMetrics/api/v1/write?api-version=2021-11-01-preview" StatusCode=200
+```
+
+To query your Azure Monitor workspace, follow the steps below:
+
+1. From your Azure Monitor workspace, select **Workbooks**.
+
+1. Select the **Prometheus Explorer** tile.
+ :::image type="content" source="./media/prometheus-authorization-proxy/workspace-workbooks.png" lightbox="./media/prometheus-authorization-proxy/workspace-workbooks.png" alt-text="A screenshot showing the workbooks gallery for an Azure Monitor workspace.":::
+1. On the explorer page, enter *up* into the query box.
+1. Select the **Grid** tab to see the results.
+1. Check the **cluster** column to see if metrics from your cluster are displayed.
+ :::image type="content" source="./media/prometheus-authorization-proxy/prometheus-explorer.png" lightbox="./media/prometheus-authorization-proxy/prometheus-explorer.png" alt-text="A screenshot showing the Prometheus explorer query page.":::
++
+## [Query metrics example](#tab/query-metrics-example)
+This deployment allows external entities to query an Azure Monitor workspace via the proxy.
+
+Before deploying the proxy, find your managed identity and assign it the `Monitoring Data Reader` role for the Azure Monitor workspace.
+
+1. Find the `clientId` for the managed identity for your AKS cluster. The managed identity is used to authenticate to the Azure Monitor workspace. The managed identity is created when the AKS cluster is created.
+
+ ```azurecli
+ # Get the identity client_id
+ az aks show -g <AKS-CLUSTER-RESOURCE-GROUP> -n <AKS-CLUSTER-NAME> --query "identityProfile"
+ ```
+
+ The output has the following format:
+ ```bash
+ {
+ "kubeletidentity": {
+ "clientId": "abcd1234-1243-abcd-9876-1234abcd5678",
+ "objectId": "12345678-abcd-abcd-abcd-1234567890ab",
+        "resourceId": "/subscriptions/def0123-1243-abcd-9876-1234abcd5678/resourcegroups/MC_rg-proxytest-01_proxytest-01_eastus/providers/Microsoft.ManagedIdentity/userAssignedIdentities/proxytest-01-agentpool"
+ }
+ }
+ ```
+
+1. Assign the `Monitoring Data Reader` role to the identity using the `clientId` from the previous command so that it can read from the Azure Monitor workspace.
+
+ ```azurecli
+ az role assignment create --assignee <clientid> --role "Monitoring Data Reader" --scope <workspace-id>
+ ```
+
+1. Use the following YAML file to deploy the proxy for remote query. Modify the following parameters:
+
+ + `TARGET_HOST` - The host that you want to query data from. Use the `Query endpoint` from the Azure monitor workspace Overview page. For example, `https://proxytest-workspace-abcs.eastus.prometheus.monitor.azure.com`
+    + `AAD_CLIENT_ID` - The `clientId` of the managed identity that was assigned the `Monitoring Data Reader` role.
+    + `AUDIENCE` - For querying metrics from an Azure Monitor workspace, set `AUDIENCE` to `https://prometheus.monitor.azure.com/.default`.
+ + Remove `OTEL_GRPC_ENDPOINT` and `OTEL_SERVICE_NAME` if you aren't using OpenTelemetry.
+
+ For more information on the parameters, see the [Parameters](#parameters) table.
+
+ proxy-query.yaml
+
+ ```yml
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+ labels:
+ app: azuremonitor-query
+ name: azuremonitor-query
+ namespace: observability
+ spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: azuremonitor-query
+ template:
+ metadata:
+ labels:
+ app: azuremonitor-query
+ name: azuremonitor-query
+ spec:
+ containers:
+ - name: aad-auth-proxy
+ image: mcr.microsoft.com/azuremonitor/auth-proxy/prod/aad-auth-proxy/images/aad-auth-proxy:aad-auth-proxy-0.1.0-main-04-11-2023-623473b0
+ imagePullPolicy: Always
+ ports:
+ - name: auth-port
+ containerPort: 8082
+ env:
+ - name: AUDIENCE
+ value: https://prometheus.monitor.azure.com/.default
+ - name: TARGET_HOST
+ value: <Query endpoint host>
+ - name: LISTENING_PORT
+ value: "8082"
+ - name: IDENTITY_TYPE
+ value: userAssigned
+ - name: AAD_CLIENT_ID
+ value: <clientId>
+ - name: AAD_TOKEN_REFRESH_INTERVAL_IN_PERCENTAGE
+ value: "10"
+ - name: OTEL_GRPC_ENDPOINT
+ value: "otel-collector.observability.svc.cluster.local:4317"
+ - name: OTEL_SERVICE_NAME
+ value: azuremonitor_query
+ livenessProbe:
+ httpGet:
+ path: /health
+ port: auth-port
+ initialDelaySeconds: 5
+ timeoutSeconds: 5
+ readinessProbe:
+ httpGet:
+ path: /ready
+ port: auth-port
+ initialDelaySeconds: 5
+ timeoutSeconds: 5
+    ---
+ apiVersion: v1
+ kind: Service
+ metadata:
+ name: azuremonitor-query
+ namespace: observability
+ spec:
+ ports:
+ - port: 80
+ targetPort: 8082
+ selector:
+ app: azuremonitor-query
+ ```
+
+1. Deploy the proxy by using the following commands:
+
+ ```bash
+ # create the namespace if it doesn't already exist
+ kubectl create namespace observability
+
+ kubectl apply -f proxy-query.yaml -n observability
+ ```
+
+### Check that you can query using the proxy
+
+To test that the proxy is working, create a port forward to the proxy pod, then query the proxy.
++
+```bash
+# Get the pod name for azuremonitor-query pod
+kubectl get pods -n observability
+
+# Use the pod ID to create the port forward in the background
+kubectl port-forward pod/<pod ID> -n observability 8082:8082 &
+
+# query the proxy
+ curl http://localhost:8082/api/v1/query?query=up
+```
+
+A successful query returns a response similar to the following:
+
+```
+{"status":"success","data":{"resultType":"vector","result":[{"metric":{"__name__":"up","cluster":"proxytest-01","instance":"aks-userpool-20877385-vmss000007","job":"kubelet","kubernetes_io_os":"linux","metrics_path":"/metrics"},"value":[1684177493.19,"1"]},{"metric":{"__name__":"up","cluster":"proxytest-01","instance":"aks-userpool-20877385-vmss000007","job":"cadvisor"},"value":[1684177493.19,"1"]},{"metric":{"__name__":"up","cluster":"proxytest-01","instance":"aks-nodepool1-21858175-vmss000007","job":"node","metrics_path":"/metrics"},"value":[1684177493.19,"1"]}]}}
+```
++
+## Parameters
+
+| Image Parameter | Helm chart Parameter name | Description | Supported values | Mandatory |
+| | | | | |
+| `TARGET_HOST` | `targetHost` | Target host where you want to forward the request to. <br>When sending data to an Azure Monitor workspace, use the `Metrics ingestion endpoint` from the workspace's Overview page. <br> When reading data from an Azure Monitor workspace, use the `Query endpoint` from the workspace's Overview page. | | Yes |
+| `IDENTITY_TYPE` | `identityType` | Identity type that is used to authenticate requests. This proxy supports three types of identities. | `systemassigned`, `userassigned`, `aadapplication` | Yes |
+| `AAD_CLIENT_ID` | `aadClientId` | Client ID of the identity used. This is used for `userassigned` and `aadapplication` identity types. Use `az aks show -g <AKS-CLUSTER-RESOURCE-GROUP> -n <AKS-CLUSTER-NAME> --query "identityProfile"` to retrieve the Client ID | | Yes for `userassigned` and `aadapplication` |
+| `AAD_TENANT_ID` | `aadTenantId` | Tenant ID of the identity used. Tenant ID is used for `aadapplication` identity types. | | Yes for `aadapplication` |
+| `AAD_CLIENT_CERTIFICATE_PATH` | `aadClientCertificatePath` | The path where the proxy can find the certificate for `aadapplication`. This path should be accessible by the proxy and should be either a .pfx or .pem certificate containing the private key. | | For `aadapplication` identity types only |
+| `AAD_TOKEN_REFRESH_INTERVAL_IN_PERCENTAGE` | `aadTokenRefreshIntervalInMinutes` | Token is refreshed based on the percentage of time until token expiry. Default value is 10% time before expiry. | | No |
+| `AUDIENCE` | `audience` | Audience for the token | | No |
+| `LISTENING_PORT` | `listeningPort` | Proxy listening on this port | | Yes |
+| `OTEL_SERVICE_NAME` | `otelServiceName` | Service name for OTEL traces and metrics. Default value: aad_auth_proxy | | No |
+| `OTEL_GRPC_ENDPOINT` | `otelGrpcEndpoint` | Proxy pushes OTEL telemetry to this endpoint. Default value: http://localhost:4317 | | No |
++
+## Troubleshooting
+
++ The proxy container doesn't start.
+
+    Run the following command to show any errors for the proxy container.
+
+ ```bash
+    kubectl --namespace <Namespace> describe pod <Proxy-Pod-Name>
+ ```
+
++ Proxy doesn't start - configuration errors
+
+ The proxy checks for a valid identity to fetch a token during startup. If it fails to retrieve a token, start up fails. Errors are logged and can be viewed by running the following command:
+
+ ```bash
+ kubectl --namespace <Namespace> logs <Proxy-Pod-Name>
+ ```
+
+ Example output:
+ ```
+ time="2023-05-15T11:24:06Z" level=info msg="Configuration settings loaded:" AAD_CLIENT_CERTIFICATE_PATH= AAD_CLIENT_ID=abc123de-be75-4141-a1e6-abc123987def AAD_TENANT_ID= AAD_TOKEN_REFRESH_INTERVAL_IN_PERCENTAGE=10 AUDIENCE="https://prometheus.monitor.azure.com" IDENTITY_TYPE=userassigned LISTENING_PORT=8082 OTEL_GRPC_ENDPOINT= OTEL_SERVICE_NAME=aad_auth_proxy TARGET_HOST=proxytest-01-workspace-orkw.eastus.prometheus.monitor.azure.com
+ 2023-05-15T11:24:06.414Z [ERROR] TokenCredential creation failed:Failed to get access token: ManagedIdentityCredential authentication failed
+ GET http://169.254.169.254/metadata/identity/oauth2/token
+ --
+ RESPONSE 400 Bad Request
+ --
+ {
+ "error": "invalid_request",
+ "error_description": "Identity not found"
+ }
+ --
+ ```
azure-monitor Prometheus Metrics Disable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/prometheus-metrics-disable.md
+
+ Title: Disable collecting Prometheus metrics on an Azure Kubernetes Service cluster
+description: Disable the collection of Prometheus metrics from an Azure Kubernetes Service cluster and remove the agent from the cluster nodes.
++++ Last updated : 07/30/2023+++
+# Disable Prometheus metrics collection from an AKS cluster
+
+Currently, the Azure CLI is the only option to remove the metrics add-on from your AKS cluster, and stop sending Prometheus metrics to Azure Monitor managed service for Prometheus.
+
+The `az aks update --disable-azure-monitor-metrics` command:
+
++ Removes the agent from the cluster nodes.
++ Deletes the recording rules created for that cluster.
++ Deletes the data collection endpoint (DCE).
++ Deletes the data collection rule (DCR).
++ Deletes the DCRA and recording rules groups created as part of onboarding.
+
+> [!NOTE]
+> This action doesn't remove any existing data stored in your Azure Monitor workspace.
+
+```azurecli
+az aks update --disable-azure-monitor-metrics -n <cluster-name> -g <cluster-resource-group>
+```
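+
+To confirm that the add-on is no longer enabled, you can check the cluster's Azure Monitor profile. This is a minimal sketch using the same placeholders as the command above; after disabling, the metrics profile should report enabled as false or be absent.
+
+```azurecli
+# Inspect the metrics profile for the cluster
+az aks show -n <cluster-name> -g <cluster-resource-group> --query "azureMonitorProfile.metrics"
+```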
+
+## Next steps
+
+- [See the default configuration for Prometheus metrics](./prometheus-metrics-scrape-default.md)
+- [Customize Prometheus metric scraping for the cluster](./prometheus-metrics-scrape-configuration.md)
+- [Use Azure Monitor managed service for Prometheus as the data source for Grafana](../essentials/prometheus-grafana.md)
+- [Configure self-hosted Grafana to use Azure Monitor managed service for Prometheus](../essentials/prometheus-self-managed-grafana-azure-active-directory.md)
azure-monitor Prometheus Metrics Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/prometheus-metrics-enable.md
+
+ Title: Enable Azure Monitor managed service for Prometheus
+description: Enable Azure Monitor managed service for Prometheus and configure data collection from your Azure Kubernetes Service (AKS) cluster.
++++ Last updated : 07/30/2023+++
+# Collect Prometheus metrics from an AKS cluster
+This article describes how to configure your Azure Kubernetes Service (AKS) cluster to send data to Azure Monitor managed service for Prometheus. When you perform this configuration, a containerized version of the [Azure Monitor agent](../agents/agents-overview.md) is installed with a metrics extension. This sends data to the Azure Monitor workspace that you specify.
+
+> [!NOTE]
+> The process described here doesn't enable [Container insights](../containers/container-insights-overview.md) on the cluster. However, both processes use the Azure Monitor agent. For different methods to enable Container insights on your cluster, see [Enable Container insights](../containers/container-insights-onboard.md).
+
+The Azure Monitor metrics agent's architecture utilizes a ReplicaSet and a DaemonSet. The ReplicaSet pod scrapes cluster-wide targets such as `kube-state-metrics` and custom application targets that are specified. The DaemonSet pods scrape targets solely on the node that the respective pod is deployed on, such as `node-exporter`. This is so that the agent can scale as the number of nodes and pods on a cluster increases.
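+
+After the add-on is enabled, you can see both sets of pods on the cluster. This is a minimal sketch that assumes the current agent pod naming (`ama-metrics*` in the `kube-system` namespace); the names may differ across agent versions.
+
+```bash
+# ReplicaSet pods (cluster-wide targets) and DaemonSet pods (one per node)
+kubectl get pods -n kube-system | grep ama-metrics
+
+# The DaemonSet itself, which schedules one scraping pod per node
+kubectl get daemonset -n kube-system | grep ama-metrics
+```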
+
+## Prerequisites
+
+- You must either have an [Azure Monitor workspace](../essentials/azure-monitor-workspace-overview.md) or [create a new one](../essentials/azure-monitor-workspace-manage.md#create-an-azure-monitor-workspace).
+- The cluster must use [managed identity authentication](../../aks/use-managed-identity.md).
+- The following resource providers must be registered in the subscription of the AKS cluster and the Azure Monitor workspace:
+ - Microsoft.ContainerService
+ - Microsoft.Insights
+ - Microsoft.AlertsManagement
+
+> [!NOTE]
+> `Contributor` permission is enough for enabling the add-on to send data to the Azure Monitor workspace. You need `Owner` level permission if you're trying to link your Azure Monitor workspace to view metrics in Azure Managed Grafana. This is required because the user executing the onboarding step needs to be able to give the Azure Managed Grafana system identity the `Monitoring Reader` role on the Azure Monitor workspace to query the metrics.
++++
+## Enable Prometheus metric collection
+Use any of the following methods to install the Azure Monitor agent on your AKS cluster and send Prometheus metrics to your Azure Monitor workspace.
+
+### [Azure portal](#tab/azure-portal)
+> [!NOTE]
+> Azure Managed Grafana is not available in the Azure US Government cloud currently.
+
+There are multiple options to enable Prometheus metrics on your cluster from the Azure portal.
+
+#### New cluster
+When you create a new AKS cluster in the Azure portal, you can enable Prometheus, Container insights, and Grafana from the **Integrations** tab.
++
+#### From the Azure Monitor workspace
+This option enables Prometheus metrics on a cluster without enabling Container insights.
+
+1. Open the **Azure Monitor workspaces** menu in the Azure portal and select your workspace.
+1. Select **Monitored clusters** in the **Managed Prometheus** section to display a list of AKS clusters.
+1. Select **Configure** next to the cluster you want to enable.
+
+ :::image type="content" source="media/prometheus-metrics-enable/azure-monitor-workspace-configure-prometheus.png" lightbox="media/prometheus-metrics-enable/azure-monitor-workspace-configure-prometheus.png" alt-text="Screenshot that shows an Azure Monitor workspace with a Prometheus configuration.":::
+
+#### From an existing cluster monitored with Container insights
+This option adds Prometheus metrics to a cluster already enabled for Container insights.
+
+1. Open the **Kubernetes services** menu in the Azure portal and select your AKS cluster.
+2. Click **Insights**.
+3. Click **Monitor settings**.
+
+ :::image type="content" source="media/prometheus-metrics-enable/aks-cluster-monitor-settings.png" lightbox="media/prometheus-metrics-enable/aks-cluster-monitor-settings.png" alt-text="Screenshot of button for monitor settings for an AKS cluster.":::
+
+4. Click the checkbox for **Enable Prometheus metrics** and select your Azure Monitor workspace.
+5. To send the collected metrics to Grafana, select a Grafana workspace. See [Create an Azure Managed Grafana instance](../../managed-grafan) for details on creating a Grafana workspace.
+
+ :::image type="content" source="media/prometheus-metrics-enable/aks-cluster-monitor-settings-details.png" lightbox="media/prometheus-metrics-enable/aks-cluster-monitor-settings-details.png" alt-text="Screenshot of monitor settings for an AKS cluster.":::
+
+6. Click **Configure** to complete the configuration.
+
+See [Collect Prometheus metrics from AKS cluster (preview)](../essentials/prometheus-metrics-enable.md) for details on [verifying your deployment](../essentials/prometheus-metrics-enable.md#verify-deployment) and [limitations](../essentials/prometheus-metrics-enable.md#limitations-during-enablementdeployment).
+
+#### From an existing cluster
+This option enables Prometheus, Grafana, and Container insights on a cluster.
+
+1. Open the clusters menu in the Azure portal and select **Insights**.
+2. Select **Configure monitoring**.
+3. Container insights is already enabled. Select the checkboxes for **Enable Prometheus metrics** and **Enable Grafana**. If you have an existing Azure Monitor workspace and Grafana workspace, they're selected for you. Click **Advanced settings** to select alternate workspaces or create new ones.
+
+ :::image type="content" source="media/prometheus-metrics-enable/configure-container-insights.png" lightbox="media/prometheus-metrics-enable/configure-container-insights.png" alt-text="Screenshot that shows that show the dialog box to configure Container insights with Prometheus and Grafana.":::
+
+4. Click **Configure** to save the configuration.
++
+### [CLI](#tab/cli)
+> [!NOTE]
+> Azure Managed Grafana is not available in the Azure US Government cloud currently.
+
+#### Prerequisites
+
+- The aks-preview extension must be uninstalled by using the command `az extension remove --name aks-preview`. For more information on how to uninstall a CLI extension, see [Use and manage extensions with the Azure CLI](/cli/azure/azure-cli-extensions-overview).
+- Azure CLI version 2.49.0 or higher is required for this feature. Check the Azure CLI version by using the `az version` command.
+
+#### Install the metrics add-on
+
+Use `az aks create` or `az aks update` with the `--enable-azure-monitor-metrics` option to install the metrics add-on. Depending on the Azure Monitor workspace and Grafana workspace you want to use, choose one of the following options:
+
+- **Create a new default Azure Monitor workspace.**<br>
+If no Azure Monitor workspace is specified, a default Azure Monitor workspace is created in a resource group with the name `DefaultRG-<cluster_region>` and is named `DefaultAzureMonitorWorkspace-<mapped_region>`.
++
+ ```azurecli
+ az aks create/update --enable-azure-monitor-metrics -n <cluster-name> -g <cluster-resource-group>
+ ```
+
+- **Use an existing Azure Monitor workspace.**<br>
+If the existing Azure Monitor workspace is already linked to one or more Grafana workspaces, data is available in that Grafana workspace.
+
+ ```azurecli
+ az aks create/update --enable-azure-monitor-metrics -n <cluster-name> -g <cluster-resource-group> --azure-monitor-workspace-resource-id <workspace-name-resource-id>
+ ```
+
+- **Use an existing Azure Monitor workspace and link with an existing Grafana workspace.**<br>
+This option creates a link between the Azure Monitor workspace and the Grafana workspace.
+
+ ```azurecli
+ az aks create/update --enable-azure-monitor-metrics -n <cluster-name> -g <cluster-resource-group> --azure-monitor-workspace-resource-id <azure-monitor-workspace-name-resource-id> --grafana-resource-id <grafana-workspace-name-resource-id>
+ ```
+
+The output for each command looks similar to the following example:
+
+```json
+"azureMonitorProfile": {
+ "metrics": {
+ "enabled": true,
+ "kubeStateMetrics": {
+ "metricAnnotationsAllowList": "",
+ "metricLabelsAllowlist": ""
+ }
+ }
+}
+```
+
+#### Optional parameters
+You can use the following optional parameters with the previous commands:
+
+- `--ksm-metric-annotations-allow-list` is a comma-separated list of Kubernetes annotation keys used in the resource's kube_resource_annotations metric. For example, kube_pod_annotations is the annotations metric for the pods resource. By default, the kube_resource_annotations metric (for example, kube_pod_annotations) contains only name and namespace labels. To include more annotations, provide a list of resource names in their plural form and the Kubernetes annotation keys that you want to allow for them (for example, 'pods=[kubernetes.io/team,...],namespaces=[kubernetes.io/team],...'). A single `*` can be provided per resource to allow any annotations, but it has severe performance implications.
+- `--ksm-metric-labels-allow-list` is a comma-separated list of Kubernetes label keys used in the resource's kube_resource_labels metric. For example, kube_pod_labels is the labels metric for the pods resource. By default, the kube_resource_labels metric (for example, kube_pod_labels) contains only name and namespace labels. To include more labels, provide a list of resource names in their plural form and the Kubernetes label keys that you want to allow for them (for example, 'pods=[app],namespaces=[k8s-label-1,k8s-label-n,...],...'). A single asterisk (`*`) can be provided per resource to allow any labels, but it has severe performance implications.
+- `--enable-windows-recording-rules` lets you enable the recording rule groups required for proper functioning of the Windows dashboards.
+
+**Use annotations and labels.**
+
+```azurecli
+az aks create/update --enable-azure-monitor-metrics -n <cluster-name> -g <cluster-resource-group> --ksm-metric-labels-allow-list "namespaces=[k8s-label-1,k8s-label-n]" --ksm-metric-annotations-allow-list "pods=[k8s-annotation-1,k8s-annotation-n]"
+```
+
+The output is similar to the following example:
+
+```json
+ "azureMonitorProfile": {
+ "metrics": {
+ "enabled": true,
+ "kubeStateMetrics": {
+ "metricAnnotationsAllowList": "pods=[k8s-annotation-1,k8s-annotation-n]",
+ "metricLabelsAllowlist": "namespaces=[k8s-label-1,k8s-label-n]"
+ }
+ }
+ }
+```
+
+## [Azure Resource Manager](#tab/resource-manager)
+
+> [!NOTE]
+> Azure Managed Grafana is not available in the Azure US Government cloud currently.
+
+### Prerequisites
+
+- If the Azure Managed Grafana instance is in a subscription other than the Azure Monitor workspace subscription, register the Azure Monitor workspace subscription with the `Microsoft.Dashboard` resource provider by following [this documentation](../../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider).
+- The Azure Monitor workspace and Azure Managed Grafana instance must already be created.
+- The template must be deployed in the same resource group as the Azure Managed Grafana instance.
+- Users with the `User Access Administrator` role in the subscription of the AKS cluster can enable the `Monitoring Reader` role directly by deploying the template.
+
+### Retrieve required values for Grafana resource
+
+> [!NOTE]
+> Azure Managed Grafana is not available in the Azure US Government cloud currently.
+
+On the **Overview** page for the Azure Managed Grafana instance in the Azure portal, select **JSON view**.
+
+If you're using an existing Azure Managed Grafana instance that's already linked to an Azure Monitor workspace, the list of already existing Grafana integrations is needed. Copy the value of the `azureMonitorWorkspaceIntegrations` field. If it doesn't exist, then the instance hasn't been linked with any Azure Monitor workspace.
+
+```json
+"properties": {
+ "grafanaIntegrations": {
+ "azureMonitorWorkspaceIntegrations": [
+ {
+ "azureMonitorWorkspaceResourceId": "full_resource_id_1"
+ },
+ {
+ "azureMonitorWorkspaceResourceId": "full_resource_id_2"
+ }
+ ]
+ }
+}
+```
+
+### Download and edit the template and the parameter file
+
+1. Download the template at [https://aka.ms/azureprometheus-enable-arm-template](https://aka.ms/azureprometheus-enable-arm-template) and save it as **existingClusterOnboarding.json**.
+1. Download the parameter file at [https://aka.ms/azureprometheus-enable-arm-template-parameters](https://aka.ms/azureprometheus-enable-arm-template-parameters) and save it as **existingClusterParam.json**.
+1. Edit the values in the parameter file.
+
+ | Parameter | Value |
+ |:|:|
+ | `azureMonitorWorkspaceResourceId` | Resource ID for the Azure Monitor workspace. Retrieve from the **JSON view** on the **Overview** page for the Azure Monitor workspace. |
+ | `azureMonitorWorkspaceLocation` | Location of the Azure Monitor workspace. Retrieve from the **JSON view** on the **Overview** page for the Azure Monitor workspace. |
+ | `clusterResourceId` | Resource ID for the AKS cluster. Retrieve from the **JSON view** on the **Overview** page for the cluster. |
+ | `clusterLocation` | Location of the AKS cluster. Retrieve from the **JSON view** on the **Overview** page for the cluster. |
+ | `metricLabelsAllowlist` | Comma-separated list of Kubernetes labels keys to be used in the resource's labels metric. |
+ | `metricAnnotationsAllowList` | Comma-separated list of more Kubernetes label keys to be used in the resource's annotations metric. |
+ | `grafanaResourceId` | Resource ID for the managed Grafana instance. Retrieve from the **JSON view** on the **Overview** page for the Grafana instance. |
+ | `grafanaLocation` | Location for the managed Grafana instance. Retrieve from the **JSON view** on the **Overview** page for the Grafana instance. |
+ | `grafanaSku` | SKU for the managed Grafana instance. Retrieve from the **JSON view** on the **Overview** page for the Grafana instance. Use the **sku.name**. |
+
+1. Open the template file and update the `grafanaIntegrations` property at the end of the file with the values that you retrieved from the Grafana instance. The following example is similar:
+
+ ```json
+ {
+ "type": "Microsoft.Dashboard/grafana",
+ "apiVersion": "2022-08-01",
+ "name": "[split(parameters('grafanaResourceId'),'/')[8]]",
+ "sku": {
+ "name": "[parameters('grafanaSku')]"
+ },
+ "location": "[parameters('grafanaLocation')]",
+ "properties": {
+ "grafanaIntegrations": {
+ "azureMonitorWorkspaceIntegrations": [
+ {
+ "azureMonitorWorkspaceResourceId": "full_resource_id_1"
+ },
+ {
+ "azureMonitorWorkspaceResourceId": "full_resource_id_2"
+ },
+ {
+ "azureMonitorWorkspaceResourceId": "[parameters('azureMonitorWorkspaceResourceId')]"
+ }
+ ]
+ }
+ }
+ ```
+
+In this JSON, `full_resource_id_1` and `full_resource_id_2` were already in the Azure Managed Grafana resource JSON. They're added here to the Azure Resource Manager template (ARM template). If you have no existing Grafana integrations, don't include these entries for `full_resource_id_1` and `full_resource_id_2`.
+
+The final `azureMonitorWorkspaceResourceId` entry is already in the template and is used to link to the Azure Monitor workspace resource ID provided in the parameters file.
+
+## [Bicep](#tab/bicep)
+
+> [!NOTE]
+> Azure Managed Grafana is not available in the Azure US Government cloud currently.
+
+### Prerequisites
+
+- The Azure Monitor workspace and Azure Managed Grafana instance must already be created.
+- The template needs to be deployed in the same resource group as the Azure Managed Grafana instance.
+- Users with the `User Access Administrator` role in the subscription of the AKS cluster can enable the `Monitoring Reader` role directly by deploying the template.
+
+### Limitation with Bicep deployment
+Currently in Bicep, there's no way to explicitly scope the `Monitoring Reader` role assignment on a string parameter "resource ID" for an Azure Monitor workspace (like in an ARM template). Bicep expects a value of type `resource | tenant`. There also is no REST API [spec](https://github.com/Azure/azure-rest-api-specs) for an Azure Monitor workspace.
+
+Therefore, the default scoping for the `Monitoring Reader` role is on the resource group. The role is applied on the same Azure Monitor workspace (by inheritance), which is the expected behavior. After you deploy this Bicep template, the Grafana instance is given `Monitoring Reader` permissions for all the Azure Monitor workspaces in that resource group.
+
+### Retrieve required values for a Grafana resource
+
+On the **Overview** page for the Azure Managed Grafana instance in the Azure portal, select **JSON view**.
+
+If you're using an existing Azure Managed Grafana instance that's already linked to an Azure Monitor workspace, the list of already existing Grafana integrations is needed. Copy the value of the `azureMonitorWorkspaceIntegrations` field. If it doesn't exist, then the instance hasn't been linked with any Azure Monitor workspace.
+
+```json
+"properties": {
+ "grafanaIntegrations": {
+ "azureMonitorWorkspaceIntegrations": [
+ {
+ "azureMonitorWorkspaceResourceId": "full_resource_id_1"
+ },
+ {
+ "azureMonitorWorkspaceResourceId": "full_resource_id_2"
+ }
+ ]
+ }
+}
+```
+
+### Download and edit templates and the parameter file
+
+1. Download the [main Bicep template](https://aka.ms/azureprometheus-enable-bicep-template). Save it as **FullAzureMonitorMetricsProfile.bicep**.
+2. Download the [parameter file](https://aka.ms/azureprometheus-enable-bicep-template-parameters) and save it as **FullAzureMonitorMetricsProfileParameters.json** in the same directory as the main Bicep template.
+3. Download the [nested_azuremonitormetrics_dcra_clusterResourceId.bicep](https://aka.ms/nested_azuremonitormetrics_dcra_clusterResourceId) and [nested_azuremonitormetrics_profile_clusterResourceId.bicep](https://aka.ms/nested_azuremonitormetrics_profile_clusterResourceId) files into the same directory as the main Bicep template.
+4. Edit the values in the parameter file.
+5. The main Bicep template creates all the required resources. It uses two modules for creating the Data Collection Rule Associations (DCRA) and Azure Monitor metrics profile resources from the other two Bicep files.
+
+ | Parameter | Value |
+ |:|:|
+ | `azureMonitorWorkspaceResourceId` | Resource ID for the Azure Monitor workspace. Retrieve from the **JSON view** on the **Overview** page for the Azure Monitor workspace. |
+ | `azureMonitorWorkspaceLocation` | Location of the Azure Monitor workspace. Retrieve from the **JSON view** on the **Overview** page for the Azure Monitor workspace. |
+ | `clusterResourceId` | Resource ID for the AKS cluster. Retrieve from the **JSON view** on the **Overview** page for the cluster. |
+ | `clusterLocation` | Location of the AKS cluster. Retrieve from the **JSON view** on the **Overview** page for the cluster. |
+ | `metricLabelsAllowlist` | Comma-separated list of Kubernetes labels keys used in the resource's labels metric. |
+ | `metricAnnotationsAllowList` | Comma-separated list of more Kubernetes label keys used in the resource's annotations metric. |
+ | `grafanaResourceId` | Resource ID for the managed Grafana instance. Retrieve from the **JSON view** on the **Overview** page for the Grafana instance. |
+ | `grafanaLocation` | Location for the managed Grafana instance. Retrieve from the **JSON view** on the **Overview** page for the Grafana instance. |
+ | `grafanaSku` | SKU for the managed Grafana instance. Retrieve from the **JSON view** on the **Overview** page for the Grafana instance. Use the **sku.name**. |
+
+1. Open the template file and update the `grafanaIntegrations` property at the end of the file with the values that you retrieved from the Grafana instance. The following example is similar:
+
+ ```json
+ {
+ "type": "Microsoft.Dashboard/grafana",
+ "apiVersion": "2022-08-01",
+ "name": "[split(parameters('grafanaResourceId'),'/')[8]]",
+ "sku": {
+ "name": "[parameters('grafanaSku')]"
+ },
+ "location": "[parameters('grafanaLocation')]",
+ "properties": {
+ "grafanaIntegrations": {
+ "azureMonitorWorkspaceIntegrations": [
+ {
+ "azureMonitorWorkspaceResourceId": "full_resource_id_1"
+ },
+ {
+ "azureMonitorWorkspaceResourceId": "full_resource_id_2"
+ },
+ {
+ "azureMonitorWorkspaceResourceId": "[parameters('azureMonitorWorkspaceResourceId')]"
+ }
+ ]
+ }
+ }
+ ```
+
+In this JSON, `full_resource_id_1` and `full_resource_id_2` were already in the Azure Managed Grafana resource JSON. They're added here to the ARM template. If you have no existing Grafana integrations, don't include these entries for `full_resource_id_1` and `full_resource_id_2`.
+
+The final `azureMonitorWorkspaceResourceId` entry is already in the template and is used to link to the Azure Monitor workspace resource ID provided in the parameters file.
+
+## [Terraform](#tab/terraform)
+
+### Prerequisites
+
+- If the Azure Managed Grafana instance is in a subscription other than the Azure Monitor Workspaces subscription, register the Azure Monitor Workspace subscription with the `Microsoft.Dashboard` resource provider by following [this documentation](../../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider).
+- The Azure Monitor workspace and Azure Managed Grafana workspace must already be created.
+- The template needs to be deployed in the same resource group as the Azure Managed Grafana workspace.
+- Users with the User Access Administrator role in the subscription of the AKS cluster can enable the Monitoring Reader role directly by deploying the template.
+
+### Retrieve required values for a Grafana resource
+
+On the **Overview** page for the Azure Managed Grafana instance in the Azure portal, select **JSON view**.
+
+If you're using an existing Azure Managed Grafana instance that's already linked to an Azure Monitor workspace, you need the list of Grafana integrations. Copy the value of the `azureMonitorWorkspaceIntegrations` field. If it doesn't exist, the instance hasn't been linked with any Azure Monitor workspace. Update the `azure_monitor_workspace_integrations` block (shown here) in main.tf with the list of Grafana integrations.
+
+```.tf
+  # Repeat this block once for each Azure Monitor workspace integration.
+  azure_monitor_workspace_integrations {
+    resource_id = var.monitor_workspace_id
+  }
+```
+
+### Download and edit the templates
+
+If you're deploying a new AKS cluster using Terraform with managed Prometheus addon enabled, follow these steps:
+
+1. Download all files under [AddonTerraformTemplate](https://aka.ms/AAkm357).
+2. Edit the variables in the variables.tf file with the correct parameter values.
+3. Run `terraform init -upgrade` to initialize the Terraform deployment.
+4. Run `terraform plan -out main.tfplan` to create a Terraform execution plan.
+5. Run `terraform apply main.tfplan` to apply the execution plan to your cloud infrastructure.
++
+Note: Pass the variables for `annotations_allowed` and `labels_allowed` keys in main.tf only when those values exist. These are optional blocks.
+
+> [!NOTE]
+> Edit the main.tf file appropriately before running the terraform template. Add any existing azure_monitor_workspace_integrations values to the grafana resource before running the template; otherwise, the existing values are deleted and replaced with what is in the template during deployment. Users with the 'User Access Administrator' role in the subscription of the AKS cluster can enable the 'Monitoring Reader' role directly by deploying the template. Edit the grafanaSku parameter if you're using a nonstandard SKU, and run this template in the Grafana resource's resource group.
+
+## [Azure Policy](#tab/azurepolicy)
+
+> [!NOTE]
+> Azure Managed Grafana is not available in the Azure US Government cloud currently.
+
+### Prerequisites
+
+- The Azure Monitor workspace and Azure Managed Grafana instance must already be created.
+
+### Download Azure Policy rules and parameters and deploy
+
+1. Download the main [Azure Policy rules template](https://aka.ms/AddonPolicyMetricsProfile). Save it as **AddonPolicyMetricsProfile.rules.json**.
+1. Download the [parameter file](https://aka.ms/AddonPolicyMetricsProfile.parameters). Save it as **AddonPolicyMetricsProfile.parameters.json** in the same directory as the rules template.
+1. Create the policy definition using the following command:
+
+ `az policy definition create --name "(Preview) Prometheus Metrics addon" --display-name "(Preview) Prometheus Metrics addon" --mode Indexed --metadata version=1.0.0 category=Kubernetes --rules AddonPolicyMetricsProfile.rules.json --params AddonPolicyMetricsProfile.parameters.json`
+
+1. After you create the policy definition, in the Azure portal, select **Policy** > **Definitions**. Select the policy definition you created.
+1. Select **Assign**, go to the **Parameters** tab, and fill in the details. Select **Review + Create**.
+1. After the policy is assigned to the subscription, whenever you create a new cluster without Prometheus enabled, the policy runs and deploys the add-on to enable Prometheus monitoring. If you want to apply the policy to an existing AKS cluster, create a **Remediation task** for that AKS cluster resource from the **Policy Assignment** page.
+1. Now you should see metrics flowing in the existing Azure Managed Grafana instance, which is linked with the corresponding Azure Monitor workspace.
+
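+The following is a hedged Azure CLI sketch of the assignment and remediation steps; the assignment name is illustrative, and the parameter values must match the parameters defined in the downloaded parameter file:
+
+```azurecli
+# Assign the policy at subscription scope. The assignment needs a managed identity
+# because the policy deploys resources; names and values in angle brackets are placeholders.
+az policy assignment create \
+  --name "prometheus-metrics-addon" \
+  --policy "(Preview) Prometheus Metrics addon" \
+  --scope "/subscriptions/<subscription-id>" \
+  --mi-system-assigned \
+  --location <region> \
+  --params "<parameter-values-json>"
+
+# Remediate an existing AKS cluster so the policy applies to it as well.
+az policy remediation create \
+  --name "enable-prometheus-metrics" \
+  --policy-assignment "prometheus-metrics-addon" \
+  --resource-group <aks-cluster-resource-group>
+```
+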
+Afterwards, if you create a new Managed Grafana instance, you can link it with the corresponding Azure Monitor workspace from the **Linked Grafana Workspaces** tab of the relevant **Azure Monitor Workspace** page. The `Monitoring Reader` role must be assigned to the managed identity of the Managed Grafana instance with the scope as the Azure Monitor workspace, so that Grafana has access to query the metrics. Use the following instructions to do so:
+
+1. On the **Overview** page for the Azure Managed Grafana instance in the Azure portal, select **JSON view**.
+
+1. Copy the value of the `principalId` field for the `SystemAssigned` identity.
+
+ ```json
+ "identity": {
+ "principalId": "00000000-0000-0000-0000-000000000000",
+ "tenantId": "00000000-0000-0000-0000-000000000000",
+ "type": "SystemAssigned"
+ },
+ ```
+1. On the **Access control (IAM)** page for the Azure Managed Grafana instance in the Azure portal, select **Add** > **Add role assignment**.
+1. Select `Monitoring Reader`.
+1. Select **Managed identity** > **Select members**.
+1. Select the **system-assigned managed identity** with the `principalId` from the Grafana resource.
+1. Choose **Select** > **Review+assign**.
+
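+As an alternative to the preceding portal steps, a minimal Azure CLI sketch of the same role assignment (values in angle brackets are placeholders):
+
+```azurecli
+az role assignment create \
+  --assignee "<grafana-principal-id>" \
+  --role "Monitoring Reader" \
+  --scope "<azure-monitor-workspace-resource-id>"
+```
+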
+### Deploy the template
+
+Deploy the template with the parameter file by using any valid method for deploying ARM templates. For examples of different methods, see [Deploy the sample templates](../resource-manager-samples.md#deploy-the-sample-templates).
+
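+For example, a sketch with the Azure CLI (file names are placeholders for the template and parameter file you edited):
+
+```azurecli
+az deployment group create \
+  --resource-group <resource-group> \
+  --template-file <template-file>.json \
+  --parameters @<parameter-file>.json
+```
+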
+### Limitations during enablement/deployment
+
+- Ensure that you update the `kube-state metrics` annotations and labels list with proper formatting. There's a limitation in ARM template deployments that requires exact values in the `kube-state` metrics pods. If the Kubernetes pod has any issues with malformed parameters and isn't running, the feature might not run as expected.
+- A data collection rule and data collection endpoint are created with the name `MSProm-\<short-cluster-region\>-\<cluster-name\>`. Currently, these names can't be modified.
+- You must get the existing Azure Monitor workspace integrations for the Grafana instance and update the ARM template with them. Otherwise, the ARM deployment overwrites the existing integrations and removes them.
++
+## Enable Windows metrics collection
+
+> [!NOTE]
+> There is no CPU or memory limit in windows-exporter-daemonset.yaml, so it might over-provision the Windows nodes.
+> For more information, see [Resource reservation](https://kubernetes.io/docs/concepts/configuration/windows-resource-management/#resource-reservation).
+>
+> As you deploy workloads, set resource memory and CPU limits on containers. This also subtracts from NodeAllocatable and helps the cluster-wide scheduler determine which pods to place on which nodes.
+> Scheduling pods without limits might over-provision the Windows nodes, and in extreme cases can cause the nodes to become unhealthy.
++
+As of version 6.4.0-main-02-22-2023-3ee44b9e of the Managed Prometheus addon container (prometheus_collector), Windows metric collection is enabled for AKS clusters. Onboarding to the Azure Monitor Metrics add-on enables the Windows DaemonSet pods to start running on your node pools. Both Windows Server 2019 and Windows Server 2022 are supported. Follow these steps to enable the pods to collect metrics from your Windows node pools.
+
+1. Manually install windows-exporter on AKS nodes to access Windows metrics.
+ Enable the following collectors:
+
+ * `[defaults]`
+ * `container`
+ * `memory`
+ * `process`
+ * `cpu_info`
+
+ Deploy the [windows-exporter-daemonset YAML](https://github.com/prometheus-community/windows_exporter/blob/master/kubernetes/windows-exporter-daemonset.yaml) file:
+
+ ```
+ kubectl apply -f windows-exporter-daemonset.yaml
+ ```
+
+1. Apply the [ama-metrics-settings-configmap](https://github.com/Azure/prometheus-collector/blob/main/otelcollector/configmaps/ama-metrics-settings-configmap.yaml) to your cluster. Set the `windowsexporter` and `windowskubeproxy` Booleans to `true`. For more information, see [Metrics add-on settings configmap](./prometheus-metrics-scrape-configuration.md#metrics-add-on-settings-configmap). A hedged example of this step is sketched after these steps.
+1. Enable the recording rules that are required for the out-of-the-box dashboards:
+
+ * If onboarding using the CLI, include the option `--enable-windows-recording-rules`.
+ * If onboarding using an ARM template, Bicep, or Azure Policy, set `enableWindowsRecordingRules` to `true` in the parameters file.
+ * If the cluster is already onboarded, use [this ARM template](https://github.com/Azure/prometheus-collector/blob/kaveesh/windows_recording_rules/AddonArmTemplate/WindowsRecordingRuleGroupTemplate/WindowsRecordingRules.json) and [this parameter file](https://github.com/Azure/prometheus-collector/blob/kaveesh/windows_recording_rules/AddonArmTemplate/WindowsRecordingRuleGroupTemplate/WindowsRecordingRulesParameters.json) to create the rule groups.
+
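+The following is a hedged sketch of the configmap step above. It assumes the raw URL of the linked GitHub file and that the Windows settings appear in the file as `windowsexporter = false` and `windowskubeproxy = false`:
+
+```
+# Download the settings configmap (raw URL assumed from the linked GitHub file).
+curl -sLO https://raw.githubusercontent.com/Azure/prometheus-collector/main/otelcollector/configmaps/ama-metrics-settings-configmap.yaml
+
+# Turn on the Windows scrape targets (assumes the settings appear as `<name> = false`).
+sed -i 's/windowsexporter = false/windowsexporter = true/' ama-metrics-settings-configmap.yaml
+sed -i 's/windowskubeproxy = false/windowskubeproxy = true/' ama-metrics-settings-configmap.yaml
+
+# Apply the configmap so the ama-metrics pods in kube-system pick up the change.
+kubectl apply -f ama-metrics-settings-configmap.yaml
+```
+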
+## Verify deployment
+
+1. Run the following command to verify that the DaemonSet was deployed properly on the Linux node pools:
+
+ ```
+ kubectl get ds ama-metrics-node --namespace=kube-system
+ ```
+
+ The number of pods should be equal to the number of Linux nodes on the cluster. The output should resemble the following example:
+
+ ```
+ User@aksuser:~$ kubectl get ds ama-metrics-node --namespace=kube-system
+ NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
+ ama-metrics-node 1 1 1 1 1 <none> 10h
+ ```
+
+1. Run the following command to verify that the DaemonSet was deployed properly on the Windows node pools:
+
+ ```
+ kubectl get ds ama-metrics-win-node --namespace=kube-system
+ ```
+
+ The number of pods should be equal to the number of Windows nodes on the cluster. The output should resemble the following example:
+
+ ```
+    User@aksuser:~$ kubectl get ds ama-metrics-win-node --namespace=kube-system
+ NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
+ ama-metrics-win-node 3 3 3 3 3 <none> 10h
+ ```
+
+1. Run the following command to verify that the two ReplicaSets were deployed properly:
+
+ ```
+ kubectl get rs --namespace=kube-system
+ ```
+
+ The output should resemble the following example:
+
+ ```
+    User@aksuser:~$ kubectl get rs --namespace=kube-system
+ NAME DESIRED CURRENT READY AGE
+ ama-metrics-5c974985b8 1 1 1 11h
+ ama-metrics-ksm-5fcf8dffcd 1 1 1 11h
+ ```
+## Artifacts/Resources provisioned/created as a result of metrics addon enablement for an AKS cluster
+
+When you enable the metrics add-on, the following resources are provisioned:
+
+| Resource Name | Resource Type | Resource Group | Region/Location | Description |
+ |:|:|:|:|:|
 | `MSPROM-<aksclusterregion>-<clustername>` | **Data Collection Rule** | Same resource group as the AKS cluster resource | Same region as the Azure Monitor workspace | This data collection rule is used by the metrics add-on to collect Prometheus metrics. It has the chosen Azure Monitor workspace as its destination and is associated with the AKS cluster resource. |
 | `MSPROM-<aksclusterregion>-<clustername>` | **Data Collection endpoint** | Same resource group as the AKS cluster resource | Same region as the Azure Monitor workspace | This data collection endpoint is used by the preceding data collection rule to ingest Prometheus metrics from the metrics add-on. |
+
+When you create a new Azure Monitor workspace, the following additional resources are created as part of it:
+
+| Resource Name | Resource Type | Resource Group | Region/Location | Description |
+ |:|:|:|:|:|
 | `<azuremonitor-workspace-name>` | **System Data Collection Rule** | MA_\<azuremonitor-workspace-name>_\<azuremonitor-workspace-region>_managed | Same region as the Azure Monitor workspace | This is a **system** data collection rule that customers can use when they run a self-managed (OSS) Prometheus server to remote write to the Azure Monitor workspace. |
 | `<azuremonitor-workspace-name>` | **System Data Collection endpoint** | MA_\<azuremonitor-workspace-name>_\<azuremonitor-workspace-region>_managed | Same region as the Azure Monitor workspace | This is a **system** data collection endpoint that customers can use when they run a self-managed (OSS) Prometheus server to remote write to the Azure Monitor workspace. |
+
+
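+As a hedged sketch, you can confirm the data collection rules from the Azure CLI; the `az monitor data-collection rule` commands might require the `monitor-control-service` CLI extension:
+
+```azurecli
+# List the data collection rules in the cluster's resource group and filter for the add-on's rules.
+az monitor data-collection rule list \
+  --resource-group <aks-cluster-resource-group> \
+  --query "[?starts_with(name, 'MSProm')].{name:name, location:location}" -o table
+```
+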
+## HTTP Proxy
+
+The Azure Monitor metrics add-on supports HTTP proxy and uses the same proxy settings configured for the AKS cluster by following [these instructions](../../../articles/aks/http-proxy.md).
+
+## Network firewall requirements
+
+**Azure public cloud**
+
+The following table lists the firewall configuration required for Azure Monitor managed service for Prometheus metrics ingestion in the Azure public cloud. All network traffic from the agent is outbound to Azure Monitor.
+
+|Agent resource| Purpose | Port |
+|---|---|---|
+| `global.handler.control.monitor.azure.com` | Access control service/ Azure Monitor control plane service | 443 |
+| `*.ingest.monitor.azure.com` | Azure monitor managed service for Prometheus - metrics ingestion endpoint (DCE) | 443 |
+| `*.handler.control.monitor.azure.com` | For querying data collection rules | 443 |
+
+**Azure US Government cloud**
+
+The following table lists the firewall configuration required for Azure Monitor managed service for Prometheus metrics ingestion in the Azure US Government cloud. All network traffic from the agent is outbound to Azure Monitor.
+
+|Agent resource| Purpose | Port |
+|---|---|---|
+| `global.handler.control.monitor.azure.us` | Access control service/ Azure Monitor control plane service | 443 |
+| `*.ingest.monitor.azure.us` | Azure monitor managed service for Prometheus - metrics ingestion endpoint (DCE) | 443 |
+| `*.handler.control.monitor.azure.us` | For querying data collection rules | 443 |
+
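+As a hedged sketch (the debug image and command are illustrative, and `curl` is assumed to be available in the image), you can check outbound reachability to these endpoints from inside the cluster:
+
+```
+# Run a temporary pod and test connectivity to the control plane endpoint.
+# Public cloud shown; substitute the *.azure.us endpoints for Azure US Government.
+kubectl run ama-egress-check --rm -it --restart=Never --image=mcr.microsoft.com/azure-cli -- \
+  curl -sI --max-time 10 https://global.handler.control.monitor.azure.com
+```
+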
+## Uninstall the metrics add-on
+
+To uninstall the metrics add-on, see [Disable Prometheus metrics collection on an AKS cluster.](./prometheus-metrics-disable.md)
+
+## Supported regions
+
+For the list of regions where Azure Monitor Metrics and Azure Monitor workspaces are supported, see the [supported regions](https://aka.ms/ama-metrics-supported-regions) under the Managed Prometheus tag.
+
+## Next steps
+
+- [See the default configuration for Prometheus metrics](./prometheus-metrics-scrape-default.md)
+- [Customize Prometheus metric scraping for the cluster](./prometheus-metrics-scrape-configuration.md)
+- [Use Azure Monitor managed service for Prometheus as the data source for Grafana](../essentials/prometheus-grafana.md)
+- [Configure self-hosted Grafana to use Azure Monitor managed service for Prometheus](../essentials/prometheus-self-managed-grafana-azure-active-directory.md)
azure-monitor Prometheus Metrics From Arc Enabled Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/prometheus-metrics-from-arc-enabled-cluster.md
+
+ Title: Collect Prometheus metrics from an Arc-enabled Kubernetes cluster (preview)
+description: How to configure your Azure Arc-enabled Kubernetes cluster (preview) to send data to Azure Monitor managed service for Prometheus.
+++ Last updated : 05/07/2023++
+# Collect Prometheus metrics from an Arc-enabled Kubernetes cluster (preview)
+
+This article describes how to configure your Azure Arc-enabled Kubernetes cluster (preview) to send data to Azure Monitor managed service for Prometheus. When you configure your Azure Arc-enabled Kubernetes cluster to send data to Azure Monitor managed service for Prometheus, a containerized version of the Azure Monitor agent is installed with a metrics extension. You then specify the Azure Monitor workspace where the data should be sent.
+
+> [!NOTE]
+> The process described here doesn't enable [Container insights](../containers/container-insights-overview.md) on the cluster even though the Azure Monitor agent installed in this process is the same agent used by Container insights.
+> For different methods to enable Container insights on your cluster, see [Enable Container insights](../containers/container-insights-onboard.md). For details on adding Prometheus collection to a cluster that already has Container insights enabled, see [Collect Prometheus metrics with Container insights](../containers/container-insights-prometheus.md).
+
+## Supported configurations
+
+The following configurations are supported:
+
++ Azure Monitor Managed Prometheus supports monitoring Azure Arc-enabled Kubernetes. For more information, see [Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md).
++ Docker
++ Moby
++ CRI compatible container runtimes such as CRI-O
+
+The following configurations are not supported:
+
++ Windows
++ Azure Red Hat OpenShift 4
+
+## Prerequisites
+
++ Prerequisites listed in [Deploy and manage Azure Arc-enabled Kubernetes cluster extensions](../../azure-arc/kubernetes/extensions.md#prerequisites)
++ An Azure Monitor workspace. To create a new workspace, see [Manage an Azure Monitor workspace](../essentials/azure-monitor-workspace-manage.md).
++ The cluster must use [managed identity authentication](../../aks/use-managed-identity.md).
++ The following resource providers must be registered in the subscription of the Arc-enabled Kubernetes cluster and the Azure Monitor workspace (a CLI sketch for registering them follows these prerequisites):
+ + Microsoft.Kubernetes
+ + Microsoft.Insights
+ + Microsoft.AlertsManagement
++ The following endpoints must be enabled for outbound access in addition to the [Azure Arc-enabled Kubernetes network requirements](../../azure-arc/kubernetes/network-requirements.md?tabs=azure-cloud):
+ **Azure public cloud**
+
+ |Endpoint|Port|
+ ||--|
+ |*.ods.opinsights.azure.com |443 |
+ |*.oms.opinsights.azure.com |443 |
+ |dc.services.visualstudio.com |443 |
+ |*.monitoring.azure.com |443 |
+ |login.microsoftonline.com |443 |
+ |global.handler.control.monitor.azure.com |443 |
+ | \<cluster-region-name\>.handler.control.monitor.azure.com |443 |
+
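+As a sketch of registering the resource providers listed above with the Azure CLI, run the following against both the cluster's subscription and the Azure Monitor workspace's subscription:
+
+```azurecli
+az account set --subscription <subscription-id>
+az provider register --namespace Microsoft.Kubernetes
+az provider register --namespace Microsoft.Insights
+az provider register --namespace Microsoft.AlertsManagement
+
+# Registration can take a few minutes; check the state with:
+az provider show --namespace Microsoft.Insights --query registrationState -o tsv
+```
+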
+## Create an extension instance
+
+### [Portal](#tab/portal)
+
+### Onboard from Azure Monitor workspace
+
+1. Open the **Azure Monitor workspaces** menu in the Azure portal and select your cluster.
+
+1. Select **Managed Prometheus** to display a list of AKS and Arc clusters.
+1. Select **Configure** for the cluster you want to enable.
++
+### Onboard from Container insights
+
+1. In the Azure portal, select the Azure Arc-enabled Kubernetes cluster that you wish to monitor.
+
+1. From the resource pane on the left, select **Insights** under the **Monitoring** section.
+1. On the onboarding page, select **Configure monitoring**.
+1. On the **Configure Container insights** page, select the **Enable Prometheus metrics** checkbox.
+1. Select **Configure**.
++
+### [CLI](#tab/cli)
+
+### Prerequisites
+
++ The k8s-extension extension must be installed. Install the extension using the command `az extension add --name k8s-extension`.
++ The k8s-extension version 1.4.1 or higher is required. Check the k8s-extension version by using the `az version` command.
+
+### Create an extension with default values
+
++ A default Azure Monitor workspace is created in the resource group `DefaultRG-<cluster_region>`, with a name following the format `DefaultAzureMonitorWorkspace-<mapped_region>`.
++ Auto-upgrade is enabled for the extension.
+
+```azurecli
+az k8s-extension create \
+--name azuremonitor-metrics \
+--cluster-name <cluster-name> \
+--resource-group <resource-group> \
+--cluster-type connectedClusters \
+--extension-type Microsoft.AzureMonitor.Containers.Metrics
+```
+
+### Create an extension with an existing Azure Monitor workspace
+
+If the Azure Monitor workspace is already linked to one or more Grafana workspaces, the data is available in Grafana.
+
+```azurecli
+az k8s-extension create \
+--name azuremonitor-metrics \
+--cluster-name <cluster-name> \
+--resource-group <resource-group> \
+--cluster-type connectedClusters \
+--extension-type Microsoft.AzureMonitor.Containers.Metrics \
+--configuration-settings azure-monitor-workspace-resource-id=<workspace-name-resource-id>
+```
+
+### Create an extension with an existing Azure Monitor workspace and link with an existing Grafana workspace
+
+This option creates a link between the Azure Monitor workspace and the Grafana workspace.
+
+```azurecli
+az k8s-extension create \
+--name azuremonitor-metrics \
+--cluster-name <cluster-name> \
+--resource-group <resource-group> \
+--cluster-type connectedClusters \
+--extension-type Microsoft.AzureMonitor.Containers.Metrics \
+--configuration-settings azure-monitor-workspace-resource-id=<workspace-name-resource-id> \
+grafana-resource-id=<grafana-workspace-name-resource-id>
+```
+
+### Create an extension with optional parameters
+
+You can use the following optional parameters with the previous commands:
+
+`--configurationSettings.AzureMonitorMetrics.KubeStateMetrics.MetricsLabelsAllowlist` is a comma-separated list of Kubernetes label keys that are used in the resource's labels metric. By default, the metric contains only name and namespace labels. To include additional labels, provide a list of resource names in their plural form and the Kubernetes label keys you would like to allow for them. For example, `namespaces=[kubernetes.io/team,...],pods=[kubernetes.io/team],...`.
+
+`--configurationSettings.AzureMonitorMetrics.KubeStateMetrics.MetricAnnotationsAllowList` is a comma-separated list of Kubernetes annotation keys that are used in the resource's labels metric. By default, the metric contains only name and namespace labels. To include additional annotations, provide a list of resource names in their plural form and the Kubernetes annotation keys you would like to allow for them. For example, `namespaces=[kubernetes.io/team,...],pods=[kubernetes.io/team],...`.
+
+> [!NOTE]
+> A single `*`, for example `pods=[*]`, can be provided per resource to allow any labels or annotations. However, this has severe performance implications.
++
+```azurecli
+az k8s-extension create \
+--name azuremonitor-metrics \
+--cluster-name <cluster-name> \
+--resource-group <resource-group> \
+--cluster-type connectedClusters \
+--extension-type Microsoft.AzureMonitor.Containers.Metrics \
+--configuration-settings azure-monitor-workspace-resource-id=<workspace-name-resource-id> \
+grafana-resource-id=<grafana-workspace-name-resource-id> \
+AzureMonitorMetrics.KubeStateMetrics.MetricAnnotationsAllowList="pods=[k8s-annotation-1,k8s-annotation-n]" \
+AzureMonitorMetrics.KubeStateMetrics.MetricsLabelsAllowlist="namespaces=[k8s-label-1,k8s-label-n]"
+```
+
+### [Resource Manager](#tab/resource-manager)
+
+### Prerequisites
+
++ If the Azure Managed Grafana instance is in a subscription other than the Azure Monitor workspace's subscription, register the Azure Monitor workspace subscription with the `Microsoft.Dashboard` resource provider by following the steps in the [Register resource provider](../../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider) section of the Azure resource providers and types article.
+
++ The Azure Monitor workspace and Azure Managed Grafana workspace must already exist.
++ The template must be deployed in the same resource group as the Azure Managed Grafana workspace.
++ Users with the User Access Administrator role in the subscription of the AKS cluster can enable the Monitoring Data Reader role directly by deploying the template.
+
+### Create an extension
+
+1. Retrieve required values for the Grafana resource
+
+ > [!NOTE]
+ > Azure Managed Grafana is not currently available in the Azure US Government cloud.
+
+ On the Overview page for the Azure Managed Grafana instance in the Azure portal, select **JSON view**.
+
+ If you're using an existing Azure Managed Grafana instance that's already linked to an Azure Monitor workspace, you need the list of already existing Grafana integrations. Copy the value of the `azureMonitorWorkspaceIntegrations` field. If the field doesn't exist, the instance hasn't been linked with any Azure Monitor workspace.
+
+ ```json
+ "properties": {
+ "grafanaIntegrations": {
+ "azureMonitorWorkspaceIntegrations": [
+ {
+ "azureMonitorWorkspaceResourceId": "full_resource_id_1"
+ },
+ {
+ "azureMonitorWorkspaceResourceId": "full_resource_id_2"
+ }
+ ]
+ }
+ }
+ ```
+
+1. Download and edit the template and the parameter file
++
+ 1. Download the template at https://aka.ms/azureprometheus-arc-arm-template and save it as *existingClusterOnboarding.json*.
+
+ 1. Download the parameter file at https://aka.ms/azureprometheus-arc-arm-template-parameters and save it as *existingClusterParam.json*.
+
+1. Edit the following fields' values in the parameter file.
+
+ |Parameter|Value |
+ |||
+ |`azureMonitorWorkspaceResourceId` |Resource ID for the Azure Monitor workspace. Retrieve from the **JSON view** on the Overview page for the Azure Monitor workspace. |
+ |`azureMonitorWorkspaceLocation`|Location of the Azure Monitor workspace. Retrieve from the JSON view on the Overview page for the Azure Monitor workspace. |
+ |`clusterResourceId` |Resource ID for the Arc cluster. Retrieve from the **JSON view** on the Overview page for the cluster. |
+ |`clusterLocation` |Location of the Arc cluster. Retrieve from the **JSON view** on the Overview page for the cluster. |
 |`metricLabelsAllowlist` |Comma-separated list of Kubernetes label keys to be used in the resource's labels metric.|
 |`metricAnnotationsAllowList` |Comma-separated list of Kubernetes annotation keys to be used in the resource's labels metric. |
+ |`grafanaResourceId` |Resource ID for the managed Grafana instance. Retrieve from the **JSON view** on the Overview page for the Grafana instance. |
+ |`grafanaLocation` |Location for the managed Grafana instance. Retrieve from the **JSON view** on the Overview page for the Grafana instance. |
+ |`grafanaSku` |SKU for the managed Grafana instance. Retrieve from the **JSON view** on the Overview page for the Grafana instance. Use the `sku.name`. |
+
+1. Open the template file and update the `grafanaIntegrations` property at the end of the file with the values that you retrieved from the Grafana instance. For example:
+
+ ```json
+ {
+ "type": "Microsoft.Dashboard/grafana",
+ "apiVersion": "2022-08-01",
+ "name": "[split(parameters('grafanaResourceId'),'/')[8]]",
+ "sku": {
+ "name": "[parameters('grafanaSku')]"
+ },
+ "location": "[parameters('grafanaLocation')]",
+ "properties": {
+ "grafanaIntegrations": {
+ "azureMonitorWorkspaceIntegrations": [
+ {
+ "azureMonitorWorkspaceResourceId": "full_resource_id_1"
+ },
+ {
+ "azureMonitorWorkspaceResourceId": "full_resource_id_2"
+ },
+ {
+ "azureMonitorWorkspaceResourceId": "[parameters ('azureMonitorWorkspaceResourceId')]"
+ }
+ ]
+ }
+ }
+ }
+ ```
+
+ In the example JSON above, `full_resource_id_1` and `full_resource_id_2` are already in the Azure Managed Grafana resource JSON. They're added here to the Azure Resource Manager template (ARM template). If you don't have any existing Grafana integrations, don't include these entries.
+
+ The final `azureMonitorWorkspaceResourceId` entry is in the template by default and is used to link to the Azure Monitor workspace resource ID provided in the parameters file.
+
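+You can then deploy the edited template with the edited parameter file by using any valid method for deploying ARM templates. For example, a sketch with the Azure CLI, run against the Grafana workspace's resource group as noted in the prerequisites:
+
+```azurecli
+az deployment group create \
+  --resource-group <grafana-workspace-resource-group> \
+  --template-file existingClusterOnboarding.json \
+  --parameters @existingClusterParam.json
+```
+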
+### Verify extension installation status
+
+Once you have successfully created the Azure Monitor extension for your Azure Arc-enabled Kubernetes cluster, you can check the status of the installation using the Azure portal or CLI. Successful installations show the status as `Installed`.
+
+#### Azure portal
+
+1. In the Azure portal, select the Azure Arc-enabled Kubernetes cluster with the extension installation.
+
+1. From the resource pane on the left, select the **Extensions** item under the **Settings** section.
+
+1. An extension with the name **azuremonitor-metrics** is listed, with the current status in the **Install status** column.
+
+#### Azure CLI
+
+Run the following command to show the latest status of the `Microsoft.AzureMonitor.Containers.Metrics` extension.
+
+```azurecli
+az k8s-extension show \
+--name azuremonitor-metrics \
+--cluster-name <cluster-name> \
+--resource-group <resource-group> \
+--cluster-type connectedClusters
+```
++
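+As a hedged example (the `provisioningState` property name is an assumption about the extension resource's JSON output), you can extract just the state:
+
+```azurecli
+az k8s-extension show \
+--name azuremonitor-metrics \
+--cluster-name <cluster-name> \
+--resource-group <resource-group> \
+--cluster-type connectedClusters \
+--query provisioningState -o tsv
+```
+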
+### Delete the extension instance
+
+To delete the extension instance, use the following CLI command:
+
+```azurecli
+az k8s-extension delete --name azuremonitor-metrics -g <cluster_resource_group> -c <cluster_name> -t connectedClusters
+```
+
+The command only deletes the extension instance. The Azure Monitor workspace and its data are not deleted.
+
+## Disconnected clusters
+
+If your cluster is disconnected from Azure for more than 48 hours, Azure Resource Graph won't have information about your cluster. As a result, your Azure Monitor Workspace may have incorrect information about your cluster state.
+
+## Troubleshooting
+
+For issues with the extension, see the [Troubleshooting Guide](./prometheus-metrics-troubleshoot.md).
+
+## Next steps
+
++ [Default Prometheus metrics configuration in Azure Monitor](prometheus-metrics-scrape-default.md)
++ [Customize scraping of Prometheus metrics in Azure Monitor](prometheus-metrics-scrape-configuration.md)
++ [Use Azure Monitor managed service for Prometheus as data source for Grafana using managed system identity](../essentials/prometheus-grafana.md)
++ [Configure self-managed Grafana to use Azure Monitor managed service for Prometheus with Azure Active Directory](../essentials/prometheus-self-managed-grafana-azure-active-directory.md)
azure-monitor Prometheus Metrics Multiple Workspaces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/prometheus-metrics-multiple-workspaces.md
+
+ Title: Send Prometheus metrics to multiple Azure Monitor workspaces
+description: Describes data collection rules required to send Prometheus metrics from a cluster in Azure Monitor to multiple Azure Monitor workspaces.
++ Last updated : 09/28/2022+++
+# Send Prometheus metrics to multiple Azure Monitor workspaces
+
+You can route metrics to additional Azure Monitor workspaces by creating additional data collection rules. You can send all metrics to every workspace, or send different metrics to different workspaces.
+
+## Send same metrics to multiple Azure Monitor workspaces
+
+You can create multiple data collection rules (DCRs) that point to the same data collection endpoint (DCE) to send metrics from the same Kubernetes cluster to additional Azure Monitor workspaces. If you have a very high volume of metrics, you can also create a new DCE; see the [service limits](../service-limits.md) for ingestion limits. Currently, this capability is only available when you onboard through Resource Manager templates. Follow the [regular onboarding process](prometheus-metrics-enable.md), and then edit the same Resource Manager templates to add additional DCRs and DCEs (if applicable) for your additional Azure Monitor workspaces. For every additional Azure Monitor workspace, you need to edit the template to add parameters, add another DCR, add another DCE (if applicable), assign the Monitoring Data Reader role for the new Azure Monitor workspace, and add another Azure Monitor workspace integration for Grafana.
+
+- Add the following parameters:
+ ```json
+ "parameters": {
+ "azureMonitorWorkspaceResourceId2": {
+ "type": "string"
+ },
+ "azureMonitorWorkspaceLocation2": {
+ "type": "string",
+ "defaultValue": "",
+ "allowedValues": [
+ "eastus2euap",
+ "centraluseuap",
+ "centralus",
+ "eastus",
+ "eastus2",
+ "northeurope",
+ "southcentralus",
+ "southeastasia",
+ "uksouth",
+ "westeurope",
+ "westus",
+ "westus2"
+ ]
+ },
+ ...
+ }
+ ```
+
+- For high metric volume, add an additional Data Collection Endpoint. You *must* replace `<dceName>`:
+ ```json
+ {
+ "type": "Microsoft.Insights/dataCollectionEndpoints",
+ "apiVersion": "2021-09-01-preview",
+ "name": "[variables('dceName')]",
+ "location": "[parameters('azureMonitorWorkspaceLocation2')]",
+ "kind": "Linux",
+ "properties": {}
+ }
+ ```
+- Add an additional DCR with the same or a different Data Collection Endpoint. You *must* replace `<dcrName>`:
+ ```json
+ {
+ "type": "Microsoft.Insights/dataCollectionRules",
+ "apiVersion": "2021-09-01-preview",
+ "name": "<dcrName>",
+ "location": "[parameters('azureMonitorWorkspaceLocation2')]",
+ "kind": "Linux",
+ "properties": {
+ "dataCollectionEndpointId": "[resourceId('Microsoft.Insights/dataCollectionEndpoints/', variables('dceName'))]",
+ "dataFlows": [
+ {
+ "destinations": ["MonitoringAccount2"],
+ "streams": ["Microsoft-PrometheusMetrics"]
+ }
+ ],
+ "dataSources": {
+ "prometheusForwarder": [
+ {
+ "name": "PrometheusDataSource",
+ "streams": ["Microsoft-PrometheusMetrics"],
+ "labelIncludeFilter": {}
+ }
+ ]
+ },
+ "description": "DCR for Azure Monitor Metrics Profile (Managed Prometheus)",
+ "destinations": {
+ "monitoringAccounts": [
+ {
+ "accountResourceId": "[parameters('azureMonitorWorkspaceResourceId2')]",
+ "name": "MonitoringAccount2"
+ }
+ ]
+ }
+ },
+ "dependsOn": [
+ "[resourceId('Microsoft.Insights/dataCollectionEndpoints/', variables('dceName'))]"
+ ]
+ }
+ ```
+
+- Add an additional DCRA with the relevant Data Collection Rule. You *must* replace `<dcraName>`:
+ ```json
+ {
+ "type": "Microsoft.Resources/deployments",
+ "name": "<dcraName>",
+ "apiVersion": "2017-05-10",
+ "subscriptionId": "[variables('clusterSubscriptionId')]",
+ "resourceGroup": "[variables('clusterResourceGroup')]",
+ "dependsOn": [
+ "[resourceId('Microsoft.Insights/dataCollectionEndpoints/', variables('dceName'))]",
+ "[resourceId('Microsoft.Insights/dataCollectionRules', variables('dcrName'))]"
+ ],
+ "properties": {
+ "mode": "Incremental",
+ "template": {
+ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {},
+ "variables": {},
+ "resources": [
+ {
+ "type": "Microsoft.ContainerService/managedClusters/providers/dataCollectionRuleAssociations",
+ "name": "[concat(variables('clusterName'),'/microsoft.insights/', variables('dcraName'))]",
+ "apiVersion": "2021-09-01-preview",
+ "location": "[parameters('clusterLocation')]",
+ "properties": {
+ "description": "Association of data collection rule. Deleting this association will break the data collection for this AKS Cluster.",
+ "dataCollectionRuleId": "[resourceId('Microsoft.Insights/dataCollectionRules', variables('dcrName'))]"
+ }
+ }
+ ]
+ },
+ "parameters": {}
+ }
+ }
+ ```
+- Add an additional Grafana integration:
+ ```json
+ {
+ "type": "Microsoft.Dashboard/grafana",
+ "apiVersion": "2022-08-01",
+ "name": "[split(parameters('grafanaResourceId'),'/')[8]]",
+ "sku": {
+ "name": "[parameters('grafanaSku')]"
+ },
+ "location": "[parameters('grafanaLocation')]",
+ "properties": {
+ "grafanaIntegrations": {
+ "azureMonitorWorkspaceIntegrations": [
+ // Existing azureMonitorWorkspaceIntegrations values (if any)
+ // {
+ // "azureMonitorWorkspaceResourceId": "<value>"
+ // },
+ // {
+ // "azureMonitorWorkspaceResourceId": "<value>"
+ // },
+ {
+ "azureMonitorWorkspaceResourceId": "[parameters('azureMonitorWorkspaceResourceId')]"
+ },
+ {
+ "azureMonitorWorkspaceResourceId": "[parameters('azureMonitorWorkspaceResourceId2')]"
+ }
+ ]
+ }
+ }
+ }
+ ```
+ - Assign `Monitoring Data Reader` role to read data from the new Azure Monitor Workspace:
+
+ ```json
+ {
+ "type": "Microsoft.Authorization/roleAssignments",
+ "apiVersion": "2022-04-01",
+ "name": "[parameters('roleNameGuid')]",
+ "scope": "[parameters('azureMonitorWorkspaceResourceId2')]",
+ "properties": {
+ "roleDefinitionId": "[concat('/subscriptions/', variables('clusterSubscriptionId'), '/providers/Microsoft.Authorization/roleDefinitions/', 'b0d8363b-8ddd-447d-831f-62ca05bff136')]",
+ "principalId": "[reference(resourceId('Microsoft.Dashboard/grafana', split(parameters('grafanaResourceId'),'/')[8]), '2022-08-01', 'Full').identity.principalId]"
+ }
+ }
+
+ ```
+## Send different metrics to different Azure Monitor workspaces
+
+If you want to send some metrics to one Azure Monitor workspace and other metrics to a different one, follow the above steps to add additional DCRs. The value of `microsoft_metrics_include_label` under the `labelIncludeFilter` in the DCR is the identifier for the workspace. To then configure which metrics are routed to which workspace, you can add an extra pre-defined label, `microsoft_metrics_account` to the metrics. The value should be the same as the corresponding `microsoft_metrics_include_label` in the DCR for that workspace. To add the label to the metrics, you can utilize `relabel_configs` in your scrape config. To send all metrics from one job to a certain workspace, add the following relabel config:
+
+```yaml
+relabel_configs:
+- source_labels: [__address__]
+ target_label: microsoft_metrics_account
+ action: replace
+ replacement: "MonitoringAccountLabel1"
+```
+
+The source label is `__address__` because that label always exists, so this relabel config is always applied. The target label must be `microsoft_metrics_account`, and its `replacement` value must match the corresponding `microsoft_metrics_include_label` value in the DCR for the target workspace.
++
+### Example
+
+If you want to configure three different jobs to send the metrics to three different workspaces, then include the following in each data collection rule:
+
+```json
+"labelIncludeFilter": {
+ "microsoft_metrics_include_label": "MonitoringAccountLabel1"
+}
+```
+
+```json
+"labelIncludeFilter": {
+ "microsoft_metrics_include_label": "MonitoringAccountLabel2"
+}
+```
+
+```json
+"labelIncludeFilter": {
+ "microsoft_metrics_include_label": "MonitoringAccountLabel3"
+}
+```
+
+Then in your scrape config, include the same label value for each:
+```yaml
+scrape_configs:
+- job_name: prometheus_ref_app_1
+ kubernetes_sd_configs:
+ - role: pod
+ relabel_configs:
+ - source_labels: [__meta_kubernetes_pod_label_app]
+ action: keep
+ regex: "prometheus-reference-app-1"
+ - source_labels: [__address__]
+ target_label: microsoft_metrics_account
+ action: replace
+ replacement: "MonitoringAccountLabel1"
+- job_name: prometheus_ref_app_2
+ kubernetes_sd_configs:
+ - role: pod
+ relabel_configs:
+ - source_labels: [__meta_kubernetes_pod_label_app]
+ action: keep
+ regex: "prometheus-reference-app-2"
+ - source_labels: [__address__]
+ target_label: microsoft_metrics_account
+ action: replace
+ replacement: "MonitoringAccountLabel2"
+- job_name: prometheus_ref_app_3
+ kubernetes_sd_configs:
+ - role: pod
+ relabel_configs:
+ - source_labels: [__meta_kubernetes_pod_label_app]
+ action: keep
+ regex: "prometheus-reference-app-3"
+ - source_labels: [__address__]
+ target_label: microsoft_metrics_account
+ action: replace
+ replacement: "MonitoringAccountLabel3"
+```
++
+## Next steps
+
+- [Learn more about Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md).
+- [Collect Prometheus metrics from AKS cluster](prometheus-metrics-enable.md).
azure-monitor Prometheus Metrics Scrape Configuration Minimal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/prometheus-metrics-scrape-configuration-minimal.md
+
+ Title: Minimal Prometheus ingestion profile in Azure Monitor
+description: Describes minimal ingestion profile in Azure Monitor managed service for Prometheus and how you can configure it to collect more data.
++ Last updated : 09/28/2022+++++
+# Minimal ingestion profile for Prometheus metrics in Azure Monitor
+The Azure Monitor metrics add-on collects a number of Prometheus metrics by default. `Minimal ingestion profile` is a setting that helps reduce the ingestion volume of metrics, because only metrics used by the default dashboards, default recording rules, and default alerts are collected. This article describes how this setting is configured, and lists the metrics collected by default when `minimal ingestion profile` is enabled. You can modify the collection to enable collecting more metrics, as specified below.
+
+> [!NOTE]
+> For addon based collection, `Minimal ingestion profile` setting is enabled by default.
+
+The following targets are **enabled/ON** by default, meaning you don't have to provide any scrape job configuration for them; the metrics add-on scrapes these targets automatically:
+
+- `cadvisor` (`job=cadvisor`)
+- `nodeexporter` (`job=node`)
+- `kubelet` (`job=kubelet`)
+- `kube-state-metrics` (`job=kube-state-metrics`)
+
+The following targets are available to scrape but are **disabled/OFF** by default. You don't have to provide any scrape job configuration for them, but you do need to turn scraping ON for these targets by using the [ama-metrics-settings-configmap](https://aka.ms/azureprometheus-addon-settings-configmap) under the `default-scrape-settings-enabled` section:
+
+- `core-dns` (`job=kube-dns`)
+- `kube-proxy` (`job=kube-proxy`)
+- `api-server` (`job=kube-apiserver`)
+
+> [!NOTE]
+> The default scrape frequency for all default targets and scrapes is `30 seconds`. You can override it per target by using the [ama-metrics-settings-configmap](https://aka.ms/azureprometheus-addon-settings-configmap) under the `default-targets-scrape-interval-settings` section.
+> You can read more about the four different configmaps used by the metrics add-on [here](prometheus-metrics-scrape-configuration.md).
+
+## Configuration setting
+The setting `default-targets-metrics-keep-list.minimalIngestionProfile="true"` is enabled by default on the metrics add-on. You can specify this setting in the [ama-metrics-settings-configmap](https://aka.ms/azureprometheus-addon-settings-configmap) under the `default-targets-metrics-keep-list` section.
+
+## Scenarios
+
+There are four scenarios where you may want to customize this behavior:
+
+**Ingest only minimal metrics per default target.**<br>
+This is the default behavior with the setting `default-targets-metrics-keep-list.minimalIngestionProfile="true"`. Only metrics listed below are ingested for each of the default targets.
+
+**Ingest a few other metrics for one or more default targets in addition to minimal metrics.**<br>
+Keep `minimalIngestionProfile="true"` and specify the appropriate `keeplistRegexes.*` setting specific to the target, for example `keeplistRegexes.coreDns="X|Y"`. X and Y are merged with the default metric list for the target and then ingested.
++
+**Ingest only a specific set of metrics for a target, and nothing else.**<br>
+Set `minimalIngestionProfile="false"` and specify the appropriate `default-targets-metrics-keep-list.<targetname>="X|Y"` setting specific to the target in the `ama-metrics-settings-configmap`.
++
+**Ingest all metrics scraped for the default target.**<br>
+Set `minimalIngestionProfile="false"` and don't specify any `default-targets-metrics-keep-list.<targetname>` for that target. Changing this setting to `false` can significantly increase metric ingestion volume for each target.
++
+> [!NOTE]
+> The `up` metric isn't part of the allow/keep list because it's ingested per scrape, per target, regardless of the `keepLists` specified. This metric isn't actually scraped but is produced as a result of the scrape operation by the metrics add-on. For histograms and summaries, each series has to be included explicitly in the list (`*bucket`, `*sum`, and `*count` series).
+
+### Minimal ingestion for default ON targets
+The following metrics are allow-listed with `minimalingestionprofile=true` for the default ON targets. These metrics are collected by default because these targets are scraped by default.
+
+**kubelet**<br>
+- `kubelet_volume_stats_used_bytes`
+- `kubelet_node_name`
+- `kubelet_running_pods`
+- `kubelet_running_pod_count`
+- `kubelet_running_containers`
+- `kubelet_running_container_count`
+- `volume_manager_total_volumes`
+- `kubelet_node_config_error`
+- `kubelet_runtime_operations_total`
+- `kubelet_runtime_operations_errors_total`
+- `kubelet_runtime_operations_duration_seconds` `kubelet_runtime_operations_duration_seconds_bucket` `kubelet_runtime_operations_duration_seconds_sum` `kubelet_runtime_operations_duration_seconds_count`
+- `kubelet_pod_start_duration_seconds` `kubelet_pod_start_duration_seconds_bucket` `kubelet_pod_start_duration_seconds_sum` `kubelet_pod_start_duration_seconds_count`
+- `kubelet_pod_worker_duration_seconds` `kubelet_pod_worker_duration_seconds_bucket` `kubelet_pod_worker_duration_seconds_sum` `kubelet_pod_worker_duration_seconds_count`
+- `storage_operation_duration_seconds` `storage_operation_duration_seconds_bucket` `storage_operation_duration_seconds_sum` `storage_operation_duration_seconds_count`
+- `storage_operation_errors_total`
+- `kubelet_cgroup_manager_duration_seconds` `kubelet_cgroup_manager_duration_seconds_bucket` `kubelet_cgroup_manager_duration_seconds_sum` `kubelet_cgroup_manager_duration_seconds_count`
+- `kubelet_pleg_relist_duration_seconds` `kubelet_pleg_relist_duration_seconds_bucket` `kubelet_pleg_relist_duration_sum` `kubelet_pleg_relist_duration_seconds_count`
+- `kubelet_pleg_relist_interval_seconds` `kubelet_pleg_relist_interval_seconds_bucket` `kubelet_pleg_relist_interval_seconds_sum` `kubelet_pleg_relist_interval_seconds_count`
+- `rest_client_requests_total`
+- `rest_client_request_duration_seconds` `rest_client_request_duration_seconds_bucket` `rest_client_request_duration_seconds_sum` `rest_client_request_duration_seconds_count`
+- `process_resident_memory_bytes`
+- `process_cpu_seconds_total`
+- `go_goroutines`
+- `kubelet_volume_stats_capacity_bytes`
+- `kubelet_volume_stats_available_bytes`
+- `kubelet_volume_stats_inodes_used`
+- `kubelet_volume_stats_inodes`
+- `kubernetes_build_info`
+
+**cadvisor**<br>
+- `container_spec_cpu_period`
+- `container_spec_cpu_quota`
+- `container_cpu_usage_seconds_total`
+- `container_memory_rss`
+- `container_network_receive_bytes_total`
+- `container_network_transmit_bytes_total`
+- `container_network_receive_packets_total`
+- `container_network_transmit_packets_total`
+- `container_network_receive_packets_dropped_total`
+- `container_network_transmit_packets_dropped_total`
+- `container_fs_reads_total`
+- `container_fs_writes_total`
+- `container_fs_reads_bytes_total`
+- `container_fs_writes_bytes_total`
+- `container_memory_working_set_bytes`
+- `container_memory_cache`
+- `container_memory_swap`
+- `container_cpu_cfs_throttled_periods_total`
+- `container_cpu_cfs_periods_total`
+- `container_memory_usage_bytes`
+- `kubernetes_build_info`
+
+**kube-state-metrics**<br>
+- `kube_node_status_capacity`
+- `kube_job_status_succeeded`
+- `kube_job_spec_completions`
+- `kube_daemonset_status_desired_number_scheduled`
+- `kube_daemonset_status_number_ready`
+- `kube_deployment_spec_replicas`
+- `kube_deployment_status_replicas_ready`
+- `kube_pod_container_status_last_terminated_reason`
+- `kube_node_status_condition`
+- `kube_pod_container_status_restarts_total`
+- `kube_pod_container_resource_requests`
+- `kube_pod_status_phase`
+- `kube_pod_container_resource_limits`
+- `kube_node_status_allocatable`
+- `kube_pod_info`
+- `kube_pod_owner`
+- `kube_resourcequota`
+- `kube_statefulset_replicas`
+- `kube_statefulset_status_replicas`
+- `kube_statefulset_status_replicas_ready`
+- `kube_statefulset_status_replicas_current`
+- `kube_statefulset_status_replicas_updated`
+- `kube_namespace_status_phase`
+- `kube_node_info`
+- `kube_statefulset_metadata_generation`
+- `kube_pod_labels`
+- `kube_pod_annotations`
+- `kube_horizontalpodautoscaler_status_current_replicas`
+- `kube_horizontalpodautoscaler_status_desired_replicas`
+- `kube_horizontalpodautoscaler_spec_min_replicas`
+- `kube_horizontalpodautoscaler_spec_max_replicas`
+- `kube_node_status_condition`
+- `kube_node_spec_taint`
+- `kube_pod_container_status_waiting_reason`
+- `kube_job_failed`
+- `kube_job_status_start_time`
+- `kube_deployment_spec_replicas`
+- `kube_deployment_status_replicas_available`
+- `kube_deployment_status_replicas_updated`
+- `kube_job_status_active`
+- `kubernetes_build_info`
+- `kube_pod_container_info`
+- `kube_replicaset_owner`
+- `kube_resource_labels` (ex - kube_pod_labels, kube_deployment_labels)
+- `kube_resource_annotations` (ex - kube_pod_annotations, kube_deployment_annotations)
+
+**node-exporter (linux)**<br>
+- `node_cpu_seconds_total`
+- `node_memory_MemAvailable_bytes`
+- `node_memory_Buffers_bytes`
+- `node_memory_Cached_bytes`
+- `node_memory_MemFree_bytes`
+- `node_memory_Slab_bytes`
+- `node_memory_MemTotal_bytes`
+- `node_netstat_Tcp_RetransSegs`
+- `node_netstat_Tcp_OutSegs`
+- `node_netstat_TcpExt_TCPSynRetrans`
+- `node_load1`
+- `node_load5`
+- `node_load15`
+- `node_disk_read_bytes_total`
+- `node_disk_written_bytes_total`
+- `node_disk_io_time_seconds_total`
+- `node_filesystem_size_bytes`
+- `node_filesystem_avail_bytes`
+- `node_filesystem_readonly`
+- `node_network_receive_bytes_total`
+- `node_network_transmit_bytes_total`
+- `node_vmstat_pgmajfault`
+- `node_network_receive_drop_total`
+- `node_network_transmit_drop_total`
+- `node_disk_io_time_weighted_seconds_total`
+- `node_exporter_build_info`
+- `node_time_seconds`
+- `node_uname_info`
+
+### Minimal ingestion for default OFF targets
+The following metrics are allow-listed with `minimalingestionprofile=true` for the default OFF targets. These metrics aren't collected by default because these targets aren't scraped by default. You can turn ON scraping for these targets by setting `default-scrape-settings-enabled.<target-name>=true` in the [ama-metrics-settings-configmap](https://aka.ms/azureprometheus-addon-settings-configmap) under the `default-scrape-settings-enabled` section.
+
+**core-dns**
+- `coredns_build_info`
+- `coredns_panics_total`
+- `coredns_dns_responses_total`
+- `coredns_forward_responses_total`
+- `coredns_dns_request_duration_seconds` `coredns_dns_request_duration_seconds_bucket` `coredns_dns_request_duration_seconds_sum` `coredns_dns_request_duration_seconds_count`
+- `coredns_forward_request_duration_seconds` `coredns_forward_request_duration_seconds_bucket` `coredns_forward_request_duration_seconds_sum` `coredns_forward_request_duration_seconds_count`
+- `coredns_dns_requests_total`
+- `coredns_forward_requests_total`
+- `coredns_cache_hits_total`
+- `coredns_cache_misses_total`
+- `coredns_cache_entries`
+- `coredns_plugin_enabled`
+- `coredns_dns_request_size_bytes` `coredns_dns_request_size_bytes_bucket` `coredns_dns_request_size_bytes_sum` `coredns_dns_request_size_bytes_count`
+- `coredns_dns_response_size_bytes` `coredns_dns_response_size_bytes_bucket` `coredns_dns_response_size_bytes_sum` `coredns_dns_response_size_bytes_count`
+- `process_resident_memory_bytes`
+- `process_cpu_seconds_total`
+- `go_goroutines`
+- `kubernetes_build_info`
+
+**kube-proxy**
+- `kubeproxy_sync_proxy_rules_duration_seconds` `kubeproxy_sync_proxy_rules_duration_seconds_bucket` `kubeproxy_sync_proxy_rules_duration_seconds_sum` `kubeproxy_sync_proxy_rules_duration_seconds_count`
+- `kubeproxy_network_programming_duration_seconds` `kubeproxy_network_programming_duration_seconds_bucket` `kubeproxy_network_programming_duration_seconds_sum` `kubeproxy_network_programming_duration_seconds_count`
+- `rest_client_requests_total`
+- `rest_client_request_duration_seconds` `rest_client_request_duration_seconds_bucket` `rest_client_request_duration_seconds_sum` `rest_client_request_duration_seconds_count`
+- `process_resident_memory_bytes`
+- `process_cpu_seconds_total`
+- `go_goroutines`
+- `kubernetes_build_info`
+
+**api-server**
+- `apiserver_request_duration_seconds` `apiserver_request_duration_seconds_bucket` `apiserver_request_duration_seconds_sum` `apiserver_request_duration_seconds_count`
+- `apiserver_request_total`
+- `workqueue_adds_total`
+- `workqueue_depth`
+- `workqueue_queue_duration_seconds` `workqueue_queue_duration_seconds_bucket` `workqueue_queue_duration_seconds_sum` `workqueue_queue_duration_seconds_count`
+- `process_resident_memory_bytes`
+- `process_cpu_seconds_total`
+- `go_goroutines`
+- `kubernetes_build_info`
+
+**windows-exporter (job=windows-exporter)**<br>
+- `windows_system_system_up_time`
+- `windows_cpu_time_total`
+- `windows_memory_available_bytes`
+- `windows_os_visible_memory_bytes`
+- `windows_memory_cache_bytes`
+- `windows_memory_modified_page_list_bytes`
+- `windows_memory_standby_cache_core_bytes`
+- `windows_memory_standby_cache_normal_priority_bytes`
+- `windows_memory_standby_cache_reserve_bytes`
+- `windows_memory_swap_page_operations_total`
+- `windows_logical_disk_read_seconds_total`
+- `windows_logical_disk_write_seconds_total`
+- `windows_logical_disk_size_bytes`
+- `windows_logical_disk_free_bytes`
+- `windows_net_bytes_total`
+- `windows_net_packets_received_discarded_total`
+- `windows_net_packets_outbound_discarded_total`
+- `windows_container_available`
+- `windows_container_cpu_usage_seconds_total`
+- `windows_container_memory_usage_commit_bytes`
+- `windows_container_memory_usage_private_working_set_bytes`
+- `windows_container_network_receive_bytes_total`
+- `windows_container_network_transmit_bytes_total`
+
+**kube-proxy-windows (job=kube-proxy-windows)**<br>
+- `kubeproxy_sync_proxy_rules_duration_seconds`
+- `kubeproxy_sync_proxy_rules_duration_seconds_bucket`
+- `kubeproxy_sync_proxy_rules_duration_seconds_sum`
+- `kubeproxy_sync_proxy_rules_duration_seconds_count`
+- `rest_client_requests_total`
+- `rest_client_request_duration_seconds`
+- `rest_client_request_duration_seconds_bucket`
+- `rest_client_request_duration_seconds_sum`
+- `rest_client_request_duration_seconds_count`
+- `process_resident_memory_bytes`
+- `process_cpu_seconds_total`
+- `go_goroutines`
+
+## Next steps
+
+- [Learn more about customizing Prometheus metric scraping in Container insights](prometheus-metrics-scrape-configuration.md).
azure-monitor Prometheus Metrics Scrape Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/prometheus-metrics-scrape-configuration.md
+
+ Title: Customize scraping of Prometheus metrics in Azure Monitor
+description: Customize metrics scraping for a Kubernetes cluster with the metrics add-on in Azure Monitor.
++ Last updated : 09/28/2022+++
+# Customize scraping of Prometheus metrics in Azure Monitor managed service for Prometheus
+
+This article provides instructions on customizing metrics scraping for a Kubernetes cluster with the [metrics addon](prometheus-metrics-enable.md) in Azure Monitor.
+
+## Configmaps
+
+Four different configmaps can be configured to provide scrape configuration and other settings for the metrics add-on. All configmaps should be applied to the `kube-system` namespace for any cluster.
+
+> [!NOTE]
+> None of the four configmaps exist by default in the cluster when Managed Prometheus is enabled. Depending on what needs to be customized, you need to deploy any or all of these four configmaps with the same names specified, in the `kube-system` namespace. The ama-metrics pods pick up these configmaps after you deploy them to the `kube-system` namespace, and restart in 2-3 minutes to apply the configuration settings specified in the configmaps.
+
+1. [`ama-metrics-settings-configmap`](https://aka.ms/azureprometheus-addon-settings-configmap)
+   This configmap has the following simple settings that can be configured. Take the configmap from the above GitHub repo, change the settings as required, and apply/deploy the configmap to the `kube-system` namespace for your cluster:
+ * cluster alias (to change the value of `cluster` label in every time-series/metric that's ingested from a cluster)
+   * enable/disable default scrape targets - Turn ON/OFF default scraping based on targets. Scrape configuration for these default targets is already pre-defined/built-in
+   * enable pod annotation based scraping per namespace
+   * metric keep-lists - this setting is used to control which metrics are listed to be allowed from each default target and to change the default behavior
+   * scrape intervals for default/predefined targets. `30 secs` is the default scrape frequency and it can be changed per default target using this configmap
+ * debug-mode - turning this ON helps to debug missing metric/ingestion issues - see more on [troubleshooting](prometheus-metrics-troubleshoot.md#debug-mode)
+2. [`ama-metrics-prometheus-config`](https://aka.ms/azureprometheus-addon-rs-configmap)
+   This configmap can be used to provide the Prometheus scrape config for the add-on replica. The add-on runs a singleton replica, and any cluster-level services can be discovered and scraped by providing scrape jobs in this configmap. You can take the sample configmap from the above GitHub repo, add the scrape jobs that you need, and apply/deploy the configmap to the `kube-system` namespace for your cluster.
+3. [`ama-metrics-prometheus-config-node`](https://aka.ms/azureprometheus-addon-ds-configmap)
+   This configmap can be used to provide the Prometheus scrape config for the add-on DaemonSet that runs on every **Linux** node in the cluster; any node-level targets on each node can be scraped by providing scrape jobs in this configmap. When you use this configmap, you can use the `$NODE_IP` variable in your scrape config, which gets substituted with the corresponding node's IP address in the DaemonSet pod running on each node. This way you get access to scrape anything that runs on that node from the metrics add-on DaemonSet. **Be careful when you use discoveries in the scrape config in this node-level configmap, because every node in the cluster sets up and discovers the target(s) and collects redundant metrics**.
+   You can take the sample configmap from the above GitHub repo, add the scrape jobs that you need, and apply/deploy the configmap to the `kube-system` namespace for your cluster.
+4. [`ama-metrics-prometheus-config-node-windows`](https://aka.ms/azureprometheus-addon-ds-configmap-windows)
+   This configmap can be used to provide the Prometheus scrape config for the add-on DaemonSet that runs on every **Windows** node in the cluster; node-level targets on each node can be scraped by providing scrape jobs in this configmap. When you use this configmap, you can use the `$NODE_IP` variable in your scrape config, which gets substituted with the corresponding node's IP address in the DaemonSet pod running on each node. This way you get access to scrape anything that runs on that node from the metrics add-on DaemonSet. **Be careful when you use discoveries in the scrape config in this node-level configmap, because every node in the cluster sets up and discovers the target(s) and collects redundant metrics**.
+   You can take the sample configmap from the above GitHub repo, add the scrape jobs that you need, and apply/deploy the configmap to the `kube-system` namespace for your cluster.
+
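+As a quick, hedged sketch of deploying one of these configmaps (the file name here is a placeholder for whichever of the four configmaps you customized):
+
+```
+# Apply the customized configmap; the ama-metrics pods pick up the change and restart within a few minutes.
+kubectl apply -f ama-metrics-settings-configmap.yaml --namespace=kube-system
+
+# Check the add-on pods while they restart.
+kubectl get pods --namespace=kube-system | grep ama-metrics
+```
+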
+## Metrics add-on settings configmap
+
+The [ama-metrics-settings-configmap](https://aka.ms/azureprometheus-addon-settings-configmap) can be downloaded, edited, and applied to the cluster to customize the out-of-the-box features of the metrics add-on.
+
+### Enable and disable default targets
+The following table has a list of all the default targets that the Azure Monitor metrics add-on can scrape by default and whether they're initially enabled. Default targets are scraped every 30 seconds. A replica is deployed to scrape cluster-wide targets such as kube-state-metrics. A DaemonSet is also deployed to scrape node-wide targets such as kubelet.
+
+| Key | Type | Enabled | Pod | Description |
+|--||-|-|-|
+| kubelet | bool | `true` | Linux DaemonSet | Scrape kubelet in every node in the K8s cluster without any extra scrape config. |
+| cadvisor | bool | `true` | Linux DaemonSet | Scrape cadvisor in every node in the K8s cluster without any extra scrape config.<br>Linux only. |
+| kubestate | bool | `true` | Linux replica | Scrape kube-state-metrics in the K8s cluster (installed as a part of the add-on) without any extra scrape config. |
+| nodeexporter | bool | `true` | Linux DaemonSet | Scrape node metrics without any extra scrape config.<br>Linux only. |
+| coredns | bool | `false` | Linux replica | Scrape coredns service in the K8s cluster without any extra scrape config. |
+| kubeproxy | bool | `false` | Linux DaemonSet | Scrape kube-proxy in every Linux node discovered in the K8s cluster without any extra scrape config.<br>Linux only. |
+| apiserver | bool | `false` | Linux replica | Scrape the Kubernetes API server in the K8s cluster without any extra scrape config. |
+| windowsexporter | bool | `false` | Windows DaemonSet | Scrape windows-exporter in every node in the K8s cluster without any extra scrape config.<br>Windows only. |
+| windowskubeproxy | bool | `false` | Windows DaemonSet | Scrape windows-kube-proxy in every node in the K8s cluster without any extra scrape config.<br>Windows only. |
+| prometheuscollectorhealth | bool | `false` | Linux replica | Scrape information about the prometheus-collector container, such as the amount and size of time series scraped. |
+
+If you want to turn on the scraping of the default targets that aren't enabled by default, edit the [configmap](https://aka.ms/azureprometheus-addon-settings-configmap) `ama-metrics-settings-configmap` to update the targets listed under `default-scrape-settings-enabled` to `true`. Apply the configmap to your cluster.
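+
+For reference, the relevant section of `ama-metrics-settings-configmap` looks similar to the following sketch. This is only an illustrative excerpt based on the linked sample configmap; the example enables `coredns` and `apiserver` in addition to the targets that are on by default.
+
+```yaml
+default-scrape-settings-enabled: |-
+  kubelet = true
+  coredns = true
+  cadvisor = true
+  kubeproxy = false
+  apiserver = true
+  kubestate = true
+  nodeexporter = true
+  windowsexporter = false
+  windowskubeproxy = false
+  prometheuscollectorhealth = false
+```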
+
+### Customize metrics collected by default targets
+By default, for all the default targets, only minimal metrics used in the default recording rules, alerts, and Grafana dashboards are ingested as described in [minimal-ingestion-profile](prometheus-metrics-scrape-configuration-minimal.md). To collect all metrics from default targets, update the keep-lists in the settings configmap under `default-targets-metrics-keep-list`, and set `minimalingestionprofile` to `false`.
+
+To allow more metrics in addition to the default allowlist for any default target, edit the settings under `default-targets-metrics-keep-list` for the corresponding job you want to change.
+
+For example, `kubelet` is the metric filtering setting for the default target kubelet. Use the following format to filter *in* metrics collected for the default targets by using regex-based filtering.
+
+```
+kubelet = "metricX|metricY"
+apiserver = "mymetric.*"
+```
+
+> [!NOTE]
+> If you use quotation marks or backslashes in the regex, you need to escape them by using a backslash like the examples `"test\'smetric\"s\""` and `testbackslash\\*`.
+
+To further customize the default jobs to change properties like collection frequency or labels, disable the corresponding default target by setting the configmap value for the target to `false`. Then apply the job by using a custom configmap. For details on custom configuration, see [Customize scraping of Prometheus metrics in Azure Monitor](prometheus-metrics-scrape-configuration.md#configure-custom-prometheus-scrape-jobs).
+
+### Cluster alias
+The cluster label appended to every time series scraped uses the last part of the full AKS cluster's Azure Resource Manager resource ID. For example, if the resource ID is `/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/rg-name/providers/Microsoft.ContainerService/managedClusters/myclustername`, the cluster label is `myclustername`.
+
+To override the cluster label in the time series scraped, update the setting `cluster_alias` to any string under `prometheus-collector-settings` in the [configmap](https://aka.ms/azureprometheus-addon-settings-configmap) `ama-metrics-settings-configmap`. You can create this configmap if it doesn't exist in the cluster, or you can edit the existing one if it already exists in your cluster.
+
+The new label also shows up in the cluster parameter dropdown in the Grafana dashboards instead of the default one.
+
+> [!NOTE]
+> Only alphanumeric characters are allowed. Any other characters are replaced with `_`. This change is to ensure that different components that consume this label adhere to the basic alphanumeric convention.
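+
+For example, the relevant section of the settings configmap might look like the following sketch, where `myinternalcluster` is a placeholder alias:
+
+```yaml
+prometheus-collector-settings: |-
+  cluster_alias = "myinternalcluster"
+```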
+
+### Debug mode
+To view every metric that's being scraped for debugging purposes, the metrics add-on agent can be configured to run in debug mode by updating the setting `enabled` to `true` under the `debug-mode` setting in the [configmap](https://aka.ms/azureprometheus-addon-settings-configmap) `ama-metrics-settings-configmap`. You can either create this configmap or edit an existing one. For more information, see the [Debug mode section in Troubleshoot collection of Prometheus metrics](prometheus-metrics-troubleshoot.md#debug-mode).
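+
+For example, the `debug-mode` section of the settings configmap would look similar to this sketch when debug mode is turned on:
+
+```yaml
+debug-mode: |-
+  enabled = true
+```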
+
+### Scrape interval settings
+To update the scrape interval for any target, update the duration in the setting `default-targets-scrape-interval-settings` for that target in the [configmap](https://aka.ms/azureprometheus-addon-settings-configmap) `ama-metrics-settings-configmap`. Set the scrape intervals in the format specified in the [Prometheus configuration reference](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#configuration-file). Otherwise, the default value of 30 seconds is applied to the corresponding targets. For example, to update the scrape interval for the `kubelet` job to `60s`, update the following section in the YAML:
+
+```
+default-targets-scrape-interval-settings: |-
+ kubelet = "60s"
+ coredns = "30s"
+ cadvisor = "30s"
+ kubeproxy = "30s"
+ apiserver = "30s"
+ kubestate = "30s"
+ nodeexporter = "30s"
+ windowsexporter = "30s"
+ windowskubeproxy = "30s"
+ kappiebasic = "30s"
+ prometheuscollectorhealth = "30s"
+ podannotations = "30s"
+```
+Then apply the YAML by using the following command: `kubectl apply -f .\ama-metrics-settings-configmap.yaml`
+
+## Configure custom Prometheus scrape jobs
+
+You can configure the metrics add-on to scrape targets other than the default ones by using the same configuration format as the [Prometheus configuration file](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#configuration-file).
+
+Follow the instructions to [create, validate, and apply the configmap](prometheus-metrics-scrape-validate.md) for your cluster.
+
+### Advanced setup: Configure custom Prometheus scrape jobs for the DaemonSet
+
+The `ama-metrics` Replica pod consumes the custom Prometheus config and scrapes the specified targets. For a cluster with a large number of nodes and pods and a large volume of metrics to scrape, some of the applicable custom scrape targets can be off-loaded from the single `ama-metrics` Replica pod to the `ama-metrics` DaemonSet pod.
+
+The [ama-metrics-prometheus-config-node configmap](https://aka.ms/azureprometheus-addon-ds-configmap) is similar to the replica-set configmap and can be created to have static scrape configs on each node. The scrape config should only target a single node and shouldn't use service discovery. Otherwise, each node tries to scrape all targets and makes many calls to the Kubernetes API server.
+
+For example, the following `node-exporter` config is one of the default targets for the DaemonSet pods. It uses the `$NODE_IP` environment variable, which is already set for every `ama-metrics` add-on container, to target a specific port on the node.
+
+ ```yaml
+ - job_name: nodesample
+ scrape_interval: 30s
+ scheme: http
+ metrics_path: /metrics
+ relabel_configs:
+ - source_labels: [__metrics_path__]
+ regex: (.*)
+ target_label: metrics_path
+ - source_labels: [__address__]
+ replacement: '$NODE_NAME'
+ target_label: instance
+ static_configs:
+ - targets: ['$NODE_IP:9100']
+ ```
+
+Custom scrape targets can follow the same format by using `static_configs` with targets and using the `$NODE_IP` environment variable and specifying the port to scrape. Each pod of the DaemonSet takes the config, scrapes the metrics, and sends them for that node.
+
+## Prometheus configuration tips and examples
+
+Learn some tips from examples in this section.
+
+### Configuration file for custom scrape config
+
+The configuration format is the same as the [Prometheus configuration file](https://aka.ms/azureprometheus-promioconfig). Currently, the following sections are supported:
+
+```yaml
+global:
+ scrape_interval: <duration>
+ scrape_timeout: <duration>
+ external_labels:
+ <labelname1>: <labelvalue>
+ <labelname2>: <labelvalue>
+scrape_configs:
+ - <job-x>
+ - <job-y>
+```
+
+Any other unsupported sections must be removed from the config before they're applied as a configmap. Otherwise, the custom configuration fails validation and isn't applied.
+
+See the [Apply config file](prometheus-metrics-scrape-validate.md#deploy-config-file-as-configmap) section to create a configmap from the Prometheus config.
+
+> [!NOTE]
+> When custom scrape configuration fails to apply because of validation errors, default scrape configuration continues to be used.
+
+## Scrape configs
+Currently, the supported methods of target discovery for a [scrape config](https://aka.ms/azureprometheus-promioconfig-scrape) are either [`static_configs`](https://aka.ms/azureprometheus-promioconfig-static) or [`kubernetes_sd_configs`](https://aka.ms/azureprometheus-promioconfig-sdk8s) for specifying or discovering targets.
+
+### Static config
+
+A static config has a list of static targets and any extra labels to add to them.
+
+```yaml
+scrape_configs:
+  - job_name: example
+    static_configs:
+      - targets: ['10.10.10.1:9090', '10.10.10.2:9090', '10.10.10.3:9090']
+        labels:
+          label1: value1
+          label2: value2
+```
+
+### Kubernetes Service Discovery config
+
+Targets discovered using [`kubernetes_sd_configs`](https://aka.ms/azureprometheus-promioconfig-sdk8s) each have different `__meta_*` labels depending on what role is specified. You can use the labels in the `relabel_configs` section to filter targets or replace labels for the targets.
+
+See the [Prometheus examples](https://aka.ms/azureprometheus-promsampleossconfig) of scrape configs for a Kubernetes cluster.
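+
+For illustration, the following sketch shows a job that discovers service endpoints and keeps only the endpoints of a hypothetical service named `my-app`; the job name and service name are placeholders:
+
+```yaml
+scrape_configs:
+  - job_name: my-service-endpoints       # hypothetical job name
+    kubernetes_sd_configs:
+      - role: endpoints                  # other roles include node, service, pod, and ingress
+    relabel_configs:
+      # Keep only targets whose Kubernetes service is named 'my-app'
+      - source_labels: [__meta_kubernetes_service_name]
+        action: keep
+        regex: my-app
+```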
+
+### Relabel configs
+The `relabel_configs` section is applied at the time of target discovery and applies to each target for the job. The following examples show ways to use `relabel_configs`.
+
+#### Add a label
+Add a new label called `example_label` with the value `example_value` to every metric of the job. Use `__address__` as the source label only because that label always exists, so the new label is added for every target of the job.
+
+```yaml
+relabel_configs:
+- source_labels: [__address__]
+ target_label: example_label
+ replacement: 'example_value'
+```
+
+#### Use Kubernetes Service Discovery labels
+
+If a job is using [`kubernetes_sd_configs`](https://aka.ms/azureprometheus-promioconfig-sdk8s) to discover targets, each role has associated `__meta_*` labels for metrics. The `__*` labels are dropped after discovering the targets. To filter by using them at the metrics level, first keep them using `relabel_configs` by assigning a label name. Then use `metric_relabel_configs` to filter.
+
+```yaml
+# Use the kubernetes namespace as a label called 'kubernetes_namespace'
+relabel_configs:
+- source_labels: [__meta_kubernetes_namespace]
+ action: replace
+ target_label: kubernetes_namespace
+
+# Keep only metrics with the kubernetes namespace 'default'
+metric_relabel_configs:
+- source_labels: [kubernetes_namespace]
+ action: keep
+ regex: 'default'
+```
+
+#### Job and instance relabeling
+
+You can change the `job` and `instance` label values based on the source label, just like any other label.
+
+```yaml
+# Replace the job name with the pod label 'k8s_app'
+relabel_configs:
+- source_labels: [__meta_kubernetes_pod_label_k8s_app]
+ target_label: job
+
+# Replace the instance name with the node name. This is helpful to replace a node IP
+# and port with a value that is more readable
+relabel_configs:
+- source_labels: [__meta_kubernetes_node_name]
+ target_label: instance
+```
+
+### Metric relabel configs
+
+Metric relabel configs are applied after scraping and before ingestion. Use the `metric_relabel_configs` section to filter metrics after scraping. The following examples show how to do so.
+
+#### Drop metrics by name
+
+```yaml
+# Drop the metric named 'example_metric_name'
+metric_relabel_configs:
+- source_labels: [__name__]
+ action: drop
+ regex: 'example_metric_name'
+```
+
+#### Keep only certain metrics by name
+
+```yaml
+# Keep only the metric named 'example_metric_name'
+metric_relabel_configs:
+- source_labels: [__name__]
+ action: keep
+ regex: 'example_metric_name'
+```
+
+```yaml
+# Keep only metrics that start with 'example_'
+metric_relabel_configs:
+- source_labels: [__name__]
+ action: keep
+ regex: '(example_.*)'
+```
+
+#### Rename metrics
+Metric renaming isn't supported.
+
+#### Filter metrics by labels
+
+```yaml
+# Keep metrics only where example_label = 'example'
+metric_relabel_configs:
+- source_labels: [example_label]
+ action: keep
+ regex: 'example'
+```
+
+```yaml
+# Keep metrics only if `example_label` equals `value_1` or `value_2`
+metric_relabel_configs:
+- source_labels: [example_label]
+ action: keep
+ regex: '(value_1|value_2)'
+```
+
+```yaml
+# Keep metrics only if `example_label_1 = value_1` and `example_label_2 = value_2`
+metric_relabel_configs:
+- source_labels: [example_label_1, example_label_2]
+ separator: ';'
+ action: keep
+ regex: 'value_1;value_2'
+```
+
+```yaml
+# Keep metrics only if `example_label` exists as a label
+metric_relabel_configs:
+- source_labels: [example_label_1]
+ action: keep
+ regex: '.+'
+```
+
+### Pod annotation-based scraping
+
+The following scrape config uses the `__meta_*` labels added from the `kubernetes_sd_configs` for the `pod` role to filter for pods with certain annotations.
+
+To scrape certain pods, specify the port, path, and scheme through annotations on the pod. The following job then scrapes only the address specified by the annotations:
+
+- `prometheus.io/scrape`: Enable scraping for this pod.
+- `prometheus.io/scheme`: If the metrics endpoint is secured, you need to set scheme to `https` and most likely set the TLS config.
+- `prometheus.io/path`: If the metrics path isn't `/metrics`, define it with this annotation.
+- `prometheus.io/port`: Specify a single port that you want to scrape.
+
+```yaml
+scrape_configs:
+ - job_name: 'kubernetespods-sample'
+
+ kubernetes_sd_configs:
+ - role: pod
+
+ relabel_configs:
+ # Scrape only pods with the annotation: prometheus.io/scrape = true
+ - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
+ action: keep
+ regex: true
+
+ # If prometheus.io/path is specified, scrape this path instead of /metrics
+ - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
+ action: replace
+ target_label: __metrics_path__
+ regex: (.+)
+
+ # If prometheus.io/port is specified, scrape this port instead of the default
+ - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
+ action: replace
+ regex: ([^:]+)(?::\d+)?;(\d+)
+ replacement: $1:$2
+ target_label: __address__
+
+ # If prometheus.io/scheme is specified, scrape with this scheme instead of http
+ - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scheme]
+ action: replace
+ regex: (http|https)
+ target_label: __scheme__
+
+ # Include the pod namespace as a label for each metric
+ - source_labels: [__meta_kubernetes_namespace]
+ action: replace
+ target_label: kubernetes_namespace
+
+ # Include the pod name as a label for each metric
+ - source_labels: [__meta_kubernetes_pod_name]
+ action: replace
+ target_label: kubernetes_pod_name
+
+ # [Optional] Include all pod labels as labels for each metric
+ - action: labelmap
+ regex: __meta_kubernetes_pod_label_(.+)
+```
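+
+For completeness, a pod scraped by this job would carry annotations similar to the following sketch in its metadata. The path and port values are placeholders for whatever your application exposes:
+
+```yaml
+metadata:
+  annotations:
+    prometheus.io/scrape: "true"
+    prometheus.io/path: "/custom-metrics"   # placeholder path; omit if the app serves /metrics
+    prometheus.io/port: "8080"              # placeholder port
+    prometheus.io/scheme: "http"
+```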
+
+See the [Apply config file](prometheus-metrics-scrape-validate.md#deploy-config-file-as-configmap) section to create a configmap from the Prometheus config.
+
+## Next steps
+
+[Learn more about collecting Prometheus metrics](../essentials/prometheus-metrics-overview.md)
azure-monitor Prometheus Metrics Scrape Default https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/prometheus-metrics-scrape-default.md
+
+ Title: Default Prometheus metrics configuration in Azure Monitor
+description: This article lists the default targets, dashboards, and recording rules for Prometheus metrics in Azure Monitor.
++ Last updated : 09/28/2022+++
+# Default Prometheus metrics configuration in Azure Monitor
+
+This article lists the default targets, dashboards, and recording rules when you [configure Prometheus metrics to be scraped from an Azure Kubernetes Service (AKS) cluster](prometheus-metrics-enable.md) for any AKS cluster.
+
+## Scrape frequency
+
+ The default scrape frequency for all default targets and scrapes is 30 seconds.
+
+## Targets scraped by default
+
+- `cadvisor` (`job=cadvisor`)
+- `nodeexporter` (`job=node`)
+- `kubelet` (`job=kubelet`)
+- `kube-state-metrics` (`job=kube-state-metrics`)
+
+## Metrics collected from default targets
+
+The following metrics are collected by default from each default target. All other metrics are dropped through relabeling rules.
+
+ **cadvisor (job=cadvisor)**<br>
+ - `container_spec_cpu_period`
+ - `container_spec_cpu_quota`
+ - `container_cpu_usage_seconds_total`
+ - `container_memory_rss`
+ - `container_network_receive_bytes_total`
+ - `container_network_transmit_bytes_total`
+ - `container_network_receive_packets_total`
+ - `container_network_transmit_packets_total`
+ - `container_network_receive_packets_dropped_total`
+ - `container_network_transmit_packets_dropped_total`
+ - `container_fs_reads_total`
+ - `container_fs_writes_total`
+ - `container_fs_reads_bytes_total`
+ - `container_fs_writes_bytes_total`
+ - `container_memory_working_set_bytes`
+ - `container_memory_cache`
+ - `container_memory_swap`
+ - `container_cpu_cfs_throttled_periods_total`
+ - `container_cpu_cfs_periods_total`
+ - `container_memory_usage_bytes`
+ - `kubernetes_build_info`
+
+ **kubelet (job=kubelet)**<br>
+ - `kubelet_volume_stats_used_bytes`
+ - `kubelet_node_name`
+ - `kubelet_running_pods`
+ - `kubelet_running_pod_count`
+ - `kubelet_running_containers`
+ - `kubelet_running_container_count`
+ - `volume_manager_total_volumes`
+ - `kubelet_node_config_error`
+ - `kubelet_runtime_operations_total`
+ - `kubelet_runtime_operations_errors_total`
+ - `kubelet_runtime_operations_duration_seconds` `kubelet_runtime_operations_duration_seconds_bucket` `kubelet_runtime_operations_duration_seconds_sum` `kubelet_runtime_operations_duration_seconds_count`
+ - `kubelet_pod_start_duration_seconds` `kubelet_pod_start_duration_seconds_bucket` `kubelet_pod_start_duration_seconds_sum` `kubelet_pod_start_duration_seconds_count`
+ - `kubelet_pod_worker_duration_seconds` `kubelet_pod_worker_duration_seconds_bucket` `kubelet_pod_worker_duration_seconds_sum` `kubelet_pod_worker_duration_seconds_count`
+ - `storage_operation_duration_seconds` `storage_operation_duration_seconds_bucket` `storage_operation_duration_seconds_sum` `storage_operation_duration_seconds_count`
+ - `storage_operation_errors_total`
+ - `kubelet_cgroup_manager_duration_seconds` `kubelet_cgroup_manager_duration_seconds_bucket` `kubelet_cgroup_manager_duration_seconds_sum` `kubelet_cgroup_manager_duration_seconds_count`
+ - `kubelet_pleg_relist_duration_seconds` `kubelet_pleg_relist_duration_seconds_bucket` `kubelet_pleg_relist_duration_sum` `kubelet_pleg_relist_duration_seconds_count`
+ - `kubelet_pleg_relist_interval_seconds` `kubelet_pleg_relist_interval_seconds_bucket` `kubelet_pleg_relist_interval_seconds_sum` `kubelet_pleg_relist_interval_seconds_count`
+ - `rest_client_requests_total`
+ - `rest_client_request_duration_seconds` `rest_client_request_duration_seconds_bucket` `rest_client_request_duration_seconds_sum` `rest_client_request_duration_seconds_count`
+ - `process_resident_memory_bytes`
+ - `process_cpu_seconds_total`
+ - `go_goroutines`
+ - `kubelet_volume_stats_capacity_bytes`
+ - `kubelet_volume_stats_available_bytes`
+ - `kubelet_volume_stats_inodes_used`
+ - `kubelet_volume_stats_inodes`
+ - `kubernetes_build_info`
+
+ **nodeexporter (job=node)**<br>
+ - `node_cpu_seconds_total`
+ - `node_memory_MemAvailable_bytes`
+ - `node_memory_Buffers_bytes`
+ - `node_memory_Cached_bytes`
+ - `node_memory_MemFree_bytes`
+ - `node_memory_Slab_bytes`
+ - `node_memory_MemTotal_bytes`
+ - `node_netstat_Tcp_RetransSegs`
+ - `node_netstat_Tcp_OutSegs`
+ - `node_netstat_TcpExt_TCPSynRetrans`
+ - `node_load1`
+ - `node_load5`
+ - `node_load15`
+ - `node_disk_read_bytes_total`
+ - `node_disk_written_bytes_total`
+ - `node_disk_io_time_seconds_total`
+ - `node_filesystem_size_bytes`
+ - `node_filesystem_avail_bytes`
+ - `node_filesystem_readonly`
+ - `node_network_receive_bytes_total`
+ - `node_network_transmit_bytes_total`
+ - `node_vmstat_pgmajfault`
+ - `node_network_receive_drop_total`
+ - `node_network_transmit_drop_total`
+ - `node_disk_io_time_weighted_seconds_total`
+ - `node_exporter_build_info`
+ - `node_time_seconds`
+ - `node_uname_info`
+
+ **kube-state-metrics (job=kube-state-metrics)**<br>
+ - `kube_job_status_succeeded`
+ - `kube_job_spec_completions`
+ - `kube_daemonset_status_desired_number_scheduled`
+ - `kube_daemonset_status_number_ready`
+ - `kube_deployment_status_replicas_ready`
+ - `kube_pod_container_status_last_terminated_reason`
+ - `kube_pod_container_status_waiting_reason`
+ - `kube_pod_container_status_restarts_total`
+ - `kube_node_status_allocatable`
+ - `kube_pod_owner`
+ - `kube_pod_container_resource_requests`
+ - `kube_pod_status_phase`
+ - `kube_pod_container_resource_limits`
+ - `kube_replicaset_owner`
+ - `kube_resourcequota`
+ - `kube_namespace_status_phase`
+ - `kube_node_status_capacity`
+ - `kube_node_info`
+ - `kube_pod_info`
+ - `kube_deployment_spec_replicas`
+ - `kube_deployment_status_replicas_available`
+ - `kube_deployment_status_replicas_updated`
+ - `kube_statefulset_status_replicas_ready`
+ - `kube_statefulset_status_replicas`
+ - `kube_statefulset_status_replicas_updated`
+ - `kube_job_status_start_time`
+ - `kube_job_status_active`
+ - `kube_job_failed`
+ - `kube_horizontalpodautoscaler_status_desired_replicas`
+ - `kube_horizontalpodautoscaler_status_current_replicas`
+ - `kube_horizontalpodautoscaler_spec_min_replicas`
+ - `kube_horizontalpodautoscaler_spec_max_replicas`
+ - `kubernetes_build_info`
+ - `kube_node_status_condition`
+ - `kube_node_spec_taint`
+ - `kube_pod_container_info`
+ - `kube_resource_labels` (for example, `kube_pod_labels`, `kube_deployment_labels`)
+ - `kube_resource_annotations` (for example, `kube_pod_annotations`, `kube_deployment_annotations`)
+
+## Default targets scraped for Windows
+The following Windows targets are configured to scrape, but scraping is disabled by default, so you don't have to provide any scrape job configuration for them. To scrape these targets, enable them in the [ama-metrics-settings-configmap](https://aka.ms/azureprometheus-addon-settings-configmap) under the `default-scrape-settings-enabled` section.
+
+Two default jobs can be run for Windows that scrape metrics required for the dashboards specific to Windows.
+- `windows-exporter` (`job=windows-exporter`)
+- `kube-proxy-windows` (`job=kube-proxy-windows`)
+
+> [!NOTE]
+> This requires applying or updating the `ama-metrics-settings-configmap` configmap and installing `windows-exporter` on all Windows nodes. For more information, see the [enablement document](./prometheus-metrics-enable.md#enable-prometheus-metric-collection).
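+
+For example, the relevant lines in `ama-metrics-settings-configmap` look similar to the following sketch. Only the Windows entries are shown here as an illustration; the other defaults stay as they are in the sample configmap.
+
+```yaml
+default-scrape-settings-enabled: |-
+  windowsexporter = true
+  windowskubeproxy = true
+```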
+
+## Metrics scraped for Windows
+
+The following metrics are collected when windows-exporter and kube-proxy-windows are enabled.
+
+**windows-exporter (job=windows-exporter)**<br>
+ - `windows_system_system_up_time`
+ - `windows_cpu_time_total`
+ - `windows_memory_available_bytes`
+ - `windows_os_visible_memory_bytes`
+ - `windows_memory_cache_bytes`
+ - `windows_memory_modified_page_list_bytes`
+ - `windows_memory_standby_cache_core_bytes`
+ - `windows_memory_standby_cache_normal_priority_bytes`
+ - `windows_memory_standby_cache_reserve_bytes`
+ - `windows_memory_swap_page_operations_total`
+ - `windows_logical_disk_read_seconds_total`
+ - `windows_logical_disk_write_seconds_total`
+ - `windows_logical_disk_size_bytes`
+ - `windows_logical_disk_free_bytes`
+ - `windows_net_bytes_total`
+ - `windows_net_packets_received_discarded_total`
+ - `windows_net_packets_outbound_discarded_total`
+ - `windows_container_available`
+ - `windows_container_cpu_usage_seconds_total`
+ - `windows_container_memory_usage_commit_bytes`
+ - `windows_container_memory_usage_private_working_set_bytes`
+ - `windows_container_network_receive_bytes_total`
+ - `windows_container_network_transmit_bytes_total`
+
+**kube-proxy-windows (job=kube-proxy-windows)**<br>
+ - `kubeproxy_sync_proxy_rules_duration_seconds`
+ - `kubeproxy_sync_proxy_rules_duration_seconds_bucket`
+ - `kubeproxy_sync_proxy_rules_duration_seconds_sum`
+ - `kubeproxy_sync_proxy_rules_duration_seconds_count`
+ - `rest_client_requests_total`
+ - `rest_client_request_duration_seconds`
+ - `rest_client_request_duration_seconds_bucket`
+ - `rest_client_request_duration_seconds_sum`
+ - `rest_client_request_duration_seconds_count`
+ - `process_resident_memory_bytes`
+ - `process_cpu_seconds_total`
+ - `go_goroutines`
+
+## Dashboards
+
+The following default dashboards are automatically provisioned and configured by Azure Monitor managed service for Prometheus when you [link your Azure Monitor workspace to an Azure Managed Grafana instance](../essentials/azure-monitor-workspace-manage.md#link-a-grafana-workspace). Source code for these dashboards can be found in [this GitHub repository](https://aka.ms/azureprometheus-mixins). The dashboards are provisioned in the specified Azure Managed Grafana instance under the `Managed Prometheus` folder. They're the standard open-source community dashboards for monitoring Kubernetes clusters with Prometheus and Grafana.
+
+- `Kubernetes / Compute Resources / Cluster`
+- `Kubernetes / Compute Resources / Namespace (Pods)`
+- `Kubernetes / Compute Resources / Node (Pods)`
+- `Kubernetes / Compute Resources / Pod`
+- `Kubernetes / Compute Resources / Namespace (Workloads)`
+- `Kubernetes / Compute Resources / Workload`
+- `Kubernetes / Kubelet`
+- `Node Exporter / USE Method / Node`
+- `Node Exporter / Nodes`
+- `Kubernetes / Compute Resources / Cluster (Windows)`
+- `Kubernetes / Compute Resources / Namespace (Windows)`
+- `Kubernetes / Compute Resources / Pod (Windows)`
+- `Kubernetes / USE Method / Cluster (Windows)`
+- `Kubernetes / USE Method / Node (Windows)`
+
+## Recording rules
+
+The following default recording rules are automatically configured by Azure Monitor managed service for Prometheus when you [link your Azure Monitor workspace to an Azure Managed Grafana instance](../essentials/azure-monitor-workspace-manage.md#link-a-grafana-workspace). Source code for these recording rules can be found in [this GitHub repository](https://aka.ms/azureprometheus-mixins). These are the standard open source recording rules used in the dashboards above.
+
+- `cluster:node_cpu:ratio_rate5m`
+- `namespace_cpu:kube_pod_container_resource_requests:sum`
+- `namespace_cpu:kube_pod_container_resource_limits:sum`
+- `:node_memory_MemAvailable_bytes:sum`
+- `namespace_memory:kube_pod_container_resource_requests:sum`
+- `namespace_memory:kube_pod_container_resource_limits:sum`
+- `namespace_workload_pod:kube_pod_owner:relabel`
+- `node_namespace_pod_container:container_cpu_usage_seconds_total:sum_irate`
+- `cluster:namespace:pod_cpu:active:kube_pod_container_resource_requests`
+- `cluster:namespace:pod_cpu:active:kube_pod_container_resource_limits`
+- `cluster:namespace:pod_memory:active:kube_pod_container_resource_requests`
+- `cluster:namespace:pod_memory:active:kube_pod_container_resource_limits`
+- `node_namespace_pod_container:container_memory_working_set_bytes`
+- `node_namespace_pod_container:container_memory_rss`
+- `node_namespace_pod_container:container_memory_cache`
+- `node_namespace_pod_container:container_memory_swap`
+- `instance:node_cpu_utilisation:rate5m`
+- `instance:node_load1_per_cpu:ratio`
+- `instance:node_memory_utilisation:ratio`
+- `instance:node_vmstat_pgmajfault:rate5m`
+- `instance:node_network_receive_bytes_excluding_lo:rate5m`
+- `instance:node_network_transmit_bytes_excluding_lo:rate5m`
+- `instance:node_network_receive_drop_excluding_lo:rate5m`
+- `instance:node_network_transmit_drop_excluding_lo:rate5m`
+- `instance_device:node_disk_io_time_seconds:rate5m`
+- `instance_device:node_disk_io_time_weighted_seconds:rate5m`
+- `instance:node_num_cpu:sum`
+- `node:windows_node:sum`
+- `node:windows_node_num_cpu:sum`
+- `:windows_node_cpu_utilisation:avg5m`
+- `node:windows_node_cpu_utilisation:avg5m`
+- `:windows_node_memory_utilisation:`
+- `:windows_node_memory_MemFreeCached_bytes:sum`
+- `node:windows_node_memory_totalCached_bytes:sum`
+- `:windows_node_memory_MemTotal_bytes:sum`
+- `node:windows_node_memory_bytes_available:sum`
+- `node:windows_node_memory_bytes_total:sum`
+- `node:windows_node_memory_utilisation:ratio`
+- `node:windows_node_memory_utilisation:`
+- `node:windows_node_memory_swap_io_pages:irate`
+- `:windows_node_disk_utilisation:avg_irate`
+- `node:windows_node_disk_utilisation:avg_irate`
+- `node:windows_node_filesystem_usage:`
+- `node:windows_node_filesystem_avail:`
+- `:windows_node_net_utilisation:sum_irate`
+- `node:windows_node_net_utilisation:sum_irate`
+- `:windows_node_net_saturation:sum_irate`
+- `node:windows_node_net_saturation:sum_irate`
+- `windows_pod_container_available`
+- `windows_container_total_runtime`
+- `windows_container_memory_usage`
+- `windows_container_private_working_set_usage`
+- `windows_container_network_received_bytes_total`
+- `windows_container_network_transmitted_bytes_total`
+- `kube_pod_windows_container_resource_memory_request`
+- `kube_pod_windows_container_resource_memory_limit`
+- `kube_pod_windows_container_resource_cpu_cores_request`
+- `kube_pod_windows_container_resource_cpu_cores_limit`
+- `namespace_pod_container:windows_container_cpu_usage_seconds_total:sum_rate`
+
+## Next steps
+
+[Customize scraping of Prometheus metrics](prometheus-metrics-scrape-configuration.md)
azure-monitor Prometheus Metrics Scrape Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/prometheus-metrics-scrape-scale.md
+
+ Title: Scrape Prometheus metrics at scale in Azure Monitor
+description: Guidance on performance that can be expected when collecting metrics at high scale for Azure Monitor managed service for Prometheus.
++ Last updated : 09/28/2022+++
+# Scrape Prometheus metrics at scale in Azure Monitor
+This article provides guidance on performance that can be expected when collecting metrics at high scale for [Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md).
++
+## CPU and memory
+The CPU and memory usage is correlated with the number of bytes of each sample and the number of samples scraped. These benchmarks are based on the [default targets scraped](prometheus-metrics-scrape-default.md), volume of custom metrics scraped, and number of nodes, pods, and containers. These numbers are meant as a reference since usage can still vary significantly depending on the number of time series and bytes per metric.
+
+The upper volume limit per pod is currently about 3-3.5 million samples per minute, depending on the number of bytes per sample. This limitation is addressed when sharding is added in the future.
+
+The agent consists of a deployment with one replica and a DaemonSet for scraping metrics. The DaemonSet scrapes node-level targets such as cAdvisor, kubelet, and node exporter. You can also configure it to scrape any custom targets at the node level with static configs. The replica scrapes everything else, such as kube-state-metrics or custom scrape jobs that use service discovery.
+
+## Comparison between small and large cluster for replica
+
+| Scrape Targets | Samples Sent / Minute | Node Count | Pod Count | Prometheus-Collector CPU Usage (cores) |Prometheus-Collector Memory Usage (bytes) |
+|:---|:---|:---|:---|:---|:---|
+| default targets | 11,344 | 3 | 40 | 12.9 mc | 148 Mi |
+| default targets | 260,000 | 340 | 13000 | 1.10 c | 1.70 GB |
+| default targets<br>+ custom targets | 3.56 million | 340 | 13000 | 5.13 c | 9.52 GB |
+
+## Comparison between small and large cluster for DaemonSets
+
+| Scrape Targets | Samples Sent / Minute Total | Samples Sent / Minute / Pod | Node Count | Pod Count | Prometheus-Collector CPU Usage Total (cores) |Prometheus-Collector Memory Usage Total (bytes) | Prometheus-Collector CPU Usage / Pod (cores) |Prometheus-Collector Memory Usage / Pod (bytes) |
+|:---|:---|:---|:---|:---|:---|:---|:---|:---|
+| default targets | 9,858 | 3,327 | 3 | 40 | 41.9 mc | 581 Mi | 14.7 mc | 189 Mi |
+| default targets | 2.3 million | 14,400 | 340 | 13000 | 805 mc | 305.34 GB | 2.36 mc | 898 Mi |
+
+For more custom metrics, the single pod behaves the same as the replica pod depending on the volume of custom metrics.
++
+## Schedule ama-metrics replica pod on a node pool with more resources
+
+A large volume of metrics per pod requires a node with enough CPU and memory. If the *ama-metrics* replica pod isn't scheduled on a node or node pool that has enough resources, it might repeatedly get OOMKilled and go into CrashLoopBackoff. To overcome this issue, if your cluster has a node or node pool with more resources (in the [system node pool](../../aks/use-system-pools.md#system-and-user-node-pools)) and you want the replica scheduled on that node, add the label `azuremonitor/metrics.replica.preferred=true` to the node, and the replica pod gets scheduled on it. You can also create additional system pools, if needed, with larger nodes and add the same label to their nodes or node pool. It's better to add the label to the [node pool](../../aks/use-labels.md#updating-labels-on-existing-node-pools) rather than to individual nodes so that newer nodes in the same pool can also be used for scheduling.
+
+ ```
+ kubectl label nodes <node-name> azuremonitor/metrics.replica.preferred="true"
+ ```
+## Next steps
+
+- [Troubleshoot issues with Prometheus data collection](prometheus-metrics-troubleshoot.md).
azure-monitor Prometheus Metrics Scrape Validate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/prometheus-metrics-scrape-validate.md
+
+ Title: Create, validate and troubleshoot custom configuration file for Prometheus metrics in Azure Monitor
+description: Describes how to create a custom configuration file for Prometheus metrics in Azure Monitor and use the validation tool before applying it to your Kubernetes cluster.
++ Last updated : 09/28/2022+++
+# Create and validate custom configuration file for Prometheus metrics in Azure Monitor
+
+In addition to the default scrape targets that the Azure Monitor Prometheus agent scrapes, use the following steps to provide more scrape config to the agent by using a configmap. The Azure Monitor Prometheus agent doesn't understand or process operator [CRDs](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/) for scrape configuration. Instead, it uses the native Prometheus configuration as defined in [Prometheus configuration](https://aka.ms/azureprometheus-promioconfig-scrape).
+
+The three configmaps that can be used for custom target scraping are:
+- `ama-metrics-prometheus-config` - When a configmap with this name is created, scrape jobs defined in it are run from the Azure Monitor metrics replica pod running in the cluster.
+- `ama-metrics-prometheus-config-node` - When a configmap with this name is created, scrape jobs defined in it are run from each **Linux** DaemonSet pod running in the cluster. For more information, see [Advanced setup](prometheus-metrics-scrape-configuration.md#advanced-setup-configure-custom-prometheus-scrape-jobs-for-the-daemonset).
+- `ama-metrics-prometheus-config-node-windows` - When a configmap with this name is created, scrape jobs defined in it are run from each **Windows** DaemonSet pod. For more information, see [Advanced setup](prometheus-metrics-scrape-configuration.md#advanced-setup-configure-custom-prometheus-scrape-jobs-for-the-daemonset).
+
+## Create Prometheus configuration file
+
+An easier way to author Prometheus scrape configuration jobs:
+1. Use a YAML config file to author and define the scrape jobs.
+2. Validate the config file by using the `promconfigvalidator` tool (as described in this article), and then convert the config file to a configmap.
+3. Deploy the config file as a configmap to your cluster.
+
+Authoring the YAML config in a separate file makes it easier to avoid the unintended spaces that can be introduced when you author space-sensitive scrape config directly inside a configmap.
+
+Create a Prometheus scrape configuration file named `prometheus-config`. For more information, see [configuration tips and examples](prometheus-metrics-scrape-configuration.md#prometheus-configuration-tips-and-examples), which gives more details on authoring scrape config for Prometheus. You can also refer to the [Prometheus.io](https://aka.ms/azureprometheus-promio) scrape configuration [reference](https://aka.ms/azureprometheus-promioconfig-scrape). Your config file lists the scrape configs under the `scrape_configs` section and can optionally use the global section to set the global `scrape_interval`, `scrape_timeout`, and `external_labels`.
++
+> [!TIP]
+> Changes to the global section affect both the default configs and the custom config.
+
+Here is a sample Prometheus scrape config file:
+
+```
+global:
+ scrape_interval: 30s
+scrape_configs:
+- job_name: my_static_config
+ scrape_interval: 60s
+ static_configs:
+ - targets: ['my-static-service.svc.cluster.local:1234']
+
+- job_name: prometheus_example_app
+ scheme: http
+ kubernetes_sd_configs:
+ - role: service
+ relabel_configs:
+ - source_labels: [__meta_kubernetes_service_name]
+ action: keep
+ regex: "prometheus-example-service"
+```
+
+## Validate the scrape config file
+
+The agent uses a custom `promconfigvalidator` tool to validate the Prometheus config given to it through the configmap. If the config isn't valid, the add-on agent rejects the custom configuration. Once you have your Prometheus config file, you can *optionally* use the `promconfigvalidator` tool to validate your config before creating a configmap that the agent consumes.
+
+The `promconfigvalidator` tool is shipped inside the Azure Monitor metrics addon pod(s). You can use any of the `ama-metrics-node-*` pods in `kube-system` namespace in your cluster to download the tool for validation. Use `kubectl cp` to download the tool and its configuration:
+
+```
+for podname in $(kubectl get pods -l rsName=ama-metrics -n=kube-system -o json | jq -r '.items[].metadata.name'); do kubectl cp -n=kube-system "${podname}":/opt/promconfigvalidator ./promconfigvalidator; kubectl cp -n=kube-system "${podname}":/opt/microsoft/otelcollector/collector-config-template.yml ./collector-config-template.yml; chmod 500 promconfigvalidator; done
+```
+
+After copying the executable and the yaml, locate the path of your Prometheus configuration file that you authored. Then replace `<config path>` in the command and run the validator with the command:
+
+```
+./promconfigvalidator/promconfigvalidator --config "<config path>" --otelTemplate "./promconfigvalidator/collector-config-template.yml"
+```
+
+Running the validator generates the merged configuration file `merged-otel-config.yaml` if no path is provided with the optional `output` parameter. Don't use this autogenerated merged file as config to the metrics collector agent, as it's only used for tool validation and debugging purposes.
+
+### Deploy config file as configmap
+Your custom Prometheus configuration file is consumed as a field named `prometheus-config` inside one of the metrics add-on configmaps `ama-metrics-prometheus-config`, `ama-metrics-prometheus-config-node`, or `ama-metrics-prometheus-config-node-windows` in the `kube-system` namespace. To create a configmap from the scrape config file you created above, rename your Prometheus configuration file to `prometheus-config` (with no file extension) and run one or more of the following commands, depending on which configmap you want to create for your custom scrape job config.
+
+For example, to create the configmap to be used by the replica set:
+```
+kubectl create configmap ama-metrics-prometheus-config --from-file=prometheus-config -n kube-system
+```
+This creates a configmap named `ama-metrics-prometheus-config` in the `kube-system` namespace. The Azure Monitor metrics replica pod restarts in 30-60 seconds to apply the new config. To see if there are any issues with config validation, processing, or merging, look at the logs of the `ama-metrics` replica pods.
+
+For example, to create the configmap to be used by the Linux DaemonSet:
+```
+kubectl create configmap ama-metrics-prometheus-config-node --from-file=prometheus-config -n kube-system
+```
+This creates a configmap named `ama-metrics-prometheus-config-node` in the `kube-system` namespace. Every Azure Monitor metrics Linux DaemonSet pod restarts in 30-60 seconds to apply the new config. To see if there are any issues with config validation, processing, or merging, look at the logs of the `ama-metrics-node` Linux DaemonSet pods.
++
+For example, to create the configmap to be used by the Windows DaemonSet:
+```
+kubectl create configmap ama-metrics-prometheus-config-node-windows --from-file=prometheus-config -n kube-system
+```
+
+This creates a configmap named `ama-metrics-prometheus-config-node-windows` in the `kube-system` namespace. Every Azure Monitor metrics Windows DaemonSet pod restarts in 30-60 seconds to apply the new config. To see if there are any issues with config validation, processing, or merging, look at the logs of the `ama-metrics-win-node` Windows DaemonSet pods.
++
+*Ensure that the Prometheus config file is named `prometheus-config` before you run these commands, because the file name is used as the configmap setting name.*
+
+A sample of the `ama-metrics-prometheus-config` configmap is [here](https://github.com/Azure/prometheus-collector/blob/main/otelcollector/configmaps/ama-metrics-prometheus-config-configmap.yaml).
+
+### Troubleshooting
+If you successfully created the configmap (`ama-metrics-prometheus-config` or `ama-metrics-prometheus-config-node`) in the **kube-system** namespace and still don't see the custom targets being scraped, check for errors in the **replica pod** logs (for the **ama-metrics-prometheus-config** configmap) or the **DaemonSet pod** logs (for the **ama-metrics-prometheus-config-node** configmap) by using *kubectl logs*. Make sure there are no errors in the *Start Merging Default and Custom Prometheus Config* section with the prefix *prometheus-config-merger*.
+
+## Next steps
+
+- [Learn more about collecting Prometheus metrics](../essentials/prometheus-metrics-overview.md).
azure-monitor Prometheus Metrics Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/prometheus-metrics-troubleshoot.md
+
+ Title: Troubleshoot collection of Prometheus metrics in Azure Monitor
+description: Steps that you can take if you aren't collecting Prometheus metrics as expected.
++ Last updated : 09/28/2022+++
+# Troubleshoot collection of Prometheus metrics in Azure Monitor
+
+Follow the steps in this article to determine the cause of Prometheus metrics not being collected as expected in Azure Monitor.
+
+The replica pod scrapes metrics from `kube-state-metrics` and the custom scrape targets in the `ama-metrics-prometheus-config` configmap. The DaemonSet pods scrape metrics from the following targets on their respective node: `kubelet`, `cAdvisor`, `node-exporter`, and the custom scrape targets in the `ama-metrics-prometheus-config-node` configmap. The pod that you view the logs and the Prometheus UI for depends on which scrape target you're investigating.
+
+## Metrics throttling
+
+In the Azure portal, navigate to your Azure Monitor Workspace. Go to `Metrics` and verify that the metrics `Active Time Series % Utilization` and `Events Per Minute Ingested % Utilization` are below 100%.
++
+If either of them is more than 100%, ingestion into this workspace is being throttled. In the same workspace, navigate to `New Support Request` to create a request to increase the limits. Select the issue type as `Service and subscription limits (quotas)` and the quota type as `Managed Prometheus`.
+
+## Intermittent gaps in metric data collection
+
+During node updates, you might see a 1- to 2-minute gap in metric data for metrics collected from the cluster-level collector, because the node it runs on is being updated as part of a normal update process. This gap affects cluster-wide targets such as kube-state-metrics and any custom application targets that are specified. It occurs whether your cluster is updated manually or via autoupdate. This behavior is expected, and none of our recommended alert rules are affected by it.
+
+## Pod status
+
+Check the pod status with the following command:
+
+```
+kubectl get pods -n kube-system | grep ama-metrics
+```
+
+- There should be one `ama-metrics-xxxxxxxxxx-xxxxx` replica pod, one `ama-metrics-ksm-*` pod, and an `ama-metrics-node-*` pod for each node on the cluster.
+- Each pod state should be `Running`, and the number of restarts should equal the number of configmap changes that have been applied.
++
+If each pod state is `Running` but one or more pods have restarts, run the following command:
+
+```
+kubectl describe pod <ama-metrics pod name> -n kube-system
+```
+
+- This command provides the reason for the restarts. Pod restarts are expected if configmap changes have been made. If the reason for the restart is `OOMKilled`, the pod can't keep up with the volume of metrics. See the scale recommendations for the volume of metrics.
+
+If the pods are running as expected, the next place to check is the container logs.
+
+## Container logs
+View the container logs with the following command:
+
+```
+kubectl logs <ama-metrics pod name> -n kube-system -c prometheus-collector
+```
+
+ At startup, any initial errors are printed in red, while warnings are printed in yellow. (Viewing the colored logs requires at least PowerShell version 7 or a Linux distribution.)
+
+- Verify if there's an issue with getting the authentication token:
+ - The message *No configuration present for the AKS resource* gets logged every 5 minutes.
+ - The pod restarts every 15 minutes to try again with the error: *No configuration present for the AKS resource*.
+ - If so, check that the Data Collection Rule and Data Collection Endpoint exist in your resource group.
+ - Also verify that the Azure Monitor Workspace exists.
+ - Verify that you don't have a private AKS cluster and that it's not linked to an Azure Monitor Private Link Scope for any other service. This scenario is currently not supported.
+- Verify there are no errors with parsing the Prometheus config, merging with any default scrape targets enabled, and validating the full config.
+- If you did include a custom Prometheus config, verify that it's recognized in the logs. If not:
+ - Verify that your configmap has the correct name: `ama-metrics-prometheus-config` in the `kube-system` namespace.
+ - Verify that in the configmap your Prometheus config is under a section called `prometheus-config` under `data` like shown here:
+ ```
+ kind: ConfigMap
+ apiVersion: v1
+ metadata:
+ name: ama-metrics-prometheus-config
+ namespace: kube-system
+ data:
+ prometheus-config: |-
+ scrape_configs:
+ - job_name: <your scrape job here>
+ ```
+- Verify there are no errors from `MetricsExtension` regarding authenticating with the Azure Monitor workspace.
+- Verify there are no errors from the `OpenTelemetry collector` about scraping the targets.
+
+Run the following command:
+
+```
+kubectl logs <ama-metrics pod name> -n kube-system -c addon-token-adapter
+```
+
+- This command shows an error if there's an issue with authenticating with the Azure Monitor workspace. The example below shows logs with no issues:
+ :::image type="content" source="media/prometheus-metrics-troubleshoot/addon-token-adapter.png" alt-text="Screenshot showing addon token log." lightbox="media/prometheus-metrics-troubleshoot/addon-token-adapter.png" :::
+
+If there are no errors in the logs, the Prometheus interface can be used for debugging to verify the expected configuration and targets being scraped.
+
+## Prometheus interface
+
+Every `ama-metrics-*` pod has the Prometheus agent mode user interface available on port 9090. Port-forward into either the replica pod or one of the DaemonSet pods to check the config, service discovery, and targets endpoints. Use them to verify that the custom configs are correct, that the intended targets have been discovered for each job, and that there are no errors with scraping specific targets.
+
+Run the command `kubectl port-forward <ama-metrics pod> -n kube-system 9090`.
+
+- Open a browser to the address `127.0.0.1:9090/config`. This user interface has the full scrape configuration. Verify all jobs are included in the config.
++
+- Go to `127.0.0.1:9090/service-discovery` to view the targets discovered by the specified service discovery object and what the relabel_configs have filtered the targets to be. For example, if metrics from a certain pod are missing, you can find whether that pod was discovered and what its URI is. You can then use this URI when looking at the targets to see if there are any scrape errors.
++
+- Go to `127.0.0.1:9090/targets` to view all jobs, the last time the endpoint for each job was scraped, and any errors.
+
+If there are no issues and the intended targets are being scraped, you can view the exact metrics being scraped by enabling debug mode.
+
+## Debug mode
+
+The metrics addon can be configured to run in debug mode by changing the configmap setting `enabled` under `debug-mode` to `true` by following the instructions [here](prometheus-metrics-scrape-configuration.md#debug-mode). This mode can affect performance and should only be enabled for a short time for debugging purposes.
+
+When enabled, all Prometheus metrics that are scraped are hosted at port 9090. Run the following command:
+
+```
+kubectl port-forward <ama-metrics pod name> -n kube-system 9090
+```
+
+Go to `127.0.0.1:9090/metrics` in a browser to see if the metrics were scraped by the OpenTelemetry Collector. This user interface can be accessed for every `ama-metrics-*` pod. If metrics aren't there, there could be an issue with the metric or label name lengths or the number of labels. Also check for exceeding the ingestion quota for Prometheus metrics as specified in this article.
+
+## Metric names, label names & label values
+
+Agent based scraping currently has the limitations in the following table:
+
+| Property | Limit |
+|:---|:---|
+| Label name length | Less than or equal to 511 characters. When this limit is exceeded for any time series in a job, the entire scrape job fails, and metrics get dropped from that job before ingestion. You can see up=0 for that job, and the targets UI shows the reason for up=0. |
+| Label value length | Less than or equal to 1023 characters. When this limit is exceeded for any time series in a job, the entire scrape job fails, and metrics get dropped from that job before ingestion. You can see up=0 for that job, and the targets UI shows the reason for up=0. |
+| Number of labels per time series | Less than or equal to 63. When this limit is exceeded for any time series in a job, the entire scrape job fails, and metrics get dropped from that job before ingestion. You can see up=0 for that job, and the targets UI shows the reason for up=0. |
+| Metric name length | Less than or equal to 511 characters. When this limit is exceeded for any time series in a job, only that particular series gets dropped. MetricextensionConsoleDebugLog has traces for the dropped metric. |
+| Label names with different casing | Two labels within the same metric sample that have different casing are treated as duplicate labels and are dropped when ingested. For example, the time series `my_metric{ExampleLabel="label_value_0", examplelabel="label_value_1"}` is dropped due to duplicate labels because `ExampleLabel` and `examplelabel` are seen as the same label name. |
+
+## Check ingestion quota on Azure Monitor workspace
+
+If you see metrics missing, first check whether the ingestion limits are being exceeded for your Azure Monitor workspace. In the Azure portal, you can check the current usage for any Azure Monitor workspace under the `Metrics` menu. The following utilization metrics are available as standard metrics for each Azure Monitor workspace.
+
+- Active Time Series - The number of unique time series recently ingested into the workspace over the previous 12 hours
+- Active Time Series Limit - The limit on the number of unique time series that can be actively ingested into the workspace
+- Active Time Series % Utilization - The percentage of current active time series being utilized
+- Events Per Minute Ingested - The number of events (samples) per minute recently received
+- Events Per Minute Ingested Limit - The maximum number of events per minute that can be ingested before getting throttled
+- Events Per Minute Ingested % Utilization - The percentage of the current metric ingestion rate limit being utilized
+
+Refer to [service quotas and limits](../service-limits.md#prometheus-metrics) for default quotas and to understand what can be increased based on your usage. You can request a quota increase for Azure Monitor workspaces by using the `Support Request` menu for the Azure Monitor workspace. Ensure that you include the ID, internal ID, and location/region for the Azure Monitor workspace in the support request. You can find these values in the `Properties` menu for the Azure Monitor workspace in the Azure portal.
++
+## Next steps
+
+- [Check considerations for collecting metrics at high scale](prometheus-metrics-scrape-scale.md).
azure-monitor Prometheus Remote Write Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/prometheus-remote-write-active-directory.md
+
+ Title: Remote-write in Azure Monitor Managed Service for Prometheus using Azure Active Directory
+description: Describes how to configure remote-write to send data from self-managed Prometheus running in a Kubernetes cluster on-premises or in another cloud by using Azure Active Directory authentication.
++ Last updated : 11/01/2022++
+# Configure remote write for Azure Monitor managed service for Prometheus using Azure Active Directory authentication
+This article describes how to configure [remote-write](prometheus-remote-write.md) to send data from self-managed Prometheus running in your AKS cluster or Azure Arc-enabled Kubernetes cluster using Azure Active Directory authentication.
+
+## Cluster configurations
+This article applies to the following cluster configurations:
+
+- Azure Kubernetes service (AKS)
+- Azure Arc-enabled Kubernetes cluster
+- Kubernetes cluster running in another cloud or on-premises
+
+> [!NOTE]
+> For Azure Kubernetes service (AKS) or Azure Arc-enabled Kubernetes cluster, managed identity authentication is recommended. See [Azure Monitor managed service for Prometheus remote write - managed identity](prometheus-remote-write-managed-identity.md).
+
+## Prerequisites
+See prerequisites at [Azure Monitor managed service for Prometheus remote write](prometheus-remote-write.md#prerequisites).
+
+## Create Azure Active Directory application
+Follow the procedure at [Register an application with Azure AD and create a service principal](../../active-directory/develop/howto-create-service-principal-portal.md#register-an-application-with-azure-ad-and-create-a-service-principal) to register an application for Prometheus remote-write and create a service principal.
++
+## Get the client ID of the Azure Active Directory application
+
+1. From the **Azure Active Directory** menu in Azure Portal, select **App registrations**.
+2. Locate your application and note the client ID.
+
+ :::image type="content" source="media/prometheus-remote-write-active-directory/application-client-id.png" alt-text="Screenshot showing client ID of Azure Active Directory application." lightbox="media/prometheus-remote-write-active-directory/application-client-id.png":::
+
+## Assign Monitoring Metrics Publisher role on the data collection rule to the application
+The application requires the *Monitoring Metrics Publisher* role on the data collection rule associated with your Azure Monitor workspace.
+
+1. From the menu of your Azure Monitor Workspace account, click the **Data collection rule** to open the **Overview** page for the data collection rule.
+
+ :::image type="content" source="media/prometheus-remote-write-managed-identity/azure-monitor-account-data-collection-rule.png" alt-text="Screenshot showing data collection rule used by Azure Monitor workspace." lightbox="media/prometheus-remote-write-managed-identity/azure-monitor-account-data-collection-rule.png":::
+
+2. Click on **Access control (IAM)** in the **Overview** page for the data collection rule.
+
+ :::image type="content" source="media/prometheus-remote-write-managed-identity/azure-monitor-account-access-control.png" alt-text="Screenshot showing Access control (IAM) menu item on the data collection rule Overview page." lightbox="media/prometheus-remote-write-managed-identity/azure-monitor-account-access-control.png":::
+
+3. Click **Add** and then **Add role assignment**.
+
+ :::image type="content" source="media/prometheus-remote-write-managed-identity/data-collection-rule-add-role-assignment.png" alt-text="Screenshot showing adding a role assignment on Access control pages." lightbox="media/prometheus-remote-write-managed-identity/data-collection-rule-add-role-assignment.png":::
+
+4. Select **Monitoring Metrics Publisher** role and click **Next**.
+
+ :::image type="content" source="media/prometheus-remote-write-managed-identity/add-role-assignment.png" alt-text="Screenshot showing list of role assignments." lightbox="media/prometheus-remote-write-managed-identity/add-role-assignment.png":::
+
+5. Select **User, group, or service principal** and then click **Select members**. Select the application that you created and click **Select**.
+
+ :::image type="content" source="media/prometheus-remote-write-active-directory/select-application.png" alt-text="Screenshot showing selection of application." lightbox="media/prometheus-remote-write-active-directory/select-application.png":::
+
+6. Click **Review + assign** to complete the role assignment.
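+
+As an alternative to the portal steps above, the same role assignment can be created from the Azure CLI. The application client ID and the resource ID of the data collection rule are placeholders.
+
+```azurecli
+az role assignment create \
+    --role "Monitoring Metrics Publisher" \
+    --assignee <application-client-id> \
+    --scope <data-collection-rule-resource-id>
+```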
++
+## Create an Azure key vault and generate certificate
+
+1. If you don't already have an Azure key vault, then create a new one using the guidance at [Create a vault](../../key-vault/general/quick-create-portal.md#create-a-vault).
+2. Create a certificate using the guidance at [Add a certificate to Key Vault](../../key-vault/certificates/quick-create-portal.md#add-a-certificate-to-key-vault).
+3. Download the newly generated certificate in CER format using the guidance at [Export certificate from Key Vault](../../key-vault/certificates/quick-create-portal.md#export-certificate-from-key-vault).
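+
+The same steps can be scripted with the Azure CLI. The following is a minimal sketch that assumes a self-signed certificate created with the default policy; all names are placeholders.
+
+```azurecli
+# Create the key vault.
+az keyvault create --name <keyvault-name> --resource-group <resource-group-name> --location <region>
+
+# Create a self-signed certificate by using the default certificate policy.
+az keyvault certificate create --vault-name <keyvault-name> --name <cert-name> \
+    --policy "$(az keyvault certificate get-default-policy)"
+
+# Download the public portion of the certificate in CER (DER) format.
+az keyvault certificate download --vault-name <keyvault-name> --name <cert-name> \
+    --file <cert-name>.cer --encoding DER
+```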
+
+## Add certificate to the Azure Active Directory application
+
+1. From the menu for your Azure Active Directory application, select **Certificates & secrets**.
+2. Click **Upload certificate** and select the certificate that you downloaded.
+
+ :::image type="content" source="media/prometheus-remote-write-active-directory/upload-certificate.png" alt-text="Screenshot showing upload of certificate for Azure Active Directory application." lightbox="media/prometheus-remote-write-active-directory/upload-certificate.png":::
+
+> [!WARNING]
+> Certificates have an expiration date, and it's the responsibility of the user to keep these certificates valid.
+
+## Add CSI driver and storage for cluster
+
+> [!NOTE]
+> Azure Key Vault CSI driver configuration is just one way to get the certificate mounted on the pod. The remote write container only needs a local path to a certificate in the pod to use as the `AZURE_CLIENT_CERTIFICATE_PATH` value in the [Deploy Side car and configure remote write on the Prometheus server](#deploy-side-car-and-configure-remote-write-on-the-prometheus-server) step below.
+
+This step is only required if you didn't enable Azure Key Vault Provider for Secrets Store CSI Driver when you created your cluster.
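+
+To check whether the add-on is already enabled, you can query the cluster with the Azure CLI; the following command returns `true` when the Azure Key Vault Provider for Secrets Store CSI Driver is enabled.
+
+```azurecli
+az aks show --resource-group <resource-group-name> --name <aks-cluster-name> \
+    --query "addonProfiles.azureKeyvaultSecretsProvider.enabled" -o tsv
+```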
+
+1. Run the following Azure CLI command to enable Azure Key Vault Provider for Secrets Store CSI Driver for your cluster.
+
+ ```azurecli
+ az aks enable-addons --addons azure-keyvault-secrets-provider --name <aks-cluster-name> --resource-group <resource-group-name>
+ ```
+
+2. Run the following commands to give the identity access to the key vault.
+
+ ```azurecli
+ # show client id of the managed identity of the cluster
+ az aks show -g <resource-group> -n <cluster-name> --query addonProfiles.azureKeyvaultSecretsProvider.identity.clientId -o tsv
+
+ # set policy to access keys in your key vault
+ az keyvault set-policy -n <keyvault-name> --key-permissions get --spn <identity-client-id>
+
+ # set policy to access secrets in your key vault
+ az keyvault set-policy -n <keyvault-name> --secret-permissions get --spn <identity-client-id>
+
+ # set policy to access certs in your key vault
+ az keyvault set-policy -n <keyvault-name> --certificate-permissions get --spn <identity-client-id>
+ ```
+
+3. Create a *SecretProviderClass* by saving the following YAML to a file named *secretproviderclass.yml*. Replace the values for `userAssignedIdentityID`, `keyvaultName`, `tenantId` and the objects to retrieve from your key vault. See [Provide an identity to access the Azure Key Vault Provider for Secrets Store CSI Driver](../../aks/csi-secrets-store-identity-access.md) for details on values to use.
+
+ ```yml
+ # This is a SecretProviderClass example using user-assigned identity to access your key vault
+ apiVersion: secrets-store.csi.x-k8s.io/v1
+ kind: SecretProviderClass
+ metadata:
+ name: azure-kvname-user-msi
+ spec:
+ provider: azure
+ parameters:
+ usePodIdentity: "false"
+ useVMManagedIdentity: "true" # Set to true for using managed identity
+ userAssignedIdentityID: <client-id> # Set the clientID of the user-assigned managed identity to use
+ keyvaultName: <key-vault-name> # Set to the name of your key vault
+ cloudName: "" # [OPTIONAL for Azure] if not provided, the Azure environment defaults to AzurePublicCloud
+ objects: |
+ array:
+ - |
+ objectName: <name-of-cert>
+ objectType: secret # object types: secret, key, or cert
+ objectFormat: pfx
+ objectEncoding: base64
+ objectVersion: ""
+ tenantId: <tenant-id> # The tenant ID of the key vault
+ ```
+
+4. Apply the *SecretProviderClass* by running the following command on your cluster.
+
+ ```
+ kubectl apply -f secretproviderclass.yml
+ ```
+
+## Deploy Side car and configure remote write on the Prometheus server
+
+1. Copy the YAML below and save to a file. This YAML assumes you're using 8081 as your listening port. Modify that value if you use a different port.
++
+ ```yml
+ prometheus:
+ prometheusSpec:
+ externalLabels:
+ cluster: <CLUSTER-NAME>
+
+ ## Azure Managed Prometheus currently exports some default mixins in Grafana.
+ ## These mixins are compatible with data scraped by Azure Monitor agent on your
+ ## Azure Kubernetes Service cluster. These mixins aren't compatible with Prometheus
+ ## metrics scraped by the Kube Prometheus stack.
+ ## To make these mixins compatible, uncomment the remote write relabel configuration below:
+
+ ## writeRelabelConfigs:
+ ## - sourceLabels: [metrics_path]
+ ## regex: /metrics/cadvisor
+ ## targetLabel: job
+ ## replacement: cadvisor
+ ## action: replace
+ ## - sourceLabels: [job]
+ ## regex: 'node-exporter'
+ ## targetLabel: job
+ ## replacement: node
+ ## action: replace
+
+ ## https://prometheus.io/docs/prometheus/latest/configuration/configuration/#remote_write
+ remoteWrite:
+ - url: 'http://localhost:8081/api/v1/write'
+
+ # Additional volumes on the output StatefulSet definition.
+ # Required only for AAD based auth
+ volumes:
+ - name: secrets-store-inline
+ csi:
+ driver: secrets-store.csi.k8s.io
+ readOnly: true
+ volumeAttributes:
+ secretProviderClass: azure-kvname-user-msi
+ containers:
+ - name: prom-remotewrite
+ image: <CONTAINER-IMAGE-VERSION>
+ imagePullPolicy: Always
+
+ # Required only for AAD based auth
+ volumeMounts:
+ - name: secrets-store-inline
+ mountPath: /mnt/secrets-store
+ readOnly: true
+ ports:
+ - name: rw-port
+ containerPort: 8081
+ livenessProbe:
+ httpGet:
+ path: /health
+ port: rw-port
+ initialDelaySeconds: 10
+ timeoutSeconds: 10
+ readinessProbe:
+ httpGet:
+ path: /ready
+ port: rw-port
+ initialDelaySeconds: 10
+ timeoutSeconds: 10
+ env:
+ - name: INGESTION_URL
+ value: '<INGESTION_URL>'
+ - name: LISTENING_PORT
+ value: '8081'
+ - name: IDENTITY_TYPE
+ value: aadApplication
+ - name: AZURE_CLIENT_ID
+ value: '<APP-REGISTRATION-CLIENT-ID>'
+ - name: AZURE_TENANT_ID
+ value: '<TENANT-ID>'
+ - name: AZURE_CLIENT_CERTIFICATE_PATH
+ value: /mnt/secrets-store/<CERT-NAME>
+ - name: CLUSTER
+ value: '<CLUSTER-NAME>'
+ ```
++
+2. Replace the following values in the YAML.
+
+ | Value | Description |
+ |:|:|
+ | `<CLUSTER-NAME>` | Name of the cluster that Prometheus is running on (used for the `cluster` external label and the `CLUSTER` environment variable) |
+ | `<CONTAINER-IMAGE-VERSION>` | `mcr.microsoft.com/azuremonitor/prometheus/promdev/prom-remotewrite:prom-remotewrite-20230505.1`<br>This is the remote write container image version. |
+ | `<INGESTION_URL>` | **Metrics ingestion endpoint** from the **Overview** page for the Azure Monitor workspace |
+ | `<APP-REGISTRATION-CLIENT-ID>` | Client ID of your application |
+ | `<TENANT-ID>` | Tenant ID of the Azure Active Directory application |
+ | `<CERT-NAME>` | Name of the certificate |
+
+
+++
+3. Open Azure Cloud Shell and upload the YAML file.
+4. Use helm to apply the YAML file to update your Prometheus configuration with the following CLI commands.
+
+ ```azurecli
+ # set context to your cluster
+ az aks get-credentials -g <aks-rg-name> -n <aks-cluster-name>
+
+ # use helm to update your remote write config
+ helm upgrade -f <YAML-FILENAME>.yml prometheus prometheus-community/kube-prometheus-stack --namespace <namespace where Prometheus pod resides>
+ ```
+
+## Verification and troubleshooting
+See [Azure Monitor managed service for Prometheus remote write](prometheus-remote-write.md#verify-remote-write-is-working-correctly).
+
+## Next steps
+
+- [Setup Grafana to use Managed Prometheus as a data source](../essentials/prometheus-grafana.md).
+- [Learn more about Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md).
azure-monitor Prometheus Remote Write Azure Ad Pod Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/prometheus-remote-write-azure-ad-pod-identity.md
+
+ Title: Configure remote write for Azure Monitor managed service for Prometheus using Microsoft Azure Active Directory pod identity (preview)
+description: Configure remote write for Azure Monitor managed service for Prometheus using Azure AD pod identity (preview)
+++ Last updated : 05/11/2023+++
+# Configure remote write for Azure Monitor managed service for Prometheus using Microsoft Azure Active Directory pod identity (preview)
++
+> [!NOTE]
+> The remote write sidecar should be configured by using the following steps only if the AKS cluster already has Azure AD pod identity enabled. This approach isn't recommended because Azure AD pod identity has been deprecated in favor of [Azure Workload Identity](/azure/active-directory/workload-identities/workload-identities-overview).
++
+To configure remote write for Azure Monitor managed service for Prometheus using Azure AD pod identity, follow the steps below.
+
+1. Create user assigned identity or use an existing user assigned managed identity. For information on creating the managed identity, see [Configure remote write for Azure Monitor managed service for Prometheus using managed identity authentication](./prometheus-remote-write-managed-identity.md#get-the-client-id-of-the-user-assigned-identity).
+1. Assign the `Managed Identity Operator` and `Virtual Machine Contributor` roles to the managed identity created/used in the previous step.
+
+ ```azurecli
+ az role assignment create --role "Managed Identity Operator" --assignee <managed identity clientID> --scope <NodeResourceGroupResourceId>
+
+ az role assignment create --role "Virtual Machine Contributor" --assignee <managed identity clientID> --scope <Node ResourceGroup Id>
+ ```
+
+ The node resource group of the AKS cluster contains resources that you will require for other steps in this process. This resource group has the name MC_\<AKS-RESOURCE-GROUP\>_\<AKS-CLUSTER-NAME\>_\<REGION\>. You can locate it from the Resource groups menu in the Azure portal.
+
+1. Grant the user-assigned managed identity the `Monitoring Metrics Publisher` role.
+
+ ```azurecli
+ az role assignment create --role "Monitoring Metrics Publisher" --assignee <managed identity clientID> --scope <NodeResourceGroupResourceId>
+ ```
+
+1. Create AzureIdentityBinding
+
+ The user-assigned managed identity requires an identity binding in order to be used as a pod identity.
+
+ Copy the following YAML to the `aadpodidentitybinding.yaml` file:
+
+ ```yml
+ apiVersion: "aadpodidentity.k8s.io/v1"
+ kind: AzureIdentityBinding
+ metadata:
+   name: demo1-azure-identity-binding
+ spec:
+   azureIdentity: "<AzureIdentityName>"
+   selector: "<AzureIdentityBindingSelector>"
+ ```
+
+ Run the following command:
+
+ ```azurecli
+ kubectl create -f aadpodidentitybinding.yaml
+ ```
+
+1. Add a `aadpodidbinding` label to the Prometheus pod.
+ The `aadpodidbinding` label must be added to the Prometheus pod for the pod identity to take effect. This can be achieved by updating the `deployment.yaml` or injecting labels while deploying the sidecar as mentioned in the next step.
+
+1. Deploy side car and configure remote write on the Prometheus server.
+
+ 1. Copy the YAML below and save to a file.
+
+ ```yml
+ prometheus:
+ prometheusSpec:
+ podMetadata:
+ labels:
+ aadpodidbinding: <AzureIdentityBindingSelector>
+ externalLabels:
+ cluster: <AKS-CLUSTER-NAME>
+ remoteWrite:
+ - url: 'http://localhost:8081/api/v1/write'
+ containers:
+ - name: prom-remotewrite
+ image: <CONTAINER-IMAGE-VERSION>
+ imagePullPolicy: Always
+ ports:
+ - name: rw-port
+ containerPort: 8081
+ livenessProbe:
+ httpGet:
+ path: /health
+ port: rw-port
+ initialDelaySeconds: 10
+ timeoutSeconds: 10
+ readinessProbe:
+ httpGet:
+ path: /ready
+ port: rw-port
+ initialDelaySeconds: 10
+ timeoutSeconds: 10
+ env:
+ - name: INGESTION_URL
+ value: <INGESTION_URL>
+ - name: LISTENING_PORT
+ value: '8081'
+ - name: IDENTITY_TYPE
+ value: userAssigned
+ - name: AZURE_CLIENT_ID
+ value: <MANAGED-IDENTITY-CLIENT-ID>
+ # Optional parameter
+ - name: CLUSTER
+ value: <CLUSTER-NAME>
+ ```
+
+ 2. Use helm to apply the YAML file to update your Prometheus configuration with the following CLI commands.
+
+ ```azurecli
+ # set context to your cluster
+ az aks get-credentials -g <aks-rg-name> -n <aks-cluster-name>
+ # use helm to update your remote write config
+ helm upgrade -f <YAML-FILENAME>.yml prometheus prometheus-community/kube-prometheus-stack --namespace <namespace where Prometheus pod resides>
+ ```
azure-monitor Prometheus Remote Write Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/prometheus-remote-write-managed-identity.md
+
+ Title: Remote-write in Azure Monitor Managed Service for Prometheus using managed identity
+description: Describes how to configure remote-write to send data from self-managed Prometheus running in your AKS cluster or Azure Arc-enabled Kubernetes cluster using managed identity authentication.
++ Last updated : 11/01/2022++
+# Configure remote write for Azure Monitor managed service for Prometheus using managed identity authentication
+This article describes how to configure [remote-write](prometheus-remote-write.md) to send data from self-managed Prometheus running in your AKS cluster or Azure Arc-enabled Kubernetes cluster using managed identity authentication. You either use an existing identity created by AKS or [create one of your own](../../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md). Both options are described here.
+
+## Cluster configurations
+This article applies to the following cluster configurations:
+
+- Azure Kubernetes service (AKS)
+- Azure Arc-enabled Kubernetes cluster
+
+> [!NOTE]
+> For a Kubernetes cluster running in another cloud or on-premises, see [Azure Monitor managed service for Prometheus remote write - Azure Active Directory](prometheus-remote-write-active-directory.md).
+
+## Prerequisites
+See prerequisites at [Azure Monitor managed service for Prometheus remote write](prometheus-remote-write.md#prerequisites).
+
+## Locate AKS node resource group
+The node resource group of the AKS cluster contains resources that you will require for other steps in this process. This resource group has the name `MC_<AKS-RESOURCE-GROUP>_<AKS-CLUSTER-NAME>_<REGION>`. You can locate it from the **Resource groups** menu in the Azure portal. Start by making sure that you can locate this resource group since other steps below will refer to it.
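+
+You can also return the node resource group name with the Azure CLI:
+
+```azurecli
+az aks show --resource-group <aks-resource-group> --name <aks-cluster-name> \
+    --query nodeResourceGroup -o tsv
+```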
++
+## Get the client ID of the user assigned identity
+You will require the client ID of the identity that you're going to use. Note this value for use in later steps in this process.
+
+Get the **Client ID** from the **Overview** page of your [managed identity](../../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md).
++
+Instead of creating your own ID, you can use one of the identities created by AKS, which are listed in [Use a managed identity in Azure Kubernetes Service](../../aks/use-managed-identity.md). This article uses the `Kubelet` identity. The name of this identity is `<AKS-CLUSTER-NAME>-agentpool`, and it's located in the node resource group of the AKS cluster.
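+
+The client ID can also be retrieved with the Azure CLI. The first command is for a user-assigned managed identity that you created; the second returns the client ID of the AKS `Kubelet` identity. The query paths are based on the current command output and may differ between CLI versions.
+
+```azurecli
+# Client ID of a user-assigned managed identity that you created.
+az identity show --resource-group <resource-group-name> --name <identity-name> \
+    --query clientId -o tsv
+
+# Client ID of the AKS kubelet identity (<AKS-CLUSTER-NAME>-agentpool).
+az aks show --resource-group <aks-resource-group> --name <aks-cluster-name> \
+    --query identityProfile.kubeletidentity.clientId -o tsv
+```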
++++
+## Assign Monitoring Metrics Publisher role on the data collection rule to the managed identity
+The managed identity requires the *Monitoring Metrics Publisher* role on the data collection rule associated with your Azure Monitor workspace.
+
+1. From the menu of your Azure Monitor workspace, click the **Data collection rule** to open the **Overview** page for the data collection rule.
+
+ :::image type="content" source="media/prometheus-remote-write-managed-identity/azure-monitor-account-data-collection-rule.png" alt-text="Screenshot showing data collection rule used by Azure Monitor workspace." lightbox="media/prometheus-remote-write-managed-identity/azure-monitor-account-data-collection-rule.png":::
+
+2. Click on **Access control (IAM)** in the **Overview** page for the data collection rule.
+
+ :::image type="content" source="media/prometheus-remote-write-managed-identity/azure-monitor-account-access-control.png" alt-text="Screenshot showing Access control (IAM) menu item on the data collection rule Overview page." lightbox="media/prometheus-remote-write-managed-identity/azure-monitor-account-access-control.png":::
+
+3. Click **Add** and then **Add role assignment**.
+
+ :::image type="content" source="media/prometheus-remote-write-managed-identity/data-collection-rule-add-role-assignment.png" alt-text="Screenshot showing adding a role assignment on Access control pages." lightbox="media/prometheus-remote-write-managed-identity/data-collection-rule-add-role-assignment.png":::
+
+4. Select **Monitoring Metrics Publisher** role and click **Next**.
+
+ :::image type="content" source="media/prometheus-remote-write-managed-identity/add-role-assignment.png" alt-text="Screenshot showing list of role assignments." lightbox="media/prometheus-remote-write-managed-identity/add-role-assignment.png":::
+
+5. Select **Managed Identity** and then click **Select members**. Choose the subscription the user assigned identity is located in and then select **User-assigned managed identity**. Select the User Assigned Identity that you're going to use and click **Select**.
+
+ :::image type="content" source="media/prometheus-remote-write-managed-identity/select-managed-identity.png" alt-text="Screenshot showing selection of managed identity." lightbox="media/prometheus-remote-write-managed-identity/select-managed-identity.png":::
+
+6. Click **Review + assign** to complete the role assignment.
++
+## Grant AKS cluster access to the identity
+This step isn't required if you're using an AKS identity since it will already have access to the cluster.
+
+> [!IMPORTANT]
+> You must have owner/user access administrator access on the cluster.
+
+1. Identify the virtual machine scale sets in the [node resource group](#locate-aks-node-resource-group) for your AKS cluster.
+
+ :::image type="content" source="media/prometheus-remote-write-managed-identity/resource-group-details-virtual-machine-scale-sets.png" alt-text="Screenshot showing virtual machine scale sets in the node resource group." lightbox="media/prometheus-remote-write-managed-identity/resource-group-details-virtual-machine-scale-sets.png":::
+
+2. Run the following command in Azure CLI for each virtual machine scale set.
+
+ ```azurecli
+ az vmss identity assign -g <AKS-NODE-RESOURCE-GROUP> -n <AKS-VMSS-NAME> --identities <USER-ASSIGNED-IDENTITY-RESOURCE-ID>
+ ```
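+
+If you prefer the CLI to the portal for the first step, the following command lists the virtual machine scale sets in the node resource group:
+
+```azurecli
+az vmss list --resource-group <AKS-NODE-RESOURCE-GROUP> --query "[].name" -o tsv
+```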
++
+## Deploy Side car and configure remote write on the Prometheus server
+
+1. Copy the YAML below and save to a file. This YAML assumes you're using 8081 as your listening port. Modify that value if you use a different port.
+
+ ```yml
+ prometheus:
+ prometheusSpec:
+ externalLabels:
+ cluster: <AKS-CLUSTER-NAME>
+
+ ## https://prometheus.io/docs/prometheus/latest/configuration/configuration/#remote_write
+ remoteWrite:
+ - url: 'http://localhost:8081/api/v1/write'
+
+ ## Azure Managed Prometheus currently exports some default mixins in Grafana.
+ ## These mixins are compatible with Azure Monitor agent on your Azure Kubernetes Service cluster.
+ ## However, these mixins aren't compatible with Prometheus metrics scraped by the Kube Prometheus stack.
+ ## In order to make these mixins compatible, uncomment remote write relabel configuration below:
+
+ ## writeRelabelConfigs:
+ ## - sourceLabels: [metrics_path]
+ ## regex: /metrics/cadvisor
+ ## targetLabel: job
+ ## replacement: cadvisor
+ ## action: replace
+ ## - sourceLabels: [job]
+ ## regex: 'node-exporter'
+ ## targetLabel: job
+ ## replacement: node
+ ## action: replace
+
+ containers:
+ - name: prom-remotewrite
+ image: <CONTAINER-IMAGE-VERSION>
+ imagePullPolicy: Always
+ ports:
+ - name: rw-port
+ containerPort: 8081
+ livenessProbe:
+ httpGet:
+ path: /health
+ port: rw-port
+ initialDelaySeconds: 10
+ timeoutSeconds: 10
+ readinessProbe:
+ httpGet:
+ path: /ready
+ port: rw-port
+ initialDelaySeconds: 10
+ timeoutSeconds: 10
+ env:
+ - name: INGESTION_URL
+ value: <INGESTION_URL>
+ - name: LISTENING_PORT
+ value: '8081'
+ - name: IDENTITY_TYPE
+ value: userAssigned
+ - name: AZURE_CLIENT_ID
+ value: <MANAGED-IDENTITY-CLIENT-ID>
+ # Optional parameter
+ - name: CLUSTER
+ value: <CLUSTER-NAME>
+ ```
++
+2. Replace the following values in the YAML.
+
+ | Value | Description |
+ |:|:|
+ | `<AKS-CLUSTER-NAME>` | Name of your AKS cluster |
+ | `<CONTAINER-IMAGE-VERSION>` | `mcr.microsoft.com/azuremonitor/prometheus/promdev/prom-remotewrite:prom-remotewrite-20230505.1`<br>This is the remote write container image version. |
+ | `<INGESTION_URL>` | **Metrics ingestion endpoint** from the **Overview** page for the Azure Monitor workspace |
+ | `<MANAGED-IDENTITY-CLIENT-ID>` | **Client ID** from the **Overview** page for the managed identity |
+ | `<CLUSTER-NAME>` | Name of the cluster Prometheus is running on |
+
+
+++
+3. Open Azure Cloud Shell and upload the YAML file.
+4. Use helm to apply the YAML file to update your Prometheus configuration with the following CLI commands.
+
+ ```azurecli
+ # set context to your cluster
+ az aks get-credentials -g <aks-rg-name> -n <aks-cluster-name>
+
+ # use helm to update your remote write config
+ helm upgrade -f <YAML-FILENAME>.yml prometheus prometheus-community/kube-prometheus-stack --namespace <namespace where Prometheus pod resides>
+ ```
+
+## Verification and troubleshooting
+See [Azure Monitor managed service for Prometheus remote write](prometheus-remote-write.md#verify-remote-write-is-working-correctly).
+
+## Next steps
+
+- [Setup Grafana to use Managed Prometheus as a data source](../essentials/prometheus-grafana.md).
+- [Learn more about Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md).
azure-monitor Prometheus Remote Write https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/prometheus-remote-write.md
+
+ Title: Remote-write in Azure Monitor Managed Service for Prometheus
+description: Describes how to configure remote-write to send data from self-managed Prometheus running in your AKS cluster or Azure Arc-enabled Kubernetes cluster
++ Last updated : 11/01/2022++
+# Azure Monitor managed service for Prometheus remote write
+Azure Monitor managed service for Prometheus is intended to be a replacement for self-managed Prometheus so you don't need to manage a Prometheus server in your Kubernetes clusters. You can also use the managed service to centralize data from self-managed Prometheus clusters for long-term data retention and to create a centralized view across your clusters. In this case, you can use [remote_write](https://prometheus.io/docs/operating/integrations/#remote-endpoints-and-storage) to send data from your self-managed Prometheus into the Azure managed service.
+
+## Architecture
+Azure Monitor provides a reverse proxy container (the Azure Monitor [side car container](/azure/architecture/patterns/sidecar)) that provides an abstraction for ingesting Prometheus remote write metrics and handles authentication of the ingestion requests. The Azure Monitor side car container currently supports User Assigned Identity and Azure Active Directory (Azure AD) based authentication to ingest Prometheus remote write metrics into an Azure Monitor workspace.
++
+## Prerequisites
+
+- You must have self-managed Prometheus running on your AKS cluster. For example, see [Using Azure Kubernetes Service with Grafana and Prometheus](https://techcommunity.microsoft.com/t5/apps-on-azure-blog/using-azure-kubernetes-service-with-grafana-and-prometheus/ba-p/3020459).
+- You used [Kube-Prometheus Stack](https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack) when you set up Prometheus on your AKS cluster.
+- Data for Azure Monitor managed service for Prometheus is stored in an [Azure Monitor workspace](../essentials/azure-monitor-workspace-overview.md). You must [create a new workspace](../essentials/azure-monitor-workspace-manage.md#create-an-azure-monitor-workspace) if you don't already have one.
+
+## Configure remote write
+The process for configuring remote write depends on your cluster configuration and the type of authentication that you use.
+
+- **Managed identity** is recommended for Azure Kubernetes service (AKS) and Azure Arc-enabled Kubernetes cluster. See [Azure Monitor managed service for Prometheus remote write - managed identity](prometheus-remote-write-managed-identity.md)
+- **Azure Active Directory** can be used for Azure Kubernetes service (AKS) and Azure Arc-enabled Kubernetes cluster and is required for Kubernetes cluster running in another cloud or on-premises. See [Azure Monitor managed service for Prometheus remote write - Azure Active Directory](prometheus-remote-write-active-directory.md)
+
+> [!NOTE]
+> Whether you use Managed Identity or Azure Active Directory to enable permissions for ingesting data, these settings take some time to take effect. When you follow the steps below to verify that the setup is working, allow up to 10-15 minutes for the authorization settings needed to ingest data to be applied.
+
+## Verify remote write is working correctly
+
+Use the following methods to verify that Prometheus data is being sent into your Azure Monitor workspace.
+
+### kubectl commands
+
+Use the following command to view your container log. Remote write data is flowing if the output has non-zero values for `avgBytesPerRequest` and `avgRequestDuration`.
+
+```azurecli
+kubectl logs <Prometheus-Pod-Name> <Azure-Monitor-Side-Car-Container-Name>
+# example: kubectl logs prometheus-prometheus-kube-prometheus-prometheus-0 prom-remotewrite
+```
+
+The output from this command should look similar to the following:
+
+```
+time="2022-11-02T21:32:59Z" level=info msg="Metric packets published in last 1 minute" avgBytesPerRequest=19713 avgRequestDurationInSec=0.023 failedPublishing=0 successfullyPublished=122
+```
++
+### PromQL queries
+Use PromQL queries in Grafana and verify that the results return the expected data. See [getting Grafana setup with Managed Prometheus](../essentials/prometheus-grafana.md) to configure Grafana.
+
+## Troubleshoot remote write
+
+### No data is flowing
+If remote data isn't flowing, run the following command, which shows any errors reported for the remote write container.
+
+```azurecli
+kubectl --namespace <Namespace> describe pod <Prometheus-Pod-Name>
+```
++
+### Container keeps restarting
+A container that keeps restarting is likely misconfigured. Run the following command to view the configuration values set for the container. Verify the configuration values, especially `AZURE_CLIENT_ID` and `IDENTITY_TYPE`.
+
+```azurecli
+kubectl get pod <Prometheus-Pod-Name> -o json | jq -c '.spec.containers[] | select( .name | contains("<Azure-Monitor-Side-Car-Container-Name>"))'
+```
+
+The output from this command should look similar to the following:
+
+```
+{"env":[{"name":"INGESTION_URL","value":"https://my-azure-monitor-workspace.eastus2-1.metrics.ingest.monitor.azure.com/dataCollectionRules/dcr-00000000000000000/streams/Microsoft-PrometheusMetrics/api/v1/write?api-version=2021-11-01-preview"},{"name":"LISTENING_PORT","value":"8081"},{"name":"IDENTITY_TYPE","value":"userAssigned"},{"name":"AZURE_CLIENT_ID","value":"00000000-0000-0000-0000-00000000000"}],"image":"mcr.microsoft.com/azuremonitor/prometheus/promdev/prom-remotewrite:prom-remotewrite-20221012.2","imagePullPolicy":"Always","name":"prom-remotewrite","ports":[{"containerPort":8081,"name":"rw-port","protocol":"TCP"}],"resources":{},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","volumeMounts":[{"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount","name":"kube-api-access-vbr9d","readOnly":true}]}
+```
+
+### Hitting your ingestion quota limit
+With remote write, you typically get started by using the remote write endpoint shown on the Azure Monitor workspace overview page. Behind the scenes, this endpoint uses a system Data Collection Rule (DCR) and system Data Collection Endpoint (DCE). These resources have an ingestion limit covered in the [Azure Monitor service limits](../service-limits.md#prometheus-metrics) document. You might hit these limits if you set up remote write for several clusters that all send data into the same endpoint in the same Azure Monitor workspace. If this is the case, you can [create additional DCRs and DCEs](https://aka.ms/prometheus/remotewrite/dcrartifacts) and use them to spread the ingestion load across multiple ingestion endpoints.
+
+The INGESTION-URL uses the following format:
+https\://\<Metrics-Ingestion-URL>/dataCollectionRules/\<DCR-Immutable-ID>/streams/Microsoft-PrometheusMetrics/api/v1/write?api-version=2021-11-01-preview
+
+Metrics-Ingestion-URL: can be obtained by viewing the JSON body of the DCE with API version 2021-09-01-preview or newer.
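+
+For example, the following Azure CLI sketch returns the metrics ingestion endpoint of a data collection endpoint. The `--query` path is an assumption based on the current DCE schema; inspect the full JSON output if it differs.
+
+```azurecli
+az monitor data-collection endpoint show --name "myCollectionEndpoint" --resource-group "myResourceGroup" --query metricsIngestion.endpoint -o tsv
+```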
+
+DCR-Immutable-ID: can be obtained by viewing DCR JSON body or running the following command in the Azure CLI:
+
+```azurecli
+az monitor data-collection rule show --name "myCollectionRule" --resource-group "myResourceGroup"
+```
+
+## Next steps
+
+- [Setup Grafana to use Managed Prometheus as a data source](../essentials/prometheus-grafana.md).
+- [Learn more about Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md).
azure-monitor Diagnostic Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/diagnostic-settings.md
Currently, there are two category groups:
- **All**: Every resource log offered by the resource. - **Audit**: All resource logs that record customer interactions with data or the settings of the service. Audit logs are an attempt by each resource provider to provide the most relevant audit data, but may not be considered sufficient from an auditing standards perspective.
-The "Audit" category is a subset of "All", but the Azure portal and REST API consider them separate settings. Selecting "All" does collect all audit logs regardless of if the "Audit" category is also selected.
+The "Audit" category is a subset of "All", but the Azure portal and REST API consider them separate settings. Selecting "All" does collect all audit logs regardless of if the "Audit" category is also selected.
-Note: Enabling *Audit* for Azure SQL Database does not enable auditing for Azure SQL Database. To enable database auditing, you have to enable it from the auditing blade for Azure Database.
+The following image shows the logs category groups on the **Add diagnostic setting** page.
+ :::image type="content" source="./media/diagnostic-settings/audit-category-group.png" alt-text="A screenshot showing the logs category groups.":::
++
+> [!NOTE]
+> Selecting the *Audit* category group for Azure SQL Database doesn't enable Azure SQL Database auditing. To enable database auditing, you have to enable it from the auditing blade for Azure SQL Database.
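+
+For example, the following Azure CLI sketch creates a diagnostic setting that sends the **audit** category group to a Log Analytics workspace. Support for `categoryGroup` in the `--logs` parameter depends on your CLI and API versions, and the resource and workspace IDs are placeholders.
+
+```azurecli
+az monitor diagnostic-settings create \
+    --name "audit-logs" \
+    --resource <resource-id> \
+    --workspace <log-analytics-workspace-resource-id> \
+    --logs '[{"categoryGroup":"audit","enabled":true}]'
+```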
### Activity log
azure-monitor Data Retention Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/data-retention-archive.md
Title: Configure data retention and archive in Azure Monitor Logs
+ Title: Data retention and archive in Azure Monitor Logs
description: Configure archive settings for a table in a Log Analytics workspace in Azure Monitor.
Last updated 6/28/2023
# Customer intent: As an Azure account administrator, I want to set data retention and archive policies to save retention costs.
-# Configure data retention and archive policies in Azure Monitor Logs
+# Data retention and archive in Azure Monitor Logs
-Retention policies define when to remove or archive data in a [Log Analytics workspace](log-analytics-workspace-overview.md). Archiving lets you keep older, less used data in your workspace at a reduced cost.
+Azure Monitor Logs retains data in two states:
-This article describes how to configure data retention and archiving.
-
-## Permissions required
+* **Interactive retention**: Lets you retain Analytics logs for [interactive queries](../logs/get-started-queries.md) of up to 2 years.
+* **Archive**: Lets you keep older, less used data in your workspace at a reduced cost. You can access data in the archived state by using [search jobs](../logs/search-jobs.md) and [restore](../logs/restore.md). You can currently keep data in archived state for up to 7 years. In the coming months, it will be possible to extend archive to 12 years.
-| Action | Permissions required |
-|:-|:|
-| Configure data retention and archive policies for a Log Analytics workspace | `Microsoft.OperationalInsights/workspaces/write` and `microsoft.operationalinsights/workspaces/tables/write` permissions to the Log Analytics workspace, as provided by the [Log Analytics Contributor built-in role](./manage-access.md#log-analytics-contributor), for example |
-| Get the retention and archive policy by table for a Log Analytics workspace | `Microsoft.OperationalInsights/workspaces/tables/read` permissions to the Log Analytics workspace, as provided by the [Log Analytics Reader built-in role](./manage-access.md#log-analytics-reader), for example |
-| Purge data from a Log Analytics workspace | `Microsoft.OperationalInsights/workspaces/purge/action` permissions to the Log Analytics workspace, as provided by the [Log Analytics Contributor built-in role](./manage-access.md#log-analytics-contributor), for example |
-| Set data retention for a classic Application Insights resource | `microsoft.insights/components/write` permissons to the classic Application Insights resource, as provided by the [Application Insights Component Contributor built-in role](../../role-based-access-control/built-in-roles.md#application-insights-component-contributor), for example |
-| Purge data from a classic Application Insights resource | `Microsoft.Insights/components/purge/action` permissions to the classic Application Insights resource, as provided by the [Application Insights Component Contributor built-in role](../../role-based-access-control/built-in-roles.md#application-insights-component-contributor), for example |
+This article describes how to configure data retention and archiving.
## How retention and archiving work
-Each workspace has a default retention policy that's applied to all tables. You can set a different retention policy on individual tables.
+Each workspace has a default retention setting that's applied to all tables. You can configure a different retention setting on individual tables.
:::image type="content" source="media/data-retention-configure/retention-archive.png" alt-text="Diagram that shows an overview of data retention and archive periods.":::
You can access archived data by [running a search job](search-jobs.md) or [resto
> [!NOTE] > The archive period can only be set at the table level, not at the workspace level.- ### Adjustments to retention and archive settings
-When you shorten an existing retention policy, Azure Monitor waits 30 days before removing the data, so you can revert the change and prevent data loss in the event of an error in configuration. You can [purge data](#purge-retained-data) immediately when required.
+When you shorten an existing retention setting, Azure Monitor waits 30 days before removing the data, so you can revert the change and avoid data loss in the event of an error in configuration. You can [purge data](#purge-retained-data) immediately when required.
-When you increase the retention policy, the new retention period applies to all data that's already been ingested into the table and hasn't yet been purged or removed.
+When you increase the retention setting, the new retention period applies to all data that's already been ingested into the table and hasn't yet been purged or removed.
-If you change the archive settings on a table with existing data, the relevant data in the table is also affected immediately. For example, you might have an existing table with 180 days of interactive retention and no archive period. You decide to change the retention policy to 90 days of interactive retention without changing the total retention period of 180 days. Log Analytics immediately archives any data that's older than 90 days and none of the data is deleted.
+If you change the archive settings on a table with existing data, the relevant data in the table is also affected immediately. For example, you might have an existing table with 180 days of interactive retention and no archive period. You decide to change the retention setting to 90 days of interactive retention without changing the total retention period of 180 days. Log Analytics immediately archives any data that's older than 90 days and none of the data is deleted.
-## Configure the default workspace retention policy
+## Permissions required
-You can set the workspace default retention policy in the Azure portal to 30, 31, 60, 90, 120, 180, 270, 365, 550, and 730 days. You can set a different policy for specific tables by [configuring the retention and archive policy at the table level](#set-retention-and-archive-policy-by-table). If you're on the *free* tier, you'll need to upgrade to the paid tier to change the data retention period.
+| Action | Permissions required |
+|:-|:|
+| Configure data retention and archive policies for a Log Analytics workspace | `Microsoft.OperationalInsights/workspaces/write` and `microsoft.operationalinsights/workspaces/tables/write` permissions to the Log Analytics workspace, as provided by the [Log Analytics Contributor built-in role](./manage-access.md#log-analytics-contributor), for example |
+| Get the retention and archive policy by table for a Log Analytics workspace | `Microsoft.OperationalInsights/workspaces/tables/read` permissions to the Log Analytics workspace, as provided by the [Log Analytics Reader built-in role](./manage-access.md#log-analytics-reader), for example |
+| Purge data from a Log Analytics workspace | `Microsoft.OperationalInsights/workspaces/purge/action` permissions to the Log Analytics workspace, as provided by the [Log Analytics Contributor built-in role](./manage-access.md#log-analytics-contributor), for example |
+| Set data retention for a classic Application Insights resource | `microsoft.insights/components/write` permissions to the classic Application Insights resource, as provided by the [Application Insights Component Contributor built-in role](../../role-based-access-control/built-in-roles.md#application-insights-component-contributor), for example |
+| Purge data from a classic Application Insights resource | `Microsoft.Insights/components/purge/action` permissions to the classic Application Insights resource, as provided by the [Application Insights Component Contributor built-in role](../../role-based-access-control/built-in-roles.md#application-insights-component-contributor), for example |
+
+## Configure the default workspace retention
+
+You can set a Log Analytics workspace's default retention in the Azure portal to 30, 31, 60, 90, 120, 180, 270, 365, 550, and 730 days. You can apply a different setting to specific tables by [configuring retention and archive at the table level](#configure-retention-and-archive-at-the-table-level). If you're on the *free* tier, you need to upgrade to the paid tier to change the data retention period.
-To set the default workspace retention policy:
+To set the default workspace retention:
1. From the **Log Analytics workspaces** menu in the Azure portal, select your workspace. 1. Select **Usage and estimated costs** in the left pane.
To set the default workspace retention policy:
1. Move the slider to increase or decrease the number of days, and then select **OK**.
-## Set retention and archive policy by table
+## Configure retention and archive at the table level
-By default, all tables in your workspace inherit the workspace's interactive retention setting and have no archive policy. You can modify the retention and archive policies of individual tables, except for workspaces in the legacy Free Trial pricing tier.
+By default, all tables in your workspace inherit the workspace's interactive retention setting and have no archive. You can modify the retention and archive settings of individual tables, except for workspaces in the legacy Free Trial pricing tier.
-The Analytics log data plan includes 30 days of interactive retention. You can increase the interactive retention period to up to 730 days at an [additional cost](https://azure.microsoft.com/pricing/details/monitor/). If needed, you can reduce the interactive retention period to as low as four days using the API or CLI, however, since 30 days are included in the ingestion price, lowering the retention period below 30 days does not reduce costs. You can set the archive period to a total retention time of up to 2,556 days (seven years).
+The Analytics log data plan includes 30 days of interactive retention. You can increase the interactive retention period to up to 730 days at an [additional cost](https://azure.microsoft.com/pricing/details/monitor/). If needed, you can reduce the interactive retention period to as little as four days using the API or CLI. However, since 30 days are included in the ingestion price, lowering the retention period below 30 days doesn't reduce costs. You can set the archive period to a total retention time of up to 2,556 days (seven years).
+
+> [!NOTE]
+> In the coming months, new settings will enable retaining data for up to 12 years.
# [Portal](#tab/portal-1)
The request body includes the values in the following table.
|Name | Type | Description | | | | |
-|properties.retentionInDays | integer | The table's data retention in days. This value can be between 4 and 730. <br/>Setting this property to null will default to the workspace retention. For a Basic Logs table, the value is always 8. |
+|properties.retentionInDays | integer | The table's data retention in days. This value can be between 4 and 730. <br/>Setting this property to null applies the workspace retention period. For a Basic Logs table, the value is always 8. |
|properties.totalRetentionInDays | integer | The table's total data retention including archive period. This value can be between 4 and 730; or 1095, 1460, 1826, 2191, or 2556. Set this property to null if you don't want to archive data. | **Example**
Update-AzOperationalInsightsTable -ResourceGroupName ContosoRG -WorkspaceName Co
-## Get retention and archive policy by table
+## Get retention and archive settings by table
# [Portal](#tab/portal-2)
The **Tables** screen shows the interactive retention and archive period for all
# [API](#tab/api-2)
-To get the retention policy of a particular table (in this example, `SecurityEvent`), call the **Tables - Get** API:
+To get the retention setting of a particular table (in this example, `SecurityEvent`), call the **Tables - Get** API:
```JSON GET /subscriptions/00000000-0000-0000-0000-00000000000/resourceGroups/MyResourceGroupName/providers/Microsoft.OperationalInsights/workspaces/MyWorkspaceName/Tables/SecurityEvent?api-version=2021-12-01-preview ```
-To get all table-level retention policies in your workspace, don't set a table name.
+To get all table-level retention settings in your workspace, don't set a table name.
For example:
GET /subscriptions/00000000-0000-0000-0000-00000000000/resourceGroups/MyResource
# [CLI](#tab/cli-2)
-To get the retention policy of a particular table, run the [az monitor log-analytics workspace table show](/cli/azure/monitor/log-analytics/workspace/table#az-monitor-log-analytics-workspace-table-show) command.
+To get the retention setting of a particular table, run the [az monitor log-analytics workspace table show](/cli/azure/monitor/log-analytics/workspace/table#az-monitor-log-analytics-workspace-table-show) command.
For example:
az monitor log-analytics workspace table show --subscription ContosoSID --resour
# [PowerShell](#tab/PowerShell-2)
-To get the retention policy of a particular table, run the [Get-AzOperationalInsightsTable](/powershell/module/az.operationalinsights/get-azoperationalinsightstable) cmdlet.
+To get the retention setting of a particular table, run the [Get-AzOperationalInsightsTable](/powershell/module/az.operationalinsights/get-azoperationalinsightstable) cmdlet.
For example:
Get-AzOperationalInsightsTable -ResourceGroupName ContosoRG -WorkspaceName Conto
## Purge retained data
-If you set the data retention policy to 30 days, you can purge older data immediately by using the `immediatePurgeDataOn30Days` parameter in Azure Resource Manager. The purge functionality is useful when you need to remove personal data immediately. The immediate purge functionality isn't available through the Azure portal.
+If you set the data retention to 30 days, you can purge older data immediately by using the `immediatePurgeDataOn30Days` parameter in Azure Resource Manager. The purge functionality is useful when you need to remove personal data immediately. The immediate purge functionality isn't available through the Azure portal.
-Workspaces with a 30-day retention policy might keep data for 31 days if you don't set the `immediatePurgeDataOn30Days` parameter.
+Workspaces with a 30-day retention might keep data for 31 days if you don't set the `immediatePurgeDataOn30Days` parameter.
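+
+The following Azure CLI sketch shows one way to set this parameter on an existing workspace by using the generic `az resource update` command. The property path is an assumption based on the workspace ARM schema; verify it against your workspace's JSON before relying on it.
+
+```azurecli
+az resource update \
+    --resource-group <resource-group-name> \
+    --name <workspace-name> \
+    --resource-type "Microsoft.OperationalInsights/workspaces" \
+    --set properties.features.immediatePurgeDataOn30Days=true
+```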
You can also purge data from a workspace by using the [purge feature](personal-data-mgmt.md#exporting-and-deleting-personal-data), which removes personal data. You can't purge data from archived logs. The Log Analytics [Purge API](/rest/api/loganalytics/workspacepurge/purge) doesn't affect retention billing. To lower retention costs, *decrease the retention period for the workspace or for specific tables*.
-## Tables with unique retention policies
+## Tables with unique retention periods
By default, two data types, `Usage` and `AzureActivity`, keep data for at least 90 days at no charge. When you increase the workspace retention to more than 90 days, you also increase the retention of these data types. These tables are also free from data ingestion charges.
-Tables related to Application Insights resources also keep data for 90 days at no charge. You can adjust the retention policy of each of these tables individually:
+Tables related to Application Insights resources also keep data for 90 days at no charge. You can adjust the retention of each of these tables individually:
- `AppAvailabilityResults` - `AppBrowserTimings`
azure-monitor Workspace Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/workspace-design.md
A workspace with Microsoft Sentinel gets three months of free data retention ins
**Combined workspace** Combining your data from Azure Monitor and Microsoft Sentinel in the same workspace gives you better visibility across all of your data, allowing you to easily combine both in queries and workbooks. If access to the security data should be limited to a particular team, you can use [table level RBAC](../logs/manage-access.md#set-table-level-read-access) to block particular users from tables with security data or limit users to accessing the workspace using [resource-context](../logs/manage-access.md#access-mode).
-This configuration may result in cost savings if helps you reach a [commitment tier](#commitment-tiers), which provides a discount to your ingestion charges. For example, consider an organization that has operational data and security data each ingesting about 50 GB per day. Combining the data in the same workspace would allow a commitment tier at 100 GB per day. That scenario would provide a 15% discount for Azure Monitor and a 50% discount for Microsoft Sentinel.
+This configuration may result in cost savings if it helps you reach a [commitment tier](#commitment-tiers), which provides a discount to your ingestion charges. For example, consider an organization that has operational data and security data each ingesting about 50 GB per day. Combining the data in the same workspace would allow a commitment tier at 100 GB per day. That scenario would provide a 15% discount for Azure Monitor and a 50% discount for Microsoft Sentinel.
If you create separate workspaces for other criteria, you'll usually create more workspace pairs. For example, if you have two Azure tenants, you might create four workspaces with an operational and security workspace in each tenant.
You might need to split billing between different parties or perform charge back
- **If you need to split billing or perform charge back:** Consider whether [Azure Cost Management + Billing](../usage-estimated-costs.md#azure-cost-management--billing) or a log query provides cost reporting that's granular enough for your requirements. If not, use a separate workspace for each cost owner. ### Data retention and archive
-You can configure default [data retention and archive settings](data-retention-archive.md) for a workspace or [configure different settings for each table](data-retention-archive.md#set-retention-and-archive-policy-by-table). You might require different settings for different sets of data in a particular table. If so, you need to separate that data into different workspaces, each with unique retention settings.
+You can configure default [data retention and archive settings](data-retention-archive.md) for a workspace or [configure different settings for each table](data-retention-archive.md#configure-retention-and-archive-at-the-table-level). You might require different settings for different sets of data in a particular table. If so, you need to separate that data into different workspaces, each with unique retention settings.
- **If you can use the same retention and archive settings for all data in each table:** Use a single workspace for all resources. - **If you require different retention and archive settings for different resources in the same table:** Use a separate workspace for different resources.
azure-monitor Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/policy-reference.md
Title: Built-in policy definitions for Azure Monitor description: Lists Azure Policy built-in policy definitions for Azure Monitor. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/03/2023 Last updated : 08/08/2023
azure-netapp-files Configure Ldap Extended Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-ldap-extended-groups.md
Azure NetApp Files supports fetching of extended groups from the LDAP name servi
When it's determined that LDAP will be used for operations such as name lookup and fetching extended groups, the following process occurs:
-1. Azure NetApp Files uses an LDAP client configuration to make a connection attempt to the AD DS/Azure AD DS LDAP server that is specified in the [Azure NetApp Files AD configuration](create-active-directory-connections.md).
-1. If the TCP connection over the defined AD DS/Azure AD DS LDAP service port is successful, then the Azure NetApp Files LDAP client attempts to ΓÇ£bindΓÇ¥ (sign in) to the AD DS/Azure AD DS LDAP server (domain controller) by using the defined credentials in the LDAP client configuration.
-1. If the bind is successful, then the Azure NetApp Files LDAP client uses the RFC 2307bis LDAP schema to make an LDAP search query to the AD DS/Azure AD DS LDAP server (domain controller).
+1. Azure NetApp Files uses an LDAP client configuration to make a connection attempt to the AD DS or Azure AD DS LDAP server that is specified in the [Azure NetApp Files AD configuration](create-active-directory-connections.md).
+1. If the TCP connection over the defined AD DS or Azure AD DS LDAP service port is successful, then the Azure NetApp Files LDAP client attempts to "bind" (sign in) to the AD DS or Azure AD DS LDAP server (domain controller) by using the defined credentials in the LDAP client configuration.
+1. If the bind is successful, then the Azure NetApp Files LDAP client uses the RFC 2307bis LDAP schema to make an LDAP search query to the AD DS or Azure AD DS LDAP server (domain controller).
The following information is passed to the server in the query: * [Base/user DN](configure-ldap-extended-groups.md#ldap-search-scope) (to narrow search scope) * Search scope type (subtree)
The following information is passed to the server in the query:
* UID or username * Requested attributes (`uid`, `uidNumber`, `gidNumber` for users, or `gidNumber` for groups) 1. If the user or group isn't found, the request fails, and access is denied.
-1. If the request is successful, then user and group attributes are [cached for future use](configure-ldap-extended-groups.md#considerations). This operation improves the performance of subsequent LDAP queries associated with the cached user or group attributes. It also reduces the load on the AD DS/Azure AD DS LDAP server.
+1. If the request is successful, then user and group attributes are [cached for future use](configure-ldap-extended-groups.md#considerations). This operation improves the performance of subsequent LDAP queries associated with the cached user or group attributes. It also reduces the load on the AD DS or Azure AD DS LDAP server.
## Considerations
The following information is passed to the server in the query:
* [Configure an NFS client for Azure NetApp Files](configure-nfs-clients.md) * [Troubleshoot volume errors for Azure NetApp Files](troubleshoot-volumes.md) * [Modify Active Directory connections for Azure NetApp Files](modify-active-directory-connections.md)
+* [Understand NFS group memberships and supplemental groups](network-file-system-group-memberships.md)
azure-netapp-files Configure Ldap Over Tls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-ldap-over-tls.md
Disabling LDAP over TLS stops encrypting LDAP queries to Active Directory (LDAP
* [Create an SMB volume for Azure NetApp Files](azure-netapp-files-create-volumes-smb.md) * [Create a dual-protocol volume for Azure NetApp Files](create-volumes-dual-protocol.md) * [Modify Active Directory connections for Azure NetApp Files](modify-active-directory-connections.md)-
+* [Understand the use of LDAP with Azure NetApp Files](lightweight-directory-access-protocol.md)
azure-netapp-files Create Volumes Dual Protocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/create-volumes-dual-protocol.md
You can also use [Azure CLI commands](/cli/azure/feature) `az feature register`
* You don't need a server root CA certificate for creating a dual-protocol volume. It is required only if LDAP over TLS is enabled.
+* To understand Azure NetApp Files dual protocols and related considerations, see the [Dual Protocols section in Understand NAS protocols in Azure NetApp Files](network-attached-storage-protocols.md#dual-protocols).
+ ## Create a dual-protocol volume 1. Click the **Volumes** blade from the Capacity Pools blade. Click **+ Add volume** to create a volume.
Follow instructions in [Configure an NFS client for Azure NetApp Files](configur
## Next steps
+* [Considerations for Azure NetApp Files dual-protocol volumes](network-attached-storage-protocols.md#considerations-for-azure-netapp-files-dual-protocol-volumes)
* [Manage availability zone volume placement for Azure NetApp Files](manage-availability-zone-volume-placement.md) * [Requirements and considerations for large volumes](large-volumes-requirements-considerations.md) * [Configure NFSv4.1 Kerberos encryption](configure-kerberos-encryption.md)
azure-netapp-files Dual Protocol Permission Behaviors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/dual-protocol-permission-behaviors.md
+
+ Title: Understand dual-protocol security style and permission behaviors in Azure NetApp Files | Microsoft Docs
+description: This article helps you understand dual-protocol security style and permission when you use Azure NetApp Files.
+
+documentationcenter: ''
++
+editor: ''
+
+ms.assetid:
++
+ na
+ Last updated : 08/02/2023+++
+# Understand dual-protocol security style and permission behaviors in Azure NetApp Files
+
+SMB and NFS use different permission models for user and group access. As a result, an Azure NetApp Files volume must be configured to honor the desired permission model for protocol access. For NFS-only environments, the decision is simple: use UNIX security style. For SMB-only environments, use NTFS security style.
+
+If NFS and SMB on the same datasets (dual-protocol) are required, then the decision should be made based on two questions:
+
+* What protocol will users manage permissions from the most?
+* What is the desired permission management endpoint? In other words, do users require the ability to manage permissions from NFS clients or Windows clients? Or both?
+
+Volume security styles are effectively permission styles, where the desired style of ACL management is the deciding factor.
+
+> [!NOTE]
+> Security styles are chosen at volume creation. Once the security style has been chosen, it cannot be changed.
+
+## About Azure NetApp Files volume security styles
+
+There are two main choices for volume security styles in Azure NetApp Files:
+
+**UNIX** - The UNIX security style provides UNIX-style permissions, such as basic POSIX mode bits (Owner/Group/Everyone access with standard Read/Write/Execute permissions, for example 0755) and NFSv4.x ACLs. POSIX ACLs aren't supported.
+
+**NTFS** - The NTFS security style provides functionality identical to [standard Windows NTFS permissions](/windows/security/identity-protection/access-control/access-control), with granular user and group entries in ACLs and detailed security/audit permissions.
+
+In a dual-protocol NAS environment, only one security permission style can be active. You should evaluate considerations for each security style before choosing one.
+
+| Security style | Considerations |
+| - | - |
+| UNIX | - Windows clients can set UNIX permission attributes through SMB only to the extent that they map to UNIX attributes (Read/Write/Execute only; no special permissions). <br> - NFSv4.x ACLs don't have GUI management. Management is done only via CLI using [nfs4_getfacl and nfs4_setfacl commands](https://manpages.debian.org/testing/nfs4-acl-tools/https://docsupdatetracker.net/index.html). <br> - If a file or folder has NFSv4.x ACLs, the Windows security properties tab can't display them. <br> |
+| NTFS | - UNIX clients can't set attributes through NFS via commands such as `chown/chmod`. <br> - NFS clients show only approximated NTFS permissions when using `ls` commands. For instance, if a user has a permission in a Windows NTFS ACL that can't be cleanly translated into a POSIX mode bit (such as traverse directory), it's translated into the closest POSIX mode-bit value (such as `1` for execute). <br> |
+
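+For illustration, here's a minimal sketch of managing NFSv4.x ACLs on a UNIX security style volume from an NFS client with the `nfs4-acl-tools` package installed. The mount path `/mnt/anf/project` and the principal `user1@contoso.com` are placeholders:
+
+```
+# Show the current NFSv4.x ACL on a directory
+nfs4_getfacl /mnt/anf/project
+
+# Append an allow ACE that grants user1 read/write/execute access
+nfs4_setfacl -a A::user1@contoso.com:rwx /mnt/anf/project
+
+# Basic POSIX mode bits can still be set with chmod
+chmod 0755 /mnt/anf/project
+```
+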
+The selection of volume security style determines how the name mapping for a user is performed. This operation is the core piece of how dual-protocol volumes maintain predictable permissions regardless of protocol in use.
+
+Use the following table as a decision matrix for selecting the proper volume security styles.
+
+| Security style | Mostly NFS | Mostly SMB | Need for granular security |
+| - | - | - | - |
+| UNIX | X | - | X (using NFSv4.x ACLs) |
+| NTFS | - | X | X |
+
+## How name mapping works in Azure NetApp Files
+
+In Azure NetApp Files, only users are authenticated and mapped. Groups aren't mapped. Instead, group memberships are determined by using the user identity.
+
+When a user attempts to access an Azure NetApp Files volume, that attempt passes along an identity to the service. That identity includes a user name and a unique identifier (a numeric UID for NFSv3, a name string for NFSv4.1, or an SID for SMB). Azure NetApp Files uses that identity to authenticate against a configured name service to verify the identity of the user.
+
+* LDAP search for numeric IDs is used to look up a user name in Active Directory.
+* Name strings use LDAP search to look up a user name and the client and server consult the [configured ID domain for NFSv4.1](azure-netapp-files-configure-nfsv41-domain.md) to ensure the match.
+* Windows users are queried using standard Windows RPC calls to Active Directory.
+* Group memberships are also queried, and everything is added to a credential cache for faster processing on subsequent requests to the volume.
+* Currently, custom local users aren't supported for use with Azure NetApp Files. Only users in Active Directory can be used with dual protocols.
+* Currently, the only local users that can be used with dual-protocol volumes are root and the `nfsnobody` user.
+
+After a user name is authenticated and validated by Azure NetApp Files, the next step for dual-protocol volume authentication is the mapping of user names for UNIX and Windows interoperability.
+
+A volume's security style determines how a name mapping takes place in Azure NetApp Files. Windows and UNIX permission semantics are different. If a name mapping can't be performed, then authentication fails, and access to a volume from a client is denied. A common scenario where this situation occurs is when NFSv3 access is attempted to a volume with NTFS security style. The initial access request from NFSv3 comes to Azure NetApp Files as a numeric UID. If a user named `user1` with a numeric ID of `1001` tries to access the NFSv3 mount, the authentication request arrives as numeric ID `1001`. Azure NetApp Files then takes numeric ID `1001` and attempts to resolve `1001` to a user name. This user name is required for mapping to a valid Windows user, because the NTFS permissions on the volume will contain Windows user names instead of a numeric ID. Azure NetApp Files will use the configured name service server (LDAP) to search for the user name. If the user name can't be found, then authentication fails, and access is denied. This operation is by design in order to prevent unwanted access to files and folders.
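+
+To see what this kind of lookup involves, you can run a comparable query manually from a Linux client with `ldapsearch`. This is only an illustrative sketch; the server name, base DN, and UID value are placeholders, and Azure NetApp Files performs the equivalent lookup internally:
+
+```
+# Resolve numeric UID 1001 to a user name in Active Directory LDAP
+ldapsearch -H ldap://dc1.contoso.com -Y GSSAPI \
+  -b "DC=contoso,DC=com" "(uidNumber=1001)" sAMAccountName uid uidNumber gidNumber
+```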
+
+## Name mapping based on security style
+
+The direction in which the name mapping occurs in Azure NetApp Files (Windows to UNIX, or UNIX to Windows) depends not only on the protocol being used but also the security style of a volume. A Windows client always requires a Windows-to-UNIX name mapping to allow access, but it doesn't always need a matching UNIX user name. If no valid UNIX user name exists in the configured name service server, Azure NetApp Files provides a fallback default UNIX user with the numeric UID of `65534` to allow initial authentication for SMB connections. After that, file and folder permissions will control access. Because `65534` generally corresponds with the `nfsnobody` user, access is limited in most cases. Conversely, an NFS client only needs to use a UNIX-to-Windows name mapping if the NTFS security style is in use. There's no default Windows user in Azure NetApp Files. As such, if a valid Windows user that matches the requesting name can't be found, access will be denied.
+
+The following table breaks down the different name mapping permutations and how they behave depending on protocol in use.
+
+| Protocol | Security style | Name mapping direction | Permissions applied |
+| - | - | - | - |
+| SMB | UNIX | Windows to UNIX | UNIX <br> (mode-bits or NFSv4.x ACLs) |
+| SMB | NTFS | Windows to UNIX | NTFS ACLs <br> (based on Windows SID accessing share) |
+| NFSv3 | UNIX | None | UNIX <br> (mode-bits or NFSv4.x ACLs*) |
+| NFSv4.x | UNIX | Numeric ID to UNIX user name | UNIX <br> (mode-bits or NFSv4.x ACLs) |
+| NFS3/4.x | NTFS | UNIX to Windows | NTFS ACLs <br> (based on mapped Windows user SID) |
+
+> [!NOTE]
+> NFSv4.x ACLs can be applied using an NFSv4.x administrative client and honored by NFSv3 clients by [switching between protocols](convert-nfsv3-nfsv41.md).
+
+Name-mapping rules in Azure NetApp Files can currently be controlled only by using LDAP. There is no option to create explicit name mapping rules within the service.
+
+## Name services with dual-protocol volumes
+
+Regardless of what NAS protocol is used, dual-protocol volumes use name-mapping concepts to handle permissions properly. As such, name services play a critical role in maintaining functionality in environments that use both SMB and NFS for access to volumes.
+
+Name services act as identity sources for users and groups accessing NAS volumes. These name services include Active Directory, which can act as a source for both Windows and UNIX users and groups using both standard domain services and LDAP functionality.
+
+Name services aren't a hard requirement, but they're highly recommended for Azure NetApp Files dual-protocol volumes. The service has no concept of custom local users and groups. As such, to have proper authentication and accurate user and group owner information across protocols, LDAP is a necessity. If you have only a handful of users and you don't need to populate accurate user and group identity information, then consider using the [Allow local NFS users with LDAP to access a dual-protocol volume functionality](create-volumes-dual-protocol.md#allow-local-nfs-users-with-ldap-to-access-a-dual-protocol-volume). Keep in mind that enabling this functionality disables the [extended group functionality](configure-ldap-extended-groups.md#considerations).
+
+### When clients, name services, and storage reside in different areas
+
+In some cases, NAS clients might live in a segmented network with multiple interfaces that have isolated connections to the storage and name services.
+
+One such example is if your storage resides in Azure NetApp Files, while your NAS clients and domain services all reside on-premises (such as a [hub-spoke architecture in Azure](/azure/architecture/reference-architectures/hybrid-networking/hub-spoke)). In those scenarios, you would need to provide network access to both the NAS clients and the name services.
+
+The following figure shows an example of that kind of configuration.
++
+## Next steps
+
+* [Understand the use of LDAP with Azure NetApp Files](lightweight-directory-access-protocol.md)
+* [Create a dual-protocol volume for Azure NetApp Files](create-volumes-dual-protocol.md)
+* [Azure NetApp Files NFS FAQ](faq-nfs.md)
+* [Azure NetApp Files SMB FAQ](faq-smb.md)
azure-netapp-files Lightweight Directory Access Protocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/lightweight-directory-access-protocol.md
+
+ Title: Understand the use of LDAP with Azure NetApp Files | Microsoft Learn
+description: This article helps you understand how Azure NetApp Files uses lightweight directory access protocol (LDAP).
+
+documentationcenter: ''
++
+editor: ''
+
+ms.assetid:
++
+ na
+ Last updated : 08/05/2023+++
+# Understand the use of LDAP with Azure NetApp Files
+
+Lightweight Directory Access Protocol (LDAP) is a standard directory access protocol that was developed by an international committee called the Internet Engineering Task Force (IETF). LDAP is intended to provide a general-purpose, network-based directory service that you can use across heterogeneous platforms to locate network objects.
+
+LDAP models define how to communicate with the LDAP directory store, how to find an object in the directory, how to describe the objects in the store, and the security that is used to access the directory. LDAP allows customization and extension of the objects that are described in the store. Therefore, you can use an LDAP store to hold many types of diverse information. Many of the initial LDAP deployments focused on using LDAP as a directory store for applications such as email and web applications, and for storing employee information. Many companies are replacing or have replaced Network Information Service (NIS) with LDAP as a network directory store.
+
+An LDAP server provides UNIX user and group identities for use with NAS volumes. In Azure NetApp Files, Active Directory is the only currently supported LDAP server that can be used. This support includes both Active Directory Domain Services (AD DS) and Azure Active Directory Domain Services (Azure AD DS).
+
+LDAP requests can be broken down into two main operations.
+
+* **LDAP binds** are logins to the LDAP server from an LDAP client. The bind is used to authenticate to the LDAP server with read-only access to perform LDAP lookups. Azure NetApp Files acts as an LDAP client.
+* **LDAP lookups** are used to query the directory for user and group information, such as names, numeric IDs, home directory paths, login shell paths, group memberships and more.
+
+LDAP can store the following information that is used in dual-protocol NAS access:
+
+* User names
+* Group names
+* Numeric user IDs (UIDs) and group IDs (GIDs)
+* Home directories
+* Login shell
+* Netgroups, DNS names, and IP addresses
+* Group membership
+
+Currently, Azure NetApp Files uses LDAP only for user and group information, not netgroup or host information.
+
+LDAP offers various benefits for your UNIX users and groups as an identity source.
+
+* **LDAP is future-proof.**
+ As more NFS clients add support for NFSv4.x, NFSv4.x ID domains that contain an up-to-date list of users and groups accessible from clients and storage are needed to ensure optimal security and guaranteed access when access is defined. Having an identity-management server that provides one-to-one name mappings for SMB and NFS users alike greatly simplifies life for storage administrators, not just in the present, but for years to come.
+* **LDAP is scalable.**
+ LDAP servers offer the ability to contain millions of user and group objects, and with Microsoft Active Directory, multiple servers can be used to replicate across multiple sites for both performance and resiliency scale.
+* **LDAP is secure.**
+ LDAP offers security in the form of how a storage system can connect to the LDAP server to make requests for user information. LDAP servers offer the following bind levels:
+ * Anonymous (disabled by default in Microsoft Active Directory; not supported in Azure NetApp Files)
+ * Simple password (plain text passwords; not supported in Azure NetApp Files)
+  * [Simple Authentication and Security Layer (SASL)](https://www.iana.org/assignments/sasl-mechanisms/sasl-mechanisms.xhtml#:~:text=The%20Simple%20Authentication%20and%20Security,support%20to%20connection%2Dbased%20protocols.): encrypted bind methods, including TLS, SSL, Kerberos, and so on. Azure NetApp Files supports LDAP over TLS, LDAP signing (using Kerberos), and LDAP over SSL.
+* **LDAP is robust.**
+  NIS, NIS+, and local files offer basic information such as UID, GID, password, home directories, and so on. However, LDAP offers those attributes and many more. The additional attributes that LDAP uses make dual-protocol management much more integrated with LDAP versus NIS. Only LDAP is supported as an external name service for identity management with Azure NetApp Files.
+* **Microsoft Active Directory is built on LDAP.**
+  By default, Microsoft Active Directory uses an LDAP back-end for its user and group entries. However, this LDAP database doesn't contain UNIX-style attributes. These attributes are added when the LDAP schema is extended through Identity Management for UNIX (Windows 2003 R2 and later), Service for UNIX (Windows 2003 and earlier), or third-party LDAP tools such as *Centrify*. Because Microsoft uses LDAP as a back-end, LDAP is a natural fit for environments that choose to use dual-protocol volumes in Azure NetApp Files.
+ > [!NOTE]
+ > Azure NetApp Files currently only supports native Microsoft Active Directory for LDAP services.
+
+## LDAP basics in Azure NetApp Files
+
+The following section discusses the basics of LDAP as it pertains to Azure NetApp Files.
+
+* LDAP information is stored in flat files in an LDAP server and is organized by way of an LDAP schema. You should configure LDAP clients in a way that coordinates their requests and lookups with the schema on the LDAP server.
+* LDAP clients initiate queries by way of an LDAP bind, which is essentially a login to the LDAP server using an account that has read access to the LDAP schema. The LDAP bind configuration on the clients is configured to use the security mechanism that is defined by the LDAP server. Sometimes, binds are user name and password exchanges in plain text (simple). In other cases, binds are secured through Simple Authentication and Security Layer methods (`sasl`) such as Kerberos or LDAP over TLS. Azure NetApp Files uses the SMB machine account to bind using SASL authentication for the best possible security.
+* User and group information that is stored in LDAP is queried by clients by using standard LDAP search requests as defined in [RFC 2307](https://datatracker.ietf.org/doc/html/rfc2307). In addition, newer mechanisms, such as [RFC 2307bis](https://datatracker.ietf.org/doc/html/draft-howard-rfc2307bis-02), allow more streamlined user and group lookups. Azure NetApp Files uses a form of RFC 2307bis for its schema lookups in Windows Active Directory.
+* LDAP servers can store user and group information and netgroup. However, Azure NetApp Files currently can't use netgroup functionality in LDAP on Windows Active Directory.
+* LDAP in Azure NetApp Files operates on port 389. This port currently cannot be modified to use a custom port, such as port 636 (LDAP over SSL) or port 3268 (Active Directory Global Catalog searches).
+* Encrypted LDAP communications can be achieved using [LDAP over TLS](configure-ldap-over-tls.md#considerations) (which operates over port 389) or LDAP signing, both of which can be configured on the Active Directory connection.
+* Azure NetApp Files supports LDAP queries that take no longer than 3 seconds to complete. If the LDAP server has many objects, that timeout may be exceeded, and authentication requests can fail. In those cases, consider specifying an [LDAP search scope](https://ldap.com/the-ldap-search-operation/) to filter queries for better performance.
+* Azure NetApp Files also supports specifying preferred LDAP servers to help speed up requests. Use this setting if you want to ensure the LDAP server closest to your Azure NetApp Files region is being used.
+* If no preferred LDAP server is set, the Active Directory domain name is queried in DNS for LDAP service records to populate the list of LDAP servers available for your region located within that SRV record. You can manually query LDAP service records in DNS from a client using [`nslookup`](/troubleshoot/windows-server/networking/verify-srv-dns-records-have-been-created) or [`dig`](https://www.cyberciti.biz/faq/linux-unix-dig-command-examples-usage-syntax/) commands.
+
+ For example:
+ ```
+ C:\>nslookup
+ Default Server: localhost
+ Address: ::1
+
+ > set type=SRV
+ > _ldap._tcp.contoso.com.
+
+ Server: localhost
+ Address: ::1
+
+ _ldap._tcp.contoso.com SRV service location:
+ priority = 0
+ weight = 0
+ port = 389
+ svr hostname = oneway.contoso.com
+ _ldap._tcp.contoso.com SRV service location:
+ priority = 0
+ weight = 100
+ port = 389
+ svr hostname = ONEWAY.Contoso.com
+ _ldap._tcp.contoso.com SRV service location:
+ priority = 0
+ weight = 100
+ port = 389
+ svr hostname = oneway.contoso.com
+ _ldap._tcp.contoso.com SRV service location:
+ priority = 0
+ weight = 100
+ port = 389
+ svr hostname = parisi-2019dc.contoso.com
+ _ldap._tcp.contoso.com SRV service location:
+ priority = 0
+ weight = 100
+ port = 389
+ svr hostname = contoso.com
+ oneway.contoso.com internet address = x.x.x.x
+ ONEWAY.Contoso.com internet address = x.x.x.x
+ oneway.contoso.com internet address = x.x.x.x
+ parisi-2019dc.contoso.com internet address = y.y.y.y
+ contoso.com internet address = x.x.x.x
+ contoso.com internet address = y.y.y.y
+ ```
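+
+  An equivalent lookup with `dig` might look like the following sketch (the domain `contoso.com` is a placeholder):
+
+  ```
+  dig +short SRV _ldap._tcp.contoso.com
+  ```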
+* LDAP servers can also be used to perform custom name mapping for users. For more information, see [Custom name mapping using LDAP](#custom-name-mapping-using-ldap).
++
+## Name mapping types
+
+Name mapping rules can be broken down into two main types: *symmetric* and *asymmetric*.
+
+* *Symmetric* name mapping is implicit name mapping between UNIX and Windows users who use the same user name. For example, Windows user `CONTOSO\user1` maps to UNIX user `user1`.
+* *Asymmetric* name mapping is name mapping between UNIX and Windows users who use **different** user names. For example, Windows user `CONTOSO\user1` maps to UNIX user `user2`.
+
+By default, Azure NetApp Files uses symmetric name mapping rules. If asymmetric name mapping rules are required, consider configuring the LDAP user objects to use them.
+
+## Custom name mapping using LDAP
+
+LDAP can be a name mapping resource, if the LDAP schema attributes on the LDAP server have been populated. For example, to map UNIX users to corresponding Windows user names that don't match one-to-one (that is, *asymmetric*), you can specify a different value for `uid` in the user object than what is configured for the Windows user name.
+
+In the following example, a user has a Windows name of `asymmetric` and needs to map to a UNIX identity of `UNIXuser`. To achieve that in Azure NetApp Files, open an instance of the [Active Directory Users and Computers MMC](/troubleshoot/windows-server/system-management-components/remote-server-administration-tools). Then, find the desired user and open the properties box. (Doing so requires [enabling the Attribute Editor](http://directoryadmin.blogspot.com/2019/02/attribute-editor-tab-missing-in-active.html)). Navigate to the Attribute Editor tab and find the UID field, then populate the UID field with the desired UNIX user name `UNIXuser` and click **Add** and **OK** to confirm.
++
+After this action is done, files written from Windows SMB shares by the Windows user `asymmetric` will be owned by `UNIXuser` from the NFS side.
+
+The following example shows Windows SMB owner `asymmetric`:
++
+The following example shows NFS owner `UNIXuser` (mapped from Windows user `asymmetric` using LDAP):
+
+```
+root@ubuntu:~# su UNIXuser
+UNIXuser@ubuntu:/root$ cd /mnt
+UNIXuser@ubuntu:/mnt$ ls -la
+total 8
+drwxrwxrwx 2 root root 4096 Jul 3 20:09 .
+drwxr-xr-x 21 root root 4096 Jul 3 20:12 ..
+-rwxrwxrwx 1 UNIXuser group1 19 Jul 3 20:10 asymmetric-user-file.txt
+```
+
+## LDAP schemas
+
+LDAP schemas are how LDAP servers organize and collect information. LDAP server schemas generally follow the same standards, but different LDAP server providers might have variations on how schemas are presented.
+
+When Azure NetApp Files queries LDAP, schemas are used to help speed up name lookups because they enable the use of specific attributes to find information about a user, such as the UID. The schema attributes must exist in the LDAP server for Azure NetApp Files to be able to find the entry. Otherwise, LDAP queries might return no data and authentication requests might fail.
+
+For example, if a UID number (such as root=0) must be queried by Azure NetApp Files, then the RFC 2307 `uidNumber` attribute is used. If no UID number `0` exists in LDAP in the `uidNumber` field, then the lookup request fails.
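+
+For instance, a hedged sketch of checking whether a given UID number exists in the directory (the server and base DN are placeholders):
+
+```
+ldapsearch -H ldap://dc1.contoso.com -Y GSSAPI -b "DC=contoso,DC=com" "(uidNumber=0)" uid uidNumber
+```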
+
+The schema type currently used by Azure NetApp Files is a form of schema based on RFC 2307bis and can't be modified.
+
+[RFC 2307bis](https://tools.ietf.org/html/draft-howard-rfc2307bis-02) is an extension of RFC 2307 and adds support for `posixGroup`, which enables dynamic lookups for auxiliary groups by using the `uniqueMember` attribute, rather than by using the `memberUid` attribute in the LDAP schema. Instead of using just the name of the user, this attribute contains the full distinguished name (DN) of another object in the LDAP database. Therefore, groups can have other groups as members, which allows nesting of groups. Support for RFC 2307bis also adds support for the object class `groupOfUniqueNames`.
+
+This RFC extension fits nicely into how Microsoft Active Directory manages users and groups through the usual management tools. This is because when you add a Windows user to a group (and if that group has a valid numeric GID) using the standard Windows management methods, LDAP lookups will pull the necessary supplemental group information from the usual Windows attribute and find the numeric GIDs automatically.
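+
+As a simplified illustration, an Active Directory group entry that satisfies these lookups might look like the following LDIF sketch (all names and values are placeholders):
+
+```
+dn: CN=unixadmins,CN=Users,DC=contoso,DC=com
+objectClass: group
+cn: unixadmins
+gidNumber: 1101
+member: CN=User1,CN=Users,DC=contoso,DC=com
+member: CN=User2,CN=Users,DC=contoso,DC=com
+```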
+
+## Next steps
+
+* [Configure AD DS LDAP over TLS for Azure NetApp Files](configure-ldap-over-tls.md)
+* [Understand NFS group memberships and supplemental groups](network-file-system-group-memberships.md)
+* [Azure NetApp Files NFS FAQ](faq-nfs.md)
+* [Azure NetApp Files SMB FAQ](faq-smb.md)
azure-netapp-files Network Attached Storage Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/network-attached-storage-concept.md
Title: Understand NAS concepts in Azure NetApp Files
+ Title: Understand NAS concepts in Azure NetApp Files | Microsoft Docs
description: This article covers important information about NAS volumes when using Azure NetApp Files. documentationcenter: ''
azure-netapp-files Network Attached Storage Protocols https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/network-attached-storage-protocols.md
Title: Understand NAS protocols in Azure NetApp Files
-description: Learn how SMB and NFS operate in Azure NetApp Files.
+ Title: Understand NAS protocols in Azure NetApp Files | Microsoft Learn
+description: Learn how SMB, NFS, and dual protocols operate in Azure NetApp Files.
documentationcenter: ''-+ editor: ''
na Previously updated : 06/26/2023 Last updated : 08/02/2023
NAS protocols are how conversations happen between clients and servers. NFS and SMB are the NAS protocols used in Azure NetApp Files. Each offers their own distinct methods for communication, but at their root, they operate mostly in the same way. * Both serve a single dataset to many disparate networked attached clients.
-* Both can leverage encrypted authentication methods for sharing data.
+* Both can use encrypted authentication methods for sharing data.
* Both can be gated with share and file permissions. * Both can encrypt data in-flight. * Both can use multiple connections to help parallelize performance. ## Network File System (NFS)
-NFS is primarily used with Linux/UNIX based clients such as Red Hat, SUSE, Ubuntu, AIX, Solaris, Apple OS, etc. and Azure NetApp Files supports any NFS client that operates in the RFC standards. Windows can also use NFS for access, but it does not operate using Request for Comments (RFC) standards.
+NFS is primarily used with Linux/UNIX based clients such as Red Hat, SUSE, Ubuntu, AIX, Solaris, and Apple OS. Azure NetApp Files supports any NFS client that operates in the RFC standards. Windows can also use NFS for access, but it doesn't operate using Request for Comments (RFC) standards.
RFC standards for NFS protocols can be found here:
RFC standards for NFS protocols can be found here:
### NFSv3 NFSv3 is a basic offering of the protocol and has the following key attributes:
-* NFSv3 is stateless, meaning that the NFS server does not keep track of the states of connections (including locks).
-* Locking is handled outside of the NFS protocol, using Network Lock Manager (NLM). Because locks are not integrated into the protocol, stale locks can sometimes occur.
-* Since NFSv3 is stateless, performance with NFSv3 can be substantially better in some workloads (particularly workloads with high metadata operations such as OPEN, CLOSE, SETATTR, GETATTR), as there is less general work that needs to be done to process requests on the server and client.
+* NFSv3 is stateless, meaning that the NFS server doesn't keep track of the states of connections (including locks).
+* Locking is handled outside of the NFS protocol, using Network Lock Manager (NLM). Because locks aren't integrated into the protocol, stale locks can sometimes occur.
+* Because NFSv3 is stateless, performance with NFSv3 can be substantially better in some workloads, particularly in workloads with high-metadata operations such as OPEN, CLOSE, SETATTR, and GETATTR. This is the case because there's less general work that needs to be done to process requests on the server and client.
* NFSv3 uses a basic file permission model where only the owner of the file, a group and everyone else can be assigned a combination of read/write/execute permissions.
-* NFSv3 can use NFSv4.x ACLs, but an NFSv4.x management client would be required to configure and manage the ACLs. Azure NetApp Files does not support the use of nonstandard POSIX draft ACLs.
+* NFSv3 can use NFSv4.x ACLs, but an NFSv4.x management client would be required to configure and manage the ACLs. Azure NetApp Files doesn't support the use of nonstandard POSIX draft ACLs.
* NFSv3 also requires use of other ancillary protocols for regular operations such as port discovery, mounting, locking, status monitoring and quotas. Each ancillary protocol uses a unique network port, which means NFSv3 operations require more exposure through firewalls with well-known port numbers. * Azure NetApp Files uses the following port numbers for NFSv3 operations. It's not possible to change these port numbers: * Portmapper (111)
NFSv3 is a basic offering of the protocol and has the following key attributes:
* NLM (4045) * NSM (4046) * Rquota (4049)
-* NFSv3 can use security enhancements such as Kerberos, but Kerberos only affects the NFS portion of the packets; ancillary protocols (such as NLM, portmapper, mount) are not included in the Kerberos conversation.
+* NFSv3 can use security enhancements such as Kerberos, but Kerberos only affects the NFS portion of the packets; ancillary protocols (such as NLM, portmapper, mount) aren't included in the Kerberos conversation.
* Azure NetApp Files only supports NFSv4.1 Kerberos encryption
-* NFSv3 uses numeric IDs for its user and group authentication. Usernames and group names are not required for communication or permissions, which can make spoofing a user easier, but configuration and management are simpler.
+* NFSv3 uses numeric IDs for its user and group authentication. Usernames and group names aren't required for communication or permissions, which can make spoofing a user easier, but configuration and management are simpler.
* NFSv3 can use LDAP for user and group lookups. ### NFSv4.x
-NFSv4.x refers to all NFS versions/minor versions that are under NFSv4. This includes NFSv4.0, NFSv4.1 and NFSv4.2. Azure NetApp Files currently only supports NFSv4.1.
+*NFSv4.x* refers to all NFS versions or minor versions that are under NFSv4, including NFSv4.0, NFSv4.1, and NFSv4.2. Azure NetApp Files currently supports NFSv4.1 only.
NFSv4.x has the following characteristics: * NFSv4.x is a stateful protocol, which means that the client and server keep track of the states of the NFS connections, including lock states. The NFS mount uses a concept known as a "state ID" to keep track of the connections.
-* Locking is integrated into the NFS protocol and does not require ancillary locking protocols to keep track of NFS locks. Instead, locks are granted on a lease basis and will expire after a certain period of time if a client/server connection is lost, thus returning the lock back to the system for use with other NFS clients.
+* Locking is integrated into the NFS protocol and doesn't require ancillary locking protocols to keep track of NFS locks. Instead, locks are granted on a lease basis. They expire after a certain duration if a client or server connection is lost, thus returning the lock back to the system for use with other NFS clients.
* The statefulness of NFSv4.x does contain some drawbacks, such as potential disruptions during network outages or storage failovers, and performance overhead in certain workload types (such as high metadata workloads). * NFSv4.x provides many significant advantages over NFSv3, including: * Better locking concepts (lease-based locking)
NFSv4.x has the following characteristics:
* Compound NFS operations (multiple commands in a single packet request to reduce network chatter) * TCP-only * NFSv4.x can use a more robust file permission model that is similar to Windows NTFS permissions. These granular ACLs can be applied to users or groups and allow for permissions to be set on a wider range of operations than basic read/write/execute operations. NFSv4.x can also use the standard POSIX mode bits that NFSv3 employs.
-* Since NFSv4.x does not use ancillary protocols, Kerberos is applied to the entire NFS conversation when in use.
-* NFSv4.x uses a combination of user/group names and domain strings to verify user and group information. The client and server must agree on the domain strings for proper user and group authentication to occur. If the domain strings do not match, then the NFS user or group gets squashed to the specified user in the /etc/idmapd.conf file on the NFS client (for example, nobody).
-* While NFSv4.x does default to using domain strings, it is possible to configure the client and server to fall back on the classic numeric IDs seen in NFSv3 when AUTH_SYS is in use.
-* Because NFSv4.x has such deep integration with user and group name strings and because the server and clients must agree on these users/groups, using a name service server for user authentication such as LDAP is recommended on NFS clients and servers.
+* Since NFSv4.x doesn't use ancillary protocols, Kerberos is applied to the entire NFS conversation when in use.
+* NFSv4.x uses a combination of user/group names and domain strings to verify user and group information. The client and server must agree on the domain strings for proper user and group authentication to occur. If the domain strings don't match, then the NFS user or group gets squashed to the specified user in the `/etc/idmapd.conf` file on the NFS client (for example, `nobody`). A minimal example of the relevant `idmapd.conf` settings follows this list.
+* While NFSv4.x does default to using domain strings, it's possible to configure the client and server to fall back on the classic numeric IDs seen in NFSv3 when AUTH_SYS is in use.
+* NFSv4.x has deep integration with user and group name strings, and the server and clients must agree on these users and groups. As such, consider using a name service server for user authentication such as LDAP on NFS clients and servers.
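+
+A minimal sketch of the relevant `/etc/idmapd.conf` settings on an NFSv4.1 client follows. The domain value is a placeholder and must match the NFSv4.1 ID domain configured for the environment:
+
+```
+[General]
+Domain = contoso.com
+
+[Mapping]
+Nobody-User = nobody
+Nobody-Group = nogroup
+```
+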
For frequently asked questions regarding NFS in Azure NetApp Files, see the [Azure NetApp Files NFS FAQ](faq-nfs.md). ## Server Message Block (SMB)
-SMB is primarily used with Windows clients for NAS functionality. However, it can also be used on Linux-based operating systems such as AppleOS, RedHat, etc. This deployment is generally accomplished using an application called Samba. Azure NetApp Files has official support for SMB using Windows and macOS. SMB/Samba on Linux operating systems can work with Azure NetApp Files, but there is no official support.
+SMB is primarily used with Windows clients for NAS functionality. However, it can also be used on Linux-based operating systems such as AppleOS, RedHat, etc. This deployment is accomplished using an application called Samba. Azure NetApp Files has official support for SMB using Windows and macOS. SMB/Samba on Linux operating systems can work with Azure NetApp Files, but there's no official support.
Azure NetApp Files supports only SMB 2.1 and SMB 3.1 versions. SMB has the following characteristics: * SMB is a stateful protocol: the clients and server maintain a "state" for SMB share connections for better security and locking.
-* Locking in SMB is considered mandatory. Once a file is locked, no other client can write to that file until the lock is released.
-* SMBv2.x and later leverage compound calls to perform operations.
+* Locking in SMB is considered mandatory. When a file is locked, no other client can write to that file until the lock is released.
+* SMBv2.x and later use compound calls to perform operations.
* SMB supports full Kerberos integration. With the way Windows clients are configured, Kerberos is often in use without end users ever knowing.
-* When Kerberos is unable to be used for authentication, Windows NT LAN Manager (NTLM) may be used as a fallback. If NTLM is disabled in the Active Directory environment, then authentication requests that cannot use Kerberos fail.
+* When Kerberos is unable to be used for authentication, Windows NT LAN Manager (NTLM) may be used as a fallback. If NTLM is disabled in the Active Directory environment, then authentication requests that can't use Kerberos fail.
* SMBv3.0 and later supports [end-to-end encryption](azure-netapp-files-create-volumes-smb.md) for SMB shares. * SMBv3.x supports [multichannel](../storage/files/storage-files-smb-multichannel-performance.md) for performance gains in certain workloads. * SMB uses user and group names (via SID translation) for authentication. User and group information is provided by an Active Directory domain controller.
SMB has the following characteristics:
For frequently asked questions regarding SMB in Azure NetApp Files, see the [Azure NetApp Files SMB FAQ](faq-smb.md).
+## Dual protocols
+
+Some organizations have pure Windows or pure UNIX environments (homogenous) in which all data is accessed using only one of the following approaches:
+
+* SMB and [NTFS](/windows-server/storage/file-server/ntfs-overview) file security
+* NFS and UNIX file security - mode bits or [NFSv4.x access control lists (ACLs)](https://wiki.linux-nfs.org/wiki/index.php/ACLs)
+
+However, many sites must enable data sets to be accessed from both Windows and UNIX clients (heterogenous). For environments with these requirements, Azure NetApp Files has native dual-protocol NAS support. After the user is authenticated on the network and has both appropriate share or export permissions and the necessary file-level permissions, the user can access the data from UNIX hosts using NFS or from Windows hosts using SMB.
+
+### Reasons for using dual-protocol volumes
+
+Using dual-protocol volumes with Azure NetApp Files delivers several distinct advantages. When data sets can be seamlessly and simultaneously accessed by clients using different NAS protocols, the following benefits can be achieved:
+
+* Reduce the overall storage administrator management tasks.
+* Require only a single copy of data to be stored for NAS access from multiple client types.
+* Protocol-agnostic NAS allows storage administrators to control the style of ACL and access control being presented to end users.
+* Centralize identity management operations in a NAS environment.
+
+### Common considerations with dual-protocol environments
+
+Dual-protocol NAS access is desirable to many organizations for its flexibility. However, there's a perception of difficulty, because sharing data across protocols brings its own set of considerations. These considerations include, but aren't limited to:
+
+* Knowledge across multiple protocols, operating systems, and storage systems.
+* Working knowledge of name service servers, such as DNS, LDAP, and so on.
+
+In addition, external factors can come into play, such as:
+
+* Dealing with multiple departments and IT groups (such as Windows groups and UNIX groups)
+* Company acquisitions
+* Domain consolidations
+* Reorganizations
+
+Despite these considerations, dual-protocol NAS setup, configuration, and access can be simple and seamlessly integrated into any environment.
+
+### How Azure NetApp Files simplifies dual-protocol use
+
+Azure NetApp Files consolidates the infrastructure required for successful dual-protocol NAS environments into a single management plane, including storage and identity management services.
+
+Dual-protocol configuration is straightforward, and most of the tasks are shielded by the Azure NetApp Files resource management framework to simplify operations for cloud operators.
+
+After an Active Directory connection is established with Azure NetApp Files, dual-protocol volumes can use the connection to handle both the Windows and UNIX identity management needed for proper user and group authentication with Azure NetApp Files volumes without extra configuration steps outside of the normal user and group management within the Active Directory or LDAP services.
+
+By removing the extra storage-centric steps for dual-protocol configurations, Azure NetApp Files streamlines the overall dual-protocol deployment for organizations looking to move to Azure.
+
+### How Azure NetApp Files dual-protocol volumes work
+
+At a high level, Azure NetApp Files dual-protocol volumes use a combination of name mapping and permissions styles to deliver consistent data access regardless of the protocol in use. Whether a file is accessed over NFS or SMB, users with access to the data can reach it, and users without access are denied.
+
+When a NAS client requests access to a dual-protocol volume in Azure NetApp Files, the following operations occur to provide a transparent experience to the end user.
+
+1. A NAS client makes a NAS connection to the Azure NetApp Files dual-protocol volume.
+2. The NAS client passes user identity information to Azure NetApp Files.
+3. Azure NetApp Files checks to make sure the NAS client/user has access to the NAS share.
+4. Azure NetApp Files takes that user and maps it to a valid user found in name services.
+5. Azure NetApp Files compares that user against the file-level permissions in the system.
+6. File permissions control the level of access the user has.
+
+In the following illustration, `user1` authenticates to Azure NetApp Files to access a dual-protocol volume through either SMB or NFS. Azure NetApp Files finds the user's Windows and UNIX information in Azure Active Directory and then maps the user's Windows and UNIX identities one-to-one. The user is verified as `user1` and gets `user1`'s access credentials.
+
+In this instance, `user1` gets full control on their own folder (`user1-dir`) and no access to the `HR` folder. This setting is based on the security ACLs specified in the file system, and `user1` will get the expected access regardless of which protocol they're accessing the volumes from.
++
+### Considerations for Azure NetApp Files dual-protocol volumes
+
+When you use Azure NetApp Files volumes for access to both SMB and NFS, some considerations apply:
+
+* You need an Active Directory connection. As such, you need to meet the [Requirements for Active Directory connections](create-active-directory-connections.md#requirements-for-active-directory-connections).
+* Dual-protocol volumes require a reverse lookup zone in DNS with an associated pointer (PTR) record of the AD host machine to prevent dual-protocol volume creation failures.
+* Your NFS client and associated packages (such as `nfs-utils`) should be up to date for the best security, reliability and feature support.
+* Dual-protocol volumes support both Active Directory Domain Services (AD DS) and Azure Active Directory Domain Services (Azure AD DS, or AADDS).
+* Dual-protocol volumes don't support the use of LDAP over TLS with AADDS. See [LDAP over TLS considerations](configure-ldap-over-tls.md#considerations).
+* Supported NFS versions include: NFSv3 and NFSv4.1.
+* NFSv4.1 features such as parallel network file system (pNFS), session trunking, and referrals aren't currently supported with Azure NetApp Files volumes.
+* [Windows extended attributes `set`/`get`](/windows/win32/api/fileapi/ns-fileapi-createfile2_extended_parameters) aren't supported in dual-protocol volumes.
+* See additional [considerations for creating a dual-protocol volume for Azure NetApp Files](create-volumes-dual-protocol.md#considerations).
+<!-- planning to consolidate and move considerations from the Create article to this subsection. -->
+ ## Next steps
-* [Azure NetApp Files NFS FAQ](faq-nfs.md)
-* [Azure NetApp Files SMB FAQ](faq-smb.md)
+
+* [Understand dual-protocol security style and permission behaviors in Azure NetApp Files](dual-protocol-permission-behaviors.md)
+* [Understand the use of LDAP with Azure NetApp Files](lightweight-directory-access-protocol.md)
+* [Understand NFS group memberships and supplemental groups](network-file-system-group-memberships.md)
* [Understand file locking and lock types in Azure NetApp Files](understand-file-locks.md)
+* [Create an NFS volume for Azure NetApp Files](azure-netapp-files-create-volumes.md)
+* [Create an SMB volume for Azure NetApp Files](azure-netapp-files-create-volumes-smb.md)
+* [Create a dual-protocol volume for Azure NetApp Files](create-volumes-dual-protocol.md)
+* [Azure NetApp Files NFS FAQ](faq-nfs.md)
+* [Azure NetApp Files SMB FAQ](faq-smb.md)
azure-netapp-files Network File System Group Memberships https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/network-file-system-group-memberships.md
+
+ Title: Understand NFS group memberships and supplemental groups for Azure NetApp Files | Microsoft Learn
+description: This article helps you understand NFS group memberships and supplemental groups as they apply to Azure NetApp Files.
+
+documentationcenter: ''
++
+editor: ''
+
+ms.assetid:
++
+ na
+ Last updated : 08/02/2023+++
+# Understand NFS group memberships and supplemental groups
+
+You can use LDAP to control group membership and to return supplemental groups for NFS users. This behavior is controlled through schema attributes in the LDAP server.
+
+## Primary GID
+
+For Azure NetApp Files to be able to authenticate a user properly, LDAP users must always have a primary GID defined. The user's primary GID is defined by the `gidNumber` schema attribute in the LDAP server.
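+
+A quick, hedged way to confirm the primary GID that an LDAP-enabled Linux client resolves for a user (the user name is a placeholder):
+
+```
+# The fourth colon-separated field is the primary GID, taken from the LDAP gidNumber attribute
+getent passwd user1
+# Example output: user1:*:1001:1101:user1:/home/user1:/bin/bash
+```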
+
+## Secondary, supplemental, and auxiliary GIDs
+
+Secondary, supplemental, and auxiliary groups are groups that a user is a member of outside of their primary GID. In Azure NetApp Files, LDAP is implemented using Microsoft Active Directory, and supplemental groups are controlled using standard Windows group membership logic.
+
+When a user is added to a Windows group, the LDAP schema attribute `Member` is populated on the group with the distinguished name (DN) of the user that is a member of that group. When a user's group membership is queried by Azure NetApp Files, an LDAP search is done for the user's DN on all groups' `Member` attribute. All groups with a UNIX `gidNumber` and the user's DN are returned in the search and populated as the user's supplemental group memberships.
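+
+This kind of membership lookup can be approximated manually with `ldapsearch` as a sketch; the server name and DN values are placeholders:
+
+```
+# Find all groups whose Member attribute contains User1's DN, returning names and UNIX GIDs
+ldapsearch -H ldap://dc1.contoso.com -Y GSSAPI -b "DC=contoso,DC=com" \
+  "(&(objectClass=group)(member=CN=User1,CN=Users,DC=contoso,DC=com))" cn gidNumber
+```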
+
+The following example shows the output from Active Directory with a user's DN populated in the `Member` field of a group and a subsequent LDAP search done using [`ldp.exe`](/previous-versions/windows/it-pro/windows-server-2012-r2-and-2012/cc771022(v=ws.11)).
+
+The following example shows the Windows group member field:
++
+The following example shows `LDAPsearch` of all groups where `User1` is a member:
++
+You can also query group memberships for a user in Azure NetApp Files by selecting the **LDAP Group ID List** link under **Support + troubleshooting** on the volume menu.
++
+## Group limits in NFS
+
+Remote Procedure Call (RPC) in NFS has a specific limitation for the maximum number of auxiliary GIDs that can be honored in a single NFS request. The maximum for [`AUTH_SYS/AUTH_UNIX` is 16](http://tools.ietf.org/html/rfc5531), and for AUTH_GSS (Kerberos), it's 32. This protocol limitation affects all NFS servers, not just Azure NetApp Files. However, many modern NFS servers and clients include ways to work around these limitations.
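+
+From a Linux client, you can check how many group IDs a user resolves to before deciding whether this limit is a concern (the user name is a placeholder):
+
+```
+# List the numeric group IDs for the user, then count them
+id -G user1
+id -G user1 | wc -w
+```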
+
+To work around this NFS limitation in Azure NetApp Files, see [Enable Active Directory Domain Services (AD DS) LDAP authentication for NFS volumes](configure-ldap-extended-groups.md).
+
+## How extending the group limitation works
+
+The options to extend the group limitation work the same way that the `manage-gids` option for other NFS servers works. Rather than sending the entire list of auxiliary GIDs that a user belongs to, the option performs a lookup for the GID on the file or folder and returns that value instead.
+
+The following example shows an RPC packet with 16 GIDs.
++
+Any GID past the limit of 16 is dropped by the protocol. With extended groups in Azure NetApp Files, when a new NFS request comes in, information about the user's group membership is requested.
+
+## Considerations for extended GIDs with Active Directory LDAP
+
+By default, in Microsoft Active Directory LDAP servers, the `MaxPageSize` attribute is set to a default of 1,000. That setting means that groups beyond 1,000 would be truncated in LDAP queries. To enable full support with the 1,024 value for extended groups, the `MaxPageSize` attribute must be modified to reflect the 1,024 value. For information about how to change that value, see the Microsoft TechNet article [How to View and Set LDAP Policy in Active Directory by Using Ntdsutil.exe](https://support.microsoft.com/kb/315071) and the TechNet library article [MaxPageSize Is Set Too High](https://technet.microsoft.com/library/aa998536(v=exchg.80).aspx).
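+
+Based on that linked guidance, the change is made interactively with `ntdsutil` on a domain controller. The following is a hedged sketch only; the server name is a placeholder, and you should confirm the exact steps against the linked articles:
+
+```
+C:\> ntdsutil
+ntdsutil: ldap policies
+ldap policy: connections
+server connections: connect to server dc1.contoso.com
+server connections: q
+ldap policy: show values
+ldap policy: set maxpagesize to 1024
+ldap policy: commit changes
+ldap policy: q
+ntdsutil: q
+```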
+
+## Next steps
+
+* [Enable Active Directory Domain Services (AD DS) LDAP authentication for NFS volumes](configure-ldap-extended-groups.md)
+* [Understand file locking and lock types in Azure NetApp Files](understand-file-locks.md)
+* [Understand dual-protocol security style and permission behaviors in Azure NetApp Files](dual-protocol-permission-behaviors.md)
+* [Understand the use of LDAP with Azure NetApp Files](lightweight-directory-access-protocol.md)
+* [Azure NetApp Files NFS FAQ](faq-nfs.md)
azure-netapp-files Understand File Locks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/understand-file-locks.md
Title: Understand file locking and lock types in Azure NetApp Files
+ Title: Understand file locking and lock types in Azure NetApp Files | Microsoft Docs
description: Understand the concept of file locking and the different types of NFS locks. documentationcenter: ''
azure-portal Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/policy-reference.md
Title: Built-in policy definitions for Azure portal description: Lists Azure Policy built-in policy definitions for Azure portal. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/03/2023 Last updated : 08/08/2023
azure-resource-manager Bicep Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-config.md
Title: Bicep config file
description: Describes the configuration file for your Bicep deployments Previously updated : 06/20/2023 Last updated : 08/08/2023 # Configure your Bicep environment
azure-resource-manager Loops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/loops.md
Title: Iterative loops in Bicep
description: Use loops to iterate over collections in Bicep Previously updated : 05/02/2023 Last updated : 08/07/2023 # Iterative loops in Bicep
resource share 'Microsoft.Storage/storageAccounts/fileServices/shares@2021-06-01
}] ```
+## Reference resource/module collections
+
+The ARM template [`references`](../templates/template-functions-resource.md#references) function returns an array of objects representing a resource collection's runtime states. Bicep doesn't have an explicit `references` function. Instead, you use the symbolic collection directly, and during code generation Bicep translates it to an ARM template that uses the ARM template `references` function. This translation requires Bicep CLI version 0.20.4 or later. Additionally, the `symbolicNameCodegen` setting must be present in the [`bicepconfig.json`](./bicep-config.md#enable-experimental-features) file and set to `true`.
+
+The outputs of the two samples in [Integer index](#integer-index) can be written as:
+
+```bicep
+param location string = resourceGroup().location
+param storageCount int = 2
+
+resource storageAcct 'Microsoft.Storage/storageAccounts@2022-09-01' = [for i in range(0, storageCount): {
+ name: '${i}storage${uniqueString(resourceGroup().id)}'
+ location: location
+ sku: {
+ name: 'Standard_LRS'
+ }
+ kind: 'Storage'
+}]
+
+output storageInfo array = map(storageAcct, store => {
+ blobEndpoint: store.properties.primaryEndpoints
+ status: store.properties.statusOfPrimary
+})
+
+output storageAccountEndpoints array = map(storageAcct, store => store.properties.primaryEndpoints)
+```
+
+This Bicep file is transpiled into the following ARM JSON template that utilizes the `references` function:
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "languageVersion": "1.10-experimental",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "location": {
+ "type": "string",
+ "defaultValue": "[resourceGroup().location]"
+ },
+ "storageCount": {
+ "type": "int",
+ "defaultValue": 2
+ }
+ },
+ "resources": {
+ "storageAcct": {
+ "copy": {
+ "name": "storageAcct",
+ "count": "[length(range(0, parameters('storageCount')))]"
+ },
+ "type": "Microsoft.Storage/storageAccounts",
+ "apiVersion": "2022-09-01",
+ "name": "[format('{0}storage{1}', range(0, parameters('storageCount'))[copyIndex()], uniqueString(resourceGroup().id))]",
+ "location": "[parameters('location')]",
+ "sku": {
+ "name": "Standard_LRS"
+ },
+ "kind": "Storage"
+ }
+ },
+ "outputs": {
+ "storageInfo": {
+ "type": "array",
+ "value": "[map(references('storageAcct', 'full'), lambda('store', createObject('blobEndpoint', lambdaVariables('store').properties.primaryEndpoints, 'status', lambdaVariables('store').properties.statusOfPrimary)))]"
+ },
+ "storageAccountEndpoints": {
+ "type": "array",
+ "value": "[map(references('storageAcct', 'full'), lambda('store', lambdaVariables('store').properties.primaryEndpoints))]"
+ }
+ }
+}
+```
+
+Note that in the preceding ARM JSON template, `languageVersion` must be set to `1.10-experimental`, and the resource element is an object instead of an array.
+ ## Next steps - To set dependencies on resources that are created in a loop, see [Resource dependencies](resource-dependencies.md).
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/custom-providers/policy-reference.md
Title: Built-in policy definitions for Azure Custom Resource Providers description: Lists Azure Policy built-in policy definitions for Azure Custom Resource Providers. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/03/2023 Last updated : 08/08/2023
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/policy-reference.md
Title: Built-in policy definitions for Azure Managed Applications description: Lists Azure Policy built-in policy definitions for Azure Managed Applications. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/03/2023 Last updated : 08/08/2023
azure-resource-manager Deployment Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/deployment-models.md
Only resources created through Resource Manager support tags. You can't apply ta
The following diagram displays compute, network, and storage resources deployed through Resource Manager.
-![Resource Manager architecture](./media/deployment-models/arm_arch3.png)
SRP: Storage Resource Provider, CRP: Compute Resource Provider, NRP: Network Resource Provider
Note the following relationships between the resources:
Here are the components and their relationships for classic deployment:
-![classic architecture](./media/deployment-models/arm_arch1.png)
The classic solution for hosting a virtual machine includes:
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/policy-reference.md
Title: Built-in policy definitions for Azure Resource Manager description: Lists Azure Policy built-in policy definitions for Azure Resource Manager. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/03/2023 Last updated : 08/08/2023
azure-resource-manager Add Template To Azure Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/add-template-to-azure-pipelines.md
This article assumes your ARM template and Azure DevOps organization are ready f
1. If you haven't added a pipeline previously, you need to create a new pipeline. From your Azure DevOps organization, select **Pipelines** and **New pipeline**.
- ![Add new pipeline](./media/add-template-to-azure-pipelines/new-pipeline.png)
+ :::image type="content" source="./media/add-template-to-azure-pipelines/new-pipeline.png" alt-text="Screenshot of the Add new pipeline button":::
1. Specify where your code is stored. The following image shows selecting **Azure Repos Git**.
- ![Select code source](./media/add-template-to-azure-pipelines/select-source.png)
+ :::image type="content" source="./media/add-template-to-azure-pipelines/select-source.png" alt-text="Screenshot of selecting the code source in Azure DevOps":::
1. From that source, select the repository that has the code for your project.
- ![Select repository](./media/add-template-to-azure-pipelines/select-repo.png)
+ :::image type="content" source="./media/add-template-to-azure-pipelines/select-repo.png" alt-text="Screenshot of selecting the repository for the project in Azure DevOps":::
1. Select the type of pipeline to create. You can select **Starter pipeline**.
- ![Select pipeline](./media/add-template-to-azure-pipelines/select-pipeline.png)
+ :::image type="content" source="./media/add-template-to-azure-pipelines/select-pipeline.png" alt-text="Screenshot of selecting the type of pipeline to create in Azure DevOps":::
You're ready to either add an Azure PowerShell task or the copy file and deploy tasks.
ScriptArguments: -Location 'centralus' -ResourceGroupName 'demogroup' -TemplateF
When you select **Save**, the build pipeline is automatically run. Go back to the summary for your build pipeline, and watch the status.
-![View results](./media/add-template-to-azure-pipelines/view-results.png)
You can select the currently running pipeline to see details about the tasks. When it finishes, you see the results for each step.
azure-resource-manager Create Visual Studio Deployment Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/create-visual-studio-deployment-project.md
In this section, you create an Azure Resource Group project with a **Web app** t
1. In Visual Studio, choose **File**>**New**>**Project**. 1. Select the **Azure Resource Group** project template and **Next**.
- ![Screenshot shows the Create a new project window with Azure Resource Group and the Next button highlighted.](./media/create-visual-studio-deployment-project/create-project.png)
+ :::image type="content" source="./media/create-visual-studio-deployment-project/create-project.png" alt-text="Screenshot of Create a new project window highlighting Azure Resource Group and Next button.":::
1. Give your project a name. The other default settings are probably fine, but review them to make sure they work for your environment. When done, select **Create**.
- ![Create project](./media/create-visual-studio-deployment-project/name-project.png)
+ :::image type="content" source="./media/create-visual-studio-deployment-project/name-project.png" alt-text="Screenshot of the project naming window in Visual Studio.":::
1. Choose the template that you want to deploy to Azure Resource Manager. Notice there are many different options based on the type of project you wish to deploy. For this article, choose the **Web app** template and **OK**.
- ![Choose a template](./media/create-visual-studio-deployment-project/select-project.png)
+ :::image type="content" source="./media/create-visual-studio-deployment-project/select-project.png" alt-text="Screenshot of the template selection window with Web app template highlighted.":::
The template you pick is just a starting point; you can add and remove resources to fulfill your scenario. 1. Visual Studio creates a resource group deployment project for the web app. To see the files for your project, look at the node in the deployment project.
- ![Show nodes](./media/create-visual-studio-deployment-project/show-items.png)
+ :::image type="content" source="./media/create-visual-studio-deployment-project/show-items.png" alt-text="Screenshot of the Visual Studio Solution Explorer showing the resource group deployment project files.":::
Since you chose the Web app template, you see the following files:
You can customize a deployment project by modifying the Resource Manager templat
1. The Visual Studio editor provides tools to assist you with editing the Resource Manager template. The **JSON Outline** window makes it easy to see the elements defined in your template.
- ![Show JSON outline](./media/create-visual-studio-deployment-project/show-json-outline.png)
+ :::image type="content" source="./media/create-visual-studio-deployment-project/show-json-outline.png" alt-text="Screenshot of the JSON Outline window in Visual Studio for the Resource Manager template.":::
1. Select an element in the outline to go to that part of the template.
- ![Navigate JSON](./media/create-visual-studio-deployment-project/navigate-json.png)
+ :::image type="content" source="./media/create-visual-studio-deployment-project/navigate-json.png" alt-text="Screenshot of the Visual Studio editor with a selected element in the JSON Outline window.":::
1. You can add a resource by either selecting the **Add Resource** button at the top of the JSON Outline window, or by right-clicking **resources** and selecting **Add New Resource**.
- ![Screenshot shows the JSON Outline window with the Add New Resource option highlighted.](./media/create-visual-studio-deployment-project/add-resource.png)
+ :::image type="content" source="./media/create-visual-studio-deployment-project/add-resource.png" alt-text="Screenshot of the JSON Outline window highlighting the Add New Resource option.":::
1. Select **Storage Account** and give it a name. Provide a name that is no more than 11 characters, and only contains numbers and lower-case letters.
- ![Add storage](./media/create-visual-studio-deployment-project/add-storage.png)
+ :::image type="content" source="./media/create-visual-studio-deployment-project/add-storage.png" alt-text="Screenshot of the Add New Resource window with Storage Account selected.":::
1. Notice that not only was the resource added, but also a parameter for the type of storage account and a variable for the name of the storage account.
- ![Show outline](./media/create-visual-studio-deployment-project/show-new-items.png)
+ :::image type="content" source="./media/create-visual-studio-deployment-project/show-new-items.png" alt-text="Screenshot of the JSON Outline window displaying the added Storage Account resource.":::
1. The parameter for the type of storage account is pre-defined with allowed types and a default type. You can leave these values or edit them for your scenario. If you don't want anyone to deploy a **Premium_LRS** storage account through this template, remove it from the allowed types.
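   For reference, a trimmed parameter might look like the following minimal sketch. The parameter name and the remaining SKU list are illustrative placeholders, because Visual Studio derives them from the resource name you chose:

   ```json
   {
     "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
     "contentVersion": "1.0.0.0",
     "parameters": {
       "examplestorageType": {
         "type": "string",
         "defaultValue": "Standard_LRS",
         "allowedValues": [
           "Standard_LRS",
           "Standard_GRS",
           "Standard_ZRS"
         ],
         "metadata": {
           "description": "Premium_LRS is omitted so it can't be selected at deployment time."
         }
       }
     },
     "resources": []
   }
   ```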
You can customize a deployment project by modifying the Resource Manager templat
1. Visual Studio also provides IntelliSense to help you understand the properties that are available when editing the template. For example, to edit the properties for your App Service plan, navigate to the **HostingPlan** resource, and add a value for the **properties**. Notice that IntelliSense shows the available values and provides a description of that value.
- ![Show intellisense](./media/create-visual-studio-deployment-project/show-intellisense.png)
+ :::image type="content" source="./media/create-visual-studio-deployment-project/show-intellisense.png" alt-text="Screenshot of Visual Studio editor showing intellisense suggestions for Resource Manager template.":::
You can set **numberOfWorkers** to 1, and save the file.
For the AzureRM module script, use Visual Studio:
1. On the shortcut menu of the deployment project node, choose **Deploy** > **New**.
- ![New deployment menu item](./media/create-visual-studio-deployment-project/deploy.png)
+ :::image type="content" source="./media/create-visual-studio-deployment-project/deploy.png" alt-text="Screenshot of the deployment project context menu with Deploy and New options highlighted.":::
1. The **Deploy to Resource Group** dialog box appears. In the **Resource group** dropdown box, choose an existing resource group or create a new one. Select **Deploy**.
- ![Deploy to resource group dialog box](./media/create-visual-studio-deployment-project/show-deployment.png)
+ :::image type="content" source="./media/create-visual-studio-deployment-project/show-deployment.png" alt-text="Screenshot of the Deploy to Resource Group dialog box in Visual Studio.":::
1. In the **Output** window, you see the status of the deployment. When the deployment has finished, the last message indicates a successful deployment with something similar to:
Let's check the results.
1. You see all the deployed resources. Notice that the name of the storage account isn't exactly what you specified when adding that resource. The storage account name must be unique. The template automatically adds a string of characters to the name you provided to create a unique name.
- ![Show resources](./media/create-visual-studio-deployment-project/show-deployed-resources.png)
+ :::image type="content" source="./media/create-visual-studio-deployment-project/show-deployed-resources.png" alt-text="Screenshot of the Azure portal displaying the deployed resources in a resource group.":::
## Add code to project
At this point, you've deployed the infrastructure for your app, but there's no a
1. Add a project to your Visual Studio solution. Right-click the solution, and select **Add** > **New Project**.
- ![Add project](./media/create-visual-studio-deployment-project/add-project.png)
+ :::image type="content" source="./media/create-visual-studio-deployment-project/add-project.png" alt-text="Screenshot of the Add New Project context menu in Visual Studio.":::
1. Add an **ASP.NET Core Web Application**.
- ![Add web app](./media/create-visual-studio-deployment-project/add-app.png)
+ :::image type="content" source="./media/create-visual-studio-deployment-project/add-app.png" alt-text="Screenshot of the New Project window with ASP.NET Core Web Application selected.":::
1. Give your web app a name, and select **Create**.
- ![Name web app](./media/create-visual-studio-deployment-project/name-web-app.png)
+ :::image type="content" source="./media/create-visual-studio-deployment-project/name-web-app.png" alt-text="Screenshot of the project naming window for the ASP.NET Core Web Application.":::
1. Select **Web Application** and **Create**.
- ![Select Web application](./media/create-visual-studio-deployment-project/select-project-type.png)
+ :::image type="content" source="./media/create-visual-studio-deployment-project/select-project-type.png" alt-text="Screenshot of the New ASP.NET Core Web Application window with Web Application selected.":::
1. After Visual Studio creates your web app, you see both projects in the solution.
- ![Show projects](./media/create-visual-studio-deployment-project/show-projects.png)
+ :::image type="content" source="./media/create-visual-studio-deployment-project/show-projects.png" alt-text="Screenshot of the Visual Studio Solution Explorer displaying both projects in the solution.":::
1. Now, you need to make sure your resource group project is aware of the new project. Go back to your resource group project (ExampleAppDeploy). Right-click **References** and select **Add Reference**.
- ![Screenshot shows the ExampleAppDeploy menu with the Add Reference option highlighted.](./media/create-visual-studio-deployment-project/add-new-reference.png)
+ :::image type="content" source="./media/create-visual-studio-deployment-project/add-new-reference.png" alt-text="Screenshot of the ExampleAppDeploy context menu highlighting the Add Reference option.":::
1. Select the web app project that you created.
- ![Add reference](./media/create-visual-studio-deployment-project/add-reference.png)
+ :::image type="content" source="./media/create-visual-studio-deployment-project/add-reference.png" alt-text="Screenshot of the Add Reference window in Visual Studio with the web app project selected.":::
By adding a reference, you link the web app project to the resource group project, which automatically sets some properties. You see these properties in the **Properties** window for the reference. The **Include File Path** has the path where the package is created. Note the folder (ExampleApp) and file (package.zip). You need to know these values because you provide them as parameters when deploying the app.
- ![See reference](./media/create-visual-studio-deployment-project/see-reference.png)
+ :::image type="content" source="./media/create-visual-studio-deployment-project/see-reference.png" alt-text="Screenshot of the Properties window displaying the reference properties for the web app project.":::
1. Go back to your template (WebSite.json) and add a resource to the template.
- ![Add resource](./media/create-visual-studio-deployment-project/add-resource-2.png)
+ :::image type="content" source="./media/create-visual-studio-deployment-project/add-resource-2.png" alt-text="Screenshot of the JSON Outline window with the Add New Resource option highlighted.":::
1. This time select **Web Deploy for Web Apps**.
- ![Add web deploy](./media/create-visual-studio-deployment-project/add-web-deploy.png)
+ :::image type="content" source="./media/create-visual-studio-deployment-project/add-web-deploy.png" alt-text="Screenshot of the Add New Resource window with Web Deploy for Web Apps selected.":::
Save your template.
For the AzureRM module script, use Visual Studio:
1. To redeploy, choose **Deploy**, and the resource group you deployed earlier.
- ![Redeploy project](./media/create-visual-studio-deployment-project/redeploy.png)
+ :::image type="content" source="./media/create-visual-studio-deployment-project/redeploy.png" alt-text="Screenshot of the deployment project context menu with Deploy and the previously used resource group highlighted.":::
1. Select the storage account you deployed with this resource group for the **Artifact storage account**.
- ![Redeploy web deploy](./media/create-visual-studio-deployment-project/redeploy-web-app.png)
+ :::image type="content" source="./media/create-visual-studio-deployment-project/redeploy-web-app.png" alt-text="Screenshot of the Deploy to Resource Group dialog box with Artifact storage account selected.":::
## View web app 1. After the deployment has finished, select your web app in the portal. Select the URL to browse to the site.
- ![Browse site](./media/create-visual-studio-deployment-project/browse-site.png)
+ :::image type="content" source="./media/create-visual-studio-deployment-project/browse-site.png" alt-text="Screenshot of the Azure portal displaying the web app resource with the URL highlighted.":::
1. Notice that you've successfully deployed the default ASP.NET app.
- ![Show deployed app](./media/create-visual-studio-deployment-project/show-deployed-app.png)
+ :::image type="content" source="./media/create-visual-studio-deployment-project/show-deployed-app.png" alt-text="Screenshot of the deployed default ASP.NET app in a web browser.":::
## Add operations dashboard
You aren't limited to only the resources that are available through the Visual S
1. After deployment has finished, view your dashboard in the portal. Select **Dashboard** and pick the one you deployed.
- ![Screenshot shows the Dashboard page with an example custom dashboard highlighted.](./media/create-visual-studio-deployment-project/view-custom-dashboards.png)
+ :::image type="content" source="./media/create-visual-studio-deployment-project/view-custom-dashboards.png" alt-text="Screenshot of the Azure portal Dashboard page highlighting an example custom dashboard.":::
1. You see the customized dashboard.
- ![Custom Dashboard](./media/create-visual-studio-deployment-project/Ops-DemoSiteGroup-dashboard.png)
+ :::image type="content" source="./media/create-visual-studio-deployment-project/Ops-DemoSiteGroup-dashboard.png" alt-text="Screenshot of the customized operational dashboard in the Azure portal.":::
You can manage access to the dashboard by using Azure role-based access control (Azure RBAC). You can also customize the dashboard's appearance after it's deployed. However, if you redeploy the resource group, the dashboard is reset to its default state in your template. For more information about creating dashboards, see [Programmatically create Azure Dashboards](../../azure-portal/azure-portal-dashboards-create-programmatically.md).
azure-resource-manager Deploy Cloud Shell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deploy-cloud-shell.md
To deploy an external template, provide the URI of the template exactly as you w
1. Open the Cloud Shell prompt.
- :::image type="content" source="./media/deploy-cloud-shell/open-cloud-shell.png" alt-text="Open Cloud Shell":::
+ :::image type="content" source="./media/deploy-cloud-shell/open-cloud-shell.png" alt-text="Screenshot of the button to open Cloud Shell.":::
1. To deploy the template, use the following commands:
To deploy a local template, you must first upload your template to the storage a
1. Select either **PowerShell** or **Bash**.
- :::image type="content" source="./media/deploy-cloud-shell/cloud-shell-bash-powershell.png" alt-text="Select Bash or PowerShell":::
+ :::image type="content" source="./media/deploy-cloud-shell/cloud-shell-bash-powershell.png" alt-text="Screenshot of the option to select Bash or PowerShell in Cloud Shell.":::
1. Select **Upload/Download files**, and then select **Upload**.
- :::image type="content" source="./media/deploy-cloud-shell/cloud-shell-upload.png" alt-text="Upload file":::
+ :::image type="content" source="./media/deploy-cloud-shell/cloud-shell-upload.png" alt-text="Screenshot of the Cloud Shell interface with the Upload file option highlighted.":::
1. Select the ARM template you want to upload, and then select **Open**.
azure-resource-manager Deploy Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deploy-portal.md
This article shows both methods.
1. To create a new resource group, select **Resource groups** from the [Azure portal](https://portal.azure.com).
- ![Select resource groups](./media/deploy-portal/select-resource-groups.png)
+ :::image type="content" source="./media/deploy-portal/select-resource-groups.png" alt-text="Screenshot of selecting resource groups in Azure portal":::
1. Under Resource groups, select **Add**.
- ![Add resource group](./media/deploy-portal/add-resource-group.png)
+ :::image type="content" source="./media/deploy-portal/add-resource-group.png" alt-text="Screenshot of adding a resource group in Azure portal":::
1. Select or enter the following property values:
This article shows both methods.
- **Resource group**: Give the resource group a name. - **Region**: Specify an Azure location. This location is where the resource group stores metadata about the resources. For compliance reasons, you may want to specify where that metadata is stored. In general, we recommend that you specify a location where most of your resources will be. Using the same location can simplify your template.
- ![Set group values](./media/deploy-portal/set-group-properties.png)
+ :::image type="content" source="./media/deploy-portal/set-group-properties.png" alt-text="Screenshot of setting resource group property values in Azure portal":::
1. Select **Review + create**. 1. Review the values, and then select **Create**.
After you create a resource group, you can deploy resources to the group from th
1. To start a deployment, select **Create a resource** from the [Azure portal](https://portal.azure.com).
- ![New resource](./media/deploy-portal/new-resources.png)
+ :::image type="content" source="./media/deploy-portal/new-resources.png" alt-text="Screenshot of creating a new resource in Azure portal":::
1. Find the type of resource you would like to deploy. The resources are organized in categories. If you don't see the particular solution you would like to deploy, you can search the Marketplace for it. The following screenshot shows that Ubuntu Server is selected.
- ![Select resource type](./media/deploy-portal/select-resource-type.png)
+ :::image type="content" source="./media/deploy-portal/select-resource-type.png" alt-text="Screenshot of selecting a resource type in Azure portal":::
1. Depending on the type of selected resource, you have a collection of relevant properties to set before deployment. For all types, you must select a destination resource group. The following image shows how to create a Linux virtual machine and deploy it to the resource group you created.
- ![Create resource group](./media/deploy-portal/select-existing-group.png)
+ :::image type="content" source="./media/deploy-portal/select-existing-group.png" alt-text="Screenshot of creating a Linux virtual machine and deploying it to a resource group in Azure portal":::
You can decide to create a resource group when deploying your resources. Select **Create new** and give the resource group a name. 1. Your deployment begins. The deployment could take several minutes. Some resources take longer than others. When the deployment has finished, you see a notification. Select **Go to resource** to open
- ![View notification](./media/deploy-portal/view-notification.png)
+ :::image type="content" source="./media/deploy-portal/view-notification.png" alt-text="Screenshot of viewing deployment notification in Azure portal":::
1. After deploying your resources, you can add more resources to the resource group by selecting **Add**.
- ![Add resource](./media/deploy-portal/add-resource.png)
+ :::image type="content" source="./media/deploy-portal/add-resource.png" alt-text="Screenshot of adding a resource to a resource group in Azure portal":::
Although you didn't see it, the portal used an ARM template to deploy the resources you selected. You can find the template from the deployment history. For more information, see [Export template after deployment](export-template-portal.md#export-template-after-deployment).
If you want to execute a deployment but not use any of the templates in the Mark
1. To deploy a customized template through the portal, select **Create a resource**, search for **template**, and then select **Template deployment**.
- ![Search template deployment](./media/deploy-portal/search-template.png)
+ :::image type="content" source="./media/deploy-portal/search-template.png" alt-text="Screenshot of searching for template deployment in Azure portal":::
1. Select **Create**. 1. You see several options for creating a template:
If you want to execute a deployment but not use any of the templates in the Mark
- **Common templates**: Select from common solutions. - **Load a GitHub quickstart template**: Select from [quickstart templates](https://azure.microsoft.com/resources/templates/).
- ![View options](./media/deploy-portal/see-options.png)
+ :::image type="content" source="./media/deploy-portal/see-options.png" alt-text="Screenshot of template creation options in Azure portal":::
This tutorial provides the instructions for loading a quickstart template.
If you want to execute a deployment but not use any of the templates in the Mark
1. Select **Edit template** to explore the portal template editor. The template is loaded in the editor. Notice there are two parameters: `storageAccountType` and `location`.
- ![Create template](./media/deploy-portal/show-json.png)
+ :::image type="content" source="./media/deploy-portal/show-json.png" alt-text="Screenshot of editing a JSON template in Azure portal":::
1. Make a minor change to the template. For example, update the `storageAccountName` variable to:
azure-resource-manager Deploy To Azure Button https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deploy-to-azure-button.md
To add the button to your web page or repository, use the following image:
The image appears as:
-![Deploy to Azure button](https://aka.ms/deploytoazurebutton)
## Create URL for deploying template
This section shows how to get the URLs for the templates stored in GitHub and Az
To create the URL for your template, start with the raw URL to the template in your GitHub repo. To see the raw URL, select **Raw**. The format of the URL is:
For Git with Azure repo, the button is in the format:
To test the full solution, select the following button:
-[![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.storage%2Fstorage-account-create%2Fazuredeploy.json)
The portal displays a pane that allows you to easily provide parameter values. The parameters are pre-filled with the default values from the template. The camel-cased parameter name, *storageAccountType*, defined in the template is turned into a space-separated string when displayed on the portal.
-![Use portal to deploy](./media/deploy-to-azure-button/portal.png)
## Next steps
azure-resource-manager Deployment Script Template Configure Dev https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deployment-script-template-configure-dev.md
You also can upload the file by using the Azure portal or the Azure CLI.
2. Open the container group. The default container group name is the project name appended with *cg*. The container instance is in the **Running** state. 3. In the resource menu, select **Containers**. The container instance name is the project name appended with *container*.
- ![Screenshot of the deployment script connect container instance in the Azure portal.](./media/deployment-script-template-configure-dev/deployment-script-container-instance-connect.png)
+ :::image type="content" source="./media/deployment-script-template-configure-dev/deployment-script-container-instance-connect.png" alt-text="Screenshot of the deployment script connect container instance option in the Azure portal.":::
4. Select **Connect**, and then select **Connect**. If you can't connect to the container instance, restart the container group and try again. 5. In the console pane, run the following commands:
You also can upload the file by using the Azure portal or the Azure CLI.
The output is **Hello John Dole**.
- ![Screenshot of the deployment script connect container instance test output in the console.](./media/deployment-script-template-configure-dev/deployment-script-container-instance-test.png)
+ :::image type="content" source="./media/deployment-script-template-configure-dev/deployment-script-container-instance-test.png" alt-text="Screenshot of the deployment script connect container instance test output displayed in the console.":::
## Use an Azure CLI container instance
You also can upload the file by using the Azure portal or the Azure CLI.
1. Open the container group. The default container group name is the project name appended with *cg*. The container instance is shown in the **Running** state. 1. In the resource menu, select **Containers**. The container instance name is the project name appended with *container*.
- ![deployment script connect container instance](./media/deployment-script-template-configure-dev/deployment-script-container-instance-connect.png)
+ :::image type="content" source="./media/deployment-script-template-configure-dev/deployment-script-container-instance-connect.png" alt-text="Screenshot of the deployment script connect container instance option in the Azure portal.":::
1. Select **Connect**, and then select **Connect**. If you can't connect to the container instance, restart the container group and try again. 1. In the console pane, run the following commands:
You also can upload the file by using the Azure portal or the Azure CLI.
The output is **Hello John Dole**.
- ![deployment script container instance test](./media/deployment-script-template-configure-dev/deployment-script-container-instance-test-cli.png)
+ :::image type="content" source="./media/deployment-script-template-configure-dev/deployment-script-container-instance-test-cli.png" alt-text="Screenshot of the deployment script container instance test output displayed in the console.":::
## Use Docker
You also need to configure file sharing to mount the directory, which contains t
1. The following screenshot shows how to run a PowerShell script, given that you have a *helloworld.ps1* file in the shared drive.
- ![Resource Manager template deployment script docker cmd](./medi.png)
+ :::image type="content" source="./medi.png" alt-text="Screenshot of the Resource Manager template deployment script using Docker command.":::
After the script is tested successfully, you can use it as a deployment script in your templates.
azure-resource-manager Deployment Script Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deployment-script-template.md
Write-Host "Press [ENTER] to continue ..."
The output looks like:
-![Resource Manager template deployment script hello world output](./media/deployment-script-template/resource-manager-template-deployment-script-helloworld-output.png)
## Use external scripts
The max allowed size for environment variables is 64 KB.
The script service creates a [storage account](../../storage/common/storage-account-overview.md) (unless you specify an existing storage account) and a [container instance](../../container-instances/container-instances-overview.md) for script execution. If these resources are automatically created by the script service, both resources have the `azscripts` suffix in the resource names.
-![Resource Manager template deployment script resource names](./media/deployment-script-template/resource-manager-template-deployment-script-resources.png)
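If you'd rather supply your own storage account than let the script service create one, the `storageAccountSettings` property of the `Microsoft.Resources/deploymentScripts` resource is where it goes. The following fragment is only a rough sketch; the parameter names and the inline script are placeholders, not values taken from this article:

```json
{
  "type": "Microsoft.Resources/deploymentScripts",
  "apiVersion": "2020-10-01",
  "name": "runExampleScript",
  "location": "[resourceGroup().location]",
  "kind": "AzurePowerShell",
  "properties": {
    "azPowerShellVersion": "8.3",
    "scriptContent": "Write-Output 'Running with an existing storage account'",
    "retentionInterval": "P1D",
    "storageAccountSettings": {
      "storageAccountName": "[parameters('existingStorageAccountName')]",
      "storageAccountKey": "[parameters('existingStorageAccountKey')]"
    }
  }
}
```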
The user script, the execution results, and the stdout file are stored in the file shares of the storage account. There's a folder called `azscripts`. In the folder, there are two more folders for the input and the output files: `azscriptinput` and `azscriptoutput`.
The output folder contains an _executionresult.json_ file and the script output file.
After you deploy a deployment script resource, the resource is listed under the resource group in the Azure portal. The following screenshot shows the **Overview** page of a deployment script resource:
-![Resource Manager template deployment script portal overview](./media/deployment-script-template/resource-manager-deployment-script-portal.png)
The overview page displays some important information about the resource, such as **Provisioning state**, **Storage account**, **Container instance**, and **Logs**.
It only works before the deployment script resources are deleted.
To see the deploymentScripts resource in the portal, select **Show hidden types**:
-![Resource Manager template deployment script, show hidden types, portal](./media/deployment-script-template/resource-manager-deployment-script-portal-show-hidden-types.png)
## Clean up deployment script resources
azure-resource-manager Deployment Tutorial Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deployment-tutorial-pipeline.md
If you don't have a GitHub account, see [Prerequisites](#prerequisites).
1. Sign in to [GitHub](https://github.com). 1. Select your account image on the upper right corner, and then select **Your repositories**.
- ![Azure Resource Manager Azure DevOps Azure Pipelines create GitHub repository](./media/deployment-tutorial-pipeline/azure-resource-manager-devops-pipelines-github-repository.png)
+ :::image type="content" source="./media/deployment-tutorial-pipeline/azure-resource-manager-devops-pipelines-github-repository.png" alt-text="Screenshot of creating a GitHub repository for Azure Resource Manager Azure DevOps Azure Pipelines.":::
1. Select **New**, a green button. 1. In **Repository name**, enter a repository name. For example, **ARMPipeline-repo**. Remember to replace any instance of **ARMPipeline** with your project name. You can select either **Public** or **Private** for this tutorial. Then select **Create repository**.
A DevOps organization is needed before you can proceed to the next procedure. If
1. Select a DevOps organization from the left, and then select **New project**. If you don't have any projects, the create project page is opened automatically. 1. Enter the following values:
- ![Azure Resource Manager Azure DevOps Azure Pipelines create Azure DevOps project](./media/deployment-tutorial-pipeline/azure-resource-manager-devops-pipelines-create-devops-project.png)
+ :::image type="content" source="./media/deployment-tutorial-pipeline/azure-resource-manager-devops-pipelines-create-devops-project.png" alt-text="Screenshot of creating an Azure DevOps project for Azure Resource Manager Azure DevOps Azure Pipelines.":::
* **Project name**: Enter a project name. You can use the project name you picked at the very beginning of the tutorial. * **Visibility**: Select **Private**.
To create a pipeline with a step to deploy a template:
1. Select **Create pipeline**. 1. From the **Connect** tab, select **GitHub**. If asked, enter your GitHub credentials, and then follow the instructions. If you see the following screen, select **Only select repositories**, and verify your repository is in the list before you select **Approve & Install**.
- ![Azure Resource Manager Azure DevOps Azure Pipelines only select repositories](./media/deployment-tutorial-pipeline/azure-resource-manager-devops-pipelines-only-select-repositories.png)
+ :::image type="content" source="./media/deployment-tutorial-pipeline/azure-resource-manager-devops-pipelines-only-select-repositories.png" alt-text="Screenshot of selecting repositories for Azure Resource Manager Azure DevOps Azure Pipelines.":::
1. From the **Select** tab, select your repository. The default name is `[YourAccountName]/[YourGitHubRepositoryName]`. 1. From the **Configure** tab, select **Starter pipeline**. It shows the _azure-pipelines.yml_ pipeline file with two script steps.
To create a pipeline with a step to deploy a template:
* **Deployment mode**: Select **Incremental**. * **Deployment name**: Enter **DeployPipelineTemplate**. You need to select **Advanced** before you can see **Deployment name**.
- ![Screenshot shows the ARM template deployment page with required values entered.](./media/deployment-tutorial-pipeline/resource-manager-template-pipeline-configure.png)
+ :::image type="content" source="./media/deployment-tutorial-pipeline/resource-manager-template-pipeline-configure.png" alt-text="Screenshot of the ARM template deployment page with required values entered for Azure DevOps Azure Pipelines.":::
1. Select **Add**.
To create a pipeline with a step to deploy a template:
The _.yml_ file should be similar to:
- ![Screenshot shows the Review page with the new pipeline titled Review your pipeline YAML.](./media/deployment-tutorial-pipeline/azure-resource-manager-devops-pipelines-yml.png)
+ :::image type="content" source="./media/deployment-tutorial-pipeline/azure-resource-manager-devops-pipelines-yml.png" alt-text="Screenshot of the Review page with the new pipeline titled Review your pipeline YAML for Azure DevOps Azure Pipelines.":::
1. Select **Save and run**. 1. From the **Save and run** pane, select **Save and run** again. A copy of the YAML file is saved into the connected repository. You can see the YAML file by browsing to your repository. 1. Verify that the pipeline runs successfully.
- ![Azure Resource Manager Azure DevOps Azure Pipelines yaml](./media/deployment-tutorial-pipeline/azure-resource-manager-devops-pipelines-status.png)
+ :::image type="content" source="./media/deployment-tutorial-pipeline/azure-resource-manager-devops-pipelines-status.png" alt-text="Screenshot of Azure Resource Manager Azure DevOps Azure Pipelines YAML file.":::
## Verify the deployment
When you update the template and push the changes to the remote repository, the
1. Open _linkedStorageAccount.json_ from your local repository in Visual Studio Code or any text editor. 1. Update the **defaultValue** of **storageAccountType** to **Standard_GRS**. See the following screenshot:
- ![Azure Resource Manager Azure DevOps Azure Pipelines update yaml](./media/deployment-tutorial-pipeline/azure-resource-manager-devops-pipelines-update-yml.png)
+ :::image type="content" source="./media/deployment-tutorial-pipeline/azure-resource-manager-devops-pipelines-update-yml.png" alt-text="Screenshot of updating the YAML file for Azure Resource Manager Azure DevOps Azure Pipelines.":::
1. Save the changes. 1. Push the changes to the remote repository by running the following commands from Git Bash/Shell.
azure-resource-manager Linked Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/linked-templates.md
You may reference templates using parameters that include HTTP or HTTPS. For exa
If you're linking to a template in GitHub, use the raw URL. The link has the format: `https://raw.githubusercontent.com/Azure/azure-docs-json-samples/master/get-started-with-templates/quickstart-template/azuredeploy.json`. To get the raw link, select **Raw**. [!INCLUDE [Deploy templates in private GitHub repo](../../../includes/resource-manager-private-github-repo-templates.md)]
The `relativePath` property of `Microsoft.Resources/deployments` makes it easier
Assume a folder structure like this:
-![resource manager linked template relative path](./media/linked-templates/resource-manager-linked-templates-relative-path.png)
The following template shows how *mainTemplate.json* deploys *nestedChild.json* illustrated in the preceding image.
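As a minimal sketch of that pattern (the child folder name is an assumption, not taken from the image), the parent template points at the child through `templateLink.relativePath`, which is resolved relative to the parent template's own location:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Resources/deployments",
      "apiVersion": "2022-09-01",
      "name": "childLinked",
      "properties": {
        "mode": "Incremental",
        "templateLink": {
          "relativePath": "children/nestedChild.json"
        }
      }
    }
  ]
}
```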
To use the public IP address from the preceding template when deploying a load b
Resource Manager processes each template as a separate deployment in the deployment history. A main template with three linked or nested templates appears in the deployment history as:
-![Deployment history](./media/linked-templates/deployment-history.png)
You can use these separate entries in the history to retrieve output values after the deployment. The following template creates a public IP address and outputs the IP address:
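A minimal sketch of such a template follows; the resource name is a placeholder, and a Standard SKU with static allocation is used so the IP address is available to the output during the deployment:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Network/publicIPAddresses",
      "apiVersion": "2022-07-01",
      "name": "examplePublicIP",
      "location": "[resourceGroup().location]",
      "sku": {
        "name": "Standard"
      },
      "properties": {
        "publicIPAllocationMethod": "Static"
      }
    }
  ],
  "outputs": {
    "returnedIPAddress": {
      "type": "string",
      "value": "[reference('examplePublicIP').ipAddress]"
    }
  }
}
```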
azure-resource-manager Quickstart Create Template Specs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/quickstart-create-template-specs.md
The template spec is a resource type named `Microsoft.Resources/templateSpecs`.
1. Sign in to the [Azure portal](https://portal.azure.com). 1. Search for **template specs**. Select **Template specs** from the available options.
- :::image type="content" source="./media/quickstart-create-template-specs/search-template-spec.png" alt-text="search template specs":::
+ :::image type="content" source="./media/quickstart-create-template-specs/search-template-spec.png" alt-text="Screenshot of search bar with 'template specs' query.":::
1. Select **Import template**.
- :::image type="content" source="./media/quickstart-create-template-specs/import-template.png" alt-text="import template":::
+ :::image type="content" source="./media/quickstart-create-template-specs/import-template.png" alt-text="Screenshot of 'Import template' button in Template specs page.":::
1. Select the folder icon.
- :::image type="content" source="./media/quickstart-create-template-specs/open-folder.png" alt-text="open folder":::
+ :::image type="content" source="./media/quickstart-create-template-specs/open-folder.png" alt-text="Screenshot of folder icon to open file explorer.":::
1. Navigate to the local template you saved and select it. Select **Open**. 1. Select **Import**.
- :::image type="content" source="./media/quickstart-create-template-specs/select-import.png" alt-text="select import option":::
+ :::image type="content" source="./media/quickstart-create-template-specs/select-import.png" alt-text="Screenshot of 'Import' button after selecting a template file.":::
1. Provide the following values:
To deploy a template spec, use the same deployment commands as you would use to
1. Select the template spec you created.
- :::image type="content" source="./media/quickstart-create-template-specs/select-template-spec.png" alt-text="select template specs":::
+ :::image type="content" source="./media/quickstart-create-template-specs/select-template-spec.png" alt-text="Screenshot of Template specs list with one item selected.":::
1. Select **Deploy**.
- :::image type="content" source="./media/quickstart-create-template-specs/deploy-template-spec.png" alt-text="deploy template specs":::
+ :::image type="content" source="./media/quickstart-create-template-specs/deploy-template-spec.png" alt-text="Screenshot of 'Deploy' button in the selected Template spec.":::
1. Provide the following values:
Rather than creating a new template spec for the revised template, add a new ver
1. In your template spec, select **Create new version**.
- :::image type="content" source="./media/quickstart-create-template-specs/select-versions.png" alt-text="create new version":::
+ :::image type="content" source="./media/quickstart-create-template-specs/select-versions.png" alt-text="Screenshot of 'Create new version' button in Template spec details.":::
1. Name the new version `2.0` and optionally add notes. Select **Edit template**.
- :::image type="content" source="./media/quickstart-create-template-specs/add-version-name.png" alt-text="name new version":::
+ :::image type="content" source="./media/quickstart-create-template-specs/add-version-name.png" alt-text="Screenshot of naming the new version and selecting 'Edit template' button.":::
1. Replace the contents of the template with your updated template. Select **Review + Save**. 1. Select **Save changes**. 1. To deploy the new version, select **Versions**
- :::image type="content" source="./media/quickstart-create-template-specs/see-versions.png" alt-text="list versions":::
+ :::image type="content" source="./media/quickstart-create-template-specs/see-versions.png" alt-text="Screenshot of 'Versions' tab in Template spec details.":::
1. For the version you want to deploy, select the three dots and **Deploy**.
- :::image type="content" source="./media/quickstart-create-template-specs/deploy-version.png" alt-text="select version to deploy":::
+ :::image type="content" source="./media/quickstart-create-template-specs/deploy-version.png" alt-text="Screenshot of 'Deploy' option in the context menu of a specific version.":::
1. Fill in the fields as you did when deploying the earlier version. 1. Select **Review + create**.
azure-resource-manager Quickstart Create Templates Use The Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/quickstart-create-templates-use-the-portal.md
Rather than manually building an entire ARM template, let's start by retrieving
1. In a web browser, go to the [Azure portal](https://portal.azure.com) and sign in. 1. From the Azure portal search bar, search for **deploy a custom template** and then select it from the available options.
- :::image type="content" source="./media/quickstart-create-templates-use-the-portal/search-custom-template.png" alt-text="Screenshot of Search for Custom Template.":::
+ :::image type="content" source="./media/quickstart-create-templates-use-the-portal/search-custom-template.png" alt-text="Screenshot of searching for custom template in Azure portal.":::
1. For **Template** source, notice that **Quickstart template** is selected by default. You can keep this selection. In the drop-down, search for *quickstarts/microsoft.storage/storage-account-create* and select it. After finding the quickstart template, select **Select template**.
- :::image type="content" source="./media/quickstart-create-templates-use-the-portal/select-custom-template.png" alt-text="Screenshot of Select Quickstart Template.":::
+ :::image type="content" source="./media/quickstart-create-templates-use-the-portal/select-custom-template.png" alt-text="Screenshot of selecting a Quickstart Template in Azure portal.":::
1. In the next blade, you provide custom values to use for the deployment. For **Resource group**, select **Create new** and provide *myResourceGroup* for the name. You can use the default values for the other fields. When you've finished providing values, select **Review + create**.
- :::image type="content" source="./media/quickstart-create-templates-use-the-portal/input-fields-template.png" alt-text="Screenshot for Input Fields for Template.":::
+ :::image type="content" source="./media/quickstart-create-templates-use-the-portal/input-fields-template.png" alt-text="Screenshot of input fields for custom template in Azure portal.":::
1. The portal validates your template and the values you provided. After validation succeeds, select **Create** to start the deployment.
- :::image type="content" source="./media/quickstart-create-templates-use-the-portal/template-validation.png" alt-text="Screenshot for Validation and create.":::
+ :::image type="content" source="./media/quickstart-create-templates-use-the-portal/template-validation.png" alt-text="Screenshot of template validation and create button in Azure portal.":::
1. Once your validation has passed, you'll see the status of the deployment. When it completes successfully, select **Go to resource** to see the storage account.
- :::image type="content" source="./media/quickstart-create-templates-use-the-portal/deploy-success.png" alt-text="Screenshot for Deployment Succeeded Notification.":::
+ :::image type="content" source="./media/quickstart-create-templates-use-the-portal/deploy-success.png" alt-text="Screenshot of deployment succeeded notification in Azure portal.":::
1. From this screen, you can view the new storage account and its properties.
- :::image type="content" source="./media/quickstart-create-templates-use-the-portal/view-storage-account.png" alt-text="Screenshot for View Deployment Page.":::
+ :::image type="content" source="./media/quickstart-create-templates-use-the-portal/view-storage-account.png" alt-text="Screenshot of view deployment page with storage account in Azure portal.":::
## Edit and deploy the template
In this section, let's suppose you have an ARM template that you want to deploy
1. This time, select **Build your own template in the editor**.
- :::image type="content" source="./media/quickstart-create-templates-use-the-portal/build-own-template.png" alt-text="Screenshot for Build your own template.":::
+ :::image type="content" source="./media/quickstart-create-templates-use-the-portal/build-own-template.png" alt-text="Screenshot of build your own template option in Azure portal.":::
1. You see a blank template.
- :::image type="content" source="./media/quickstart-create-templates-use-the-portal/blank-template.png" alt-text="Screenshot for Blank Template.":::
+ :::image type="content" source="./media/quickstart-create-templates-use-the-portal/blank-template.png" alt-text="Screenshot of blank ARM template in Azure portal.":::
1. Replace the blank template with the following template. It deploys a virtual network with a subnet.
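   A minimal virtual network template along these lines fits the bill; the names and address ranges here are arbitrary placeholders rather than necessarily the quickstart's exact values:

   ```json
   {
     "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
     "contentVersion": "1.0.0.0",
     "parameters": {
       "vnetName": {
         "type": "string",
         "defaultValue": "ExampleVNet"
       }
     },
     "resources": [
       {
         "type": "Microsoft.Network/virtualNetworks",
         "apiVersion": "2022-07-01",
         "name": "[parameters('vnetName')]",
         "location": "[resourceGroup().location]",
         "properties": {
           "addressSpace": {
             "addressPrefixes": [
               "10.0.0.0/16"
             ]
           },
           "subnets": [
             {
               "name": "Subnet-1",
               "properties": {
                 "addressPrefix": "10.0.0.0/24"
               }
             }
           ]
         }
       }
     ]
   }
   ```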
In this section, let's suppose you have an ARM template that you want to deploy
1. When the deployment completes, you see the status of the deployment. This time select the name of the resource group.
- :::image type="content" source="./media/quickstart-create-templates-use-the-portal/view-second-deployment.png" alt-text="Screenshot for View second deployment.":::
+ :::image type="content" source="./media/quickstart-create-templates-use-the-portal/view-second-deployment.png" alt-text="Screenshot of view second deployment page in Azure portal.":::
1. Notice that your resource group now contains a storage account and a virtual network.
- :::image type="content" source="./media/quickstart-create-templates-use-the-portal/view-resource-group.png" alt-text="Screenshot for View Storage Account and Virtual Network.":::
+ :::image type="content" source="./media/quickstart-create-templates-use-the-portal/view-resource-group.png" alt-text="Screenshot of resource group with storage account and virtual network in Azure portal.":::
## Export a custom template
Sometimes the easiest way to work with an ARM template is to have the portal gen
1. In your resource group, select **Export template**.
- :::image type="content" source="./media/quickstart-create-templates-use-the-portal/export-template.png" alt-text="Screenshot for Export Template.":::
+ :::image type="content" source="./media/quickstart-create-templates-use-the-portal/export-template.png" alt-text="Screenshot of export template option in Azure portal.":::
1. The portal generates a template for you based on the current state of the resource group. Notice that this template isn't the same as either template you deployed earlier. It contains definitions for both the storage account and virtual network, along with other resources like a blob service that was automatically created for your storage account. 1. To save this template for later use, select **Download**.
- :::image type="content" source="./media/quickstart-create-templates-use-the-portal/download-template.png" alt-text="Screenshot for Download exported template.":::
+ :::image type="content" source="./media/quickstart-create-templates-use-the-portal/download-template.png" alt-text="Screenshot of download button for exported ARM template in Azure portal.":::
You now have an ARM template that represents the current state of the resource group. This template is auto-generated. Before using the template for production deployments, you may want to revise it, such as adding parameters for template reuse.
azure-resource-manager Template Functions Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-functions-resource.md
Title: Template functions - resources description: Describes the functions to use in an Azure Resource Manager template (ARM template) to retrieve values about resources. Previously updated : 08/02/2023 Last updated : 08/08/2023
The [providers operation](/rest/api/resources/providers) is still available thro
`reference(resourceName or resourceIdentifier, [apiVersion], ['Full'])`
-Returns an object representing a resource's runtime state.
+Returns an object representing a resource's runtime state. To return an array of objects representing a resource collection's runtime states, see [references](#references).
Bicep provides the reference function, but in most cases, the reference function isn't required. It's recommended to use the symbolic name for the resource instead. See [reference](../bicep/bicep-functions-resource.md#reference).
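As a quick, hedged illustration of the JSON syntax (the storage account parameter is a placeholder), calling `reference` with a full resource ID and an explicit API version returns the runtime properties of a resource that isn't deployed in the template:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storageAccountName": {
      "type": "string"
    }
  },
  "resources": [],
  "outputs": {
    "blobEndpoint": {
      "type": "string",
      "value": "[reference(resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName')), '2022-09-01').primaryEndpoints.blob]"
    }
  }
}
```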
The following example template references a storage account that isn't deployed
`references(symbolic name of a resource collection, ['Full', 'Properties'])`
-The `references` function works similarly as [`reference`](#reference). Instead of returning an object presenting a resource's runtime state, the `references` function returns an array of objects representing a collection of resource's runtime states. This function requires ARM template language version `1.10-experimental` and with [symbolic name](../bicep/file.md#resources) enabled:
+The `references` function works similarly to [`reference`](#reference). Instead of returning an object representing a resource's runtime state, the `references` function returns an array of objects representing a resource collection's runtime states. This function requires ARM template language version `1.10-experimental` with [symbolic name](../bicep/file.md#resources) enabled:
```json {
The `references` function works similarly as [`reference`](#reference). Instead
} ```
-In Bicep, there is no explicit `references` function. Instead, symbolic collection usage is employed directly, and during code generation, Bicep translates it to an ARM template that utilizes the ARM template `references` function. The forthcoming release of Bicep will include the translation feature that converts symbolic collections to ARM templates using the `references` function.
+In Bicep, there is no explicit `references` function. Instead, symbolic collection usage is employed directly, and during code generation, Bicep translates it to an ARM template that utilizes the ARM template `references` function. For more information, see [Reference resource/module collections](../bicep/loops.md#reference-resourcemodule-collections).
### Parameters
azure-resource-manager Template Specs Create Portal Forms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-specs-create-portal-forms.md
When you create the template spec, you package the form and Azure Resource Manag
The following screenshot shows a form opened in the Azure portal. ## Prerequisites
The Azure portal provides a sandbox for creating and previewing forms. This sand
1. Open the [Form view sandbox](https://aka.ms/form/sandbox).
- :::image type="content" source="./media/template-specs-create-portal-forms/deploy-template-spec-config.png" alt-text="Screenshot of form view sandbox.":::
+ :::image type="content" source="./media/template-specs-create-portal-forms/deploy-template-spec-config.png" alt-text="Screenshot of Azure portal form view sandbox interface.":::
1. In **Package Type**, select **CustomTemplate**. Make sure you select the package type before specifying the deployment template. 1. In **Deployment template (optional)**, select the key vault template you saved locally. When prompted if you want to overwrite current changes, select **Yes**. The autogenerated form is displayed in the code window. The form is editable from the portal. To customize the form, see [customize form](#customize-form).
The Azure portal provides a sandbox for creating and previewing forms. This sand
1. To see that it works without any modifications, select **Preview**.
- :::image type="content" source="./media/template-specs-create-portal-forms/view-portal-basic.png" alt-text="Screenshot of the generated basic form.":::
+ :::image type="content" source="./media/template-specs-create-portal-forms/view-portal-basic.png" alt-text="Screenshot of the generated basic Azure portal form.":::
The sandbox displays the form. It has fields for selecting a subscription, resource group, and region. It also has fields for all of the parameters from the template.
The default form is a good starting point for understanding forms but usually yo
1. Select **Preview**. You'll see the steps, but most of them don't have any elements.
- :::image type="content" source="./media/template-specs-create-portal-forms/view-steps.png" alt-text="Screenshot of form steps.":::
+ :::image type="content" source="./media/template-specs-create-portal-forms/view-steps.png" alt-text="Screenshot of Azure portal form with multiple steps.":::
1. Now, move elements to the appropriate steps. Start with the elements labeled **Secret Name** and **Secret Value**. Remove these elements from the **Basics** step and add them to the **Secret** step.
az ts create \
To test the form, go to the portal and navigate to your template spec. Select **Deploy**. You'll see the form you created. Go through the steps and provide values for the fields.
az ts create \
Redeploy your template spec with the improved portal form. Notice that your permission fields are now drop-downs that allow multiple values.
azure-resource-manager Template Tutorial Deploy Sql Extensions Bacpac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-tutorial-deploy-sql-extensions-bacpac.md
The template used in this tutorial is stored in [GitHub](https://raw.githubuserc
The following example shows the updated template:
- :::image type="content" source="media/template-tutorial-deploy-sql-extensions-bacpac/resource-manager-tutorial-deploy-sql-extensions-bacpac-firewall.png" alt-text="Template with firewall definition.":::
+ :::image type="content" source="media/template-tutorial-deploy-sql-extensions-bacpac/resource-manager-tutorial-deploy-sql-extensions-bacpac-firewall.png" alt-text="Screenshot of the template with firewall definition.":::
- Add a SQL Database extension resource to the database definition with the following JSON:
The template used in this tutorial is stored in [GitHub](https://raw.githubuserc
The following example shows the updated template:
- :::image type="content" source="media/template-tutorial-deploy-sql-extensions-bacpac/resource-manager-tutorial-deploy-sql-extensions-bacpac.png" alt-text="Template with SQL Database extension.":::
+ :::image type="content" source="media/template-tutorial-deploy-sql-extensions-bacpac/resource-manager-tutorial-deploy-sql-extensions-bacpac.png" alt-text="Screenshot of the template with SQL Database extension.":::
To understand the resource definition, see the API version's [SQL Database extension reference](/azure/templates/microsoft.sql/servers/databases/extensions). The following are some important elements:
Use the project name and location that were used when you prepared the BACPAC fi
1. Sign in to [Cloud Shell](https://shell.azure.com). 1. Select **PowerShell** from the upper left corner.
- :::image type="content" source="media/template-tutorial-deploy-sql-extensions-bacpac/cloud-shell-select.png" alt-text="Open Azure Cloud Shell in PowerShell and upload a file.":::
+ :::image type="content" source="media/template-tutorial-deploy-sql-extensions-bacpac/cloud-shell-select.png" alt-text="Screenshot of Azure Cloud Shell in PowerShell with the option to upload a file.":::
1. Select **Upload/Download files** and upload your _azuredeploy.json_ file. 1. To deploy the template, copy and paste the following script into the shell window.
For example, when you sign in to **Query editor** a message is displayed that th
In the Azure portal, from the resource group select the database. Select **Query editor (preview)**, and enter the administrator credentials. You'll see two tables were imported into the database.
-![Query editor (preview)](./media/template-tutorial-deploy-sql-extensions-bacpac/resource-manager-tutorial-deploy-sql-extensions-bacpac-query-editor.png)
## Clean up resources
azure-resource-manager Test Toolkit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/test-toolkit.md
Tests that fail are displayed in **red** and prefaced with `[-]`.
Tests with a warning are displayed in **yellow** and prefaced with `[?]`. The text results are:
azure-signalr Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/policy-reference.md
Title: Built-in policy definitions for Azure SignalR description: Lists Azure Policy built-in policy definitions for Azure SignalR. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/03/2023 Last updated : 08/08/2023
azure-video-indexer Accounts Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/accounts-overview.md
Title: Azure AI Video Indexer accounts description: This article gives an overview of Azure AI Video Indexer accounts and provides links to other articles for more details. Previously updated : 01/25/2023 Last updated : 08/07/2023
This article gives an overview of Azure AI Video Indexer accounts types and prov
When starting out with [Azure AI Video Indexer](https://www.videoindexer.ai/), click **start free** to kick off a quick and easy process of creating a trial account. No Azure subscription is required and this is a great way to explore Azure AI Video Indexer and try it out with your content. Keep in mind that the trial Azure AI Video Indexer account has a limitation on the number of indexing minutes, support, and SLA.
-With a trial account, Azure AI Video Indexer provides:
-
-* up to 600 minutes of free indexing to the [Azure AI Video Indexer](https://www.videoindexer.ai/) website users and
-* up to 2400 minutes of free indexing to users that subscribe to the Azure AI Video Indexer API on the [developer portal](https://api-portal.videoindexer.ai/).
+With a trial account, Azure AI Video Indexer provides up to 2,400 minutes of free indexing when using the [Azure AI Video Indexer](https://www.videoindexer.ai/) website or the Azure AI Video Indexer API (see [developer portal](https://api-portal.videoindexer.ai/)).
The trial account option is not available on the Azure Government cloud. For other Azure Government limitations, see [Limitations of Azure AI Video Indexer on Azure Government](connect-to-azure.md#limitations-of-azure-ai-video-indexer-on-azure-government).
azure-video-indexer Detected Clothing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/detected-clothing.md
Title: Enable detected clothing feature description: Azure AI Video Indexer detects clothing associated with the person wearing it in the video and provides information such as the type of clothing detected and the timestamp of the appearance (start, end). The API returns the detection confidence level. Previously updated : 08/02/2023 Last updated : 08/07/2023
The newly added clothing detection feature is available when indexing your file
:::image type="content" source="./media/detected-clothing/index-video.png" alt-text="This screenshot represents an indexing video option":::
-When you choose to see **Insights** of your video on the [Azure AI Video Indexer](https://www.videoindexer.ai/) website, the People's detected clothing could be viewed from the **Observed People** tracing insight. When choosing a thumbnail of a person the detected clothing became available.
+When you choose to see **Insights** of your video on the [Azure AI Video Indexer](https://www.videoindexer.ai/) website, the people's detected clothing can be viewed from the **Observed People** tracking insight. When you choose a thumbnail of a person, the detected clothing becomes available.
:::image type="content" source="./media/detected-clothing/observed-people.png" alt-text="Observed people screenshot":::
If you're interested to view People's detected clothing in the Timeline of your
You can search for specific clothing to return all the observed people wearing it by using the search bar of either the **Insights** or the **Timeline** of your video on the Azure AI Video Indexer website.
-The following JSON response illustrates what Azure AI Video Indexer returns when tracing observed people having detected clothing associated:
+The following JSON response illustrates what Azure AI Video Indexer returns when tracking observed people who have detected clothing associated with them:
```json "observedPeople": [
As the detected clothing feature uses observed people tracking, the tracking qua
## Next steps
-[Trace observed people in a video](observed-people-tracing.md)
+[Track observed people in a video](observed-people-tracking.md)
azure-video-indexer Observed People Tracking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/observed-people-tracking.md
+
+ Title: Track observed people in a video
+description: This article gives an overview of the track observed people in a video concept.
+ Last updated : 08/07/2023+++
+# Track observed people in a video (preview)
+
+Azure AI Video Indexer detects observed people in videos and provides information such as the location of the person in the video frame and the exact timestamp (start, end) when a person appears. The API returns the bounding box coordinates (in pixels) for each person instance detected, including detection confidence.
+
+Some scenarios where this feature could be useful:
+
+* Post-event analysis: detect and track a person's movement to better analyze an accident or crime post-event (for example, explosion, bank robbery, incident).
+* Improve efficiency when creating raw data for content creators, like video advertising, news, or sports games (for example, find people wearing a red shirt in a video archive).
+* Create a summary out of a long video, like court evidence of a specific person's appearance in a video, using the same detected person's ID.
+* Learn and analyze trends over time, for example, how customers move across aisles in a shopping mall or how much time they spend in checkout lines.
+
+For example, if a video contains a person, the detect operation will list the person's appearances together with their coordinates in the video frames. You can use this functionality to determine the person's path in a video. It also lets you determine whether there are multiple instances of the same person in a video.
+
+The newly added **Observed people tracking** feature is available when indexing your file by choosing the **Advanced option** -> **Advanced video** or **Advanced video + audio** preset (under **Video + audio indexing**). Standard indexing will not include this new advanced model.
+
+
+When you choose to see **Insights** of your video on the [Video Indexer](https://www.videoindexer.ai/account/login) website, the Observed People Tracking insight shows up on the page with thumbnails of all detected people. You can choose a thumbnail of a person and see where the person appears in the video player.
+
+The following JSON response illustrates what Video Indexer returns when tracking observed people:
+
+```json
+ {
+ ...
+ "videos": [
+ {
+ ...
+ "insights": {
+ ...
+ "observedPeople": [{
+ "id": 1,
+ "thumbnailId": "560f2cfb-90d0-4d6d-93cb-72bd1388e19d",
+ "instances": [
+ {
+ "adjustedStart": "0:00:01.5682333",
+ "adjustedEnd": "0:00:02.7027",
+ "start": "0:00:01.5682333",
+ "end": "0:00:02.7027"
+ }
+ ]
+ },
+ {
+ "id": 2,
+ "thumbnailId": "9c97ae13-558c-446b-9989-21ac27439da0",
+ "instances": [
+ {
+ "adjustedStart": "0:00:16.7167",
+ "adjustedEnd": "0:00:18.018",
+ "start": "0:00:16.7167",
+ "end": "0:00:18.018"
+ }
+ ]
+ }]
+ }
+ ...
+ }
+ ]
+}
+```
+
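+For illustration, the following is a minimal sketch (assuming a complete response like the abbreviated sample above has been saved to a hypothetical `insights.json` file) that lists each observed person's ID and appearance intervals with `System.Text.Json`:
+
+```csharp
+using System;
+using System.IO;
+using System.Text.Json;
+
+// Parse a saved insights response and print each observed person's appearances.
+string json = File.ReadAllText("insights.json");
+using JsonDocument doc = JsonDocument.Parse(json);
+
+JsonElement observedPeople = doc.RootElement
+    .GetProperty("videos")[0]
+    .GetProperty("insights")
+    .GetProperty("observedPeople");
+
+foreach (JsonElement person in observedPeople.EnumerateArray())
+{
+    int id = person.GetProperty("id").GetInt32();
+    foreach (JsonElement instance in person.GetProperty("instances").EnumerateArray())
+    {
+        string start = instance.GetProperty("start").GetString();
+        string end = instance.GetProperty("end").GetString();
+        Console.WriteLine($"Person {id}: {start} to {end}");
+    }
+}
+```
+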
+## Limitations and assumptions
+
+For more information, see [Considerations and limitations when choosing a use case](observed-matched-people.md#considerations-and-limitations-when-choosing-a-use-case).
+
+## Next steps
+
+Review the [overview](video-indexer-overview.md)
azure-web-pubsub Quickstarts Push Messages From Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/quickstarts-push-messages-from-server.md
const client = new WebPubSubClient("<client-access-url>")
// Registers a handler for the "server-message" event client.on("server-message", (e) => {
- console.log(`Received message ${e.message.data}`);
+ console.log(`Received message ${e.message.data}`)
+});
// Before a client can receive a message, // you must invoke start() on the client object.
Now this client establishes a connection with your Web PubSub resource and is re
# [C#](#tab/csharp)+ #### Create a project directory named `subscriber` and install required dependencies ```bash mkdir subscriber cd subscriber- # Create a .net console app dotnet new console
dotnet new console
dotnet add package Azure.Messaging.WebPubSub.Client --prerelease ```
+#### Connect to your Web PubSub resource and register a listener for the `ServerMessageReceived` event
+A client uses a ***Client Access URL*** to connect and authenticate with your resource.
+This URL follows the pattern `wss://<service_name>.webpubsub.azure.com/client/hubs/<hub_name>?access_token=<token>`. A client has a few ways to obtain the Client Access URL. For this quickstart, you can copy and paste one from the Azure portal as shown in the following diagram. It's best practice not to hard-code the Client Access URL in your code. In production, you usually set up an app server to return this URL on demand. [Generate Client Access URL](./howto-generate-client-access-url.md) describes the practice in detail.
+
+![The diagram shows how to get the Client Access URL.](./media/quickstarts-push-messages-from-server/push-messages-from-server.png)
+
+As shown in the diagram above, the client joins the hub named `myHub1`.
++ #### Replace the code in the `Program.cs` with the following code ```csharp
using Azure.Messaging.WebPubSub.Clients;
// Instantiates the client object // <client-access-uri> is copied from Azure portal mentioned above var client = new WebPubSubClient(new Uri("<client-access-uri>"));- client.ServerMessageReceived += eventArgs => { Console.WriteLine($"Receive message: {eventArgs.Message.Data}"); return Task.CompletedTask; };
+client.Connected += eventArgs =>
+{
+ Console.WriteLine("Connected");
+ return Task.CompletedTask;
+};
+ await client.StartAsync();++
+// This keeps the subscriber active until the user closes the stream by pressing Ctrl+C
+var streaming = Console.ReadLine();
+while (streaming != null)
+{
+ if (!string.IsNullOrEmpty(streaming))
+ {
+ await client.SendToGroupAsync("stream", BinaryData.FromString(streaming + Environment.NewLine), WebPubSubDataType.Text);
+ }
+
+ streaming = Console.ReadLine();
+}
+
+await client.StopAsync();
+ ``` #### Run the following command ```bash
-dotnet run "myHub1"
+dotnet run
``` Now this client establishes a connection with your Web PubSub resource and is ready to receive messages pushed from your application server.
The `SendToAllAsync()` call sends a message to all connected clients in the hub.
#### Run the server program to push messages to all connected clients ```bash
+connection_string="<connection-string>"
dotnet run $connection_string "myHub1" "Hello World" ```
backup Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/policy-reference.md
Title: Built-in policy definitions for Azure Backup description: Lists Azure Policy built-in policy definitions for Azure Backup. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/03/2023 Last updated : 08/08/2023
bastion Bastion Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-faq.md
description: Learn about frequently asked questions for Azure Bastion.
Previously updated : 08/03/2023 Last updated : 08/08/2023 # Azure Bastion FAQ
Yes, [Azure AD guest accounts](../active-directory/external-identities/what-is-b
### <a name="shareable-links-domains"></a>Are custom domains supported with Bastion shareable links?
-No, custom domains are not supported with Bastion shareable links. Users will receive a certificate error upon trying to add specific domains in the CN/SAN of the Bastion host certificate.
+No, custom domains aren't supported with Bastion shareable links. Users receive a certificate error upon trying to add specific domains in the CN/SAN of the Bastion host certificate.
## <a name="vm"></a>VM features and connection FAQs
In order to make a connection, the following roles are required:
* Reader role on the Azure Bastion resource. * Reader role on the virtual network of the target virtual machine (if the Bastion deployment is in a peered virtual network).
+Additionally, the user must have the rights (if required) to connect to the VM. For example, if the user is connecting to a Windows VM via RDP and isn't a member of the local Administrators group, they must be a member of the Remote Desktop Users group.
+ ### <a name="publicip"></a>Do I need a public IP on my virtual machine to connect via Azure Bastion?
-No. When you connect to a VM using Azure Bastion, you don't need a public IP on the Azure virtual machine that you're connecting to. The Bastion service will open the RDP/SSH session/connection to your virtual machine over the private IP of your virtual machine, within your virtual network.
+No. When you connect to a VM using Azure Bastion, you don't need a public IP on the Azure virtual machine that you're connecting to. The Bastion service opens the RDP/SSH session/connection to your virtual machine over the private IP of your virtual machine, within your virtual network.
### <a name="rdpssh"></a>Do I need an RDP or SSH client?
Azure Bastion offers support for file transfer between your target VM and local
### <a name="aadj"></a>Does Bastion hardening work with AADJ VM extension-joined VMs?
-This feature doesn't work with AADJ VM extension-joined machines using Azure AD users. For more information, see [Log in to a Windows virtual machine in Azure by using Azure AD](../active-directory/devices/howto-vm-sign-in-azure-ad-windows.md#requirements).
+This feature doesn't work with AADJ VM extension-joined machines using Azure AD users. For more information, see [Sign in to a Windows virtual machine in Azure by using Azure AD](../active-directory/devices/howto-vm-sign-in-azure-ad-windows.md#requirements).
### <a name="rdscal"></a>Does Azure Bastion require an RDS CAL for administrative purposes on Azure-hosted VMs?
To set your target language as your keyboard layout on a Windows workstation, na
### <a name="shortcut"></a>Is there a keyboard solution to toggle focus between a VM and browser?
-Users can use "Ctrl+Shift+Alt" to effectively switch focus between the VM and the browser.
+Users can use "Ctrl+Shift+Alt" to effectively switch focus between the VM and the browser.
+
+### <a name="keyboard-focus"></a>How do I take keyboard or mouse focus back from an instance?
+
+Press the Windows key twice in a row to take back focus within the Bastion window.
### <a name="res"></a>What is the maximum screen resolution supported via Bastion?
Make sure the user has **read** access to both the VM, and the peered VNet. Addi
|Microsoft.Network/virtualNetworks/subnets/virtualMachines/read|Gets references to all the virtual machines in a virtual network subnet|Action| |Microsoft.Network/virtualNetworks/virtualMachines/read|Gets references to all the virtual machines in a virtual network|Action|
-### My privatelink.azure.com cannot resolve to management.privatelink.azure.com
+### My privatelink.azure.com can't resolve to management.privatelink.azure.com
This may be due to the Private DNS zone for privatelink.azure.com that's linked to the Bastion virtual network, which causes management.azure.com CNAMEs to resolve to management.privatelink.azure.com behind the scenes. Create a CNAME record in your privatelink.azure.com zone that points management.privatelink.azure.com to arm-frontdoor-prod.trafficmanager.net to enable successful DNS resolution.
This may be due to the Private DNS zone for privatelink.azure.com linked to the
## Next steps
-For more information, see [What is Azure Bastion](bastion-overview.md).
+For more information, see [What is Azure Bastion](bastion-overview.md).
bastion Connect Vm Native Client Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/connect-vm-native-client-linux.md
description: Learn how to connect to a VM from a Linux computer by using Bastion
Previously updated : 06/23/2023 Last updated : 08/08/2023
az network bastion ssh --name "<BastionName>" --resource-group "<ResourceGroupNa
[!INCLUDE [IP address](../../includes/bastion-native-ip-address.md)]
+### Multi-connection tunnel
++ ## Next steps [Upload or download files](vm-upload-download-native.md)
bastion Connect Vm Native Client Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/connect-vm-native-client-windows.md
description: Learn how to connect to a VM from a Windows computer by using Basti
Previously updated : 06/23/2023 Last updated : 08/08/2023
az network bastion ssh --name "<BastionName>" --resource-group "<ResourceGroupNa
[!INCLUDE [IP address](../../includes/bastion-native-ip-address.md)]
+### Multi-connection tunnel
++ ## Next steps [Upload or download files](vm-upload-download-native.md)
batch Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/policy-reference.md
Title: Built-in policy definitions for Azure Batch description: Lists Azure Policy built-in policy definitions for Azure Batch. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/03/2023 Last updated : 08/08/2023
container-apps Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/authentication.md
This feature should be used with HTTPS only. Ensure `allowInsecure` is disabled
You can configure your container app for authentication with or without restricting access to your site content and APIs. To restrict app access only to authenticated users, set its *Restrict access* setting to **Require authentication**. To authenticate but not restrict access, set its *Restrict access* setting to **Allow unauthenticated access**.
-Each container app issues its own unique cookie or token for authentication. A client cannot use the same cookie or token provided by one container app to authenticate with another container app, even within the same container app environment.
+By default, each container app issues its own unique cookie or token for authentication. You can also provide your own signing and encryption keys.
## Feature architecture
container-apps Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/policy-reference.md
Title: Built-in policy definitions for Azure Container Apps
description: Lists Azure Policy built-in policy definitions for Azure Container Apps. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/03/2023 Last updated : 08/08/2023
container-instances Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/policy-reference.md
Previously updated : 08/03/2023 Last updated : 08/08/2023 # Azure Policy built-in definitions for Azure Container Instances
container-registry Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/policy-reference.md
Title: Built-in policy definitions for Azure Container Registry
description: Lists Azure Policy built-in policy definitions for Azure Container Registry. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/03/2023 Last updated : 08/08/2023
cosmos-db How To Setup Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-setup-customer-managed-keys.md
az cosmosdb show \
--query "keyVaultKeyUri" ``` + ## Using a managed identity in the Azure Key Vault access policy
Currently, only user-assigned managed identity is supported for creating continu
Once the account has been created, you can update the identity to system-assigned managed identity.
-> [!NOTE]
-> System-assigned identity and continuous backup mode is currently under Public Preview and may change in the future.
- Alternatively, user can also create a system identity with periodic backup mode first, then migrate the account to Continuous backup mode using these instructions [Migrate an Azure Cosmos DB account from periodic to continuous backup mode](./migrate-continuous-backup.md) ### [Azure CLI](#tab/azure-cli)
A user-assigned identity is required in the restore request because the source a
Use the Azure CLI to restore a continuous account that is already configured using a system-assigned or user-assigned managed identity.
-> [!NOTE]
-> This feature is currently under Public Preview and requires Cosmos DB CLI Extension version 0.20.0 or higher.
-- 1. Create a new user-assigned identity (or use an existing one) for the restore process. 1. Create the new access policy in your Azure Key Vault account as described previously, use the Object ID of the managed identity from step 1.
Use the Azure CLI to restore a continuous account that is already configured usi
1. Once the restore has completed, the target (restored) account will have the user-assigned identity. If desired, user can update the account to use System-Assigned managed identity. - ### [PowerShell / Azure Resource Manager template / Azure portal](#tab/azure-powershell+arm-template+azure-portal) Not available
Steps to assign a new managed-identity:
## Next steps - Learn more about [data encryption in Azure Cosmos DB](database-encryption-at-rest.md).+
cosmos-db Performance Tips Dotnet Sdk V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/performance-tips-dotnet-sdk-v3.md
You can get the lowest possible latency by ensuring that the calling application
Because calls to Azure Cosmos DB are made over the network, you might need to vary the degree of concurrency of your requests so that the client application spends minimal time waiting between requests. For example, if you're using the .NET [Task Parallel Library](/dotnet/standard/parallel-programming/task-parallel-library-tpl), create on the order of hundreds of tasks that read from or write to Azure Cosmos DB.
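For illustration only, here's a minimal sketch of issuing many reads concurrently with the .NET SDK; it assumes an existing `Container` instance named `container`, a hypothetical `MyItem` type, and a list of id/partition key pairs named `itemsToRead`:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

// Start all reads without awaiting each one individually (runs inside an async method).
var tasks = new List<Task<ItemResponse<MyItem>>>();
foreach ((string id, string partitionKey) in itemsToRead)
{
    tasks.Add(container.ReadItemAsync<MyItem>(id, new PartitionKey(partitionKey)));
}

// Await the tasks together so the requests overlap on the network instead of running one at a time.
ItemResponse<MyItem>[] responses = await Task.WhenAll(tasks);
```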
-**Enable accelerated networking**
-
-To reduce latency and CPU jitter, we recommend that you enable accelerated networking on your client virtual machines. For more information, see [Create a Windows virtual machine with accelerated networking](../../virtual-network/create-vm-accelerated-networking-powershell.md) or [Create a Linux virtual machine with accelerated networking](../../virtual-network/create-vm-accelerated-networking-cli.md).
+**Enable accelerated networking to reduce latency and CPU jitter**
+
+It is recommended that you follow the instructions to enable [Accelerated Networking](../../virtual-network/accelerated-networking-overview.md) in your [Windows (click for instructions)](../../virtual-network/create-vm-accelerated-networking-powershell.md) or [Linux (click for instructions)](../../virtual-network/create-vm-accelerated-networking-cli.md) Azure VM, in order to maximize performance.
+
+Without accelerated networking, IO that transits between your Azure VM and other Azure resources may be unnecessarily routed through a host and virtual switch situated between the VM and its network card. Having the host and virtual switch inline in the datapath not only increases latency and jitter in the communication channel, it also steals CPU cycles from the VM. With accelerated networking, the VM interfaces directly with the NIC without intermediaries; any network policy details which were being handled by the host and virtual switch are now handled in hardware at the NIC; the host and virtual switch are bypassed. Generally you can expect lower latency and higher throughput, as well as more *consistent* latency and decreased CPU utilization when you enable accelerated networking.
+
+Limitations: accelerated networking must be supported on the VM OS, and can only be enabled when the VM is stopped and deallocated. The VM cannot be deployed with Azure Resource Manager. Accelerated networking isn't enabled for [App Service](../../app-service/overview.md).
+
+Please see the [Windows](../../virtual-network/create-vm-accelerated-networking-powershell.md) and [Linux](../../virtual-network/create-vm-accelerated-networking-cli.md) instructions for more details.
## <a id="sdk-usage"></a> SDK usage
cosmos-db Performance Tips Java Sdk V4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/performance-tips-java-sdk-v4.md
When possible, place any applications calling Azure Cosmos DB in the same region
An app that interacts with a multi-region Azure Cosmos DB account needs to configure [preferred locations](tutorial-global-distribution.md#preferred-locations) to ensure that requests are going to a collocated region.
-* **Enable Accelerated Networking on your Azure VM for lower latency.**
+**Enable accelerated networking to reduce latency and CPU jitter**
-It is recommended that you follow the instructions to enable Accelerated Networking in your [Windows (click for instructions)](../../virtual-network/create-vm-accelerated-networking-powershell.md) or [Linux (click for instructions)](../../virtual-network/create-vm-accelerated-networking-cli.md) Azure VM, in order to maximize performance.
+It is recommended that you follow the instructions to enable [Accelerated Networking](../../virtual-network/accelerated-networking-overview.md) in your [Windows (click for instructions)](../../virtual-network/create-vm-accelerated-networking-powershell.md) or [Linux (click for instructions)](../../virtual-network/create-vm-accelerated-networking-cli.md) Azure VM, in order to maximize performance (reduce latency and CPU jitter).
Without accelerated networking, IO that transits between your Azure VM and other Azure resources may be unnecessarily routed through a host and virtual switch situated between the VM and its network card. Having the host and virtual switch inline in the datapath not only increases latency and jitter in the communication channel, it also steals CPU cycles from the VM. With accelerated networking, the VM interfaces directly with the NIC without intermediaries; any network policy details which were being handled by the host and virtual switch are now handled in hardware at the NIC; the host and virtual switch are bypassed. Generally you can expect lower latency and higher throughput, as well as more *consistent* latency and decreased CPU utilization when you enable accelerated networking.
-Limitations: accelerated networking must be supported on the VM OS, and can only be enabled when the VM is stopped and deallocated. The VM cannot be deployed with Azure Resource Manager.
+Limitations: accelerated networking must be supported on the VM OS, and can only be enabled when the VM is stopped and deallocated. The VM cannot be deployed with Azure Resource Manager. Accelerated networking isn't enabled for [App Service](../../app-service/overview.md).
Please see the [Windows](../../virtual-network/create-vm-accelerated-networking-powershell.md) and [Linux](../../virtual-network/create-vm-accelerated-networking-cli.md) instructions for more details.
cosmos-db Performance Tips Query Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/performance-tips-query-sdk.md
zone_pivot_groups: programming-languages-set-cosmos
# Query performance tips for Azure Cosmos DB SDKs [!INCLUDE[NoSQL](../includes/appliesto-nosql.md)] + Azure Cosmos DB is a fast, flexible distributed database that scales seamlessly with guaranteed latency and throughput levels. You don't have to make major architecture changes or write complex code to scale your database with Azure Cosmos DB. Scaling up and down is as easy as making a single API call. To learn more, see [provision container throughput](how-to-provision-container-throughput.md) or [provision database throughput](how-to-provision-database-throughput.md). ::: zone pivot="programming-language-csharp"
Azure Cosmos DB is a fast, flexible distributed database that scales seamlessly
To execute a query, a query plan needs to be built. This in general represents a network request to the Azure Cosmos DB Gateway, which adds to the latency of the query operation. There are two ways to remove this request and reduce the latency of the query operation:
+### Optimizing single partition queries with Optimistic Direct Execution
+
+Azure Cosmos DB NoSQL has an optimization called Optimistic Direct Execution (ODE), which can improve the efficiency of certain NoSQL queries. Specifically, queries that don't require distribution include those that can be executed on a single physical partition or that have responses that don't require [pagination](query/pagination.md). Queries that don't require distribution can confidently skip some processes, such as client-side query plan generation and query rewrite, thereby reducing query latency and RU cost. If you specify the partition key in the request or query itself (or have only one physical partition), and the results of your query don't require pagination, then ODE can improve your queries.
+
+ODE is now available and enabled by default in the .NET SDK (preview) version 3.35.0-preview and later. When you execute a query and specify a partition key in the request or query itself, or your database has only one physical partition, your query execution can leverage the benefits of ODE. To disable ODE, set EnableOptimisticDirectExecution to false in the QueryRequestOptions.
+
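+The following is a minimal sketch (assuming the `Microsoft.Azure.Cosmos` 3.35.0-preview package or later, an existing `Container` instance named `container`, and an illustrative partition key value) that shows one way to opt a single query out of ODE by setting `EnableOptimisticDirectExecution` to `false` in `QueryRequestOptions`:
+
+```csharp
+using Microsoft.Azure.Cosmos;
+
+// Opt this query out of Optimistic Direct Execution (runs inside an async method).
+var requestOptions = new QueryRequestOptions
+{
+    PartitionKey = new PartitionKey("value"), // illustrative partition key value
+    EnableOptimisticDirectExecution = false
+};
+
+FeedIterator<dynamic> iterator = container.GetItemQueryIterator<dynamic>(
+    "SELECT * FROM r WHERE r.pk = 'value'",
+    requestOptions: requestOptions);
+
+while (iterator.HasMoreResults)
+{
+    FeedResponse<dynamic> page = await iterator.ReadNextAsync();
+    Console.WriteLine($"Request charge: {page.RequestCharge}");
+}
+```
+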
+Single partition queries that feature GROUP BY, ORDER BY, DISTINCT, and aggregation functions (like sum, mean, min, and max) can significantly benefit from using ODE. However, in scenarios where the query is targeting multiple partitions or still requires pagination, the latency of the query response and RU cost might be higher than without using ODE. Therefore, when using ODE, we recommend that you:
+- Specify the partition key in the call or query itself.
+- Ensure that your data size hasn't grown and caused the partition to split.
+- Ensure that your query results don't require pagination to get the full benefit of ODE.
+
+Here are a few examples of simple single partition queries which can benefit from ODE:
+```
+- SELECT * FROM r
+- SELECT * FROM r WHERE r.pk == "value"
+- SELECT * FROM r WHERE r.id > 5
+- SELECT r.id FROM r JOIN id IN r.id
+- SELECT TOP 5 r.id FROM r ORDER BY r.id
+- SELECT * FROM r WHERE r.id > 5 OFFSET 5 LIMIT 3
+```
+There can be cases where single partition queries may still require distribution if the number of data items increases over time and your Azure Cosmos DB database [splits the partition](../partitioning-overview.md#physical-partitions). Examples of queries where this could occur include:
+```
+- SELECT Count(r.id) AS count_a FROM r
+- SELECT DISTINCT r.id FROM r
+- SELECT Max(r.a) as min_a FROM r
+- SELECT Avg(r.a) as min_a FROM r
+- SELECT Sum(r.a) as sum_a FROM r WHERE r.a > 0
+```
+Some complex queries can always require distribution, even if targeting a single partition. Examples of such queries include:
+```
+- SELECT Sum(id) as sum_id FROM r JOIN id IN r.id
+- SELECT DISTINCT r.id FROM r GROUP BY r.id
+- SELECT DISTINCT r.id, Sum(r.id) as sum_a FROM r GROUP BY r.id
+- SELECT Count(1) FROM (SELECT DISTINCT r.id FROM root r)
+- SELECT Avg(1) AS avg FROM root r
+```
+
+It's important to note that ODE might not always retrieve the query plan and, as a result, isn't able to disallow unsupported queries or turn itself off for them. For example, after a partition split, such queries are no longer eligible for ODE and, therefore, won't run because client-side query plan evaluation blocks them. To ensure compatibility/service continuity, it's critical to ensure that only queries that are fully supported in scenarios without ODE (that is, they execute and produce the correct result in the general multi-partition case) are used with ODE.
+
+>[!NOTE]
+> Using ODE can potentially cause a new type of continuation token to be generated. Such a token isn't recognized by the older SDKs by design, which could result in a Malformed Continuation Token Exception. If you have a scenario where tokens generated from the newer SDKs are used by an older SDK, we recommend a two-step approach to upgrade:
+>
+>- Upgrade to the new SDK and disable ODE, both together as part of a single deployment. Wait for all nodes to upgrade.
+> - In order to disable ODE, set EnableOptimisticDirectExecution to false in the QueryRequestOptions.
+>- Enable ODE as part of a second deployment for all nodes.
++ ### Use local Query Plan generation The SQL SDK includes a native ServiceInterop.dll to parse and optimize queries locally. ServiceInterop.dll is supported only on the **Windows x64** platform. The following types of applications use 32-bit host processing by default. To change host processing to 64-bit processing, follow these steps, based on the type of your application:
IQueryable<dynamic> authorResults = client.CreateDocumentQuery(
Pre-fetching works the same way regardless of the degree of parallelism, and there's a single buffer for the data from all partitions.
-## Optimizing single partition queries with Optimistic Direct Execution
-
-Azure Cosmos DB NoSQL has an optimization called Optimistic Direct Execution (ODE), which can improve the efficiency of certain NoSQL queries. Specifically, queries that donΓÇÖt require distribution include those that can be executed on a single physical partition or that have responses that don't require [pagination](query/pagination.md). Queries that donΓÇÖt require distribution can confidently skip some processes, such as client-side query plan generation and query rewrite, thereby reducing query latency and RU cost. If you specify the partition key in the request or query itself (or have only one physical partition), and the results of your query donΓÇÖt require pagination, then ODE can improve your queries.
-
-Single partition queries that feature GROUP BY, ORDER BY, DISTINCT, and aggregation functions (like sum, mean, min, and max) can significantly benefit from using ODE. However, in scenarios where the query is targeting multiple partitions or still requires pagination, the latency of the query response and RU cost might be higher than without using ODE. Therefore, when using ODE, we recommend to:
-- Specify the partition key in the call or query itself. -- Ensure that your data size hasnΓÇÖt grown and caused the partition to split.-- Ensure that your query results donΓÇÖt require pagination to get the full benefit of ODE.
-
-Here are a few examples of simple single partition queries which can benefit from ODE:
-```
-- SELECT * FROM r-- SELECT VALUE r.id FROM r-- SELECT * FROM r WHERE r.id > 5-- SELECT r.id FROM r JOIN id IN r.id-- SELECT TOP 5 r.id FROM r ORDER BY r.id-- SELECT * FROM r WHERE r.id > 5 OFFSET 5 LIMIT 3
-```
-There can be cases where single partition queries may still require distribution if the number of data items increases over time and your Azure Cosmos DB database [splits the partition](../partitioning-overview.md#physical-partitions). Examples of queries where this could occur include:
-```
-- SELECT Count(r.id) AS count_a FROM r-- SELECT DISTINCT r.id FROM r-- SELECT Max(r.a) as min_a FROM r-- SELECT Avg(r.a) as min_a FROM r-- SELECT Sum(r.a) as sum_a FROM r WHERE r.a > 0
-```
-Some complex queries can always require distribution, even if targeting a single partition. Examples of such queries include:
-```
-- SELECT Sum(id) as sum_id FROM r JOIN id IN r.id-- SELECT DISTINCT r.id FROM r GROUP BY r.id-- SELECT DISTINCT r.id, Sum(r.id) as sum_a FROM r GROUP BY r.id-- SELECT Count(1) FROM (SELECT DISTINCT r.id FROM root r)-- SELECT Avg(1) AS avg FROM root r
-```
-
-It's important to note that ODE might not always retrieve the query plan and, as a result, is not able to disallow or turn off for unsupported queries. For example, after partition split, such queries are no longer eligible for ODE and, therefore, won't run because client-side query plan evaluation will block those. To ensure compatibility/service continuity, it's critical to ensure that only queries that are fully supported in scenarios without ODE (that is, they execute and produce the correct result in the general multi-partition case) are used with ODE.
-
-### Using ODE via the SDKs
-ODE is now available and enabled by default in the .NET Preview SDK for versions 3.35.0 and later. When you execute a query and specify a partition key in the request or query itself, or your database has only one physical partition, your query execution can leverage the benefits of ODE.
-
-To disable ODE, set the flag `EnableOptimisticDirectExecution` to false in your QueryRequestOptions object.
--- ## Next steps To learn more about performance using the .NET SDK:
cosmos-db Periodic Backup Restore Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/periodic-backup-restore-introduction.md
With the periodic backup mode, the backups are taken only in the write region of
## What is restored into new account? --You can choose to restore any combination of provisioned throughput containers, shared throughput database, or the entire account.--The restore action restores all data and its index properties into a new account.--The duration of restore will depend on the amount of data that needs to be restored.--The newly restored database accountΓÇÖs consistency setting will be same as the source database accountΓÇÖs consistency settings.
+- You can choose to restore any combination of provisioned throughput containers, shared throughput database, or the entire account.
+- The restore action restores all data and its index properties into a new account.
+- The duration of restore will depend on the amount of data that needs to be restored.
+- The newly restored database accountΓÇÖs consistency setting will be same as the source database accountΓÇÖs consistency settings.
## What isn't restored? The following configurations aren't restored after the point-in-time recovery.--A subset of containers under a shared throughput database cannot be restored. The entire database can be restored as a whole.--Database account keys. The restored account will be generated with new database account keys. --Firewall, VNET, Data plane RBAC or private endpoint settings. --Regions. The restored account will only be a single region account, which is the write region of the source account. --Stored procedures, triggers, UDFs. --Role-based access control assignments. These will need to be re-assigned. --Documents that were deleted because of expired TTL. --Analytical data when synapse link is enabled. --Materialized views
+- A subset of containers under a shared throughput database cannot be restored. The entire database can be restored as a whole.
+- Database account keys. The restored account will be generated with new database account keys.
+- Firewall, VNET, Data plane RBAC or private endpoint settings.
+- Regions. The restored account will only be a single region account, which is the write region of the source account.
+- Stored procedures, triggers, UDFs.
+- Role-based access control assignments. These will need to be re-assigned.
+- Documents that were deleted because of expired TTL.
+- Analytical data when synapse link is enabled.
+- Materialized views
Some of these configurations can be added to the restored account after the restore is completed.
cosmos-db Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/policy-reference.md
Title: Built-in policy definitions for Azure Cosmos DB description: Lists Azure Policy built-in policy definitions for Azure Cosmos DB. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/03/2023 Last updated : 08/08/2023
cost-management-billing Exchange And Refund Azure Reservations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/exchange-and-refund-azure-reservations.md
Previously updated : 05/03/2023 Last updated : 08/08/2023
If you're exchanging for a different size, series, region or payment frequency,
Microsoft cancels the existing reservation. Then the pro-rated amount for that reservation is refunded. If there's an exchange, the new purchase is processed. Microsoft processes refunds using one of the following methods, depending on your account type and payment method.
-### Enterprise agreement customers
+### Enterprise Agreement customers
Money is added to the Azure Prepayment (previously called monetary commitment) for exchanges and refunds if the original purchase was made using one. If the Azure Prepayment term that was used to purchase the reservation is no longer active, then credit is added to your current enterprise agreement Azure Prepayment term. The credit is valid for 90 days from the date of refund. Unused credit expires at the end of 90 days.
-If the original purchase was made as an overage, the original invoice on which the reservation was purchased and all later invoices are reopened and readjusted. Microsoft issues a credit memo for the refunds.
+If the original reservation purchase was made from an overage, the refund is returned to you as a partial credit note. The refund doesn't affect the original or later invoices.
### Pay-as-you-go invoice payments and CSP program
data-factory Concepts Change Data Capture Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-change-data-capture-resource.md
Previously updated : 04/06/2023 Last updated : 08/08/2023 # Change data capture resource overview
The new Change Data Capture resource in ADF allows for full fidelity change data
* Parquet * SQL Server * XML
+* Snowflake
## Supported targets
data-factory Concepts Change Data Capture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-change-data-capture.md
Previously updated : 01/23/2023 Last updated : 08/08/2023 # Change data capture in Azure Data Factory and Azure Synapse Analytics
The changed data including inserted, updated and deleted rows can be automatical
- [Azure SQL Managed Instance](connector-azure-sql-managed-instance.md) - [Azure Cosmos DB (SQL API)](connector-azure-cosmos-db.md) - [Azure Cosmos DB analytical store](../cosmos-db/analytical-store-introduction.md)
+- [Snowflake](connector-snowflake.md)
### Auto incremental extraction in mapping data flow
data-factory How To Configure Shir For Log Analytics Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-configure-shir-for-log-analytics-collection.md
It's important to note that when choosing the event logs using the interface, it
The event journal name we must configure are: -- Connectors ΓÇô Integration Runtime
+- Connectors - Integration Runtime
- Integration Runtime :::image type="content" source="media/how-to-configure-shir-for-log-analytics-collection/configure-journals-for-collection.png" alt-text="Screenshot of the selection of the SHIR relevant logs with errors and warnings checked.":::
data-factory Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/policy-reference.md
Previously updated : 08/03/2023 Last updated : 08/08/2023 # Azure Policy built-in definitions for Data Factory
data-lake-analytics Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/policy-reference.md
Title: Built-in policy definitions for Azure Data Lake Analytics description: Lists Azure Policy built-in policy definitions for Azure Data Lake Analytics. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/03/2023 Last updated : 08/08/2023
data-lake-store Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/policy-reference.md
Title: Built-in policy definitions for Azure Data Lake Storage Gen1 description: Lists Azure Policy built-in policy definitions for Azure Data Lake Storage Gen1. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/03/2023 Last updated : 08/08/2023
databox-online Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/policy-reference.md
Title: Built-in policy definitions for Azure Stack Edge description: Lists Azure Policy built-in policy definitions for Azure Stack Edge. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/03/2023 Last updated : 08/08/2023
databox Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/policy-reference.md
Title: Built-in policy definitions for Azure Data Box description: Lists Azure Policy built-in policy definitions for Azure Data Box. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/03/2023 Last updated : 08/08/2023
ddos-protection Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/alerts.md
description: Learn how to configure DDoS protection metric alerts for Azure DDoS
-+ Previously updated : 01/30/2023 Last updated : 08/07/2023 # Configure Azure DDoS Protection metric alerts through portal
-Azure DDoS Protection provides detailed attack insights and visualization with DDoS Attack Analytics. Customers protecting their virtual networks against DDoS attacks have detailed visibility into attack traffic and actions taken to mitigate the attack via attack mitigation reports & mitigation flow logs. Rich telemetry is exposed via Azure Monitor including detailed metrics during the duration of a DDoS attack. Alerting can be configured for any of the Azure Monitor metrics exposed by DDoS Protection. Logging can be further integrated with [Microsoft Sentinel](../sentinel/data-connectors/azure-ddos-protection.md), Splunk (Azure Event Hubs), OMS Log Analytics, and Azure Storage for advanced analysis via the Azure Monitor Diagnostics interface.
+DDoS Protection metric alerts are an important way to notify your team through the Azure portal, email, SMS message, push, or voice notification when an attack is detected.
-In this article, you'll learn how to configure metrics alerts through Azure Monitor.
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Configure metrics alerts through Azure Monitor.
## Prerequisites
You can select any of the available Azure DDoS Protection metrics to alert you w
1. Select **+ Create** on the navigation bar, then select **Alert rule**.
- :::image type="content" source="./media/manage-ddos-protection/ddos-protection-alert-page.png" alt-text="Screenshot of creating Alerts.":::
+ :::image type="content" source="./media/ddos-alerts/ddos-protection-alert-page.png" alt-text="Screenshot of creating Alerts." lightbox="./media/ddos-alerts/ddos-protection-alert-page.png":::
1. On the **Create an alert rule** page, select **+ Select scope**, then select the following information in the **Select a resource** page.
- :::image type="content" source="./media/manage-ddos-protection/ddos-protection-alert-scope.png" alt-text="Screenshot of selecting DDoS Protection attack alert scope.":::
+ :::image type="content" source="./media/ddos-alerts/ddos-protection-alert-scope.png" alt-text="Screenshot of selecting DDoS Protection attack alert scope." lightbox="./media/ddos-alerts/ddos-protection-alert-scope.png":::
| Setting | Value |
You can select any of the available Azure DDoS Protection metrics to alert you w
1. Select **Done**, then select **Next: Condition**. 1. On the **Condition** page, select **+ Add Condition**, then in the *Search by signal name* search box, search and select **Under DDoS attack or not**.
- :::image type="content" source="./media/manage-ddos-protection/ddos-protection-alert-add-condition.png" alt-text="Screenshot of adding DDoS Protection attack alert condition.":::
+ :::image type="content" source="./media/ddos-alerts/ddos-protection-alert-add-condition.png" alt-text="Screenshot of adding DDoS Protection attack alert condition." lightbox="./media/ddos-alerts/ddos-protection-alert-add-condition.png":::
1. In the **Create an alert rule** page, enter or select the following information.
- :::image type="content" source="./media/manage-ddos-protection/ddos-protection-alert-signal.png" alt-text="Screenshot of adding DDoS Protection attack alert signal.":::
+ :::image type="content" source="./media/ddos-alerts/ddos-protection-alert-signal.png" alt-text="Screenshot of adding DDoS Protection attack alert signal." lightbox="./media/ddos-alerts/ddos-protection-alert-signal.png":::
| Setting | Value | |--|--|
You can select any of the available Azure DDoS Protection metrics to alert you w
### Create action group 1. In the **Create action group** page, enter the following information, then select **Next: Notifications**. | Setting | Value | |--|--|
You can select any of the available Azure DDoS Protection metrics to alert you w
1. On the *Notifications* tab, under *Notification type*, select **Email/SMS message/Push/Voice**. Under *Name*, enter **myUnderAttackEmailAlert**.
- :::image type="content" source="./media/manage-ddos-protection/ddos-protection-alert-action-group-notification.png" alt-text="Screenshot of adding DDoS Protection attack alert notification type.":::
+ :::image type="content" source="./media/ddos-alerts/ddos-protection-alert-action-group-notification.png" alt-text="Screenshot of adding DDoS Protection attack alert notification type." lightbox="./media/ddos-alerts/ddos-protection-alert-action-group-notification.png":::
1. On the *Email/SMS message/Push/Voice* page, select the **Email** check box, then enter the required email. Select **OK**.
- :::image type="content" source="./media/manage-ddos-protection/ddos-protection-alert-notification.png" alt-text="Screenshot of adding DDoS Protection attack alert notification page.":::
+ :::image type="content" source="./media/ddos-alerts/ddos-protection-alert-notification.png" alt-text="Screenshot of adding DDoS Protection attack alert notification page." lightbox="./media/ddos-alerts/ddos-protection-alert-notification.png":::
1. Select **Review + create** and then select **Create**. ### Continue configuring alerts through portal 1. Select **Next: Details**.
- :::image type="content" source="./media/manage-ddos-protection/ddos-protection-alert-details.png" alt-text="Screenshot of adding DDoS Protection attack alert details page.":::
+ :::image type="content" source="./media/ddos-alerts/ddos-protection-alert-details.png" alt-text="Screenshot of adding DDoS Protection attack alert details page." lightbox="./media/ddos-alerts/ddos-protection-alert-details.png":::
1. On the *Details* tab, under *Alert rule details*, enter the following information.
You can select any of the available Azure DDoS Protection metrics to alert you w
Within a few minutes of attack detection, you should receive an email from Azure Monitor metrics that looks similar to the following picture: You can also learn more about [configuring webhooks](../azure-monitor/alerts/alerts-webhooks.md?toc=%2fazure%2fvirtual-network%2ftoc.json) and [logic apps](../logic-apps/logic-apps-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json) for creating alerts.
You can keep your resources for the next tutorial. If no longer needed, delete t
1. In the search box at the top of the portal, enter **Alerts**. Select **Alerts** in the search results.
- :::image type="content" source="./media/manage-ddos-protection/ddos-protection-alert-rule.png" alt-text="Screenshot of Alerts page.":::
+ :::image type="content" source="./media/ddos-alerts/ddos-protection-alert-rule.png" alt-text="Screenshot of Alerts page." lightbox="./media/ddos-alerts/ddos-protection-alert-rule.png":::
1. Select **Alert rules**.
- :::image type="content" source="./media/manage-ddos-protection/ddos-protection-delete-alert-rules.png" alt-text="Screenshot of Alert rules page.":::
+ :::image type="content" source="./media/ddos-alerts/ddos-protection-delete-alert-rules.png" alt-text="Screenshot of Alert rules page." lightbox="./media/ddos-alerts/ddos-protection-delete-alert-rules.png":::
1. In the Alert rules page, select your subscription. 1. Select the alerts created in this tutorial, then select **Delete**. ## Next steps
-* [Test through simulations](test-through-simulations.md)
-* [View alerts in Microsoft Defender for Cloud](ddos-view-alerts-defender-for-cloud.md)
+In this tutorial, you learned how to configure metric alerts through the Azure portal.
+
+To configure diagnostic logging, continue to the next tutorial.
+
+> [!div class="nextstepaction"]
+> [Configure diagnostic logging](diagnostic-logging.md)
+> [Test through simulations](test-through-simulations.md)
+> [View alerts in Microsoft Defender for Cloud](ddos-view-alerts-defender-for-cloud.md)
ddos-protection Ddos Configure Log Analytics Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-configure-log-analytics-workspace.md
description: Learn how to configure Log Analytics workspace for Azure DDoS Prote
-+ Previously updated : 01/30/2023 Last updated : 08/07/2023
In order to use diagnostic logging, you'll first need a Log Analytics workspace with diagnostic settings enabled.
-In this article, you'll learn how to configure a Log Analytics workspace for Azure DDoS Protection.
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Configure a Log Analytics workspace for DDoS Protection.
## Prerequisites
For more information, see [Log Analytics workspace overview](../azure-monitor/lo
## Next steps
-* [configure diagnostic logging alerts](ddos-diagnostic-alert-templates.md)
+In this tutorial, you learned how to create a Log Analytics workspace for Azure DDoS Protection. To learn how to configure alerts, continue to the next article.
+
+> [!div class="nextstepaction"]
+> [Configure diagnostic logging alerts](ddos-diagnostic-alert-templates.md)
ddos-protection Ddos Diagnostic Alert Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-diagnostic-alert-templates.md
description: Learn how to configure DDoS protection diagnostic alerts for Azure
-+ Previously updated : 01/30/2023 Last updated : 08/07/2023 # Configure Azure DDoS Protection diagnostic logging alerts
-Azure DDoS Protection provides detailed attack insights and visualization with DDoS Attack Analytics. Customers protecting their virtual networks against DDoS attacks have detailed visibility into attack traffic and actions taken to mitigate the attack via attack mitigation reports & mitigation flow logs. Rich telemetry is exposed via Azure Monitor including detailed metrics during the duration of a DDoS attack. Alerting can be configured for any of the Azure Monitor metrics exposed by DDoS Protection. Logging can be further integrated with [Microsoft Sentinel](../sentinel/data-connectors/azure-ddos-protection.md), Splunk (Azure Event Hubs), OMS Log Analytics, and Azure Storage for advanced analysis via the Azure Monitor Diagnostics interface.
+DDoS Protection diagnostic logging alerts provide visibility into DDoS attacks and mitigation actions. You can configure alerts for all DDoS protected public IP addresses that you have enabled diagnostic logging on.
-In this article, you'll learn how to configure diagnostic logging alerts through Azure Monitor and Logic App.
+In this tutorial, you learn how to:
+> [!div class="checklist"]
+> * Configure diagnostic logging alerts through Azure Monitor and Logic App.
## Prerequisites - If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
The Azure Monitor alert rule template will run a query against the diagnostic lo
[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2FAzure-Network-Security%2Fmaster%2FAzure%2520DDoS%2520Protection%2FAlert%2520-%2520DDOS%2520Mitigation%2520started%2520azure%2520monitor%2520alert%2FDDoSMitigationStarted.json) 1. On the *Custom deployment* page, under *Project details*, enter the following information. +
+ :::image type="content" source="./media/ddos-diagnostic-alert-templates/ddos-deploy-alert.png" alt-text="Screenshot of Azure Monitor alert rule template." lightbox="./media/ddos-diagnostic-alert-templates/ddos-deploy-alert.png":::
| Setting | Value | |--|--|
This DDoS Mitigation Alert Enrichment template deploys the necessary components
[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2FAzure-Network-Security%2Fmaster%2FAzure%2520DDoS%2520Protection%2FAutomation%2520-%2520DDoS%2520Mitigation%2520Alert%2520Enrichment%2FEnrich-DDoSAlert.json) 1. On the *Custom deployment* page, under *Project details*, enter the following information. +
+ :::image type="content" source="./media/ddos-diagnostic-alert-templates/ddos-deploy-alert-logic-app.png" alt-text="Screenshot of DDoS Mitigation Alert Enrichment template." lightbox="./media/ddos-diagnostic-alert-templates/ddos-deploy-alert-logic-app.png":::
| Setting | Value | |--|--|
You can keep your resources for the next guide. If no longer needed, delete the
1. In the search box at the top of the portal, enter **Alerts**. Select **Alerts** in the search results.
- :::image type="content" source="./media/manage-ddos-protection/ddos-protection-alert-rule.png" alt-text="Screenshot of Alerts page.":::
+ :::image type="content" source="./media/ddos-diagnostic-alert-templates/ddos-protection-alert-rule.png" alt-text="Screenshot of Alerts page." lightbox="./media/ddos-diagnostic-alert-templates/ddos-protection-alert-rule.png":::
1. Select **Alert rules**, then in the Alert rules page, select your subscription.
- :::image type="content" source="./media/manage-ddos-protection/ddos-protection-delete-alert-rules.png" alt-text="Screenshot of Alert rules page.":::
+ :::image type="content" source="./media/ddos-diagnostic-alert-templates/ddos-protection-delete-alert-rules.png" alt-text="Screenshot of Alert rules page." lightbox="./media/ddos-diagnostic-alert-templates/ddos-protection-delete-alert-rules.png":::
1. Select the alerts created in this guide, then select **Delete**. ## Next steps
-* [Test through simulations](test-through-simulations.md)
-* [View alerts in Microsoft Defender for Cloud](ddos-view-alerts-defender-for-cloud.md)
+In this tutorial, you learned how to configure diagnostic alerts through the Azure portal.
+
+To test DDoS Protection through simulations, continue to the next guide.
+
+> [!div class="nextstepaction"]
+> [Test through simulations](test-through-simulations.md)
+> [View alerts in Microsoft Defender for Cloud](ddos-view-alerts-defender-for-cloud.md)
ddos-protection Ddos Protection Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-protection-overview.md
Azure DDoS Protection, combined with application design best practices, provides
Azure DDoS Protection protects at layer 3 and layer 4 network layers. For web applications protection at layer 7, you need to add protection at the application layer using a WAF offering. For more information, see [Application DDoS protection](../web-application-firewall/shared/application-ddos-protection.md).
+## Tiers
+
+### DDoS Network Protection
+
+Azure DDoS Network Protection, combined with application design best practices, provides enhanced DDoS mitigation features to defend against DDoS attacks. It's automatically tuned to help protect your specific Azure resources in a virtual network. For more information about enabling DDoS Network Protection, see [Quickstart: Create and configure Azure DDoS Network Protection using the Azure portal](manage-ddos-protection.md).
+
+### DDoS IP Protection
+
+DDoS IP Protection is a pay-per-protected IP model. DDoS IP Protection contains the same core engineering features as DDoS Network Protection, but will differ in the following value-added services.
++
+For more information about the tiers, see [Tier comparison](ddos-protection-sku-comparison.md).
## Key benefits ### Always-on traffic monitoring
Azure DDoS Protection applies three auto-tuned mitigation policies (TCP SYN, TCP
### Azure DDoS Rapid Response During an active attack, Azure DDoS Protection customers have access to the DDoS Rapid Response (DRR) team, who can help with attack investigation during an attack and post-attack analysis. For more information, see [Azure DDoS Rapid Response](ddos-rapid-response.md).
-## Tier
-
-Azure DDoS Protection is offered in two available tiers, DDoS IP Protection and DDoS Network Protection. For more information about the tiers, see [Tier comparison](ddos-protection-sku-comparison.md).
-- ### Native platform integration Natively integrated into Azure. Includes configuration through the Azure portal. Azure DDoS Protection understands your resources and resource configuration.
ddos-protection Ddos Protection Sku Comparison https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-protection-sku-comparison.md
Previously updated : 05/23/2023 Last updated : 08/08/2023
The sections in this article discuss the resources and settings of Azure DDoS Protection.
-## DDoS Network Protection
-
-Azure DDoS Network Protection, combined with application design best practices, provides enhanced DDoS mitigation features to defend against DDoS attacks. It's automatically tuned to help protect your specific Azure resources in a virtual network. For more information about enabling DDoS Network Protection, see [Quickstart: Create and configure Azure DDoS Network Protection using the Azure portal](manage-ddos-protection.md).
-
-## DDoS IP Protection
-
- DDoS IP Protection is a pay-per-protected IP model. DDoS IP Protection contains the same core engineering features as DDoS Network Protection, but will differ in the following value-added
- ## Tiers
-Azure DDoS Protection supports two tier Types, DDoS IP Protection and DDoS Network Protection. The tier is configured in the Azure portal during the workflow when you configure Azure DDoS Protection.
+Azure DDoS Protection supports two tier types, DDoS IP Protection and DDoS Network Protection. The tier is configured in the Azure portal during the workflow when you configure Azure DDoS Protection.
The following table shows features and corresponding tiers.
The following table shows features and corresponding tiers.
DDoS Network Protection and DDoS IP Protection have the following limitations: -- PaaS services (multi-tenant), which includes Azure App Service Environment for Power Apps, Azure API Management in deployment modes other than APIM with virtual network integration (For more informaiton see https://techcommunity.microsoft.com/t5/azure-network-security-blog/azure-ddos-standard-protection-now-supports-apim-in-vnet/ba-p/3641671), and Azure Virtual WAN aren't currently supported.
+- PaaS services (multi-tenant), which includes Azure App Service Environment for Power Apps, Azure API Management in deployment modes other than APIM with virtual network integration (For more information see https://techcommunity.microsoft.com/t5/azure-network-security-blog/azure-ddos-standard-protection-now-supports-apim-in-vnet/ba-p/3641671), and Azure Virtual WAN aren't currently supported.
- Protecting a public IP resource attached to a NAT Gateway isn't supported. - Virtual machines in Classic/RDFE deployments aren't supported.-- VPN gateway or Virtual network gateway is protected by a fixed DDoS policy. Adaptive tuning is not supported at this stage. -- Disabling DDoS protection for a public IP address is currently a preview feature. If you disable DDoS protection for a public IP resource that is linked to a virtual network with an active DDoS protection plan, you will still be billed for DDoS Network Protection. However, the following functionalities will be suspended: mitigation of DDoS attacks, telemetry, and logging of DDoS mitigation events.
+- VPN gateway or Virtual network gateway is protected by a fixed DDoS policy. Adaptive tuning isn't supported at this stage.
+- Disabling DDoS protection for a public IP address is currently a preview feature. If you disable DDoS protection for a public IP resource that is linked to a virtual network with an active DDoS protection plan, you'll still be billed for DDoS Network Protection. However, the following functionalities will be suspended: mitigation of DDoS attacks, telemetry, and logging of DDoS mitigation events.
- Partially supported: the Azure DDoS Protection service can protect a public load balancer with a public IP address prefix linked to its frontend. It effectively detects and mitigates DDoS attacks. However, telemetry and logging for the protected public IP addresses within the prefix range are currently unavailable.
ddos-protection Ddos View Alerts Defender For Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-view-alerts-defender-for-cloud.md
description: Learn how to view DDoS protection alerts in Microsoft Defender for
-+ Previously updated : 03/29/2023 Last updated : 08/08/2023
Microsoft Defender for Cloud provides a list of [security alerts](../security-center/security-center-managing-and-responding-alerts.md), with information to help investigate and remediate problems. With this feature, you get a unified view of alerts - including DDoS attack-related alerts - and the actions to take to mitigate the attack.
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * View Azure DDoS Protection alerts in Microsoft Defender for Cloud.
+ There are two specific alerts that you'll see for any DDoS attack detection and mitigation: - **DDoS Attack detected for Public IP**: This alert is generated when the DDoS protection service detects that one of your public IP addresses is the target of a DDoS attack.
There are two specific alerts that you'll see for any DDoS attack detection and
To view the alerts, open **Defender for Cloud** in the Azure portal and select **Security alerts**. The following screenshot shows an example of the DDoS attack alerts. ++ ## Prerequisites
To view the alerts, open **Defender for Cloud** in the Azure portal and select *
1. In the search box at the top of the portal, enter **Microsoft Defender for Cloud**. Select **Microsoft Defender for Cloud** from the search results. 1. From the side menu, select **Security alerts**. To filter the alerts list, select your subscription, or any of the relevant filters. You can optionally add filters with the **Add filter** option.
- :::image type="content" source="./media/manage-ddos-protection/ddos-protection-security-alerts.png" alt-text="Screenshot of Security alert in Microsoft Defender for Cloud.":::
+ :::image type="content" source="./media/ddos-view-alerts-defender-for-cloud/ddos-protection-security-alerts.png" alt-text="Screenshot of Security alert in Microsoft Defender for Cloud." lightbox="./media/ddos-view-alerts-defender-for-cloud/ddos-protection-security-alerts.png":::
The alerts include general information about the public IP address that's under attack, geo and threat intelligence information, and remediation steps. ## Next steps
-* [Engage with Azure DDoS Rapid Response](ddos-rapid-response.md)
+In this tutorial, you learned how to view DDoS protection alerts in Microsoft Defender for Cloud. To learn more about the recommended steps to take when you receive an alert, see these next steps.
+
+> [!div class="nextstepaction"]
+> [Engage with Azure DDoS Rapid Response](ddos-rapid-response.md)
+> [Components of a DDoS Rapid Response Strategy](ddos-response-strategy.md)
ddos-protection Ddos View Diagnostic Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-view-diagnostic-logs.md
description: Learn how to view DDoS protection diagnostic logs in Log Analytics
-+ Previously updated : 05/11/2023 Last updated : 08/08/2023 # View Azure DDoS Protection logs in Log Analytics workspace
-In this guide, you'll learn how to view Azure DDoS Protection diagnostic logs, including notifications, mitigation reports and mitigation flow logs.
+DDoS Protection diagnostic logs provide you with the ability to view DDoS Protection notifications, mitigation reports and mitigation flow logs after a DDoS attack. You can view these logs in your Log Analytics workspace.
++
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * View Azure DDoS Protection diagnostic logs, including notifications, mitigation reports, and mitigation flow logs.
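If you want to query the logs from the command line rather than the portal, the following sketch uses the Azure CLI (it may require the `log-analytics` CLI extension). The workspace GUID is a placeholder, and the query assumes the DDoS logs are stored in the `AzureDiagnostics` table.

```bash
# Query recent DDoS Protection notifications from the workspace (placeholder workspace GUID).
az monitor log-analytics query \
  --workspace 00000000-0000-0000-0000-000000000000 \
  --analytics-query "AzureDiagnostics | where Category == 'DDoSProtectionNotifications' | take 10" \
  --output table
```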
+ ## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
The following table lists the field names and descriptions:
## Next steps
-* [Engage DDoS Rapid Response](ddos-rapid-response.md)
+In this tutorial, you learned how to view DDoS Protection diagnostic logs in a Log Analytics workspace. To learn more about the recommended steps to take when you experience a DDoS attack, see these next steps.
+
+> [!div class="nextstepaction"]
+> [Engage with Azure DDoS Rapid Response](ddos-rapid-response.md)
+> [Components of a DDoS Rapid Response Strategy](ddos-response-strategy.md)
ddos-protection Diagnostic Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/diagnostic-logging.md
description: Learn how to configure Azure DDoS Protection diagnostic logs.
-+ Previously updated : 03/14/2023 Last updated : 08/07/2023 # Configure Azure DDoS Protection diagnostic logging through portal
-In this guide, you'll learn how to configure Azure DDoS Protection diagnostic logs, including notifications, mitigation reports and mitigation flow logs.
+Configure diagnostic logging for Azure DDoS Protection to gain visibility into DDoS attacks.
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Configure diagnostic logs.
+> * Query logs in a Log Analytics workspace.
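The same diagnostic settings can also be scripted. The following Azure CLI sketch enables the DDoS log categories on a public IP address and sends them to a Log Analytics workspace; the resource IDs are placeholders, and the category names are an assumption based on the public IP DDoS log categories at the time of writing.

```bash
# Enable DDoS Protection diagnostic logs on a public IP address (placeholder resource IDs).
az monitor diagnostic-settings create \
  --name "ddos-diagnostics" \
  --resource "/subscriptions/<sub-id>/resourceGroups/MyResourceGroup/providers/Microsoft.Network/publicIPAddresses/myPublicIP" \
  --workspace "/subscriptions/<sub-id>/resourceGroups/MyResourceGroup/providers/Microsoft.OperationalInsights/workspaces/myDDoSWorkspace" \
  --logs '[{"category":"DDoSProtectionNotifications","enabled":true},{"category":"DDoSMitigationFlowLogs","enabled":true},{"category":"DDoSMitigationReports","enabled":true}]'
```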
## Prerequisites - If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
For more information on log schemas, see [View diagnostic logs](ddos-view-diagno
## Next steps
-* [Test through simulations](test-through-simulations.md)
-* [View logs in Log Analytics workspace](ddos-view-diagnostic-logs.md)
+In this tutorial, you learned how to configure diagnostic logging for DDoS Protection. To learn how to configure diagnostic logging alerts, continue to the next article.
+
+> [!div class="nextstepaction"]
+> [Configure diagnostic logging alerts](ddos-diagnostic-alert-templates.md)
+> [Test through simulations](test-through-simulations.md)
+> [View logs in Log Analytics workspace](ddos-view-diagnostic-logs.md)
ddos-protection Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/policy-reference.md
Previously updated : 08/03/2023 Last updated : 08/08/2023
defender-for-cloud Concept Agentless Data Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-agentless-data-collection.md
This article explains how agentless scanning works and how it helps you collect
- Learn more about how to [enable agentless scanning for VMs](enable-vulnerability-assessment-agentless.md). -- Check out Defender for Cloud's [common questions](faq-data-collection-agents.yml) for more information on agentless scanning for machines.
+- Check out common questions about agentless scanning and [how it affects the subscription/account](faq-cspm.yml#how-does-scanning-affect-the-account-subscription-), [agentless data collection](faq-data-collection-agents.yml#agentless), and [permissions used by agentless](faq-permissions.yml#which-permissions-are-used-by-agentless-scanning-).
defender-for-cloud Episode Thirty Five https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-thirty-five.md
+
+ Title: Security alert correlation | Defender for Cloud in the Field
+description: Security alert correlation
+ Last updated : 08/08/2023++
+# Security alert correlation
+
+**Episode description**: In this episode of Defender for Cloud in the Field, Daniel Davrayev joins Yuri Diogenes to talk about the security alert correlation capability in Defender for Cloud. Daniel talks about the importance of having a built-in capability to correlate alerts in Defender for Cloud and how this saves time for SOC analysts when they investigate alerts and respond to potential threats. Daniel also explains how data correlation works and demonstrates how this correlation appears in the Defender for Cloud dashboard as a security incident.
+
+<br>
+<br>
+<iframe src="https://aka.ms/docs/player?id=6573561d-70a6-4b4c-ad16-9efe747c9a61" width="1080" height="530" allowFullScreen="true" frameBorder="0"></iframe>
+
+- [00:00](/shows/mdc-in-the-field/security-alert-correlation#time=00m00s) - Intro
+- [02:15](/shows/mdc-in-the-field/security-alert-correlation#time=02m15s) - How Defender for Cloud handles alert prioritization
+- [04:29](/shows/mdc-in-the-field/security-alert-correlation#time=04m29s) - How Defender for Cloud can help with alert correlation
+- [07:05](/shows/mdc-in-the-field/security-alert-correlation#time=07m05s) - How Defender for Cloud creates alerts correlation
+- [09:06](/shows/mdc-in-the-field/security-alert-correlation#time=09m06s) - Does alert correlation work across different Defender for Cloud plans?
+- [11:42](/shows/mdc-in-the-field/security-alert-correlation#time=11m42s) - Demonstration
+
+## Recommended resources
+
+- [Learn more](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/correlating-alerts-in-microsoft-defender-for-cloud/ba-p/3839209)
+- Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/playlist?list=PL3ZTgFEc7LysiX4PfHhdJPR7S8mGO14YS)
+- Learn more about [Microsoft Security](https://msft.it/6002T9HQY)
+
+- Follow us on social media:
+
+ - [LinkedIn](https://www.linkedin.com/showcase/microsoft-security/)
+ - [Twitter](https://twitter.com/msftsecurity)
+
+- Join our [Tech Community](https://aka.ms/SecurityTechCommunity)
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [New AWS Connector in Microsoft Defender for Cloud](episode-one.md)
defender-for-cloud Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/policy-reference.md
Title: Built-in policy definitions description: Lists Azure Policy built-in policy definitions for Microsoft Defender for Cloud. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/03/2023 Last updated : 08/08/2023
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
Title: Important upcoming changes description: Upcoming changes to Microsoft Defender for Cloud that you might need to be aware of and for which you might need to plan Previously updated : 07/25/2023 Last updated : 08/08/2023 # Important upcoming changes to Microsoft Defender for Cloud
If you're looking for the latest release notes, you can find them in the [What's
| Planned change | Estimated date for change | |--|--|
-| [Replacing the "Key Vaults should have purge protection enabled" recommendation with combined recommendation "Key Vaults should have deletion protection enabled"](#replacing-the-key-vaults-should-have-purge-protection-enabled-recommendation-with-combined-recommendation-key-vaults-should-have-deletion-protection-enabled) | June 2023
-| [Changes to the Defender for DevOps recommendations environment source and resource ID](#changes-to-the-defender-for-devops-recommendations-environment-source-and-resource-id) | July 2023 |
-| [DevOps Resource Deduplication for Defender for DevOps](#devops-resource-deduplication-for-defender-for-devops) | July 2023 |
-| [Business model and pricing updates for Defender for Cloud plans](#business-model-and-pricing-updates-for-defender-for-cloud-plans) | July 2023 |
| [Replacing the "Key Vaults should have purge protection enabled" recommendation with combined recommendation "Key Vaults should have deletion protection enabled"](#replacing-the-key-vaults-should-have-purge-protection-enabled-recommendation-with-combined-recommendation-key-vaults-should-have-deletion-protection-enabled) | June 2023| | [Changes to the Defender for DevOps recommendations environment source and resource ID](#changes-to-the-defender-for-devops-recommendations-environment-source-and-resource-id) | August 2023 | | [DevOps Resource Deduplication for Defender for DevOps](#devops-resource-deduplication-for-defender-for-devops) | August 2023 |
defender-for-cloud Working With Log Analytics Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/working-with-log-analytics-agent.md
Previously updated : 09/12/2022 Last updated : 07/31/2023 # Collect data from your workloads with the Log Analytics agent
You can define the level of security event data to store at the workspace level.
1. From Defender for Cloud's menu in the Azure portal, select **Environment settings**. 1. Select the relevant workspace. The only data collection events for a workspace are the Windows security events described on this page.
- :::image type="content" source="media/enable-data-collection/event-collection-workspace.png" alt-text="Screenshot of setting the security event data to store in a workspace.":::
+ :::image type="content" source="media/enable-data-collection/event-collection-workspace.png" alt-text="Screenshot of setting the security event data to store in a workspace." lightbox="media/enable-data-collection/event-collection-workspace.png":::
1. Select the amount of raw event data to store and select **Save**.
You can define the level of security event data to store at the workspace level.
To manually install the Log Analytics agent:
-1. Turn off the Log Analytics agent in **Environment Settings** > Monitoring coverage > **Settings**.
+1. In the Azure portal, navigate to the Defender for Cloud's **Environment Settings** page.
+1. Select the relevant subscription and then select **Settings & monitoring**.
+1. Turn Log Analytics agent/Azure Monitor Agent **Off**.
-1. Optionally, create a workspace.
+ :::image type="content" source="media/working-with-log-analytics-agent/manual-provision.png" alt-text="Screenshot of turning off the Log Analytics setting." lightbox="media/working-with-log-analytics-agent/manual-provision.png":::
+1. Optionally, create a workspace.
1. Enable Microsoft Defender for Cloud on the workspace on which you're installing the Log Analytics agent: 1. From Defender for Cloud's menu, open **Environment settings**. 1. Set the workspace on which you're installing the agent. Make sure the workspace is in the same subscription you use in Defender for Cloud and that you have read/write permissions for the workspace.
- 1. Select **Microsoft Defender for Cloud on**, and **Save**.
+ 1. Select one or both of "Servers" and "SQL servers on machines" (Foundational CSPM is the free default), and then select **Save**.
+
+ :::image type="content" source="media/working-with-log-analytics-agent/apply-plan-to-workspace.png" alt-text="Screenshot that shows where to set the workspace on which you're installing the agent." lightbox="media/working-with-log-analytics-agent/apply-plan-to-workspace.png":::
>[!NOTE] >If the workspace already has a **Security** or **SecurityCenterFree** solution enabled, the pricing will be set automatically.
defender-for-iot Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/alerts.md
Title: Microsoft Defender for IoT alerts description: Learn about Microsoft Defender for IoT alerts across the Azure portal, OT network sensors, and on-premises management consoles. Previously updated : 12/12/2022 Last updated : 08/06/2023
While you can view alert details, investigate alert context, and triage and mana
|**OT network sensor consoles** | Alerts generated by that OT sensor | - View the alert's source and destination in the **Device map** <br>- View related events on the **Event timeline** <br>- Forward alerts directly to partner vendors <br>- Create alert comments <br> - Create custom alert rules <br>- Unlearn alerts | |**An on-premises management console** | Alerts generated by connected OT sensors | - Forward alerts directly to partner vendors <br> - Create alert exclusion rules |
+> [!TIP]
+> Any alerts generated from different sensors in the same zone within a 10-minute timeframe, with the same type, status, alert protocol, and associated devices, are listed as a single, unified alert.
+>
+> - The 10-minute timeframe is based on the alert's *first detection* time.
+> - The single, unified alert lists all of the sensors that detected the alert.
+> - Alerts are combined based on the *alert* protocol, and not the device protocol.
+>
+ For more information, see: - [Alert data retention](references-data-retention.md#alert-data-retention) - [Accelerating OT alert workflows](#accelerating-ot-alert-workflows) - [Alert statuses and triaging options](alerts.md#alert-statuses-and-triaging-options)
+- [Plan OT sites and zones](best-practices/plan-corporate-monitoring.md#plan-ot-sites-and-zones)
Alert options also differ depending on your location and user role. For more information, see [Azure user roles and permissions](roles-azure.md) and [On-premises users and roles](roles-on-premises.md).
dms Resource Custom Roles Mysql Database Migration Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/resource-custom-roles-mysql-database-migration-service.md
+
+ Title: "Custom roles for MySQL to Azure Database for MySQL migrations using Database Migration Service"
+
+description: Learn to use the custom roles for MySQL to Azure Database for MySQL migrations.
++ Last updated : 08/07/2023++++
+# Custom roles for MySQL to Azure Database for MySQL migrations using Database Migration Service
+
+This article explains how to set up a custom role for MySQL to Azure Database for MySQL migrations using DMS.
+
+The role has no permission to create a new Database Migration Service instance and no permission to create a database migration project. This means that a user assigned to the custom role must already have a Database Migration Service instance and a migration project created under the assigned resource group. The user can then create and run migration activities under that migration project.
+
+```json
+{
+ "properties": {
+ "roleName": "DmsCustomRoleDemoforMySQL",
+ "description": "",
+ "assignableScopes": [
+ "/subscriptions/<DMSSubscription>/resourceGroups/<dmsServiceRG>"
+ ],
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.DataMigration/locations/operationResults/read",
+ "Microsoft.DataMigration/locations/operationStatuses/read",
+ "Microsoft.DataMigration/services/read",
+ "Microsoft.DataMigration/services/stop/action",
+ "Microsoft.DataMigration/services/start/action",
+ "Microsoft.DataMigration/services/checkStatus/*",
+ "Microsoft.DataMigration/services/configureWorker/action",
+ "Microsoft.DataMigration/services/addWorker/action",
+ "Microsoft.DataMigration/services/removeWorker/action",
+ "Microsoft.DataMigration/services/updateAgentConfig/action",
+ "Microsoft.DataMigration/services/slots/read",
+ "Microsoft.DataMigration/services/projects/*",
+ "Microsoft.DataMigration/services/serviceTasks/read",
+ "Microsoft.DataMigration/services/serviceTasks/write",
+ "Microsoft.DataMigration/services/serviceTasks/delete",
+ "Microsoft.DataMigration/services/serviceTasks/cancel/action",
+ "Microsoft.DBforMySQL/flexibleServers/read",
+ "Microsoft.DBforMySQL/flexibleServers/databases/read",
+ "Microsoft.DBforMySQL/servers/read",
+ "Microsoft.DBforMySQL/servers/databases/read",
+ "Microsoft.Resources/subscriptions/resourceGroups/read",
+ "Microsoft.DataMigration/skus/read"
+ ],
+ "notActions": [],
+ "dataActions": [],
+ "notDataActions": []
+ }
+ ]
+ }
+}
+```
+You can use the Azure portal, Azure PowerShell, the Azure CLI, or the Azure REST API to create the roles.
+
+For more information, see the articles [Create custom roles using the Azure portal](../role-based-access-control/custom-roles-portal.md) and [Azure custom roles](../role-based-access-control/custom-roles.md).
+
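For example, if you save the JSON definition above to a local file, one way to create the role with the Azure CLI is shown below. The file name is a placeholder; substitute your own subscription and resource group in the `assignableScopes` before running it.

```bash
# Create the custom role from the JSON definition saved locally (placeholder file name).
az role definition create --role-definition @DmsCustomRoleDemoforMySQL.json
```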
+## Role assignment
+
+To assign a role to users, open the Azure portal and perform the following steps:
+
+1. Navigate to the resource, go to **Access Control**, and then scroll to find the custom roles you created.
+
+2. Select the appropriate role, select the User, and then save the changes.
+
+ The user now appears listed on the **Role assignments** tab.
+
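If you prefer the Azure CLI over the portal for the assignment, a minimal sketch follows; the user principal name and resource group are placeholders, and the role name matches the sample JSON definition above.

```bash
# Assign the custom role to a user at the resource group scope (placeholder values).
az role assignment create \
  --assignee "user@contoso.com" \
  --role "DmsCustomRoleDemoforMySQL" \
  --resource-group dmsServiceRG
```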
+## Next steps
+
+* For information about Azure Database for MySQL - Flexible Server, see [Overview - Azure Database for MySQL Flexible Server](./../mysql/flexible-server/overview.md).
+* For information about Azure Database Migration Service, see [What is Azure Database Migration Service?](./dms-overview.md).
+* For information about known issues and limitations when migrating to Azure Database for MySQL - Flexible Server using DMS, see [Known Issues With Migrations To Azure Database for MySQL - Flexible Server](./known-issues-azure-mysql-fs-online.md).
+* For information about known issues and limitations when performing migrations using DMS, see [Common issues - Azure Database Migration Service](./known-issues-troubleshooting-dms.md).
+* For troubleshooting source database connectivity issues while using DMS, see article [Issues connecting source databases](./known-issues-troubleshooting-dms-source-connectivity.md).
dms Tutorial Mysql Azure External To Flex Online Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-mysql-azure-external-to-flex-online-portal.md
+
+ Title: "Tutorial: Migrate from MySQL to Azure Database for MySQL - Flexible Server online using DMS via the Azure portal"
+
+description: "Learn to perform an online migration from MySQL to Azure Database for MySQL - Flexible Server by using Azure Database Migration Service."
+++ Last updated : 08/07/2023++++
+# Tutorial: Migrate from MySQL to Azure Database for MySQL - Flexible Server online using DMS via the Azure portal
+
+> [!NOTE]
+> This article contains references to the term *slave*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
+
+You can migrate your on-premises or other cloud-hosted MySQL server to Azure Database for MySQL - Flexible Server by using Azure Database Migration Service (DMS), a fully managed service designed to enable seamless migrations from multiple database sources to Azure data platforms. In this tutorial, we'll perform an online migration of a sample database from an on-premises MySQL server to an Azure Database for MySQL - Flexible Server (both running version 5.7) using a DMS migration activity.
+
+> [!NOTE]
+> DMS online migration is now generally available. DMS supports migration to MySQL versions 5.7 and 8.0 and also supports migration from lower version MySQL servers (v5.6 and above) to higher version servers. In addition, DMS supports cross-region, cross-resource group, and cross-subscription migrations, so you can select a region, resource group, and subscription for the target server that is different than what is specified for your source server.
+
+In this tutorial, you'll learn how to:
+
+> [!div class="checklist"]
+>
+> * Implement best practices for creating a flexible server for faster data loads using DMS.
+> * Create and configure a target flexible server.
+> * Create a DMS instance.
+> * Create a MySQL migration project in DMS.
+> * Migrate a MySQL schema using DMS.
+> * Run the migration.
+> * Monitor the migration.
+> * Perform post-migration steps.
+> * Implement best practices for performing a migration.
+
+## Prerequisites
+
+To complete this tutorial, you need to:
+
+* Create or use an existing MySQL instance (the source server).
+* To complete the online migration successfully, ensure that the following prerequisites are in place:
+ * Use the MySQL command line tool of your choice to verify that log_bin is enabled on the source server by running the command: SHOW VARIABLES LIKE 'log_bin'. If log_bin isn't enabled, be sure to enable it before starting the migration (see the sample commands after this list).
+ * Ensure that the user has "REPLICATION CLIENT" and "REPLICATION SLAVE" permissions on the source server for reading and applying the bin log.
+ * If you're targeting an online migration, you need to configure the binlog expiration on the source server to ensure that binlog files aren't purged before the replica commits the changes. We recommend at least two days to start. The parameter depends on the version of your MySQL server. For MySQL 5.7, the parameter is expire_logs_days (by default it's set to 0, which means no automatic purge). For MySQL 8.0, it's binlog_expire_logs_seconds (by default it's set to 30 days). After a successful cutover, you can reset the value.
+* To complete a schema migration successfully, on the source server, the user performing the migration requires the following privileges:
+ * "READ" privilege on the source database.
+ * "SELECT" privilege for the ability to select objects from the database
+ * If migrating views, the user must have the "SHOW VIEW" privilege.
+ * If migrating triggers, the user must have the "TRIGGER" privilege.
+ * If migrating routines (procedures and/or functions), the user must be named in the definer clause of the routine. Alternatively, based on version, the user must have the following privilege:
+ * For 5.7, have "SELECT" access to the "mysql.proc" table.
+ * For 8.0, have "SHOW_ROUTINE" privilege or have the "CREATE ROUTINE," "ALTER ROUTINE," or "EXECUTE" privilege granted at a scope that includes the routine.
+ * If migrating events, the user must have the "EVENT" privilege for the database from which the events are to be shown.
+
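The following sketch shows how these prerequisite checks might look from a shell using the MySQL command line client. The host and user names are placeholders, and the expiration values simply reflect the two-day recommendation above.

```bash
# Verify that binary logging is enabled on the source server (placeholder host and user).
mysql -h source.example.com -u migration_user -p -e "SHOW VARIABLES LIKE 'log_bin';"

# Confirm the migration user's replication permissions.
mysql -h source.example.com -u migration_user -p -e "SHOW GRANTS FOR CURRENT_USER();"

# Keep binlog files for at least two days so they aren't purged before the replica catches up.
# MySQL 5.7:
mysql -h source.example.com -u admin_user -p -e "SET GLOBAL expire_logs_days = 2;"
# MySQL 8.0:
mysql -h source.example.com -u admin_user -p -e "SET GLOBAL binlog_expire_logs_seconds = 172800;"
```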
+## Limitations
+
+As you prepare for the migration, be sure to consider the following limitations.
+
+* When migrating non-table objects, DMS doesn't support renaming databases.
+* When migrating to a target server with bin_log enabled, be sure to enable log_bin_trust_function_creators to allow for creation of routines and triggers.
+* Currently, DMS doesn't support migrating the DEFINER clause for objects. All object types with definers on the source are dropped and after the migration, the default definer for all objects that support a definer clause and that are created during schema migration, will be set to the login used to run the migration.
+* Currently, DMS only supports migrating a schema as part of data movement. If nothing is selected for data movement, the schema migration won't occur. Note that selecting a table for schema migration also selects it for data movement.
+* Online migration support is limited to the ROW binlog format.
+* Online migration only replicates DML changes; replicating DDL changes isn't supported. Don't make any schema changes to the source while replication is in progress. If DMS detects DDL while replicating, it generates a warning that can be viewed in the Azure portal.
+
+## Best practices for creating a flexible server for faster data loads using DMS
+
+DMS supports cross-region, cross-resource group, and cross-subscription migrations, so you're free to select appropriate region, resource group and subscription for your target flexible server. Before you create your target flexible server, consider the following configuration guidance to help ensure faster data loads using DMS.
+
+* Select the compute size and compute tier for the target flexible server based on the source single server's pricing tier and vCores, as detailed in the following table.
+
+ | Single Server Pricing Tier | Single Server VCores | Flexible Server Compute Size | Flexible Server Compute Tier |
+ | - | - |:-:|:-:|
+ | Basic\* | 1 | General Purpose | Standard_D16ds_v4 |
+ | Basic\* | 2 | General Purpose | Standard_D16ds_v4 |
+ | General Purpose\* | 4 | General Purpose | Standard_D16ds_v4 |
+ | General Purpose\* | 8 | General Purpose | Standard_D16ds_v4 |
+ | General Purpose | 16 | General Purpose | Standard_D16ds_v4 |
+ | General Purpose | 32 | General Purpose | Standard_D32ds_v4 |
+ | General Purpose | 64 | General Purpose | Standard_D64ds_v4 |
+ | Memory Optimized | 4 | Business Critical | Standard_E4ds_v4 |
+ | Memory Optimized | 8 | Business Critical | Standard_E8ds_v4 |
+ | Memory Optimized | 16 | Business Critical | Standard_E16ds_v4 |
+ | Memory Optimized | 32 | Business Critical | Standard_E32ds_v4 |
+
+\* For the migration, select General Purpose 16 vCores compute for the target flexible server for faster migrations. Scale back to the desired compute size for the target server after migration is complete by following the compute size recommendation in the Performing post-migration activities section later in this article.
+
+* The MySQL version for the target flexible server must be greater than or equal to that of the source single server.
+* Unless you need to deploy the target flexible server in a specific zone, set the value of the Availability Zone parameter to 'No preference'.
+* For network connectivity, on the Networking tab, if the source single server has private endpoints or private links configured, select Private Access; otherwise, select Public Access.
+* Copy all firewall rules from the source single server to the target flexible server.
+* Copy all the name/value tags from the source server to the target flexible server during creation.
+
+## Create and configure the target flexible server
+
+With these best practices in mind, create your target flexible server, and then configure it.
+
+* Create the target flexible server. For guided steps, see the quickstart [Create an Azure Database for MySQL flexible server](./../mysql/flexible-server/quickstart-create-server-portal.md).
+* Configure the new target flexible server as follows:
+ * The user performing the migration requires the following permissions:
+ * Ensure that the user has "REPLICATION_APPLIER" or "BINLOG_ADMIN" permission on the target server for applying the bin log.
+ * Ensure that the user has "REPLICATION SLAVE" permission on the target server.
+ * Ensure that the user has "REPLICATION CLIENT" and "REPLICATION SLAVE" permission on the source server for reading and applying the bin log.
+ * To create tables on the target, the user must have the "CREATE" privilege.
+ * If migrating a table with "DATA DIRECTORY" or "INDEX DIRECTORY" partition options, the user must have the "FILE" privilege.
+ * If migrating to a table with a "UNION" option, the user must have the "SELECT," "UPDATE," and "DELETE" privileges for the tables you map to a MERGE table.
+ * If migrating views, you must have the "CREATE VIEW" privilege.
+ Keep in mind that some privileges may be necessary depending on the contents of the views. Refer to the MySQL docs specific to your version for "CREATE VIEW STATEMENT" for details.
+ * If migrating events, the user must have the "EVENT" privilege.
+ * If migrating triggers, the user must have the "TRIGGER" privilege.
+ * If migrating routines, the user must have the "CREATE ROUTINE" privilege.
+ * Configure the server parameters on the target flexible server as follows:
+ * Set the TLS version and require_secure_transport server parameter to match the values on the source server.
+ * Set the sql_mode server parameter to match the values on the source server.
+ * Configure server parameters on the target server to match any non-default values used on the source server.
+ * To ensure faster data loads when using DMS, configure the following server parameters as described (see the sample Azure CLI commands after this list).
+ * max_allowed_packet - set to 1073741824 (that is, 1 GB) to prevent any connection issues due to large rows.
+ * slow_query_log - set to OFF to turn off the slow query log. This eliminates the overhead caused by slow query logging during data loads.
+ * innodb_buffer_pool_size - can only be increased by scaling up compute for the Azure Database for MySQL server. Scale up the server to the 64 vCore General Purpose SKU from the Pricing tier of the portal during migration to increase the innodb_buffer_pool_size.
+ * innodb_io_capacity & innodb_io_capacity_max - Change to 9000 from the Server parameters in Azure portal to improve the IO utilization to optimize for migration speed.
+ * innodb_write_io_threads - Change to 4 from the Server parameters in Azure portal to improve the speed of migration.
+ * Configure the replicas on the target server to match those on the source server.
+ * Replicate the following server management features from the source single server to the target flexible server:
+ * Role assignments, Roles, Deny Assignments, classic administrators, Access Control (IAM)
+ * Locks (read-only and delete)
+ * Alerts
+ * Tasks
+ * Resource Health Alerts
+
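As a convenience, the server parameter changes called out above can also be applied with the Azure CLI. This is only a sketch; the resource group and server names are placeholders, and the parameter names and values come from the list above.

```bash
# Tune the target flexible server for faster data loads (placeholder resource group and server names).
az mysql flexible-server parameter set --resource-group MyResourceGroup --server-name my-target-flex \
  --name max_allowed_packet --value 1073741824
az mysql flexible-server parameter set --resource-group MyResourceGroup --server-name my-target-flex \
  --name slow_query_log --value OFF
az mysql flexible-server parameter set --resource-group MyResourceGroup --server-name my-target-flex \
  --name innodb_io_capacity --value 9000
az mysql flexible-server parameter set --resource-group MyResourceGroup --server-name my-target-flex \
  --name innodb_io_capacity_max --value 9000
az mysql flexible-server parameter set --resource-group MyResourceGroup --server-name my-target-flex \
  --name innodb_write_io_threads --value 4
```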
+## Set up DMS
+
+With your target flexible server deployed and configured, you next need to set up DMS to migrate your single server to a flexible server.
+
+### Register the resource provider
+
+To register the Microsoft.DataMigration resource provider, perform the following steps.
+
+1. Before creating your first DMS instance, sign in to the Azure portal, and then search for and select **Subscriptions**.
+ :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-online/1-subscriptions.png" alt-text="Screenshot of a Select subscriptions from Azure Marketplace.":::
+
+2. Select the subscription that you want to use to create the DMS instance, and then select **Resource providers**.
+ :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-online/2-resource-provider.png" alt-text="Screenshot of a Select Resource Provider.":::
+
+3. Search for the term "Migration", and then, for **Microsoft.DataMigration**, select **Register**.
+ :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-online/3-register.png" alt-text="Screenshot of a Register your resource provider.":::
+
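You can also register the resource provider from the command line. A minimal Azure CLI sketch:

```bash
# Register the Microsoft.DataMigration resource provider and check its registration state.
az provider register --namespace Microsoft.DataMigration
az provider show --namespace Microsoft.DataMigration --query registrationState --output tsv
```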
+### Create a Database Migration Service (DMS) instance
+
+1. In the Azure portal, select **+ Create a resource**, search for the term "Azure Database Migration Service", and then select **Azure Database Migration Service** from the drop-down list.
+ :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-online/4-dms-portal-marketplace.png" alt-text="Screenshot of a Search Azure Database Migration Service.":::
+
+2. On the **Azure Database Migration Service** screen, select **Create**.
+ :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-online/5-dms-portal-marketplace-create.png" alt-text="Screenshot of a Create Azure Database Migration Service instance.":::
+
+3. On the **Select migration scenario and Database Migration Service** page, under **Migration scenario**, select **MySQL** as the source server type, and then select **Azure Database for MySQL** as target server type, and then select **Select**.
+ :::image type="content" source="media/tutorial-azure-mysql-external-to-flex-online/create-database-migration-service.png" alt-text="Screenshot of a Select Migration Scenario.":::
+
+4. On the **Create Migration Service** page, on the **Basics** tab, under **Project details**, select the appropriate subscription, and then select an existing resource group or create a new one.
+
+5. Under **Instance details**, specify a name for the service, select a region, and then verify that **Azure** is selected as the service mode.
+
+6. To the right of **Pricing tier**, select **Configure tier**.
+ :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-online/7-project-details.png" alt-text="Screenshot of a Select Configure Tier.":::
+
+7. On the **Configure** page, select the **Premium** pricing tier with 4 vCores for your DMS instance, and then select **Apply**.
+ DMS Premium 4-vCore is free for 6 months (183 days) from the DMS service creation date before incurring any charges. For more information on DMS costs and pricing tiers, see the [pricing page](https://aka.ms/dms-pricing).
+ :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-online/8-configure-pricing-tier.png" alt-text="Screenshot of a Select Pricing tier.":::
+
+ Next, we need to specify the VNet that will provide the DMS instance with access to the source single server and the target flexible server.
+
+8. On the **Create Migration Service** page, select **Next : Networking >>**.
+
+9. On the **Networking** tab, select an existing VNet from the list or provide the name of new VNet to create, and then select **Review + Create**.
+ For more information, see the article [Create a virtual network using the Azure portal.](./../virtual-network/quick-create-portal.md).
+ :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-online/8-1-networking.png" alt-text="Screenshot of a Select Networking.":::
+
+ > [!IMPORTANT]
+ > Your VNet must be configured with access to both the source single server and the target flexible server, so be sure to:
+ >
+ > * Create a server-level firewall rule or [configure VNet service endpoints](./../mysql/single-server/how-to-manage-vnet-using-portal.md) for both the source and target Azure Database for MySQL servers to allow the VNet for Azure Database Migration Service access to the source and target databases.
+ > * Ensure that your VNet Network Security Group (NSG) rules don't block the outbound port 443 of ServiceTag for ServiceBus, Storage, and Azure Monitor. For more information about VNet NSG traffic filtering, see [Filter network traffic with network security groups](./../virtual-network/virtual-network-vnet-plan-design-arm.md).
+
+ > [!NOTE]
+ > To add tags to the service, advance to the **Tags** tab by selecting **Next : Tags**. Adding tags to the service is optional.
+
+10. Navigate to the **Review + create** tab, review the configurations, view the terms, and then select **Create**.
+ :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-online/9-review-create.png" alt-text="Screenshot of a Select Review+Create.":::
+
+ Deployment of your instance of DMS now begins. The message **Deployment is in progress** appears for a few minutes, and then the message changes to **Your deployment is complete**.
+
+11. Select **Go to resource**.
+ :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-online/9-1-go-to-resource.png" alt-text="Screenshot of a Select Go to resource.":::
+
+12. Identify the IP address of the DMS instance from the resource overview page and create a firewall rule for your source single server and target flexible server allow-listing the IP address of the DMS instance.
+
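If the target flexible server uses public access, the firewall rule for the DMS IP address can be added with the Azure CLI as well. The names and IP address below are placeholders.

```bash
# Allow the DMS instance's IP address through the target flexible server's firewall (placeholder values).
az mysql flexible-server firewall-rule create \
  --resource-group MyResourceGroup \
  --name my-target-flex \
  --rule-name AllowDMSInstance \
  --start-ip-address 203.0.113.10 \
  --end-ip-address 203.0.113.10
```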
+### Create a migration project
+
+To create a migration project, perform the following steps.
+
+1. In the Azure portal, select **All services**, search for Azure Database Migration Service, and then select **Azure Database Migration Services**.
+
+ :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-online/10-dms-search.png" alt-text="Screenshot of a Locate all instances of Azure Database Migration Service.":::
+
+2. In the search results, select the DMS instance that you created, and then select **+ New Migration Project**.
+
+ :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-online/11-select-create.png" alt-text="Screenshot of a Select a new migration project.":::
+
+3. On the **New migration project** page, specify a name for the project, in the **Source server type** selection box, select **MySQL**, in the **Target server type** selection box, select **Azure Database For MySQL - Flexible Server**, in the **Migration activity type** selection box, select **Online data migration**, and then select **Create and run activity**.
+
+ > [!NOTE]
+ > Selecting **Create project only** as the migration activity type will only create the migration project; you can then run the migration project at a later time.
+
+ :::image type="content" source="media/tutorial-azure-mysql-external-to-flex-online/create-project-external.png" alt-text="Screenshot of a Create a new migration project.":::
+
+### Configure the migration project
+
+To configure your DMS migration project, perform the following steps.
+
+1. On the **Select source** screen, ensure that DMS is in the VNet that has connectivity to the source server. Then enter the source server name, server port, user name, and password for your source server.
+ :::image type="content" source="media/tutorial-azure-mysql-external-to-flex-online/select-source-server.png" alt-text="Screenshot of an Add source details screen.":::
+
+2. Select **Next : Select target>>**, and then, on the **Select target** screen, locate the server based on the subscription, location, and resource group. The user name is auto-populated; provide the password for the target flexible server.
+
+ :::image type="content" source="media/tutorial-mysql-to-azure-mysql-online/select-target-online.png" alt-text="Screenshot of a Select target.":::
+
+3. Select **Next : Select databases>>**, and then, on the **Select databases** tab, under **Server migration options**, select **Migrate all applicable databases** or under **Select databases** select the server objects that you want to migrate.
+
+ > [!NOTE]
+ > There is now a **Migrate all applicable databases** option. When selected, this option will migrate all user created databases and tables. Note that because Azure Database for MySQL - Flexible Server does not support mixed case databases, mixed case databases on the source will not be included for an online migration.
+
+ :::image type="content" source="media/tutorial-azure-mysql-external-to-flex-online/select-databases.png" alt-text="Screenshot of a Select database.":::
+
+4. In the **Select databases** section, under **Source Database**, select the database(s) to migrate.
+
+ The non-table objects in the database(s) you specified will be migrated, while the items you didn't select will be skipped. You can only select the source and target databases whose names match that on the source and target server.
+ If you select a database on the source server that doesn't exist on the target server, it will be created on the target server.
+
+5. Select **Next : Select tables>>** to navigate to the **Select tables** tab.
+
+ Before the tab populates, DMS fetches the tables from the selected database(s) on the source and target and then determines whether the table exists and contains data.
+
+6. Select the tables that you want to migrate.
+
+ If the selected source table doesn't exist on the target server, the online migration process will ensure that the table schema and data is migrated to the target server.
+ :::image type="content" source="media/tutorial-azure-mysql-external-to-flex-online/select-tables.png" alt-text="Screenshot of a Select Tables.":::
+
+ DMS validates your inputs, and if the validation passes, you'll be able to start the migration.
+
+7. After configuring for schema migration, select **Review and start migration**.
+ > [!NOTE]
+ > You only need to navigate to the **Configure migration settings** tab if you are trying to troubleshoot failing migrations.
+
+8. On the **Summary** tab, in the **Activity name** text box, specify a name for the migration activity, and then review the summary to ensure that the source and target details match what you previously specified.
+ :::image type="content" source="media/tutorial-azure-mysql-external-to-flex-online/summary-page.png" alt-text="Screenshot of a Select Summary.":::
+
+9. Select **Start migration**.
+
+ The migration activity window appears, and the Status of the activity is Initializing. The Status changes to Running when the table migrations start.
+ :::image type="content" source="media/tutorial-mysql-to-azure-mysql-online/running-online-migration.png" alt-text="Screenshot of a Running status.":::
+
+### Monitor the migration
+
+1. After the **Initial Load** activity is completed, navigate to the **Initial Load** tab to view the completion status and the number of tables completed.
+ :::image type="content" source="media/tutorial-azure-mysql-external-to-flex-online/initial-load.png" alt-text="Screenshot of a completed initial load migration.":::
+
+ After the **Initial Load** activity is completed, you're navigated to the **Replicate Data Changes** tab automatically. You can monitor the migration progress as the screen is auto-refreshed every 30 seconds.
+
+2. Select **Refresh** to update the display and view the seconds behind source when needed.
+
+ :::image type="content" source="media/tutorial-azure-mysql-external-to-flex-online/replicate-changes.png" alt-text="Screenshot of a Monitoring migration.":::
+
+3. Monitor the **Seconds behind source** and as soon as it nears 0, proceed to start cutover by navigating to the **Start Cutover** menu tab at the top of the migration activity screen.
+
+4. Follow the steps in the cutover window before you're ready to perform a cutover.
+
+5. After completing all steps, select **Confirm**, and then select **Apply**.
+ :::image type="content" source="media/tutorial-mysql-to-azure-mysql-online/21-complete-cutover-online.png" alt-text="Screenshot of a Perform cutover.":::
+
+## Perform post-migration activities
+
+When the migration has finished, be sure to complete the following post-migration activities.
+
+* Perform sanity testing of the application against the target database to certify the migration.
+* Update the connection string to point to the new flexible server.
+* Delete the source single server after you have ensured application continuity.
+* If you scaled up the target flexible server for faster migration, scale it back by selecting the compute size and compute tier for the flexible server based on the source single server's pricing tier and vCores, as detailed in the following table (a sample Azure CLI command follows this list).
+
+ | Single Server Pricing Tier | Single Server VCores | Flexible Server Compute Size | Flexible Server Compute Tier |
+ | - | - |:-:|:-:|
+ | Basic | 1 | Burstable | Standard_B1s |
+ | Basic | 2 | Burstable | Standard_B2s |
+ | General Purpose | 4 | General Purpose | Standard_D4ds_v4 |
+ | General Purpose | 8 | General Purpose | Standard_D8ds_v4 |
+* To clean up the DMS resources, perform the following steps:
+ 1. In the Azure portal, select **All services**, search for Azure Database Migration Service, and then select **Azure Database Migration Services**.
+ 2. Select your migration service instance from the search results, and then select **Delete service**.
+ 3. In the confirmation dialog box, in the **TYPE THE DATABASE MIGRATION SERVICE NAME** textbox, specify the name of the instance, and then select **Delete**.
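
The scale-back and cleanup steps can also be done from the Azure CLI. The following is a minimal sketch: server, resource group, and service names are placeholders, the SKU values come from the preceding table, and the classic `az dms` command group is assumed to manage the migration service.

```azurecli
# Scale the flexible server back to a SKU that matches the source single server's tier (placeholder names)
az mysql flexible-server update \
  --resource-group myResourceGroup \
  --name my-flexible-server \
  --tier Burstable \
  --sku-name Standard_B1s

# Look up the new server's FQDN to update application connection strings
az mysql flexible-server show \
  --resource-group myResourceGroup \
  --name my-flexible-server \
  --query fullyQualifiedDomainName --output tsv

# Remove the DMS instance once the migration is complete (assumes the classic `az dms` command group)
az dms delete --resource-group myResourceGroup --name myDmsService
```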
+
+## Migration best practices
+
+When performing a migration, be sure to consider the following best practices.
+
+* As part of discovery and assessment, collect the server SKU, CPU usage, storage, database sizes, and extension usage as critical data points to help plan the migration.
+* Perform test migrations before migrating for production:
+ * Test migrations are important for ensuring that you cover all aspects of the database migration, including application testing. The best practice is to begin by running a migration entirely for testing purposes. After a newly started migration enters the Replicate Data Changes phase with minimal lag, only use your Flexible Server target for running test workloads. Use that target for testing the application to ensure expected performance and results. If you're migrating to a higher MySQL version, test for application compatibility.
+ * After testing is completed, you can migrate the production databases. At this point, you need to finalize the day and time of production migration. Ideally, there's low application use at this time. All stakeholders who need to be involved should be available and ready. The production migration requires close monitoring. For an online migration, the replication must be completed before you perform the cutover, to prevent data loss.
+* Redirect all dependent applications to access the new primary database and make the source server read-only. Then, open the applications for production usage.
+* After the application starts running on the target flexible server, monitor the database performance closely to see if performance tuning is required.
+
+## Next steps
+
+* For information about Azure Database for MySQL - Flexible Server, see [Overview - Azure Database for MySQL Flexible Server](./../mysql/flexible-server/overview.md).
+* For information about Azure Database Migration Service, see [What is Azure Database Migration Service?](./dms-overview.md).
+* For information about known issues and limitations when migrating to Azure Database for MySQL - Flexible Server using DMS, see [Known Issues With Migrations To Azure Database for MySQL - Flexible Server](./known-issues-azure-mysql-fs-online.md).
+* For information about known issues and limitations when performing migrations using DMS, see [Common issues - Azure Database Migration Service](./known-issues-troubleshooting-dms.md).
+* For troubleshooting source database connectivity issues while using DMS, see article [Issues connecting source databases](./known-issues-troubleshooting-dms-source-connectivity.md).
event-grid Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/policy-reference.md
Title: Built-in policy definitions for Azure Event Grid description: Lists Azure Policy built-in policy definitions for Azure Event Grid. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/03/2023 Last updated : 08/08/2023
event-hubs Event Hubs Capture Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-capture-managed-identity.md
The Event Hubs Capture feature also support capturing data to a capture destinat
For that you can use the same ARM templates given in [enabling capture with ARM template guide](./event-hubs-resource-manager-namespace-event-hub-enable-capture.md) with corresponding managed identity.
-For example, following ARM template can be used to create an event hub with capture enabled. Azure Storage or Azure Data Lake Storage Gen 2 can be used as the capture destination and system assigned identity is used as the authentication method. The resource ID of the destination can point to a resource in a different subscription.
+For example, the following ARM template can be used to create an event hub with capture enabled. Azure Storage or Azure Data Lake Storage Gen 2 can be used as the capture destination, and a user-assigned identity is used as the authentication method. The resource ID of the destination can point to a resource in a different subscription.
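
If the user-assigned identity referenced by the template doesn't exist yet, a minimal Azure CLI sketch for creating it and granting it write access to the capture destination follows. The names are placeholders, and the Storage Blob Data Contributor role on the destination storage account is assumed to be the permission the capture agent needs.

```azurecli
# Create a user-assigned managed identity (placeholder names)
az identity create --resource-group myResourceGroup --name eh-capture-identity

# Grant the identity write access to the capture destination storage account
principalId=$(az identity show --resource-group myResourceGroup --name eh-capture-identity \
  --query principalId --output tsv)
az role assignment create \
  --assignee-object-id $principalId \
  --assignee-principal-type ServicePrincipal \
  --role "Storage Blob Data Contributor" \
  --scope $(az storage account show --resource-group myResourceGroup --name mycapturestorage --query id --output tsv)
```

The full resource ID of this identity is the value that replaces the placeholder key under `userAssignedIdentities` in the template.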
```json "resources":[
For example, following ARM template can be used to create an event hub with capt
"dependsOn": [ "[concat('Microsoft.EventHub/namespaces/', parameters('eventHubNamespaceName'))]" ],
- "identity": {
- "type": "SystemAssigned",
- },
"properties": { "messageRetentionInDays": "[parameters('messageRetentionInDays')]", "partitionCount": "[parameters('partitionCount')]",
For example, following ARM template can be used to create an event hub with capt
"storageAccountResourceId": "[parameters('destinationStorageAccountResourceId')]", "blobContainer": "[parameters('blobContainerName')]", "archiveNameFormat": "[parameters('captureNameFormat')]"
- }
+ },
+ "identity": {
+ "type": "UserAssigned",
+ "userAssignedIdentities": {
+ "xxxxxxxx": {}
+ }
+ }
} } }
event-hubs Event Hubs Dotnet Standard Getstarted Send https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-dotnet-standard-getstarted-send.md
Replace the contents of **Program.cs** with the following code:
:::image type="content" source="./media/getstarted-dotnet-standard-send-v2/verify-messages-portal-2.png" alt-text="Image of the Azure portal page to verify that the event hub sent events to the receiving app" lightbox="./media/getstarted-dotnet-standard-send-v2/verify-messages-portal-2.png":::
+## Schema validation for Event Hubs SDK based applications
+
+You can use Azure Schema Registry to perform schema validation when you stream data with your Event Hubs SDK-based applications.
+Azure Schema Registry in Event Hubs provides a centralized repository for managing schemas, and you can seamlessly connect your new or existing applications with Schema Registry.
+
+To learn more, see [Validate schemas with Event Hubs SDK](schema-registry-dotnet-send-receive-quickstart.md).
+ ## Clean up resources Delete the resource group that has the Event Hubs namespace or delete only the namespace if you want to keep the resource group.
event-hubs Event Hubs Quickstart Kafka Enabled Event Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-quickstart-kafka-enabled-event-hubs.md
If your Event Hubs Kafka cluster has events, you'll now start receiving them fro
+## Schema validation for Kafka with Schema Registry
+
+You can use Azure Schema Registry to perform schema validation when you stream data with your Kafka applications using Event Hubs.
+Azure Schema Registry in Event Hubs provides a centralized repository for managing schemas, and you can seamlessly connect your new or existing Kafka applications with Schema Registry.
+
+To learn more, see [Validate schemas for Apache Kafka applications using Avro](schema-registry-kafka-java-send-receive-quickstart.md).
+ ## Next steps In this article, you learned how to stream into Event Hubs without changing your protocol clients or running your own clusters. To learn more, see [Apache Kafka developer guide for Azure Event Hubs](apache-kafka-developer-guide.md).
event-hubs Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/policy-reference.md
Title: Built-in policy definitions for Azure Event Hubs description: Lists Azure Policy built-in policy definitions for Azure Event Hubs. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/03/2023 Last updated : 08/08/2023
firewall Firewall Structured Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/firewall-structured-logs.md
To enable Azure Firewall structured logs, you must first configure a Log Analyti
Once you configure the Log Analytics workspace, you can enable structured logs in Azure Firewall by navigating to the Firewall's **Diagnostic settings** page in the Azure portal. From there, you must select the **Resource specific** destination table and select the type of events you want to log.
+> [!NOTE]
+> There's no requirement to enable this feature with a feature flag or Azure PowerShell commands.
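
The same diagnostic setting can also be created with the Azure CLI. The following is a minimal sketch with placeholder resource IDs, assuming the `allLogs` category group is available for the firewall resource; `--export-to-resource-specific true` is what routes the logs to the resource-specific tables instead of the legacy AzureDiagnostics table.

```azurecli
az monitor diagnostic-settings create \
  --name firewall-structured-logs \
  --resource /subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Network/azureFirewalls/<firewall-name> \
  --workspace /subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name> \
  --export-to-resource-specific true \
  --logs '[{"categoryGroup":"allLogs","enabled":true}]'
```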
+ :::image type="content" source="media/firewall-structured-logs/diagnostics-setting-resource-specific.png" alt-text="Screenshot of Diagnostics settings page."::: ## Structured log queries
firewall Premium Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/premium-features.md
Web categories lets administrators allow or deny user access to web site categor
Azure Firewall Premium web categories are only available in firewall policies. Ensure that your policy SKU matches the SKU of your firewall instance. For example, if you have a Firewall Premium instance, you must use a Firewall Premium policy.
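
For reference, a policy with a matching SKU can be created with the Azure CLI; the following is a minimal sketch with placeholder names.

```azurecli
# Create a firewall policy whose SKU matches a Premium firewall (placeholder names)
az network firewall policy create \
  --resource-group myResourceGroup \
  --name myPremiumFirewallPolicy \
  --sku Premium
```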
-> [!IMPORTANT]
-> Microsoft is transitioning to an updated and new Web Content Filtering category feed in the next couple weeks. This will contain more granularity and additional subcategorizations.
->
->As a result, the following web categories are are no longer available:
-> - Child inappropriate, Greeting cards, and School Cheating.
->
-> In addition, the *Category check* and *Category change* features are temporarily disabled for the next few months. This article will be updated when these features return.
->
-> To mitigate, we recommend configuring critical websites (FQDNs and URLs) directly in application rules through the Azure portal/Azure PowerShell/CLI as a backup. For more information, see [Deploy and configure Azure Firewall using the Azure portal](tutorial-firewall-deploy-portal.md#configure-an-application-rule).
->
-> Web Category logging will continue to function as expected. We don't predict any other major changes to the classification behavior, but we encourage you to report any categorization issues or request to perform a Category Check through Microsoft Azure support.
- For example, if Azure Firewall intercepts an HTTPS request for `www.google.com/news`, the following categorization is expected: - Firewall Standard ΓÇô only the FQDN part is examined, so `www.google.com` is categorized as *Search Engine*.
firewall Protect Azure Kubernetes Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/protect-azure-kubernetes-service.md
See [virtual network route table documentation](../virtual-network/virtual-netwo
> For applications outside of the kube-system or gatekeeper-system namespaces that needs to talk to the API server, an additional network rule to allow TCP communication to port 443 for the API server IP in addition to adding application rule for fqdn-tag AzureKubernetesService is required.
-Below are three network rules you can use to configure on your firewall, you may need to adapt these rules based on your deployment. The first rule allows access to port 9000 via TCP. The second rule allows access to port 1194 and 123 via UDP. Both these rules will only allow traffic destined to the Azure Region CIDR that we're using, in this case East US.
+ You can use the following three network rules to configure your firewall. You may need to adapt these rules based on your deployment. The first rule allows access to port 9000 via TCP. The second rule allows access to port 1194 and 123 via UDP. Both these rules will only allow traffic destined to the Azure Region CIDR that we're using, in this case East US.
Finally, we add a third network rule opening port 123 to an Internet time server FQDN (for example:`ntp.ubuntu.com`) via UDP. Adding an FQDN as a network rule is one of the specific features of Azure Firewall, and you need to adapt it when using your own options. After setting the network rules, we'll also add an application rule using the `AzureKubernetesService` that covers the needed FQDNs accessible through TCP port 443 and port 80. In addition, you may need to configure additional network and application rules based on your deployment. For more information, see [Outbound network and FQDN rules for Azure Kubernetes Service (AKS) clusters](../aks/outbound-rules-control-egress.md#required-outbound-network-rules-and-fqdns-for-aks-clusters).
-```azurecli
-
-```
#### Add FW Network Rules
+```azurecli
az network firewall network-rule create -g $RG -f $FWNAME --collection-name 'aksfwnr' -n 'apiudp' --protocols 'UDP' --source-addresses '*' --destination-addresses "AzureCloud.$LOC" --destination-ports 1194 --action allow --priority 100

az network firewall network-rule create -g $RG -f $FWNAME --collection-name 'aksfwnr' -n 'apitcp' --protocols 'TCP' --source-addresses '*' --destination-addresses "AzureCloud.$LOC" --destination-ports 9000

az network firewall network-rule create -g $RG -f $FWNAME --collection-name 'aksfwnr' -n 'time' --protocols 'UDP' --source-addresses '*' --destination-fqdns 'ntp.ubuntu.com' --destination-ports 123
+```
#### Add FW Application Rules
+```azurecli
az network firewall application-rule create -g $RG -f $FWNAME --collection-name 'aksfwar' -n 'fqdn' --source-addresses '*' --protocols 'http=80' 'https=443' --fqdn-tags "AzureKubernetesService" --action allow --priority 100
```
firewall Web Categories https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/web-categories.md
Web categories lets administrators allow or deny user access to web site categor
For more information, see [Azure Firewall Premium features](premium-features.md#web-categories).
-> [!IMPORTANT]
-> Microsoft is transitioning to an updated and new Web Content Filtering category feed. This will contain more granularity and additional subcategorizations.
->
->As a result, the following web categories are are no longer available:
-> - Child inappropriate, Greeting cards, and School Cheating.
->
-> In addition, the *Category check* and *Category change* features are temporarily disabled for the next few months.
->
-> To mitigate, we recommend configuring critical websites (FQDNs and URLs) directly in application rules through the Azure portal/Azure PowerShell/CLI as a backup. For more information, see [Deploy and configure Azure Firewall using the Azure portal](tutorial-firewall-deploy-portal.md#configure-an-application-rule).
->
-> Web Category logging will continue to function as expected. We don't predict any other major changes to the classification behavior, but we encourage you to report any categorization issues or request to perform a Category Check through Microsoft Azure support.
-- ## Liability
governance Built In Initiatives https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/built-in-initiatives.md
Title: List of built-in policy initiatives description: List built-in policy initiatives for Azure Policy. Categories include Regulatory Compliance, Guest Configuration, and more. Previously updated : 08/03/2023 Last updated : 08/08/2023
The name on each built-in links to the initiative definition source on the
[!INCLUDE [azure-policy-reference-policysets-cosmos-db](../../../../includes/policy/reference/bycat/policysets-cosmos-db.md)]
+## General
++ ## Guest Configuration [!INCLUDE [azure-policy-reference-policysets-guest-configuration](../../../../includes/policy/reference/bycat/policysets-guest-configuration.md)]
governance Built In Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/built-in-policies.md
Title: List of built-in policy definitions description: List built-in policy definitions for Azure Policy. Categories include Tags, Regulatory Compliance, Key Vault, Kubernetes, Guest Configuration, and more. Previously updated : 08/03/2023 Last updated : 08/08/2023
hdinsight Hdinsight Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-release-notes.md
description: Latest release notes for Azure HDInsight. Get development tips and
Previously updated : 07/28/2023 Last updated : 08/08/2023 # Azure HDInsight release notes
For workload specific versions, see
## ![Icon showing Whats new.](./media/hdinsight-release-notes/whats-new.svg) What's new * HDInsight 5.1 is now supported with ESP cluster. * Upgraded version of Ranger 2.3.0 and Oozie 5.2.1 are now part of HDInsight 5.1
-* The Spark 3.3.1 (HDInsight 5.1) cluster comes with Hive Warehouse Connector (HWC) 2.1, which works together with the Interactive Query (HDInsight 5.1) cluster.
+* The Spark 3.3.1 (HDInsight 5.1) cluster comes with Hive Warehouse Connector (HWC) 2.1, which works together with the Interactive Query (HDInsight 5.1) cluster.
+
+> [!IMPORTANT]
+> This release addresses the following CVEs released by [MSRC](https://msrc.microsoft.com/update-guide/vulnerability) on August 8, 2023. The action is to update to the latest image **2307201242**. Customers are advised to plan accordingly.
+
+|CVE | Severity| CVE Title|
+|-|-|-|
+|[CVE-2023-35393](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2023-35393)| Important|Azure Apache Hive Spoofing Vulnerability|
+|[CVE-2023-35394](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2023-35394)| Important|Azure HDInsight Jupyter Notebook Spoofing Vulnerability|
+|[CVE-2023-36877](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2023-36877)| Important|Azure Apache Oozie Spoofing Vulnerability|
+|[CVE-2023-36881](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2023-36881)| Important|Azure Apache Ambari Spoofing Vulnerability|
+|[CVE-2023-38188](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2023-38188)| Important|Azure Apache Hadoop Spoofing Vulnerability|
+
## ![Icon showing coming soon.](./media/hdinsight-release-notes/clock.svg) Coming soon
-* The max length of cluster name will be changed to 45 from 59 characters, to improve the security posture of clusters. Customers need to plan for the updates before 30 September 2023.
+* The max length of cluster name will be changed to 45 from 59 characters, to improve the security posture of clusters. Customers need to plan for the updates before 30, September 2023.
* Cluster permissions for secure storage * Customers can specify (during cluster creation) whether a secure channel should be used for HDInsight cluster nodes to contact the storage account. * In-line quota update. * Request quotas increase directly from the My Quota page, which will be a direct API call, which is faster. If the API call fails, then customers need to create a new support request for quota increase. * HDInsight Cluster Creation with Custom VNets.
- * To improve the overall security posture of the HDInsight clusters, HDInsight clusters using custom VNETs need to ensure that the user needs to have permission for `Microsoft Network/virtualNetworks/subnets/join/action` to perform create operations. Customers would need to plan accordingly as this change would be a mandatory check to avoid cluster creation failures before 30 September 2023.
+ * To improve the overall security posture of the HDInsight clusters, HDInsight clusters using custom VNETs need to ensure that the user needs to have permission for `Microsoft Network/virtualNetworks/subnets/join/action` to perform create operations. Customers would need to plan accordingly as this change would be a mandatory check to avoid cluster creation failures before 30, September 2023.
* Basic and Standard A-series VMs Retirement.
- * On 31 August 2024, we'll retire Basic and Standard A-series VMs. Before that date, you need to migrate your workloads to Av2-series VMs, which provide more memory per vCPU and faster storage on solid-state drives (SSDs). To avoid service disruptions, [migrate your workloads](https://aka.ms/Av1retirement) from Basic and Standard A-series VMs to Av2-series VMs before 31 August 2024.
+ * On 31 August 2024, we'll retire Basic and Standard A-series VMs. Before that date, you need to migrate your workloads to Av2-series VMs, which provide more memory per vCPU and faster storage on solid-state drives (SSDs). To avoid service disruptions, [migrate your workloads](https://aka.ms/Av1retirement) from Basic and Standard A-series VMs to Av2-series VMs before 31, August 2024.
* Non-ESP ABFS clusters [Cluster Permissions for Word Readable] * Plan to introduce a change in non-ESP ABFS clusters, which restricts non-Hadoop group users from executing Hadoop commands for storage operations. This change improves the cluster security posture. Customers need to plan for the updates before 30 September 2023.
hdinsight Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/policy-reference.md
Title: Built-in policy definitions for Azure HDInsight description: Lists Azure Policy built-in policy definitions for Azure HDInsight. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/03/2023 Last updated : 08/08/2023
healthcare-apis Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/policy-reference.md
Title: Built-in policy definitions for Azure API for FHIR description: Lists Azure Policy built-in policy definitions for Azure API for FHIR. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/03/2023 Last updated : 08/08/2023
healthcare-apis Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/release-notes.md
Azure Health Data Services is a set of managed API services based on open standa
#### FHIR Service **Bug Fix: Continuous retry on Import operation**+ We observed an issue where $import kept retrying when the NDJSON file size is greater than 2 GB. The issue is fixed; for details, visit [3342](https://github.com/microsoft/fhir-server/pull/3342). **Bug Fix: Patient and Group level export job restart on interruption**+ Patient and Group level exports on interruption would restart from the beginning. The bug is fixed to restart the export jobs from the last successfully completed page of results. For more details, visit [3205](https://github.com/microsoft/fhir-server/pull/3205).
+#### DICOM Service
+**API Version 2 is Generally Available (GA)**
+
+The DICOM service API v2 is now Generally Available (GA) and introduces [several changes and new features](dicom/dicom-service-v2-api-changes.md). Most notable is the change to validation of DICOM attributes during store (STOW) operations - beginning with v2, the request fails only if **required attributes** fail validation. See the [DICOM Conformance Statement v2](dicom/dicom-services-conformance-statement-v2.md) for full details.
+ ## June 2023 #### Azure Health Data Services
Patient and Group level exports on interruption would restart from the beginning
#### FHIR Service **Feature Enhancement: Incremental Import**+ The $Import operation now supports a new "Incremental Load" mode, which is optimized for periodically loading data into the FHIR service. With Incremental Load mode, customers can:
iot-hub Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/policy-reference.md
Title: Built-in policy definitions for Azure IoT Hub description: Lists Azure Policy built-in policy definitions for Azure IoT Hub. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/03/2023 Last updated : 08/08/2023
key-vault Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/policy-reference.md
Title: Built-in policy definitions for Key Vault description: Lists Azure Policy built-in policy definitions for Key Vault. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/03/2023 Last updated : 08/08/2023
lab-services Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/policy-reference.md
Title: Built-in policy definitions for Lab Services description: Lists Azure Policy built-in policy definitions for Azure Lab Services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/03/2023 Last updated : 08/08/2023
lighthouse Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/samples/policy-reference.md
Title: Built-in policy definitions for Azure Lighthouse description: Lists Azure Policy built-in policy definitions for Azure Lighthouse. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/03/2023 Last updated : 08/08/2023
load-balancer Cross Region Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/cross-region-overview.md
Cross-region load balancer is a Layer-4 pass-through network load balancer. This
Floating IP can be configured at both the global IP level and regional IP level. For more information, visit [Multiple frontends for Azure Load Balancer](./load-balancer-multivip-overview.md)
+Floating IP configured on the Azure cross-region load balancer operates independently of floating IP configurations on the backend regional load balancers. If floating IP is enabled on the cross-region load balancer, you must add the appropriate loopback interface to the backend VMs.
+ ### Health Probes Azure cross-region Load Balancer utilizes the health of the backend regional load balancers when deciding where to distribute traffic to. Health checks by cross-region load balancer are done automatically every 5 seconds, given that a user has set up health probes on their regional load balancer.  
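
For reference, a TCP health probe on a backend regional load balancer can be set up with a command like the following; this is a minimal sketch with placeholder names and port.

```azurecli
az network lb probe create \
  --resource-group myResourceGroup \
  --lb-name myRegionalLoadBalancer \
  --name myHealthProbe \
  --protocol Tcp \
  --port 80
```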
Cross-region load balancer routes the traffic to the appropriate regional load b
* UDP traffic isn't supported on Cross-region Load Balancer for IPv6.
+* UDP traffic on port 3 isn't supported on Cross-region Load Balancer.
+ * Outbound rules aren't supported on Cross-region Load Balancer. For outbound connections, utilize [outbound rules](./outbound-rules.md) on the regional load balancer or [NAT gateway](../nat-gateway/nat-overview.md). ## Pricing and SLA
load-balancer Manage Inbound Nat Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/manage-inbound-nat-rules.md
In this example, you create an inbound NAT rule to forward port **500** to backe
Use [az network lb inbound-nat-rule create](/cli/azure/network/lb/inbound-nat-rule#az-network-lb-inbound-nat-rule-create) to create the NAT rule.
+Use [az network nic ip-config inbound-nat-rule add](/cli/azure/network/nic/ip-config/inbound-nat-rule) to add the inbound NAT rule to a VM's NIC.
+ ```azurecli az network lb inbound-nat-rule create \ --backend-port 443 \
Use [az network lb inbound-nat-rule create](/cli/azure/network/lb/inbound-nat-ru
--name myInboundNATrule \ --protocol Tcp \ --resource-group myResourceGroup \
- --backend-pool-name myBackendPool \
--frontend-ip-name myFrontend \
- --frontend-port-range-end 1000 \
- --frontend-port-range-start 500
+ --frontend-port 500
+
+ az network nic ip-config inbound-nat-rule add \
+ --resource-group myResourceGroup \
+ --nic-name MyNic \
+ --ip-config-name MyIpConfig \
+ --inbound-nat-rule myInboundNATrule \
+ --lb-name myLoadBalancer
```
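
To confirm the rule and its frontend-to-backend port mapping afterward, a quick check (names match the example above):

```azurecli
az network lb inbound-nat-rule show \
  --resource-group myResourceGroup \
  --lb-name myLoadBalancer \
  --name myInboundNATrule \
  --output table
```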
load-testing How To Move Between Resource Groups Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-move-between-resource-groups-subscriptions.md
+
+ Title: Move across resource group or subscription
+
+description: Learn how to move an Azure Load testing resource to another resource group or subscription.
++++ Last updated : 07/12/2023+++
+# Move an Azure Load Testing resource to another resource group or subscription
+
+This article describes how to move your Azure Load Testing resource to either another Azure subscription or another resource group under the same subscription.
+
+If you want to move Azure Load Testing to a new region, see [Move an Azure Load Testing resource to another region](./how-to-move-between-regions.md).
+
+When you move an Azure Load Testing resource across resource groups or subscriptions, the following guidance applies:
+
+- Moving a resource to a new resource group or subscription is a metadata change that shouldn't affect the data. For example, the test and test runs data is preserved.
+
+- Moving a resource changes the ID of the resource. The standard format for a resource ID is `/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{resourceType}/{resourceName}`. When a resource is moved to a new resource group or subscription, one or more values in that path change. After the resource has been moved, update your tools and scripts to use the new resource IDs (see the example after this list).
+
+- Moving a resource across subscriptions is allowed only for subscriptions in the same tenant.
+
+- Resource move is not supported for Azure Load Testing resources that are encrypted with a customer-managed key.
+
+- Moving a resource only moves it to a new resource group or subscription. It doesn't change the location of the resource.
+
+- Any service principal that is currently scoped to a resource, resource group or subscription might not have access to the resource after the move.
+
+- Automated resource provisioning that uses ARM templates or Bicep must be updated to the new resource group and/or subscription.
+
+- For tests that previously ran from Azure Pipelines, the URL to view detailed results from Azure portal will not work after the resources have been moved.
+
+- If the resource is moved across subscriptions, the service limits of the target subscription apply to the resource after the move.
+
+- Moving a resource that has a test configured for private endpoint testing to another subscription results in an error when you run the test. After the move is complete, update the test with a virtual network and subnet from the new subscription.
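
To locate the new resource IDs after a move, you can list the Azure Load Testing resources in the subscription. The following is a minimal sketch, where the resource type `Microsoft.LoadTestService/loadTests` is an assumption you should verify for your environment.

```azurecli
# List Azure Load Testing resource names and IDs in the current subscription
az resource list \
  --resource-type "Microsoft.LoadTestService/loadTests" \
  --query "[].{name:name, id:id}" \
  --output table
```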
+
+## Move across resource groups or subscriptions
+
+You can move an Azure Load Testing resource to a different resource group or subscription by using the Azure portal.
+
+1. Go to the [Azure portal](https://portal.azure.com/).
+
+1. Navigate to the resource that you want to move.
+
+1. On the subscription or resource group overview page, select **Move**.
+
+1. If you're moving the resource to another subscription, select the destination **Subscription**.
+
+1. If you're moving the resource to another resource group, select the destination **Resource group**, or create a new resource group.
+
+1. Select **Next**.
+
+1. When the validation completes, acknowledge the warning about moving resources.
+
+1. Select **OK**.
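
The same move can also be scripted with the Azure CLI. The following is a minimal sketch with placeholder names, where the resource type `Microsoft.LoadTestService/loadTests` is an assumption.

```azurecli
# Get the resource ID of the load testing resource (placeholder names)
resourceId=$(az resource show \
  --resource-group mySourceResourceGroup \
  --name myLoadTestResource \
  --resource-type "Microsoft.LoadTestService/loadTests" \
  --query id --output tsv)

# Move it to another resource group; add --destination-subscription-id to move across subscriptions
az resource move \
  --destination-group myTargetResourceGroup \
  --ids $resourceId
```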
+
+## Next steps
+
+- You can move many different types of resources between resource groups and subscriptions. For more information, see [Move resources to a new resource group or subscription](/azure/azure-resource-manager/management/move-resource-group-and-subscription).
logic-apps Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/policy-reference.md
Title: Built-in policy definitions for Azure Logic Apps description: Lists Azure Policy built-in policy definitions for Azure Logic Apps. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/03/2023 Last updated : 08/08/2023 ms.suite: integration
logic-apps Workflow Definition Language Functions Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/workflow-definition-language-functions-reference.md
Suppose that you have this `'items'` XML string:
</produce> ```
-This example passes in the XPath expression, `'/produce/item/name'`, to find the nodes that match the `<name></name>` node in the `'items'` XML string, and returns an array with those node values:
+This example passes in the XPath expression, `'/produce/item/name/text()'`, to find the nodes that match the `<name></name>` node in the `'items'` XML string, and returns an array with those node values:
-`xpath(xml(parameters('items')), '/produce/item/name')`
+`xpath(xml(parameters('items')), '/produce/item/name/text()')`
The example also uses the [parameters()](#parameters) function to get the XML string from `'items'` and convert the string to XML format by using the [xml()](#xml) function.
-Here's the result array with the nodes that match `<name></name`:
+Here's the result array populated with values of the nodes that match `<name></name>`:
-`[ <name>Gala</name>, <name>Honeycrisp</name> ]`
+`[ Gala, Honeycrisp ]`
*Example 2*
machine-learning Concept Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-compute-instance.md
You can also clone the latest Azure Machine Learning samples to your folder unde
Writing small files can be slower on network drives than writing to the compute instance local disk itself. If you're writing many small files, try using a directory directly on the compute instance, such as a `/tmp` directory. Note these files won't be accessible from other compute instances.
-Don't store training data on the notebooks file share. You can use the `/tmp` directory on the compute instance for your temporary data. However, don't write large files of data on the OS disk of the compute instance. OS disk on compute instance has 128-GB capacity. You can also store temporary training data on temporary disk mounted on /mnt. Temporary disk size is based on the VM size chosen and can store larger amounts of data if a higher size VM is chosen. Any software packages you install are saved on the OS disk of compute instance. Note customer managed key encryption is currently not supported for OS disk. The OS disk for compute instance is encrypted with Microsoft-managed keys.
+Don't store training data on the notebooks file share. For information on the various options to store data, see [Access data in a job](how-to-read-write-data-v2.md).
+
+You can use the `/tmp` directory on the compute instance for your temporary data. However, don't write large data files to the OS disk of the compute instance; the OS disk has a 128-GB capacity. You can also store temporary training data on the temporary disk mounted on `/mnt`. The temporary disk size depends on the VM size you choose, so a larger VM size provides more temporary storage. Any software packages you install are saved on the OS disk of the compute instance. Customer-managed key encryption is currently not supported for the OS disk; the OS disk for the compute instance is encrypted with Microsoft-managed keys.
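
If you expect to stage larger temporary datasets under `/mnt`, one option is to pick a bigger VM size when you create the compute instance. The following is a minimal sketch using the Azure CLI `ml` extension, with placeholder names and an assumed VM size.

```azurecli
az ml compute create \
  --name ci-large-temp \
  --type ComputeInstance \
  --size Standard_D16s_v3 \
  --resource-group myResourceGroup \
  --workspace-name myWorkspace
```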
:::moniker range="azureml-api-1" You can also mount [datastores and datasets](v1/concept-azure-machine-learning-architecture.md?view=azureml-api-1&preserve-view=true#datasets-and-datastores).
machine-learning How To Launch Vs Code Remote https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-launch-vs-code-remote.md
You can create the connection from either the **Notebooks** or **Compute** secti
:::image type="content" source="media/how-to-launch-vs-code-remote/vs-code-from-compute.png" alt-text="Screenshot of how to connect to Compute Instance VS Code Azure Machine Learning studio." lightbox="media/how-to-launch-vs-code-remote/vs-code-from-compute.png":::
+If you don't see these options, make sure you've enabled the **Connect compute instances to Visual Studio Code for the Web** preview feature, as shown in the [Prerequisites](#prerequisites) section.
+ # [Studio -> VS Code (Desktop)](#tab/vscode-desktop) This option launches the VS Code desktop application, connected to your compute instance.
machine-learning How To Manage Quotas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-quotas.md
Available resources:
+ **Low-priority cores per region** have a default limit of 100 to 3,000, depending on your subscription offer type. The number of low-priority cores per subscription can be increased and is a single value across VM families.
-+ **Clusters per region** have a default limit of 200 and it can be increased up to a value of 500 per region within a given subscription. This limit is shared between training clusters, compute instances and managed online endpoint deployments. A compute instance is considered a single-node cluster for quota purposes.
-
- > [!TIP]
- > Starting 1 September 2023, Microsoft will automatically increase cluster quota limits from 200 to 500 on your behalf when usage approaches the 200 default limit. This change eliminates the need to file for a support ticket to increase the quota on unique compute resources allowed per region.
++ **Clusters per region** have a default limit of 200, which can be increased up to 500 per region within a given subscription. This limit is shared between training clusters, compute instances, and managed online endpoint deployments. A compute instance is considered a single-node cluster for quota purposes. Starting 1 September 2023, cluster quota limits are automatically increased from 200 to 500 on your behalf when usage approaches the 200 default limit, eliminating the need to file a support ticket. The following table shows more limits in the platform. Reach out to the Azure Machine Learning product team through a **technical** support ticket to request an exception.
machine-learning Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/policy-reference.md
Title: Built-in policy definitions for Azure Machine Learning description: Lists Azure Policy built-in policy definitions for Azure Machine Learning. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/03/2023 Last updated : 08/08/2023
machine-learning How To Monitor Datasets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-monitor-datasets.md
description: Learn how to set up data drift detection in Azure Learning. Create
-+ Previously updated : 08/17/2022 Last updated : 08/08/2023 #Customer intent: As a data scientist, I want to detect data drift in my datasets and set alerts for when drift is large.
[!INCLUDE [sdk v1](../includes/machine-learning-sdk-v1.md)]
-Learn how to monitor data drift and set alerts when drift is high.
+Learn how to monitor data drift and set alerts when drift is high.
With Azure Machine Learning dataset monitors (preview), you can: * **Analyze drift in your data** to understand how it changes over time.
-* **Monitor model data** for differences between training and serving datasets. Start by [collecting model data from deployed models](how-to-enable-data-collection.md).
+* **Monitor model data** for differences between training and serving datasets. Start by [collecting model data from deployed models](how-to-enable-data-collection.md).
* **Monitor new data** for differences between any baseline and target dataset. * **Profile features in data** to track how statistical properties change over time.
-* **Set up alerts on data drift** for early warnings to potential issues.
+* **Set up alerts on data drift** for early warnings to potential issues.
* **[Create a new dataset version](how-to-version-track-datasets.md)** when you determine the data has drifted too much. An [Azure Machine Learning dataset](how-to-create-register-datasets.md) is used to create the monitor. The dataset must include a timestamp column.
-You can view data drift metrics with the Python SDK or in Azure Machine Learning studio. Other metrics and insights are available through the [Azure Application Insights](../../azure-monitor/app/app-insights-overview.md) resource associated with the Azure Machine Learning workspace.
+You can view data drift metrics with the Python SDK or in Azure Machine Learning studio. Other metrics and insights are available through the [Azure Application Insights](../../azure-monitor/app/app-insights-overview.md) resource associated with the Azure Machine Learning workspace.
> [!IMPORTANT] > Data drift detection for datasets is currently in public preview.
-> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). ## Prerequisites
To create and work with dataset monitors, you need:
## What is data drift?
-Data drift is one of the top reasons model accuracy degrades over time. For machine learning models, data drift is the change in model input data that leads to model performance degradation. Monitoring data drift helps detect these model performance issues.
+Model accuracy degrades over time, largely because of data drift. For machine learning models, data drift is the change in model input data that leads to model performance degradation. Monitoring data drift helps detect these model performance issues.
Causes of data drift include: -- Upstream process changes, such as a sensor being replaced that changes the units of measurement from inches to centimeters.
+- Upstream process changes, such as a sensor being replaced that changes the units of measurement from inches to centimeters.
- Data quality issues, such as a broken sensor always reading 0. - Natural drift in the data, such as mean temperature changing with the seasons.-- Change in relation between features, or covariate shift.
+- Change in relation between features, or covariate shift.
-Azure Machine Learning simplifies drift detection by computing a single metric abstracting the complexity of datasets being compared. These datasets may have hundreds of features and tens of thousands of rows. Once drift is detected, you drill down into which features are causing the drift. You then inspect feature level metrics to debug and isolate the root cause for the drift.
+Azure Machine Learning simplifies drift detection by computing a single metric abstracting the complexity of datasets being compared. These datasets may have hundreds of features and tens of thousands of rows. Once drift is detected, you drill down into which features are causing the drift. You then inspect feature level metrics to debug and isolate the root cause for the drift.
This top down approach makes it easy to monitor data instead of traditional rules-based techniques. Rules-based techniques such as allowed data range or allowed unique values can be time consuming and error prone. In Azure Machine Learning, you use dataset monitors to detect and alert for data drift.
-
-### Dataset monitors
+
+### Dataset monitors
With a dataset monitor you can:
With a dataset monitor you can:
* Analyze historical data for drift. * Profile new data over time.
-The data drift algorithm provides an overall measure of change in data and indication of which features are responsible for further investigation. Dataset monitors produce a number of other metrics by profiling new data in the `timeseries` dataset.
+The data drift algorithm provides an overall measure of change in data and indication of which features are responsible for further investigation. Dataset monitors produce many other metrics by profiling new data in the `timeseries` dataset.
-Custom alerting can be set up on all metrics generated by the monitor through [Azure Application Insights](../../azure-monitor/app/app-insights-overview.md). Dataset monitors can be used to quickly catch data issues and reduce the time to debug the issue by identifying likely causes.
+Custom alerting can be set up on all metrics generated by the monitor through [Azure Application Insights](../../azure-monitor/app/app-insights-overview.md). Dataset monitors can be used to quickly catch data issues and reduce the time to debug the issue by identifying likely causes.
Conceptually, there are three primary scenarios for setting up dataset monitors in Azure Machine Learning. Scenario | Description | Monitor a model's serving data for drift from the training data | Results from this scenario can be interpreted as monitoring a proxy for the model's accuracy, since model accuracy degrades when the serving data drifts from the training data.
-Monitor a time series dataset for drift from a previous time period. | This scenario is more general, and can be used to monitor datasets involved upstream or downstream of model building. The target dataset must have a timestamp column. The baseline dataset can be any tabular dataset that has features in common with the target dataset.
+Monitor a time series dataset for drift from a previous time period. | This scenario is more general, and can be used to monitor datasets involved upstream or downstream of model building. The target dataset must have a timestamp column. The baseline dataset can be any tabular dataset that has features in common with the target dataset.
Perform analysis on past data. | This scenario can be used to understand historical data and inform decisions in settings for dataset monitors. Dataset monitors depend on the following Azure services. |Azure service |Description | |||
-| *Dataset* | Drift uses Machine Learning datasets to retrieve training data and compare data for model training. Generating profile of data is used to generate some of the reported metrics such as min, max, distinct values, distinct values count. |
-| *Azureml pipeline and compute* | The drift calculation job is hosted in azureml pipeline. The job is triggered on demand or by schedule to run on a compute configured at drift monitor creation time.
+| *Dataset* | Drift uses Machine Learning datasets to retrieve training data and compare data for model training. Generating profile of data is used to generate some of the reported metrics such as min, max, distinct values, distinct values count. |
+| *Azure Machine Learning pipeline and compute* | The drift calculation job is hosted in an Azure Machine Learning pipeline. The job is triggered on demand or by schedule to run on a compute configured at drift monitor creation time.
| *Application insights*| Drift emits metrics to Application Insights belonging to the machine learning workspace. | *Azure blob storage*| Drift emits metrics in json format to Azure blob storage.
-### Baseline and target datasets
+### Baseline and target datasets
-You monitor [Azure machine learning datasets](how-to-create-register-datasets.md) for data drift. When you create a dataset monitor, you will reference your:
+You monitor [Azure Machine Learning datasets](how-to-create-register-datasets.md) for data drift. When you create a dataset monitor, you reference your:
* Baseline dataset - usually the training dataset for a model. * Target dataset - usually model input data - is compared over time to your baseline dataset. This comparison means that your target dataset must have a timestamp column specified.
-The monitor will compare the baseline and target datasets.
+The monitor compares the baseline and target datasets.
## Create target dataset
The target dataset needs the `timeseries` trait set on it by specifying the time
The [`Dataset`](/python/api/azureml-core/azureml.data.tabulardataset#with-timestamp-columns-timestamp-none--partition-timestamp-none--validate-false-kwargs-) class [`with_timestamp_columns()`](/python/api/azureml-core/azureml.data.tabulardataset#with-timestamp-columns-timestamp-none--partition-timestamp-none--validate-false-kwargs-) method defines the time stamp column for the dataset.
-```python
+```python
from azureml.core import Workspace, Dataset, Datastore # get workspace object ws = Workspace.from_config()
-# get datastore object
+# get datastore object
dstore = Datastore.get(ws, 'your datastore name') # specify datastore paths
dstore_paths = [(dstore, 'weather/*/*/*/*/data.parquet')]
# specify partition format partition_format = 'weather/{state}/{date:yyyy/MM/dd}/data.parquet'
-# create the Tabular dataset with 'state' and 'date' as virtual columns
+# create the Tabular dataset with 'state' and 'date' as virtual columns
dset = Dataset.Tabular.from_parquet_files(path=dstore_paths, partition_format=partition_format) # assign the timestamp attribute to a real or virtual column in the dataset
In the following example, all data under the subfolder *NoaaIsdFlorida/2019* is
[![Partition format](./media/how-to-monitor-datasets/partition-format.png)](media/how-to-monitor-datasets/partition-format-expand.png#lightbox)
-In the **Schema** settings, specify the **timestamp** column from a virtual or real column in the specified dataset. This type indicates that your data has a time component.
+In the **Schema** settings, specify the **timestamp** column from a virtual or real column in the specified dataset. This type indicates that your data has a time component.
:::image type="content" source="media/how-to-monitor-datasets/timestamp.png" alt-text="Set the timestamp":::
-If your data is already partitioned by date or time, as is the case here, you can also specify the **Partition timestamp**. This allows more efficient processing of dates and enables timeseries APIs that you can leverage during training.
+If your data is already partitioned by date or time, as is the case here, you can also specify the **Partition timestamp**. This allows more efficient processing of dates and enables time series APIs that you can apply during training.
:::image type="content" source="media/how-to-monitor-datasets/timeseries-partitiontimestamp.png" alt-text="Partition timestamp":::
If your data is already partitioned by date or time, as is the case here, you ca
## Create dataset monitor
-Create a dataset monitor to detect and alert to data drift on a new dataset. Use either the [Python SDK](#sdk-monitor) or [Azure Machine Learning studio](#studio-monitor).
+Create a dataset monitor to detect and alert to data drift on a new dataset. Use either the [Python SDK](#sdk-monitor) or [Azure Machine Learning studio](#studio-monitor).
+
+As described later, a dataset monitor runs at a set frequency (daily, weekly, or monthly). It analyzes new data available in the target dataset since its last run. In some cases, analyzing only the most recent data may not suffice:
+
+- The new data from the upstream source was delayed due to a broken data pipeline, and this new data wasn't available when the dataset monitor ran.
+- A time series dataset had only historical data, and you want to analyze drift patterns in the dataset over time. For example: compare traffic flowing to a website, in both winter and summer seasons, to identify seasonal patterns.
+- You're new to dataset monitors and want to evaluate how the feature works with your existing data before you set it up to monitor future days. In such scenarios, you can submit an on-demand run, with a specific date range of the target dataset, to compare with the baseline dataset.
+
+The **backfill** function runs a backfill job for a specified start and end date range. A backfill job fills in expected missing data points in a dataset, as a way to ensure data accuracy and completeness.
# [Python SDK](#tab/python) <a name="sdk-monitor"></a> [!INCLUDE [sdk v1](../includes/machine-learning-sdk-v1.md)]
-See the [Python SDK reference documentation on data drift](/python/api/azureml-datadrift/azureml.datadrift) for full details.
+See the [Python SDK reference documentation on data drift](/python/api/azureml-datadrift/azureml.datadrift) for full details.
The following example shows how to create a dataset monitor using the Python SDK
baseline = target.time_before(datetime(2019, 2, 1))
features = ['latitude', 'longitude', 'elevation', 'windAngle', 'windSpeed', 'temperature', 'snowDepth', 'stationName', 'countryOrRegion'] # set up data drift detector
-monitor = DataDriftDetector.create_from_datasets(ws, 'drift-monitor', baseline, target,
- compute_target='cpu-cluster',
- frequency='Week',
- feature_list=None,
- drift_threshold=.6,
+monitor = DataDriftDetector.create_from_datasets(ws, 'drift-monitor', baseline, target,
+ compute_target='cpu-cluster',
+ frequency='Week',
+ feature_list=None,
+ drift_threshold=.6,
latency=24) # get data drift detector by name
monitor = monitor.enable_schedule()
<a name="studio-monitor"></a> 1. Navigate to the [studio's homepage](https://ml.azure.com).
-1. Select the **Data** tab on the left.
+1. Select the **Data** tab.
1. Select **Dataset monitors**. ![Monitor list](./media/how-to-monitor-datasets/monitor-list.png)
-1. Click on the **+Create monitor** button and continue through the wizard by clicking **Next**.
+1. Select the **+Create monitor** button, and select **Next** to continue through the wizard.
:::image type="content" source="media/how-to-monitor-datasets/wizard.png" alt-text="Create a monitor wizard":::
-* **Select target dataset**. The target dataset is a tabular dataset with timestamp column specified which will be analyzed for data drift. The target dataset must have features in common with the baseline dataset, and should be a `timeseries` dataset, which new data is appended to. Historical data in the target dataset can be analyzed, or new data can be monitored.
+* **Select target dataset**. The target dataset is a tabular dataset with a timestamp column specified, which is analyzed for data drift. The target dataset must have features in common with the baseline dataset, and should be a `timeseries` dataset to which new data is appended. Historical data in the target dataset can be analyzed, or new data can be monitored.
-* **Select baseline dataset.** Select the tabular dataset to be used as the baseline for comparison of the target dataset over time. The baseline dataset must have features in common with the target dataset. Select a time range to use a slice of the target dataset, or specify a separate dataset to use as the baseline.
+* **Select baseline dataset.** Select the tabular dataset to be used as the baseline for comparison of the target dataset over time. The baseline dataset must have features in common with the target dataset. Select a time range to use a slice of the target dataset, or specify a separate dataset to use as the baseline.
-* **Monitor settings**. These settings are for the scheduled dataset monitor pipeline, which will be created.
+* **Monitor settings**. These settings are for the scheduled dataset monitor pipeline that will be created.
- | Setting | Description | Tips | Mutable |
+ | Setting | Description | Tips | Mutable |
| - | -- | - | - | | Name | Name of the dataset monitor. | | No |
- | Features | List of features that will be analyzed for data drift over time. | Set to a model's output feature(s) to measure concept drift. Don't include features that naturally drift over time (month, year, index, etc.). You can backfill and existing data drift monitor after adjusting the list of features. | Yes |
- | Compute target | Azure Machine Learning compute target to run the dataset monitor jobs. | | Yes |
- | Enable | Enable or disable the schedule on the dataset monitor pipeline | Disable the schedule to analyze historical data with the backfill setting. It can be enabled after the dataset monitor is created. | Yes |
- | Frequency | The frequency that will be used to schedule the pipeline job and analyze historical data if running a backfill. Options include daily, weekly, or monthly. | Each job compares data in the target dataset according to the frequency: <li>Daily: Compare most recent complete day in target dataset with baseline <li>Weekly: Compare most recent complete week (Monday - Sunday) in target dataset with baseline <li>Monthly: Compare most recent complete month in target dataset with baseline | No |
- | Latency | Time, in hours, it takes for data to arrive in the dataset. For instance, if it takes three days for data to arrive in the SQL DB the dataset encapsulates, set the latency to 72. | Cannot be changed after the dataset monitor is created | No |
- | Email addresses | Email addresses for alerting based on breach of the data drift percentage threshold. | Emails are sent through Azure Monitor. | Yes |
+ | Features | List of features to analyze for data drift over time. | Set to a model's output feature(s) to measure concept drift. Don't include features that naturally drift over time (month, year, index, etc.). You can backfill an existing data drift monitor after adjusting the list of features. | Yes |
+ | Compute target | Azure Machine Learning compute target to run the dataset monitor jobs. | | Yes |
+ | Enable | Enable or disable the schedule on the dataset monitor pipeline | Disable the schedule to analyze historical data with the backfill setting. It can be enabled after the dataset monitor is created. | Yes |
+ | Frequency | The frequency used to schedule the pipeline job and analyze historical data if running a backfill. Options include daily, weekly, or monthly. | Each job compares data in the target dataset according to the frequency: <li>Daily: Compare most recent complete day in target dataset with baseline <li>Weekly: Compare most recent complete week (Monday - Sunday) in target dataset with baseline <li>Monthly: Compare most recent complete month in target dataset with baseline | No |
+ | Latency | Time, in hours, it takes for data to arrive in the dataset. For instance, if it takes three days for data to arrive in the SQL DB the dataset encapsulates, set the latency to 72. | Can't be changed after the creation of the dataset monitor | No |
+ | Email addresses | Email addresses for alerting based on breach of the data drift percentage threshold. | Emails are sent through Azure Monitor. | Yes |
| Threshold | Data drift percentage threshold for email alerting. | Further alerts and events can be set on many other metrics in the workspace's associated Application Insights resource. | Yes |
-After finishing the wizard, the resulting dataset monitor will appear in the list. Select it to go to that monitor's details page.
+After you complete the wizard, the resulting dataset monitor appears in the list. Select it to go to that monitor's details page.
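The wizard also has an SDK equivalent. The following is a minimal sketch, using the `azureml-datadrift` package, of creating the same kind of monitor programmatically. The workspace configuration, dataset names, compute name, feature names, and email address are placeholders, and the parameter values mirror the wizard settings described above rather than a definitive configuration.

```python
# Minimal sketch (assumes azureml-core and azureml-datadrift are installed and that
# the named datasets and compute cluster already exist in the workspace).
from azureml.core import Workspace, Dataset
from azureml.datadrift import DataDriftDetector, AlertConfiguration

ws = Workspace.from_config()
baseline = Dataset.get_by_name(ws, "baseline-dataset")         # placeholder name
target = Dataset.get_by_name(ws, "target-timeseries-dataset")  # placeholder name

monitor = DataDriftDetector.create_from_datasets(
    ws, "my-dataset-monitor", baseline, target,
    compute_target="cpu-cluster",                 # placeholder compute name
    frequency="Week",                             # Day, Week, or Month
    feature_list=["feature_1", "feature_2"],      # omit to analyze all common features
    alert_config=AlertConfiguration(["admin@contoso.com"]),
    drift_threshold=0.3,                          # drift magnitude that triggers alerts
    latency=24,                                   # hours for data to arrive in the dataset
)
```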
## Understand data drift results
-This section shows you the results of monitoring a dataset, found in the **Datasets** / **Dataset monitors** page in Azure studio. You can update the settings as well as analyze existing data for a specific time period on this page.
+This section shows you the results of monitoring a dataset, found on the **Datasets** / **Dataset monitors** page in Azure Machine Learning studio. On this page, you can update the settings and analyze existing data for a specific time period.
Start with the top-level insights into the magnitude of data drift and a highlight of features to be further investigated. :::image type="content" source="media/how-to-monitor-datasets/drift-overview.png" alt-text="Drift overview":::
-| Metric | Description |
-| | -- |
-| Data drift magnitude | A percentage of drift between the baseline and target dataset over time. Ranging from 0 to 100, 0 indicates identical datasets and 100 indicates the Azure Machine Learning data drift model can completely tell the two datasets apart. Noise in the precise percentage measured is expected due to machine learning techniques being used to generate this magnitude. |
-| Top drifting features | Shows the features from the dataset that have drifted the most and are therefore contributing the most to the Drift Magnitude metric. Due to covariate shift, the underlying distribution of a feature does not necessarily need to change to have relatively high feature importance. |
-| Threshold | Data Drift magnitude beyond the set threshold will trigger alerts. This can be configured in the monitor settings. |
+| Metric | Description |
+| | -- |
+| Data drift magnitude | A percentage of drift between the baseline and target dataset over time. This percentage ranges from 0 to 100, where 0 indicates identical datasets and 100 indicates that the Azure Machine Learning data drift model can completely tell the two datasets apart. Some noise in the precise percentage is expected because machine learning techniques are used to generate this magnitude. |
+| Top drifting features | Shows the features from the dataset that have drifted the most and are therefore contributing the most to the Drift Magnitude metric. Due to covariate shift, the underlying distribution of a feature doesn't necessarily need to change to have relatively high feature importance. |
+| Threshold | Data Drift magnitude beyond the set threshold triggers alerts. Configure the threshold value in the monitor settings. |
### Drift magnitude trend
-See how the dataset differs from the target dataset in the specified time period. The closer to 100%, the more the two datasets differ.
+See how the dataset differs from the target dataset in the specified time period. The closer to 100%, the more the two datasets differ.
:::image type="content" source="media/how-to-monitor-datasets/drift-magnitude.png" alt-text="Drift magnitude trend"::: ### Drift magnitude by features
-This section contains feature-level insights into the change in the selected feature's distribution, as well as other statistics, over time.
+This section contains feature-level insights into the change in the selected feature's distribution, and other statistics, over time.
-The target dataset is also profiled over time. The statistical distance between the baseline distribution of each feature is compared with the target dataset's over time. Conceptually, this is similar to the data drift magnitude. However this statistical distance is for an individual feature rather than all features. Min, max, and mean are also available.
+The target dataset is also profiled over time. The statistical distance between the baseline distribution of each feature is compared with the target dataset's distribution over time. Conceptually, this resembles the data drift magnitude. However, this statistical distance is for an individual feature rather than all features. Min, max, and mean are also available.
-In the Azure Machine Learning studio, click on a bar in the graph to see the feature-level details for that date. By default, you see the baseline dataset's distribution and the most recent job's distribution of the same feature.
+In the Azure Machine Learning studio, select a bar in the graph to see the feature-level details for that date. By default, you see the baseline dataset's distribution and the most recent job's distribution of the same feature.
:::image type="content" source="media/how-to-monitor-datasets/drift-by-feature.gif" alt-text="Drift magnitude by features":::
These metrics can also be retrieved in the Python SDK through the `get_metrics()
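As one hedged example of retrieving those metrics programmatically, the sketch below runs a backfill over a historical window and reads the metrics logged to the resulting run; the monitor name and date range are placeholders, and the exact shape of the returned metrics may vary.

```python
# Hedged sketch: monitor name and date range are placeholders.
from datetime import datetime
from azureml.core import Workspace
from azureml.datadrift import DataDriftDetector

ws = Workspace.from_config()
monitor = DataDriftDetector.get_by_name(ws, "my-dataset-monitor")

# Analyze a historical window, then read the drift metrics logged to the run.
backfill_run = monitor.backfill(datetime(2023, 1, 1), datetime(2023, 1, 31))
backfill_run.wait_for_completion(show_output=True)
print(backfill_run.get_metrics())
```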
### Feature details
-Finally, scroll down to view details for each individual feature. Use the dropdowns above the chart to select the feature, and additionally select the metric you want to view.
+Finally, scroll down to view details for each individual feature. Use the dropdowns above the chart to select the feature, and additionally select the metric you want to view.
:::image type="content" source="media/how-to-monitor-datasets/numeric-feature.gif" alt-text="Numeric feature graph and comparison":::
Metrics in the chart depend on the type of feature.
* Numeric features
- | Metric | Description |
- | | -- |
+ | Metric | Description |
+ | | -- |
| Wasserstein distance | Minimum amount of work to transform baseline distribution into the target distribution. | | Mean value | Average value of the feature. | | Min value | Minimum value of the feature. | | Max value | Maximum value of the feature. | * Categorical features
-
- | Metric | Description |
- | | -- |
- | Euclidian distance     |  Computed for categorical columns. Euclidean distance is computed on two vectors, generated from empirical distribution of the same categorical column from two datasets. 0 indicates there is no difference in the empirical distributions.  The more it deviates from 0, the more this column has drifted. Trends can be observed from a time series plot of this metric and can be helpful in uncovering a drifting feature.  |
+
+ | Metric | Description |
+ | | -- |
+ | Euclidean distance | Computed for categorical columns. Euclidean distance is computed on two vectors, generated from the empirical distribution of the same categorical column from two datasets. 0 indicates no difference in the empirical distributions. The more it deviates from 0, the more this column has drifted. Trends can be observed from a time series plot of this metric and can be helpful in uncovering a drifting feature. |
| Unique values | Number of unique values (cardinality) of the feature. |
-On this chart, select a single date to compare the feature distribution between the target and this date for the displayed feature. For numeric features, this shows two probability distributions. If the feature is numeric, a bar chart is shown.
+On this chart, select a single date to compare the feature distribution between the target and this date for the displayed feature. For numeric features, this shows two probability distributions. If the feature is categorical, a bar chart is shown.
:::image type="content" source="media/how-to-monitor-datasets/select-date-to-compare.gif" alt-text="Select a date to compare to target"::: ## Metrics, alerts, and events
-Metrics can be queried in the [Azure Application Insights](../../azure-monitor/app/app-insights-overview.md) resource associated with your machine learning workspace. You have access to all features of Application Insights including set up for custom alert rules and action groups to trigger an action such as, an Email/SMS/Push/Voice or Azure Function. Refer to the complete Application Insights documentation for details.
+Metrics can be queried in the [Azure Application Insights](../../azure-monitor/app/app-insights-overview.md) resource associated with your machine learning workspace. You have access to all features of Application Insights, including setting up custom alert rules and action groups to trigger an action such as an email, SMS, push notification, voice call, or Azure function. Refer to the complete Application Insights documentation for details.
-To get started, navigate to the [Azure portal](https://portal.azure.com) and select your workspace's **Overview** page. The associated Application Insights resource is on the far right:
+To get started, navigate to the [Azure portal](https://portal.azure.com) and select your workspace's **Overview** page. The associated Application Insights resource is on the far right:
[![Azure portal overview](./media/how-to-monitor-datasets/ap-overview.png)](media/how-to-monitor-datasets/ap-overview-expanded.png#lightbox)
You can use an existing action group, or create a new one to define the action t
![New action group](./media/how-to-monitor-datasets/action-group.png) - ## Troubleshooting Limitations and known issues for data drift monitors:
-* The time range when analyzing historical data is limited to 31 intervals of the monitor's frequency setting.
+* The time range when analyzing historical data is limited to 31 intervals of the monitor's frequency setting.
* Limitation of 200 features, unless a feature list is not specified (all features used). * Compute size must be large enough to handle the data. * Ensure your dataset has data within the start and end date for a given monitor job.
-* Dataset monitors will only work on datasets that contain 50 rows or more.
-* Columns, or features, in the dataset are classified as categorical or numeric based on the conditions in the following table. If the feature does not meet these conditions - for instance, a column of type string with >100 unique values - the feature is dropped from our data drift algorithm, but is still profiled.
+* Dataset monitors only work on datasets that contain 50 rows or more.
+* Columns, or features, in the dataset are classified as categorical or numeric based on the conditions in the following table. If the feature doesn't meet these conditions - for instance, a column of type string with >100 unique values - the feature is dropped from our data drift algorithm, but is still profiled.
- | Feature type | Data type | Condition | Limitations |
+ | Feature type | Data type | Condition | Limitations |
| | | | -- |
- | Categorical | string | The number of unique values in the feature is less than 100 and less than 5% of the number of rows. | Null is treated as its own category. |
- | Numerical | int, float | The values in the feature are of a numerical data type and do not meet the condition for a categorical feature. | Feature dropped if >15% of values are null. |
+ | Categorical | string | The number of unique values in the feature is less than 100 and less than 5% of the number of rows. | Null is treated as its own category. |
+ | Numerical | int, float | The values in the feature are of a numerical data type, and don't meet the condition for a categorical feature. | Feature dropped if >15% of values are null. |
-* When you have created a data drift monitor but cannot see data on the **Dataset monitors** page in Azure Machine Learning studio, try the following.
+* When you have created a data drift monitor but can't see data on the **Dataset monitors** page in Azure Machine Learning studio, try the following.
- 1. Check if you have selected the right date range at the top of the page.
- 1. On the **Dataset Monitors** tab, select the experiment link to check job status. This link is on the far right of the table.
- 1. If the job completed successfully, check the driver logs to see how many metrics have been generated or if there's any warning messages. Find driver logs in the **Output + logs** tab after you click on an experiment.
+ 1. Check if you have selected the right date range at the top of the page.
+ 1. On the **Dataset Monitors** tab, select the experiment link to check job status. This link is on the far right of the table.
+ 1. If the job completed successfully, check the driver logs to see how many metrics have been generated or if there's any warning messages. Find driver logs in the **Output + logs** tab after you select an experiment.
-* If the SDK `backfill()` function does not generate the expected output, it may be due to an authentication issue. When you create the compute to pass into this function, do not use `Run.get_context().experiment.workspace.compute_targets`. Instead, use [ServicePrincipalAuthentication](/python/api/azureml-core/azureml.core.authentication.serviceprincipalauthentication) such as the following to create the compute that you pass into that `backfill()` function:
+* If the SDK `backfill()` function doesn't generate the expected output, it may be due to an authentication issue. When you create the compute to pass into this function, don't use `Run.get_context().experiment.workspace.compute_targets`. Instead, use [ServicePrincipalAuthentication](/python/api/azureml-core/azureml.core.authentication.serviceprincipalauthentication) such as the following to create the compute that you pass into that `backfill()` function:
```python auth = ServicePrincipalAuthentication(
Limitations and known issues for data drift monitors:
compute = ws.compute_targets.get("xxx") ```
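  For reference, a fuller version of that pattern might look like the following sketch; the tenant, client, subscription, workspace, and compute names are placeholders you replace with your own values.

  ```python
  # Placeholder IDs and names; substitute your own service principal and workspace values.
  from azureml.core import Workspace
  from azureml.core.authentication import ServicePrincipalAuthentication

  auth = ServicePrincipalAuthentication(
      tenant_id="<tenant-id>",
      service_principal_id="<client-id>",
      service_principal_password="<client-secret>",
  )

  ws = Workspace.get(
      name="<workspace-name>",
      subscription_id="<subscription-id>",
      resource_group="<resource-group>",
      auth=auth,
  )

  # Pass this compute object into backfill() instead of one retrieved through
  # Run.get_context().experiment.workspace.compute_targets.
  compute = ws.compute_targets.get("<compute-cluster-name>")
  ```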
-* From the Model Data Collector, it can take up to (but usually less than) 10 minutes for data to arrive in your blob storage account. In a script or Notebook, wait 10 minutes to ensure cells below will run.
+* From the Model Data Collector, it can take up to 10 minutes for data to arrive in your blob storage account. However, it usually takes less time. In a script or Notebook, wait 10 minutes to ensure that the cells below successfully run.
```python import time
mariadb Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/policy-reference.md
Previously updated : 08/03/2023 Last updated : 08/08/2023 # Azure Policy built-in definitions for Azure Database for MariaDB
migrate Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/policy-reference.md
Title: Built-in policy definitions for Azure Migrate description: Lists Azure Policy built-in policy definitions for Azure Migrate. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/03/2023 Last updated : 08/08/2023
mysql Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/policy-reference.md
Previously updated : 08/03/2023 Last updated : 08/08/2023 # Azure Policy built-in definitions for Azure Database for MySQL
network-watcher Network Insights Topology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-insights-topology.md
Title: Network Insights topology
+ Title: Network Insights topology (preview)
description: An overview of topology, which provides a pictorial representation of the resources.-- Previously updated : 11/16/2022-+++ Last updated : 08/08/2023+
-# Topology (Preview)
+# Topology (preview)
-Topology provides a visualization of the entire network for understanding network configuration. It provides an interactive interface to view resources and their relationships in Azure spanning across multiple subscriptions, resource groups and locations. You can also drill down to a resource view for resources to view their component level visualization.
+Topology provides a visualization of the entire network for understanding network configuration. It provides an interactive interface to view resources and their relationships in Azure across multiple subscriptions, resource groups and locations. You can also drill down to a resource view for resources to view their component level visualization.
## Prerequisites
Topology provides a visualization of the entire network for understanding networ
## Supported resource types
-The following are the resource types supported by Topology:
+The following are the resource types supported by topology:
- Application gateways-- ExpressRoute Circuits
+- Azure Bastion hosts
+- Azure Front Door profiles
+- ExpressRoute circuits
- Load balancers-- Network Interfaces-- Network Security Groups-- PrivateLink Endpoints-- PrivateLink Services-- Public IP Addresses-- Virtual Machines-- Virtual Network Gateways-- Virtual Networks
+- Network interfaces
+- Network security groups
+- Private endpoints
+- Private Link services
+- Public IP addresses
+- Virtual machines
+- Virtual network gateways
+- Virtual networks
## View Topology
networking Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/policy-reference.md
Title: Built-in policy definitions for Azure networking services description: Lists Azure Policy built-in policy definitions for Azure networking services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/03/2023 Last updated : 08/08/2023
notification-hubs Export Modify Registrations Bulk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/export-modify-registrations-bulk.md
There are scenarios in which it is required to create or modify large numbers of
This article explains how to perform a large number of operations on a notification hub, or to export all registrations, in bulk.
+> [!NOTE]
+> Bulk import/export is only available for the Standard pricing tier.
+ ## High-level flow Batch support is designed to support long-running jobs involving millions of registrations. To achieve this scale, batch support uses Azure Storage to store job details and output. For bulk update operations, the user is required to create a file in a blob container, whose content is the list of registration update operations. When starting the job, the user provides a URL to the input blob, along with a URL to an output directory (also in a blob container). After the job has started, the user can check the status by querying a URL location provided at starting of the job. A specific job can only perform operations of a specific kind (creates, updates, or deletes). Export operations are performed analogously.
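To make the storage side of this flow concrete, the following is a hedged Python sketch (using the `azure-storage-blob` package) that uploads an input file and produces SAS URLs for the input blob and the output container. The account name, key, container name, and file names are placeholders, and the Notification Hubs job submission itself isn't shown here.

```python
# Hedged sketch: upload the job input file and create SAS URLs for input and output.
from datetime import datetime, timedelta
from azure.storage.blob import (
    BlobServiceClient, BlobSasPermissions, ContainerSasPermissions,
    generate_blob_sas, generate_container_sas,
)

account_name, account_key = "<storage-account>", "<account-key>"
service = BlobServiceClient(
    account_url=f"https://{account_name}.blob.core.windows.net",
    credential=account_key,
)
container = service.get_container_client("hub-jobs")

# Each line of the input file is one registration create/update/delete operation.
with open("registrations-input.txt", "rb") as data:
    container.upload_blob("input/registrations-input.txt", data, overwrite=True)

expiry = datetime.utcnow() + timedelta(hours=2)
input_sas = generate_blob_sas(
    account_name, "hub-jobs", "input/registrations-input.txt",
    account_key=account_key, permission=BlobSasPermissions(read=True), expiry=expiry,
)
output_sas = generate_container_sas(
    account_name, "hub-jobs",
    account_key=account_key,
    permission=ContainerSasPermissions(write=True, list=True), expiry=expiry,
)
print(f"https://{account_name}.blob.core.windows.net/hub-jobs/input/registrations-input.txt?{input_sas}")
print(f"https://{account_name}.blob.core.windows.net/hub-jobs?{output_sas}")
```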
notification-hubs Notification Hubs Push Notification Fixer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/notification-hubs-push-notification-fixer.md
The Token obtained from the Token Provider is wrong
This message indicates either that the credentials configured in Notification Hubs are invalid or that there's an issue with the registrations in the hub. Delete this registration and let the client re-create the registration before sending the message. > [!NOTE]
-> Use of the `EnableTestSend` property is heavily throttled. Use this option only in a development/test environment and with a limited set of registrations. Debug notifications are sent to only 10 devices. There's also a limit on processing debug sends, at 10 per minute.
+> Use of the `EnableTestSend` property is heavily throttled. Use this option only in a development/test environment and with a limited set of registrations. Debug notifications are sent to only 10 devices. There's also a limit on processing debug sends, at 10 per minute. Debug notifications are also excluded from the Azure Notification Hubs SLA.
### Review telemetry
For more information about programmatic access, see [Programmatic access](/previ
[5]: ./media/notification-hubs-push-notification-fixer/PortalDashboard.png [6]: ./media/notification-hubs-push-notification-fixer/PortalAnalytics.png [7]: ./media/notification-hubs-ios-get-started/notification-hubs-test-send.png
-[8]: ./media/notification-hubs-push-notification-fixer/VSRegistrations.png
[9]: ./media/notification-hubs-push-notification-fixer/vsserverexplorer.png [10]: ./media/notification-hubs-push-notification-fixer/VSTestNotification.png
For more information about programmatic access, see [Programmatic access](/previ
[Templates]: /previous-versions/azure/azure-services/dn530748(v=azure.100) [APNs overview]: https://developer.apple.com/library/content/documentation/NetworkingInternet/Conceptual/RemoteNotificationsPG/APNSOverview.html [About FCM messages]: https://firebase.google.com/docs/cloud-messaging/concept-options
-[Deep dive: Visual Studio 2013 Update 2 RC and Azure SDK 2.3]: https://azure.microsoft.com/blog/2014/04/09/deep-dive-visual-studio-2013-update-2-rc-and-azure-sdk-2-3/#NotificationHubs
-[Announcing release of Visual Studio 2013 Update 3 and Azure SDK 2.4]: https://azure.microsoft.com/blog/2014/08/04/announcing-release-of-visual-studio-2013-update-3-and-azure-sdk-2-4/
[EnableTestSend]: /dotnet/api/microsoft.azure.notificationhubs.notificationhubclient.enabletestsend
notification-hubs Notification Hubs Push Notification Registration Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/notification-hubs-push-notification-registration-management.md
An installation can contain the following properties. For a complete listing of
Registrations and installations must contain a valid PNS handle for each device/channel. Because PNS handles can only be obtained in a client app on the device, one pattern is to register directly on that device with the client app. On the other hand, security considerations and business logic related to tags might require you to manage device registration in the app back-end.
+When a push is made to a handle that the PNS has expired, Azure Notification Hubs automatically cleans up the associated installation/registration record based on the response received from the PNS server. To clean expired records from a secondary notification hub, add custom logic that processes the feedback from each send and then expires the installation/registration in the secondary notification hub.
+ > [!NOTE] > The Installations API does not support the Baidu service (although the Registrations API does). ### Templates
-> [!NOTE]
-> Microsoft Push Notification Service (MPNS) has been deprecated and is no longer supported.
- If you want to use [Templates](notification-hubs-templates-cross-platform-push-messages.md), the device installation also holds all templates associated with that device in a JSON format (see sample above). The template names help target different templates for the same device.
-Each template name maps to a template body and an optional set of tags. Moreover, each platform can have additional template properties. For Windows Store (using WNS) and Windows Phone 8 (using MPNS), an additional set of headers can be part of the template. In the case of APNs, you can set an expiry property to either a constant or to a template expression. For a complete listing of the installation properties see, [Create or Overwrite an Installation with REST](/rest/api/notificationhubs/create-overwrite-installation) topic.
+Each template name maps to a template body and an optional set of tags. Moreover, each platform can have additional template properties. For Windows Store (using WNS), an additional set of headers can be part of the template. In the case of APNs, you can set an expiry property to either a constant or to a template expression. For a complete listing of the installation properties see, [Create or Overwrite an Installation with REST](/rest/api/notificationhubs/create-overwrite-installation) topic.
### Secondary Tiles for Windows Store Apps
public async Task<HttpResponseMessage> Put(DeviceInstallation deviceUpdate)
switch (deviceUpdate.Platform) {
- case "mpns":
- installation.Platform = NotificationPlatform.Mpns;
- break;
case "wns": installation.Platform = NotificationPlatform.Wns; break;
orbital Receive Real Time Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/receive-real-time-telemetry.md
The ground station provides telemetry using Avro as a schema. The schema is belo
] } ```
+| **Telemetry Point** | **Source Device / Point** | **Possible Values** | **Definition** |
+| : | : | : | :- |
+| version | Manually set internally | | Release version of the telemetry |
+| contactID | Contact resource | | Identification number of the contact |
+| contactPlatformIdentifier | Contact resource | | |
+| gpsTime | Conversion of utcTime | | Time in GPS time that the customer telemetry message was generated. |
+| utcTime | Current time | | Time in UTC time that the customer telemetry message was generated. |
+| azimuthDecimalDegrees | ACU: AntennaAzimuth | | Antenna's azimuth in decimal degrees. |
+| elevationDecimalDegrees | ACU: AntennaElevation | | Antenna's elevation in decimal degrees. |
+| contactTleLine1 | ACU: Satellite[0].Model.Value | • String: TLE <br> • "Empty TLE Line 1" if metric is null | First line of the TLE used for the contact. |
+| contactTLeLine2 | ACU: Satellite[0].Model.Value | • String: TLE <br> • "Empty TLE Line 2" if metric is null | Second line of the TLE used for the contact. |
+| antennaType | Respective 1P/3P telemetry builders set this value | MICROSOFT, KSAT, VIASAT | Antenna network used for the contact. |
+| direction | Contact profile link | Uplink, Downlink | Direction of the link used for the contact. |
+| polarization | Contact profile link | RHCP, LHCP, DualRhcpLhcp, LinearVertical, LinearHorizontal | Polarization of the link used for the contact. |
+| uplinkEnabled | ACU: SBandCurrent or UHFTotalCurrent | • NULL (Invalid CenterFrequencyMhz or Downlink direction) <br> • False (Bands other than S and UHF or Amp Current < Threshold) <br> • True (S/UHF-band, Uplink, Amp Current > Threshold) | Indicates whether uplink was enabled for the contact. |
+| endpointName | Contact profile link channel | | Name of the endpoint used for the contact. |
+| inputEbN0InDb | Modem: measuredEbN0 | • NULL (Modem model other than QRadio or QRx) <br> • Double: Input EbN0 | Input energy per bit to noise power spectral density in dB. |
+| inputEsN0InDb | Not used in 1P telemetry | NULL (Not used in 1P telemetry) | Input energy per symbol to noise power spectral density in dB. |
+| inputRfPowerDbm | Digitizer: inputRfPower | • NULL (Uplink) <br> • 0 (Digitizer driver other than SNNB or SNWB) <br> • Double: Input Rf Power | Input RF power in dBm. |
+| modemLockStatus | Modem: carrierLockState | • NULL (Modem model other than QRadio or QRx; couldn't parse lock status Enum) <br> • Empty string (if metric reading was null) <br> • String: Lock status | Confirmation that the modem was locked. |
+| commandsSent | Modem: commandsSent | • 0 (if not Uplink and QRadio) <br> • Double: # of commands sent | Confirmation that commands were sent during the contact. |
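As a hedged illustration of consuming this telemetry, the sketch below reads events with the `azure-eventhub` Python package and prints a few of the fields from the table above. It assumes telemetry is routed to an event hub you configured for the contact, and that each event body is a JSON document conforming to the schema; the connection string and event hub name are placeholders.

```python
# Hedged sketch: connection string and event hub name are placeholders.
import json
from azure.eventhub import EventHubConsumerClient

def on_event(partition_context, event):
    # Assumes the event body is JSON that matches the telemetry schema above.
    telemetry = json.loads(event.body_as_str())
    print(telemetry.get("contactID"),
          telemetry.get("azimuthDecimalDegrees"),
          telemetry.get("elevationDecimalDegrees"))
    partition_context.update_checkpoint(event)

client = EventHubConsumerClient.from_connection_string(
    conn_str="<event-hubs-connection-string>",
    consumer_group="$Default",
    eventhub_name="<event-hub-name>",
)
with client:
    client.receive(on_event=on_event, starting_position="-1")  # read from the beginning
```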
+ ## Changelog 2023-06-05 - Updated schema to show metrics under channels instead of links.
postgresql Concepts Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-monitoring.md
Previously updated : 11/30/2021 Last updated : 8/8/2023 # Monitor metrics on Azure Database for PostgreSQL - Flexible Server
Monitoring data about your servers helps you troubleshoot and optimize for your
Azure Database for PostgreSQL provides various metrics that give insight into the behavior of the resources that support the Azure Database for PostgreSQL server. Each metric is emitted at a 1-minute interval and has up to [93 days of history](../../azure-monitor/essentials/data-platform-metrics.md#retention-of-metrics). You can configure alerts on the metrics. Other options include setting up automated actions, performing advanced analytics, and archiving the history. For more information, see the [Azure Metrics overview](../../azure-monitor/essentials/data-platform-metrics.md).
+> [!NOTE]
+> While metrics are stored for 93 days, you can only query (in the Metrics tile) for a maximum of 30 days' worth of data on any single chart. If you see a blank chart or your chart displays only part of metric data, verify that the difference between start and end dates in the time picker doesn't exceed the 30-day interval. After you've selected a 30-day interval, you can pan the chart to view the full retention window.
+ ### List of metrics The following metrics are available for a flexible server instance of Azure Database for PostgreSQL:
postgresql Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/policy-reference.md
Previously updated : 08/03/2023 Last updated : 08/08/2023 # Azure Policy built-in definitions for Azure Database for PostgreSQL
reliability Cross Region Replication Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/cross-region-replication-azure.md
Regions are paired for cross-region replication based on proximity and other fac
| India |West India |South India | | Japan |Japan East |Japan West | | Korea |Korea Central |Korea South\* |
-| North America |East US |West US |
-| North America |East US 2 |Central US |
-| North America |North Central US |South Central US |
-| North America |West US 2 |West Central US |
-| North America |West US 3 |East US |
+| United States |East US |West US |
+| United States |East US 2 |Central US |
+| United States |North Central US |South Central US |
+| United States |West US 2 |West Central US |
+| United States|West US 3 |East US |
| Norway | Norway East | Norway West\* | | South Africa | South Africa North |South Africa West\* | | Sweden | Sweden Central |Sweden South\* |
Regions are paired for cross-region replication based on proximity and other fac
## Regions with availability zones and no region pair
-Azure continues to expand globally with Qatar as the first region with no regional pair and achieves high availability by leveraging [availability zones](../reliability/availability-zones-overview.md) and [locally redundant or zone-redundant storage (LRS/ZRS)](../storage/common/storage-redundancy.md#zone-redundant-storage). Regions without a pair will not have [geo-redundant storage (GRS)](../storage/common/storage-redundancy.md#geo-redundant-storage). Such regions follow [data residency](https://azure.microsoft.com/global-infrastructure/data-residency/#overview) guidelines allowing the option to keep data resident within the same region. Customers are responsible for data resiliency based on their Recovery Point Objective or Recovery Time Objective (RTO/RPO) needs and may move, copy, or access their data from any location globally. In the rare event that an entire Azure region is unavailable, customers will need to plan for their Cross Region Disaster Recovery per guidance from [Azure services that support high availability](../reliability/availability-zones-service-support.md#azure-services-with-availability-zone-support) and [Azure Resiliency ΓÇô Business Continuity and Disaster Recovery](https://azure.microsoft.com/mediahandler/files/resourcefiles/resilience-in-azure-whitepaper/resiliency-whitepaper-2022.pdf)
+Azure continues to expand globally with Qatar as the first region with no regional pair and achieves high availability by leveraging [availability zones](../reliability/availability-zones-overview.md) and [locally redundant or zone-redundant storage (LRS/ZRS)](../storage/common/storage-redundancy.md#zone-redundant-storage). Regions without a pair will not have [geo-redundant storage (GRS)](../storage/common/storage-redundancy.md#geo-redundant-storage). Such regions follow [data residency](https://azure.microsoft.com/global-infrastructure/data-residency/#overview) guidelines allowing the option to keep data resident within the same region. Customers are responsible for data resiliency based on their Recovery Point Objective or Recovery Time Objective (RTO/RPO) needs and may move, copy, or access their data from any location globally. In the rare event that an entire Azure region is unavailable, customers will need to plan for their Cross Region Disaster Recovery per guidance from [Azure services that support high availability](../reliability/availability-zones-service-support.md#azure-services-with-availability-zone-support) and [Azure Resiliency ΓÇô Business Continuity and Disaster Recovery](https://azure.microsoft.com/mediahandler/files/resourcefiles/resilience-in-azure-whitepaper/resiliency-whitepaper-2022.pdf).
## Next steps
role-based-access-control Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/policy-reference.md
Title: Built-in policy definitions for Azure RBAC description: Lists Azure Policy built-in policy definitions for Azure RBAC. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/03/2023 Last updated : 08/08/2023
role-based-access-control Rbac And Directory Admin Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/rbac-and-directory-admin-roles.md
na Previously updated : 06/07/2023 Last updated : 08/08/2023
The following diagram is a high-level view of how the Azure roles, Azure AD role
| Azure role | Permissions | Notes | | | | | | [Owner](built-in-roles.md#owner) | <ul><li>Full access to all resources</li><li>Delegate access to others</li></ul> | The Service Administrator and Co-Administrators are assigned the Owner role at the subscription scope<br>Applies to all resource types. |
-| [Contributor](built-in-roles.md#contributor) | <ul><li>Create and manage all of types of Azure resources</li><li>Create a new tenant in Azure Active Directory</li><li>Can't grant access to others</li></ul> | Applies to all resource types. |
+| [Contributor](built-in-roles.md#contributor) | <ul><li>Create and manage all types of Azure resources</li><li>Can't grant access to others</li></ul> | Applies to all resource types. |
| [Reader](built-in-roles.md#reader) | <ul><li>View Azure resources</li></ul> | Applies to all resource types. | | [User Access Administrator](built-in-roles.md#user-access-administrator) | <ul><li>Manage user access to Azure resources</li></ul> | |
search Cognitive Search Aml Skill https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-aml-skill.md
-+ Last updated 12/01/2022
search Cognitive Search Common Errors Warnings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-common-errors-warnings.md
-+ Last updated 08/01/2023
search Cognitive Search Concept Annotations Syntax https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-concept-annotations-syntax.md
description: Explains the annotation syntax and how to reference inputs and outp
-+ Last updated 09/16/2022
search Cognitive Search Defining Skillset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-defining-skillset.md
Title: Create a skillset
-description: A skillset defines data extraction, natural language processing, and image analysis steps. A skillset is attached to indexer. It's used to enrich and extract information from source data for use in Azure Cognitive Search.
+description: A skillset defines content extraction, natural language processing, and image analysis steps. A skillset is attached to an indexer. It's used to enrich and extract information from source data for use in Azure Cognitive Search.
Previously updated : 07/14/2022 Last updated : 08/08/2023 # Create a skillset in Azure Cognitive Search
search Cognitive Search How To Debug Skillset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-how-to-debug-skillset.md
-+ Last updated 10/19/2022
search Cognitive Search Output Field Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-output-field-mapping.md
description: Export the enriched content created by a skillset by mapping its ou
-+ Last updated 09/14/2022
search Cognitive Search Skill Ocr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-skill-ocr.md
-+ Last updated 06/24/2022
search Cognitive Search Working With Skillsets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-working-with-skillsets.md
description: Skillsets are where you author an AI enrichment pipeline in Azure C
-+ Previously updated : 07/14/2022 Last updated : 08/08/2023 # Skillset concepts in Azure Cognitive Search
search Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/policy-reference.md
Title: Built-in policy definitions for Azure Cognitive Search description: Lists Azure Policy built-in policy definitions for Azure Cognitive Search. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/03/2023 Last updated : 08/08/2023
search Query Odata Filter Orderby Syntax https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/query-odata-filter-orderby-syntax.md
Previously updated : 07/18/2022 Last updated : 08/08/2023
search Samples Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/samples-dotnet.md
Previously updated : 07/26/2023 Last updated : 08/02/2023 # C# samples for Azure Cognitive Search
The following samples are also published by the Cognitive Search team, but aren'
| Samples | Repository | Description | |||-|
+| [DotNetVectorDemo](https://github.com/Azure/cognitive-search-vector-pr/blob/main/demo-dotnet/readme.md) | [cognitive-search-vector-pr](https://github.com/Azure/cognitive-search-vector-pr) | Calls Azure OpenAI to generate embeddings and Azure Cognitive Search to create, load, and query an index. |
| [Query multiple services](https://github.com/Azure-Samples/azure-search-dotnet-scale/tree/main/multiple-search-services) | [azure-search-dotnet-scale](https://github.com/Azure-Samples/azure-search-dotnet-samples) | Issue a single query across multiple search services and combine the results into a single page. | | [Check storage](https://github.com/Azure-Samples/azure-search-dotnet-utilities/blob/main/check-storage-usage/README.md) | [azure-search-dotnet-utilities](https://github.com/Azure-Samples/azure-search-dotnet-utilities) | Invokes an Azure function that checks search service storage on a schedule. | | [Export an index](https://github.com/Azure-Samples/azure-search-dotnet-utilities/blob/main/export-dat) | [azure-search-dotnet-utilities](https://github.com/Azure-Samples/azure-search-dotnet-utilities) | C# console app that partitions and export a large index. |
search Samples Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/samples-python.md
Previously updated : 07/27/2023 Last updated : 08/02/2023 # Python samples for Azure Cognitive Search
A demo repo provides proof-of-concept source code for examples or scenarios show
| Repository | Description | ||-|
+| [**azure-search-vector-python-sample.ipynb**](https://github.com/Azure/cognitive-search-vector-pr/blob/main/demo-python/code/azure-search-vector-image-python-sample.ipynb) | Uses the latest beta release of the **azure.search.documents** library in the Azure SDK for Python to generate embeddings, create and load an index, and run several vector queries. For more vector search Python demos, see [cognitive-search-vector-pr/demo-python](https://github.com/Azure/cognitive-search-vector-pr/blob/main/demo-python). |
| [**ChatGPT + Enterprise data with Azure OpenAI and Cognitive Search**](https://github.com/Azure-Samples/azure-search-openai-demo/blob/main/README.md) | Python code showing how to use Cognitive Search with the large language models in Azure OpenAI. For background, see this Tech Community blog post: [Revolutionize your Enterprise Data with ChatGPT](https://techcommunity.microsoft.com/t5/ai-applied-ai-blog/revolutionize-your-enterprise-data-with-chatgpt-next-gen-apps-w/ba-p/3762087). |
search Search Api Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-api-migration.md
-+ Last updated 10/03/2022
search Search Api Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-api-preview.md
-+ Last updated 06/29/2023
search Search Data Sources Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-data-sources-gallery.md
-+ layout: LandingPage Last updated 10/17/2022
search Search Dotnet Mgmt Sdk Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-dotnet-mgmt-sdk-migration.md
ms.devlang: csharp-+ Last updated 10/03/2022
search Search Faceted Navigation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-faceted-navigation.md
Previously updated : 07/22/2022 Last updated : 08/08/2023
Code in the presentation layer does the heavy lifting in a faceted navigation ex
Facets are dynamic and returned on a query. A search response brings with it all of the facet categories used to navigate the documents in the result. The query executes first, and then facets are pulled from the current results and assembled into a faceted navigation structure.
-In Cognitive Search, facets are one layer deep and can't be hierarchical. If you aren't familiar with faceted navigation structured, the following example shows one on the left. Counts indicate the number of matches for each facet. The same document can be represented in multiple facets.
+In Cognitive Search, facets are one layer deep and can't be hierarchical. If you aren't familiar with faceted navigation structures, the following example shows one on the left. Counts indicate the number of matches for each facet. The same document can be represented in multiple facets.
:::image source="media/search-faceted-navigation/azure-search-facet-nav.png" alt-text="Screenshot of faceted search results.":::
Facets can be calculated over single-value fields as well as collections. Fields
* Short descriptive values (one or two words) that will render nicely in a navigation tree
-The contents of a field, and not the field itself, produces the facets in a faceted navigation structure. If the facet is a string field *Color*, facets will be blue, green, and any other value for that field.
+The values within a field, not the field name itself, produce the facets in a faceted navigation structure. If the facet is a string field named *Color*, the facets are blue, green, and any other value for that field.
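For example, the following hedged sketch (using the `azure-search-documents` Python client) requests facets on a hypothetical *Color* field and prints the returned value counts; the service endpoint, index name, and query API key are placeholders.

```python
# Hedged sketch: endpoint, index name, key, and the "Color" field are placeholders.
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

client = SearchClient(
    endpoint="https://<service-name>.search.windows.net",
    index_name="<index-name>",
    credential=AzureKeyCredential("<query-api-key>"),
)

# Request up to 10 facet values for the Color field alongside the search results.
results = client.search(search_text="*", facets=["Color,count:10"])
for facet in results.get_facets()["Color"]:
    print(facet["value"], facet["count"])
```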
As a best practice, check fields for null values, misspellings or case discrepancies, and single and plural versions of the same word. By default, filters and facets don't undergo lexical analysis or [spell check](speller-how-to-add.md), which means that all values of a "facetable" field are potential facets, even if the words differ by one character. Optionally, you can [assign a normalizer](search-normalizers.md) to a "filterable" and "facetable" field to smooth out variations in casing and characters.
The response for the example above includes the faceted navigation structure at
"concierge" ], "ParkingIncluded": false,
- . . .
+ }
+ ]
+}
``` ## Facets syntax
search Search Get Started Vector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-vector.md
Last updated 07/07/2023
# Quickstart: Use preview REST APIs for vector search queries > [!IMPORTANT]
-> Vector search is in public preview under [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). It's available through the Azure portal, preview REST API, and [alpha SDKs](https://github.com/Azure/cognitive-search-vector-pr#readme).
+> Vector search is in public preview under [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). It's available through the Azure portal, preview REST API, and [beta client libraries](https://github.com/Azure/cognitive-search-vector-pr#readme).
Get started with vector search in Azure Cognitive Search using the **2023-07-01-Preview** REST APIs that create, load, and query a search index. Search indexes now support vector fields in the fields collection. When querying the search index, you can build vector-only queries, or create hybrid queries that target vector fields *and* textual fields configured for filters, sorts, facets, and semantic ranking.
Sample data consists of text and vector descriptions of 108 Azure services, gene
+ Vector data (text embeddings) is used for vector search. Currently, Cognitive Search doesn't generate vectors for you. For this quickstart, vector data was generated previously and copied into the "Upload Documents" request and into the query requests.
- For documents, we generated vector data using demo code that calls Azure OpenAI for the embeddings. Demo code is currently using alpha builds of the Azure SDKs and is available in [Python](https://github.com/Azure/cognitive-search-vector-pr/tree/main/demo-python) and [C#](https://github.com/Azure/cognitive-search-vector-pr/tree/main/demo-dotnet).
+ For documents, we generated vector data using demo code that calls Azure OpenAI for the embeddings. Samples are currently using beta versions of the Azure SDKs and are available in [Python](https://github.com/Azure/cognitive-search-vector-pr/tree/main/demo-python), [C#](https://github.com/Azure/cognitive-search-vector-pr/tree/main/demo-dotnet), and [JavaScript](https://github.com/Azure/cognitive-search-vector-pr/tree/main/demo-javascript).
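 Either way, a hedged sketch of generating an embedding with the (pre-1.0) `openai` Python package against Azure OpenAI follows; the resource endpoint, key, and deployment name are placeholders, and the deployment must match the embedding model used for the documents in the index.

 ```python
 # Hedged sketch: endpoint, key, and deployment name are placeholders.
 import openai

 openai.api_type = "azure"
 openai.api_base = "https://<your-resource>.openai.azure.com"
 openai.api_version = "2023-05-15"
 openai.api_key = "<your-azure-openai-key>"

 response = openai.Embedding.create(
     input="What Azure services support full text search?",
     engine="<embedding-deployment-name>",  # for example, a text-embedding-ada-002 deployment
 )
 query_vector = response["data"][0]["embedding"]
 print(len(query_vector))  # 1536 dimensions for text-embedding-ada-002
 ```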
For queries, we used the "Create Query Embeddings" request that calls Azure OpenAI and outputs embeddings for a search string. If you want to formulate your own vector queries against the sample data of 108 Azure services, provide your Azure OpenAI connection information in the Postman collection variables. Your Azure OpenAI service must have a deployment of an embedding model that's identical to the one used to generate embeddings in your search corpus. For this quickstart, the following parameters were used:
search Search Howto Complex Data Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-complex-data-types.md
tags: complex data types; compound data types; aggregate data types-+ Last updated 01/30/2023
search Search Howto Create Indexers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-create-indexers.md
-+ Last updated 12/06/2022
search Search Howto Index Cosmosdb Gremlin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-index-cosmosdb-gremlin.md
-+ Last updated 01/18/2023
search Search Howto Index Cosmosdb Mongodb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-index-cosmosdb-mongodb.md
description: Set up a search indexer to index data stored in Azure Cosmos DB for
-+ Last updated 01/18/2023
search Search Howto Index Cosmosdb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-index-cosmosdb.md
description: Set up a search indexer to index data stored in Azure Cosmos DB for
-+ Last updated 01/18/2023
search Search Howto Managed Identities Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-managed-identities-cosmos-db.md
Last updated 09/19/2022-+ # Set up an indexer connection to Azure Cosmos DB via a managed identity
search Search Howto Managed Identities Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-managed-identities-data-sources.md
-+ Last updated 12/08/2022
search Search Howto Run Reset Indexers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-run-reset-indexers.md
-+ Last updated 12/06/2022
search Search Howto Schedule Indexers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-schedule-indexers.md
-+ Last updated 12/06/2022
search Search Indexer Howto Access Ip Restricted https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-indexer-howto-access-ip-restricted.md
-+ Last updated 07/19/2023
search Search Indexer Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-indexer-overview.md
-+ Last updated 12/07/2022
search Search Indexer Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-indexer-troubleshooting.md
-+ Last updated 04/04/2023
search Search Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-security-overview.md
-+ Last updated 12/21/2022
search Search What Is Azure Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-what-is-azure-search.md
Last updated 07/10/2023-+ # What's Azure Cognitive Search?
search Troubleshoot Shared Private Link Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/troubleshoot-shared-private-link-resources.md
-+ Last updated 02/22/2023
search Tutorial Multiple Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-multiple-data-sources.md
Last updated 08/29/2022-+ # Tutorial: Index from multiple data sources using the .NET SDK
search Vector Search How To Chunk Documents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-how-to-chunk-documents.md
Last updated 06/29/2023
# Chunking large documents for vector search solutions in Cognitive Search > [!IMPORTANT]
-> Vector search is in public preview under [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). It's available through the Azure portal, preview REST API, and [alpha SDKs](https://github.com/Azure/cognitive-search-vector-pr#readme).
+> Vector search is in public preview under [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). It's available through the Azure portal, preview REST API, and [beta client libraries](https://github.com/Azure/cognitive-search-vector-pr#readme).
This article describes several approaches for chunking large documents so that you can generate embeddings for vector search. Chunking is only required if source documents are too large for the maximum input size imposed by models.
search Vector Search How To Create Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-how-to-create-index.md
Last updated 07/31/2023
# Add vector fields to a search index > [!IMPORTANT]
-> Vector search is in public preview under [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). It's available through the Azure portal, preview REST API, and [alpha SDKs](https://github.com/Azure/cognitive-search-vector-pr#readme).
+> Vector search is in public preview under [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). It's available through the Azure portal, preview REST API, and [beta client libraries](https://github.com/Azure/cognitive-search-vector-pr#readme).
In Azure Cognitive Search, vector data is indexed as *vector fields* within a [search index](search-what-is-an-index.md), using a *vector configuration* to create the embedding space.
search Vector Search How To Generate Embeddings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-how-to-generate-embeddings.md
Last updated 07/10/2023
# Create and use embeddings for search queries and documents > [!IMPORTANT]
-> Vector search is in public preview under [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). It's available through the Azure portal, preview REST API, and [alpha SDKs](https://github.com/Azure/cognitive-search-vector-pr#readme).
+> Vector search is in public preview under [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). It's available through the Azure portal, preview REST API, and [beta client libraries](https://github.com/Azure/cognitive-search-vector-pr#readme).
Cognitive Search doesn't host vectorization models, so one of your challenges is creating embeddings for query inputs and outputs. You can use any embedding model, but this article assumes Azure OpenAI embeddings models. Demos in the [sample repository](https://github.com/Azure/cognitive-search-vector-pr/tree/main) tap the [similarity embedding models](/azure/ai-services/openai/concepts/models#embeddings-models) of Azure OpenAI.
search Vector Search How To Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-how-to-query.md
Last updated 07/31/2023
# Query vector data in a search index > [!IMPORTANT]
-> Vector search is in public preview under [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). It's available through the Azure portal, preview REST API, and [alpha SDKs](https://github.com/Azure/cognitive-search-vector-pr#readme).
+> Vector search is in public preview under [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). It's available through the Azure portal, preview REST API, and [beta client libraries](https://github.com/Azure/cognitive-search-vector-pr#readme).
In Azure Cognitive Search, if you added vector fields to a search index, this article explains how to query those fields. It also explains how to combine vector queries with full text search and semantic search for hybrid query combination scenarios.
All results are returned in plain text, including vectors. If you use Search Exp
+ A search index containing vector fields. See [Add vector fields to a search index](vector-search-how-to-query.md).
-+ Use REST API version 2023-07-01-preview or Azure portal to query vector fields. You can also use alpha versions of the Azure SDKs. For more information, see [this readme](https://github.com/Azure/cognitive-search-vector-pr/blob/main/README.md).
++ Use REST API version 2023-07-01-preview or Azure portal to query vector fields. You can also use [beta client libraries](https://github.com/Azure/cognitive-search-vector-pr/tree/main). + (Optional) If you want to also use [semantic search (preview)](semantic-search-overview.md) and vector search together, your search service must be Basic tier or higher, with [semantic search enabled](semantic-search-overview.md#enable-semantic-search).
api-key: {{admin-api-key}}
You can issue a search request containing multiple query vectors using the "vectors" query parameter. The queries execute concurrently in the search index, each one looking for similarities in the target vector fields. The result set is a union of the documents that matched both vector queries. A common example of this query request is when using models such as [CLIP](https://openai.com/research/clip) for a multi-modal vector search where the same model can vectorize image and non-image content.
-You must use REST for this scenario. Currently, there isn't support for multiple vector queries in the alpha SDKs.
- + `vectors.value` property contains the vector query generated from the embedding model used to create image and text vectors in the search index. + `vectors.fields` contains the image vectors and text vectors in the search index. This is the searchable data. + `vectors.k` is the number of nearest neighbor matches to include in results.
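To make the request shape concrete, here's a minimal PowerShell sketch of a two-vector query against the 2023-07-01-preview Search REST API. The service name, index name, vector field names (`imageVector`, `textVector`), and the truncated embedding values are placeholders rather than values from this article; in practice, each `value` array comes from your embedding model and is much longer.

```powershell
# Minimal sketch of a multi-vector query (2023-07-01-preview REST API).
# Service, index, field names, and the truncated embeddings are placeholders.
$searchService = "my-search-service"
$indexName     = "my-multimodal-index"
$apiKey        = "<query-or-admin-api-key>"

$body = @{
    vectors = @(
        @{ value = @(0.011, 0.024, 0.137); fields = "imageVector"; k = 5 }   # image embedding (truncated)
        @{ value = @(0.093, 0.051, 0.002); fields = "textVector"; k = 5 }    # text embedding (truncated)
    )
    select = "title, description"
} | ConvertTo-Json -Depth 5

Invoke-RestMethod -Method Post `
    -Uri "https://$searchService.search.windows.net/indexes/$indexName/docs/search?api-version=2023-07-01-preview" `
    -Headers @{ "api-key" = $apiKey } `
    -ContentType "application/json" `
    -Body $body
```

The result set is the union of the documents matched by both vector queries, as described above.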
search Vector Search Index Size https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-index-size.md
Last updated 07/07/2023
# Vector index size limit > [!IMPORTANT]
-> Vector search is in public preview under [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). It's available through the Azure portal, preview REST API, and [alpha SDKs](https://github.com/Azure/cognitive-search-vector-pr#readme).
+> Vector search is in public preview under [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). It's available through the Azure portal, preview REST API, and [beta client libraries](https://github.com/Azure/cognitive-search-vector-pr#readme).
When you index documents with vector fields, Azure Cognitive Search constructs internal vector indexes using the algorithm parameters that you specified for the field. Because Cognitive Search imposes limits on vector index size, it's important that you know how to retrieve metrics about the vector index size, and how to estimate the vector index size requirements for your use case.
search Vector Search Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-overview.md
Last updated 07/28/2023
# Vector search within Azure Cognitive Search > [!IMPORTANT]
-> Vector search is in public preview under [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). It's available through the Azure portal, preview REST API, and [alpha SDKs](https://github.com/Azure/cognitive-search-vector-pr#readme).
+> Vector search is in public preview under [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). It's available through the Azure portal, preview REST API, and [beta client libraries](https://github.com/Azure/cognitive-search-vector-pr#readme).
This article is a high-level introduction to vector search in Azure Cognitive Search. It also explains integration with other Azure services and covers [terms and concepts](#vector-search-concepts) related to vector search development.
You can index vector data as fields in documents alongside textual and other typ
Azure Cognitive Search doesn't generate vector embeddings for your content. You need to provide the embeddings yourself by using a service such as Azure OpenAI. See [How to generate embeddings](./vector-search-how-to-generate-embeddings.md) to learn more.
-Vector search does not support customer-managed keys (CMK) at this time. This means you will not be able to add vector fields to a index with CMK enabled.
+Vector search does not support customer-managed keys (CMK) at this time. This means you will not be able to add vector fields to an index with CMK enabled.
## Availability and pricing
search Vector Search Ranking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-ranking.md
Last updated 07/14/2023
# Vector query execution and scoring in Azure Cognitive Search > [!IMPORTANT]
-> Vector search is in public preview under [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). It's available through the Azure portal, preview REST API, and [alpha SDKs](https://github.com/Azure/cognitive-search-vector-pr#readme).
+> Vector search is in public preview under [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). It's available through the Azure portal, preview REST API, and [beta client libraries](https://github.com/Azure/cognitive-search-vector-pr#readme).
This article is for developers who need a deeper understanding of vector query execution and ranking in Azure Cognitive Search.
search Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/whats-new.md
Previously updated : 08/01/2023 Last updated : 08/02/2023
Learn about the latest updates to Azure Cognitive Search functionality, docs, an
| Item&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Type | Description | |--||--|
-| [**azure-search-vector-sample.js**](https://github.com/Azure/cognitive-search-vector-pr/blob/main/demo-javascript/code/azure-search-vector-sample.js) | Sample | Uses Node.js and the **@azure/search-documents 12.0.0-beta.2** library in the Azure SDK for JavaScript to generate embeddings, create and load an index, and run several vector queries. |
+| [**Vector demo (Azure SDK for JavaScript)**](https://github.com/Azure/cognitive-search-vector-pr/blob/main/demo-javascript/code/azure-search-vector-sample.js) | Sample | Uses Node.js and the **@azure/search-documents 12.0.0-beta.2** library in the Azure SDK for JavaScript to generate embeddings, create and load an index, and run several vector queries. |
+| [**Vector demo (Azure SDK for .NET)**](https://github.com/Azure/cognitive-search-vector-pr/blob/main/demo-dotnet/readme.md) | Sample | Uses the **Azure.Search.Documents 11.5.0-beta.3** library to generate embeddings, create and load an index, and run several vector queries. |
+| [**Vector demo (Azure SDK for Python)**](https://github.com/Azure/cognitive-search-vector-pr/blob/main/demo-python/code/azure-search-vector-image-python-sample.ipynb) | Sample | Uses the latest beta release of the **azure.search.documents** library in the Azure SDK for Python to generate embeddings, create and load an index, and run several vector queries. For more vector search Python demos, see [cognitive-search-vector-pr/demo-python](https://github.com/Azure/cognitive-search-vector-pr/blob/main/demo-python). |
## June 2023
security Infrastructure Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/infrastructure-monitoring.md
editor: TomSh ms.assetid: 61e95a87-39c5-48f5-aee6-6f90ddcd336e-+ na Last updated 06/28/2018 - # Azure infrastructure monitoring
To learn more about what Microsoft does to secure the Azure infrastructure, see:
- [Azure production operations and management](infrastructure-operations.md) - [Azure infrastructure integrity](infrastructure-integrity.md) - [Azure customer data protection](protection-customer-data.md)+
sentinel Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/billing.md
Any other services you use could have associated costs.
After you enable Microsoft Sentinel on a Log Analytics workspace consider these configuration options: - Retain all data ingested into the workspace at no charge for the first 90 days. Retention beyond 90 days is charged per the standard [Log Analytics retention prices](https://azure.microsoft.com/pricing/details/monitor/).-- Specify different retention settings for individual data types. Learn about [retention by data type](../azure-monitor/logs/data-retention-archive.md#set-retention-and-archive-policy-by-table).
+- Specify different retention settings for individual data types. Learn about [retention by data type](../azure-monitor/logs/data-retention-archive.md#configure-retention-and-archive-at-the-table-level).
- Enable long-term retention for your data and have access to historical logs by enabling archived logs. Data archive is a low-cost retention layer for archival storage. It's charged based on the volume of data stored and scanned. Learn how to [configure data retention and archive policies in Azure Monitor Logs](../azure-monitor/logs/data-retention-archive.md). Archived logs are in public preview. The 90 day retention doesn't apply to basic logs. If you want to extend data retention for basic logs beyond eight days, store that data in archived logs for up to seven years.
sentinel Monitor Your Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/monitor-your-data.md
Title: Visualize your data using Azure Monitor Workbooks in Microsoft Sentinel | Microsoft Docs
+ Title: Visualize your data using workbooks in Microsoft Sentinel | Microsoft Docs
description: Learn how to visualize your data using workbooks in Microsoft Sentinel.
service-bus-messaging Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/policy-reference.md
Title: Built-in policy definitions for Azure Service Bus Messaging description: Lists Azure Policy built-in policy definitions for Azure Service Bus Messaging. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/03/2023 Last updated : 08/08/2023
service-fabric Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/policy-reference.md
Previously updated : 08/03/2023 Last updated : 08/08/2023 # Azure Policy built-in definitions for Azure Service Fabric
service-health Service Health Portal Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-health/service-health-portal-update.md
Last updated 06/10/2022
We're updating the Azure Service Health portal experience. The new experience lets users engage with service events and manage actions to maintain the business continuity of impacted applications.
-We are rolling out the new experience in phases. Some users will see the updated experience below. Others will still see the [classic Service Health portal experience](service-health-overview.md).
+The new experience will be rolled out in phases. Some users will see the updated experience, while others will still see the [classic Service Health portal experience](service-health-overview.md). In the new experience, you can select **Switch to Classic** to switch back to the old experience.
+ ## Highlights of the new experience -- **Tenant level view** - Users who are Tenant Admins can now see Service Issues that happen at a Tenant level. Service Issues blade and Health History blades are updated to show incidents both at Tenant and Subscription levels. Users can filter on the scope (Tenant or Subscription) within the blades. The scope column indicates when an event is at the Tenant or Subscription level. Classic view does not support tenant-level events. Tenant-level events are only available in the new user interface.-- **Enhanced Map** - The Service Issues blade shows an enhanced version of the map with all the user services across the world. This version helps you find services that might be impacted by an outage easily.-- **Issues Details** - The issues details look and feel has been updated, for better readability.-- **Removal of personalized dashboard** - Users can no longer pin a personalized map to the dashboard. This feature has been deprecated in the new experience.
+##### Health Alerts Blade
+The Health Alerts blade has been updated for better usability. Users can search for and sort their alert rules by name. Users can also group their alert rules by subscription and status.
-## Coming soon
-The following user interfaces are updated to the new experience.
+In the updated Health Alerts experience, users can select an alert rule to view additional details and its firing history.
-> [!div class="checklist"]
-> * Security Advisories
-> * Planned Maintenance
-> * Health Advisories
-## Service issues window
+>[!Note]
+>The classic experience for the Health Alerts blade will be retired. Users will not be able to switch back from the new experience once it is rolled out.
-Groups of users will be automatically switched to the new Service Health experience over time. In the new experience, you can select \*\*Switch to Classic\*\* to switch back to the old experience.
+##### Tenant Level View
+ Users with [tenant admin access](admin-access-reference.md#roles-with-tenant-admin-access) can now view events at the tenant scope. The Service Issues, Health Advisories, Security Advisories, and Health History blades are updated to show events both at tenant and subscription levels.
-In the new experience, you can now see events at both Tenant and Subscription level scope. If you have [tenant admin access](admin-access-reference.md#roles-with-tenant-admin-access), you can view events at the Tenant scope.
+##### Filtering and Sorting
+Users can filter on the scope (tenant or subscription) within the blades. The scope column indicates when an event is at the tenant or subscription level. Classic view does not support tenant-level events. Tenant-level events are only available in the new user interface.
-If you have Subscription access, then you can view events that impact all the subscriptions you have access to.
+##### Enhanced Map
+The Service Issues blade shows an enhanced version of the map with all the user services across the world. This version helps you easily find services that might be impacted by an outage.
-You can use the scope column in the details view to filter on scope (Tenant vs Subscriber).
+##### Issues Details
+The look and feel of the issue details has been updated for better readability.
+##### Removal of Personalized Dashboard
+Users can no longer pin a personalized map to the dashboard. This feature has been deprecated in the new experience.
-## Health history window
+## Coming Soon
-You can now see events at both Tenant and Subscription level scope in Health History blade if you have Tenant level administrator access. The scope column in the details view indicates if the incident is a Tenant or Subscription level incident. You can also filter on scope (Tenant vs Subscriber).
+The following user interface will be updated to the new experience.
+> [!div class="checklist"]
+> * Planned Maintenance
site-recovery Site Recovery Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-whats-new.md
For Site Recovery components, we support N-4 versions, where N is the latest rel
**Update** | **Unified Setup** | **Replication appliance / Configuration server** | **Mobility service agent** | **Site Recovery Provider** | **Recovery Services agent** | | | | |
+[Rollup 68](https://support.microsoft.com/topic/a81c2d22-792b-4cde-bae5-dc7df93a7810) | 9.55.6765.1 | 9.55.6765.1 / 5.1.8095.0 | 9.55.6765.1 | 5.23.0720.4 (VMware) & 5.1.8095.0 (Hyper-V) | 2.0.9261.0 (VMware) & 2.0.9260.0 (Hyper-V)
[Rollup 67](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f) | 9.54.6682.1 | 9.54.6682.1 / 5.1.8095.0 | 9.54.6682.1 | 5.23.0428.1 (VMware) & 5.1.8095.0 (Hyper-V) | 2.0.9261.0 (VMware) & 2.0.9260.0 (Hyper-V) [Rollup 66](https://support.microsoft.com/en-us/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5) | 9.53.6615.1 | 9.53.6615.1 / 5.1.8095.0 | 9.53.6615.1 | 5.1.8103.0 (Modernized VMware), 5.1.8095.0 (Hyper-V) & 5.23.0210.5 (Classic VMware) | 2.0.9260.0 [Rollup 65](https://support.microsoft.com/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0) | 9.52.6522.1 | 9.52.6522.1 / 5.1.7870.0 | 9.52.6522.1 | 5.1.7870.0 (VMware) & 5.1.7882.0 (Hyper-V) | 2.0.9259.0 [Rollup 64](https://support.microsoft.com/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) | 9.51.6477.1 | 9.51.6477.1 / 5.1.7802.0 | 9.51.6477.1 | 5.1.7802.0 | 2.0.9257.0
-[Rollup 63](https://support.microsoft.com/topic/update-rollup-63-for-azure-site-recovery-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | 9.50.6419.1 | 9.50.6419.1 / 5.1.7626.0 | 9.50.6419.1 | 5.1.7626.0 | 2.0.9249.0
+ [Learn more](service-updates-how-to.md) about update installation and support.
+## Updates (August 2023)
+
+### Update Rollup 68
+
+[Update rollup 68](https://support.microsoft.com/topic/a81c2d22-792b-4cde-bae5-dc7df93a7810) provides the following updates:
+
+**Update** | **Details**
+ |
+**Providers and agents** | Updates to Site Recovery agents and providers as detailed in the rollup KB article.
+**Issue fixes/improvements** | Many fixes and improvements as detailed in the rollup KB article.
+**Azure VM disaster recovery** | Added support for RHEL 8.8 Linux distros.
+**VMware VM/physical disaster recovery to Azure** | Added support for RHEL 8.8 Linux distros.
+ ## Updates (May 2023) ### Update Rollup 67
site-recovery Vmware Physical Mobility Service Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-mobility-service-overview.md
During a push installation of the Mobility service, the following steps are perf
1. As part of the agent installation, the Volume Shadow Copy Service (VSS) provider for Azure Site Recovery is installed. The VSS provider is used to generate application-consistent recovery points. - If the VSS provider installation fails, the agent installation will fail. To avoid failure of the agent installation, use [version 9.23](https://support.microsoft.com/help/4494485/update-rollup-35-for-azure-site-recovery) or higher to generate crash-consistent recovery points and do a manual install of the VSS provider.
+### Mobility service agent version 9.55 and higher
+
+1. The modernized architecture of the mobility agent is the default for version 9.55 and above. Follow the instructions [here](#install-the-mobility-service-using-ui-modernized) to install the agent.
+2. To install the modernized architecture of the mobility agent on versions 9.54 and below, follow the instructions [here](#install-the-mobility-service-using-command-prompt-modernized).
+ ## Install the Mobility service using UI (Modernized)
Locate the installer files for the server's operating system using the followi
- On the appliance, go to the folder *E:\Software\Agents*. - Copy the installer corresponding to the source machine's operating system and place it on your source machine in a local folder, such as *C:\Program Files (x86)\Microsoft Azure Site Recovery*. + **Use the following steps to install the mobility service:**
+>[!NOTE]
+> If you're installing agent version 9.54 or below, follow the section [here](#install-the-mobility-service-using-command-prompt-modernized). For agent version 9.55 and above, continue to follow the steps below.
+ 1. Copy the installation file to the location *C:\Program Files (x86)\Microsoft Azure Site Recovery*, and run it. This will launch the installer UI: ![Image showing Install UI option for Mobility Service](./media/vmware-physical-mobility-service-overview-modernized/mobility-service-install.png)
spring-apps Access App Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/access-app-virtual-network.md
export SERVICE_RUNTIME_RG=$(az spring show \
export IP_ADDRESS=$(az network lb frontend-ip list \ --lb-name kubernetes-internal \ --resource-group $SERVICE_RUNTIME_RG \
- --query "[0].privateIpAddress" \
+ --query "[0].privateIPAddress" \
--output tsv) ```
spring-apps How To Enterprise Deploy Static File https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-enterprise-deploy-static-file.md
For more information, see the [Use a customized server configuration file](#use-
The [Paketo buildpacks samples](https://github.com/paketo-buildpacks/samples/tree/main/web-servers) demonstrate common use cases for several different application types, including the following use cases: - Serving static files with a default server configuration file using `BP_WEB_SERVER` to select either [HTTPD](https://github.com/paketo-buildpacks/samples/blob/main/web-servers/no-config-file-sample/HTTPD.md) or [NGINX](https://github.com/paketo-buildpacks/samples/blob/main/web-servers/no-config-file-sample/NGINX.md).-- Using Node Package Manager to build a [React app](https://github.com/paketo-buildpacks/samples/tree/main/web-servers/javascript-frontend-sample) into static files that a web server can serve. Use the following steps:
+- Using Node Package Manager to build a [React app](https://github.com/paketo-buildpacks/samples/tree/main/web-servers/react-frontend-sample) into static files that a web server can serve. Use the following steps:
1. Define a script under the `scripts` property of the *package.json* file that builds your production-ready static assets. For React, it's `build`. 1. Find out where static assets are stored after the build script runs. For React, static assets are stored in `./build` by default. 1. Set `BP_NODE_RUN_SCRIPTS` to the name of the build script.
spring-apps Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/policy-reference.md
Title: Built-in policy definitions for Azure Spring Apps description: Lists Azure Policy built-in policy definitions for Azure Spring Apps. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/03/2023 Last updated : 08/08/2023
static-web-apps Branch Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/branch-environments.md
You can configure your site to deploy every change made to branches that aren't
## Configuration
-To enable stable URL environments, make the following changes to your [configuration .yml file](build-configuration.md?tabs=github-actions).
+To enable stable URL environments, make the following changes to your [configuration.yml file](build-configuration.md?tabs=github-actions).
- Set the `production_branch` input to your production branch name on the `static-web-apps-deploy` job in GitHub action or on the AzureStaticWebApp task. This action ensures changes to your production branch are deployed to the production environment, while changes to other branches are deployed to a preview environment. - List the branches you want to deploy to preview environments in the trigger array in your workflow configuration so that changes to those branches also trigger the GitHub Actions or Azure Pipelines deployment.
storage-mover Agent Register https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-mover/agent-register.md
Previously updated : 07/24/2023 Last updated : 08/07/2023 <!--
The agent displays detailed progress. Once the registration is complete, you're
## Authentication and Authorization
-To accomplish seamless authentication with Azure and authorization to various Azure resources, the agent is registered with two Azure
+To accomplish seamless authentication with Azure and authorization to various Azure resources, the agent is registered with the following Azure
-1. Azure Storage Mover (Microsoft.StorageMover)
-1. Azure ARC (Microsoft.HybridCompute)
+- Azure Storage Mover (Microsoft.StorageMover)
+- Azure ARC (Microsoft.HybridCompute)
### Azure Storage Mover service
The agent is automatically authorized to converse with the Storage Mover service
#### Just-in-time authorization
-Perhaps the most important resource the agent needs to be authorized for access is the Azure Storage that is the target for a migration job. Authorization takes place through [Role-based access control](../role-based-access-control/overview.md). For an Azure blob container as a target, the registered agent's managed identity is assigned to the built-in role "Storage Blob Data Contributor" of the target container (not the whole storage account).
+For a migration job, access to the target endpoint is perhaps the most important resource for which an agent must be authorized. Authorization takes place through [Role-based access control](../role-based-access-control/overview.md). For an Azure blob container as a target, the registered agent's managed identity is assigned to the built-in role `Storage Blob Data Contributor` of the target container (not the whole storage account). Similarly, when accessing an Azure file share target, the registered agent's managed identity is assigned to the built-in role `Storage File Data Privileged Contributor`.
-This assignment is made in the admin's sign-in context in the Azure portal. Therefore, the admin must be a member of the role-based access control (RBAC) control plane role "Owner" for the target container. This assignment is made just-in-time when you start a migration job. It is at this point that you've selected an agent to execute a migration job. As part of this start action, the agent is given permissions to the data plane of the target container. The agent isn't authorized to perform any management plane actions, such as deleting the target container or configuring any features on it.
+These assignments are made in the admin's sign-in context in the Azure portal. Therefore, the admin must be a member of the role-based access control (RBAC) control plane role "Owner" for the target container. This assignment is made just-in-time when you start a migration job. It is at this point that you've selected an agent to execute a migration job. As part of this start action, the agent is given permissions to the data plane of the target container. The agent isn't authorized to perform any management plane actions, such as deleting the target container or configuring any features on it.
> [!WARNING] > Access is granted to a specific agent just-in-time for running a migration job. However, the agent's authorization to access the target is not automatically removed. You must either manually remove the agent's managed identity from a specific target or unregister the agent to destroy the service principal. This action removes all target storage authorization as well as the ability of the agent to communicate with the Storage Mover and Azure ARC services. ## Next steps
-Create a project to collate the different source shares that need to be migrated together.
+Define your source and target endpoints in preparation for migrating your data.
> [!div class="nextstepaction"]
-> [Create and manage a project](project-manage.md)
+> [Create and manage source and target endpoints](endpoint-manage.md)
storage-mover Endpoint Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-mover/endpoint-manage.md
+
+ Title: How to manage Azure Storage Mover endpoints
+description: Learn how to manage Azure Storage Mover endpoints
++++ Last updated : 08/07/2023+++
+<!--
+!########################################################
+STATUS: DRAFT
+
+CONTENT:
+
+REVIEW Stephen/Fabian: Reviewed - Stephen
+REVIEW Engineering: not reviewed
+EDIT PASS: started
+
+Initial doc score: 93
+Current doc score: 100 (3269 words and 0 issues)
+
+!########################################################
+-->
+
+# Manage Azure Storage Mover endpoints
+
+While the term *endpoint* is often used in networking, it's used in the context of the Storage Mover service to describe a storage location with a high level of detail.
+
+A storage mover endpoint is a resource that contains the path to either a source or destination location and other relevant information. Endpoints are used in the creation of a job definition. Only certain types of endpoints may be used as a source or a target, respectively.
+
+This article guides you through the creation and management of Azure Storage Mover endpoints. To follow these examples, you need a top-level storage mover resource. If you haven't yet created one, follow the steps within the [Create a Storage Mover resource](storage-mover-create.md) article before continuing.
+
+After you complete the steps within this article, you'll be able to create and manage endpoints using the Azure portal and Azure PowerShell.
+
+## Endpoint resource overview
+
+Within the Azure Storage Mover resource hierarchy, a migration project is used to organize migration jobs into logical tasks or components. A migration project in turn contains at least one job definition, which describes both the source and target locations for your migration project. The [Understanding the Storage Mover resource hierarchy](resource-hierarchy.md) article contains more detailed information about the relationships between a Storage Mover, its endpoints, and its projects.
+
+Because a migration requires both a well-defined source and target, endpoints are parented to the top-level storage mover resource. This placement allows you to reuse endpoints across any number of job definitions. While there's only a single endpoint resource, the properties of each endpoint may vary based on its type. For example, NFS (Network File System) shares, SMB (Server Message Block) shares, and Azure Storage blob container endpoints each require fundamentally different information.
++
+### SMB endpoints
+
+ SMB uses the ACL (access control list) concept and user-based authentication to provide access to shared files for selected users. To maintain security, Storage Mover relies on Azure Key Vault integration to securely store and tightly control access to user credentials and other secrets. During a migration, storage mover agent resources connect to your SMB endpoints with Key Vault secrets rather than with unsecure hard-coded credentials. This approach greatly reduces the chance that secrets may be accidentally leaked.
+
+After your local file share source is configured, add secrets for both a username and a password to your Key Vault. You need to supply both your Key Vault's name or Uniform Resource Identifier (URI), and the names or URIs of the credential secrets when creating your SMB endpoints.
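+If you prefer to script this step, the following is a minimal Azure PowerShell sketch. The vault and secret names (`demoVault`, `demoUser`, `demoPassword`) and the sample username are placeholders; align them with the secret URIs you plan to reference when creating the endpoint.
+
+```powershell
+# Minimal sketch: store the SMB credential as two Key Vault secrets.
+# The vault name, secret names, and username value below are placeholders.
+$vaultName = "demoVault"
+
+# Username secret (for example, DOMAIN\user for a domain-joined share).
+Set-AzKeyVaultSecret -VaultName $vaultName -Name "demoUser" `
+    -SecretValue (ConvertTo-SecureString -String "CONTOSO\migrationUser" -AsPlainText -Force)
+
+# Password secret, prompted for interactively so it doesn't land in your command history.
+Set-AzKeyVaultSecret -VaultName $vaultName -Name "demoPassword" `
+    -SecretValue (Read-Host -Prompt "Enter the SMB share password" -AsSecureString)
+```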
+
+Agent access to both your Key Vault and target storage resources is controlled through the Azure RBAC (role-based access control) authorization system. This system allows you to define access based on attributes associated with managed identities, security principals, and resources. It's important to note that the required RBAC role assignments are automatically applied when SMB endpoints are created within the Azure portal. However, any endpoint created programmatically requires you to make the following assignments manually, as sketched in the example after the table:
+
+|Role |Resource |
+|--|--|
+|*Key Vault Secrets User* | The Key Vault resource used to store your SMB source's credential |
+|*Storage File Data Privileged Contributor* | Your target file share resource |
+
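+The following is a rough Azure PowerShell sketch of those two assignments. The object ID and resource scopes are placeholders; substitute the registered agent's managed identity and your own Key Vault and file share resource IDs.
+
+```powershell
+# Rough sketch: grant the agent's managed identity the two roles from the preceding table.
+# All three values below are placeholders.
+$agentPrincipalId = "<object ID of the registered agent's managed identity>"
+$keyVaultScope    = "<resource ID of the Key Vault that stores the SMB credential secrets>"
+$fileShareScope   = "<resource ID of the target Azure file share>"
+
+# Lets the agent read the username and password secrets.
+New-AzRoleAssignment -ObjectId $agentPrincipalId `
+    -RoleDefinitionName "Key Vault Secrets User" `
+    -Scope $keyVaultScope
+
+# Lets the agent write files, folders, and metadata to the target share.
+New-AzRoleAssignment -ObjectId $agentPrincipalId `
+    -RoleDefinitionName "Storage File Data Privileged Contributor" `
+    -Scope $fileShareScope
+```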
+There are many use cases that require preserving metadata values such as file and folder timestamps, ACLs, and file attributes. Storage Mover supports the same level of file fidelity as the underlying Azure file share. Azure Files in turn [supports a subset](/rest/api/storageservices/set-file-properties) of the [NTFS file properties](/windows/win32/fileio/file-attribute-constants). The following table represents common metadata that is migrated:
+
+|Metadata property |Outcome |
+|--|--|
+|Directory structure |The original directory structure of the source is preserved on the target share. |
+|Access permissions |Permissions on the source file or directory are preserved on the target share. |
+|Symbolic links |Symbolic links on the source are preserved and mapped on the target share. |
+|Create timestamp |The original create timestamp of the source file is preserved on the target share. |
+|Change timestamp |The original change timestamp of the source file is preserved on the target share. |
+|Modified timestamp |The original modified timestamp of the source file is preserved on the target share. |
+|Last access timestamp |The last access timestamp isn't supported for files or directories on the target share. |
+|Other metadata |Other metadata of the source item is preserved if the target share supports it. |
+
+### NFS endpoints
+
+Using the NFS protocol, you can transfer files between computers running Windows and other non-Windows operating systems, such as Linux or UNIX. The current Azure Storage Mover release supports migrations from NFS shares on a NAS or server device within your network to an Azure blob container only.
+
+Unlike SMB, NFS doesn't utilize the ACL concept or user-based authentication. This difference allows NFS endpoints to be accessed without Azure Key Vault integration. In addition, Storage Mover processes metadata differently for both NFS mount sources and their blob container target counterparts. The following table identifies outcomes for common metadata encountered during migration:
+
+|Metadata property |Outcome |
+|--|-|
+|Directory structure |Blob containers don't have a traditional file system, but instead support "virtual folders." The path of a file within a folder is prepended to the file name and placed in a flat list within the blob container. Empty folders are represented as an empty blob in the target container. As with files, the source folder's metadata is persisted in the custom metadata field of this blob.|
+|Access permissions |Permissions on the source file are preserved in custom blob metadata but don't work as they did within the source.|
+|Symbolic links |A target file is migrated if its symbolic link can be resolved. A symbolic link that can't be resolved is logged as a failed file.|
+|Create timestamp |The original timestamp of the source file is preserved as custom blob metadata. The blob-native timestamp reflects the time at which the file was migrated.|
+|Change timestamp |The original timestamp of the source file is preserved as custom blob metadata. There's no blob-native timestamp of this type.|
+|Modified timestamp |The original timestamp of the source file is preserved as custom blob metadata. The blob-native timestamp reflects the time at which the file was migrated.|
+|Last accessed timestamp|This timestamp is preserved as custom blob metadata if it exists on the source file. There's no blob-native timestamp of this type.|
+|Other metadata |Other metadata is persisted in a custom metadata field of the target blob if it exists on source items. Only 4 KiB of metadata can be stored. Metadata of a larger size isn't migrated.|
+
+## Create an endpoint
+
+Before you can create a job definition, you need to create endpoints for your source and target data sources.
+
+> [!IMPORTANT]
+> If you have not yet deployed a Storage Mover resource using the resource provider, you'll need to create your top level resource before attempting the steps in this example.
+
+Azure Storage Mover supports migration scenarios using NFS and SMB protocols. The steps to create both endpoints are similar. The key differentiator between the creation of NFS- and SMB-enabled endpoints is the use of Azure Key Vault to store the shared credential for SMB resources. When a migration job is run, the agents use the shared credential stored within Key Vault. Access to Key Vault secrets is managed by granting an RBAC role assignment to the agent's managed identity.
+
+### Create a source endpoint
+
+Source endpoints identify locations from which your data is migrated. Source endpoints are used to define the origin of the data specified within your migration project. Azure Storage Mover handles source locations in the form of file shares. These locations may reside on Network Attached Storage (NAS), a server, or even on a workstation. Common protocols for file shares are SMB (Server Message Block) and NFS (Network File System).
+
+The following steps describe the process of creating a source endpoint.
+
+### [Azure portal](#tab/portal)
+
+ 1. In the [Azure portal](https://portal.azure.com), navigate to your **Storage mover** resource page. Select **Storage endpoints** from within the navigation pane to access your endpoints.
+
+ :::image type="content" source="media/endpoint-manage/storage-mover.png" alt-text="Screenshot of the Storage Mover resource page within the Azure portal showing the location of the Storage Endpoints link." lightbox="media/endpoint-manage/storage-mover-lrg.png":::
+
+ On the **Storage endpoints** page, the default **Storage endpoints** view displays the names of any provisioned source endpoints and a summary of their associated properties. You can select **Target endpoints** to view the corresponding destination endpoints. You can also filter the results further by selecting either the **Protocol version** or **Host** filter and selecting the appropriate option.
+
+ :::image type="content" source="media/endpoint-manage/endpoint-filter.png" alt-text="Screenshot of the Storage Endpoints page within the Azure portal showing the location of the endpoint filters." lightbox="media/endpoint-manage/endpoint-filter-lrg.png":::
+
+ 1. Select **Create endpoint** to expand the **Endpoint type** menu. Select **Create source endpoint** to open the **Create source endpoint** pane as shown in the following image.
+
+ :::image type="content" source="media/endpoint-manage/endpoint-source-create.png" alt-text="Screenshot of the Endpoint Overview page highlighting the location of the Create Endpoint link." lightbox="media/endpoint-manage/endpoint-source-create-lrg.png":::
+
+ 1. Within the **Create source endpoint** pane, provide values for the required **Host name or IP** and **Share name** values. The host name or IP address value must be either an IPv4 address, or fully qualified domain or host name. You may also add an optional **Description** value of up to 1024 characters in length. Next, select **Protocol version** to expand the protocol selection menu and select the appropriate option for your source target.
+
+ Storage mover agents use secrets stored within Key Vault to connect to SMB endpoints. When you create an SMB source endpoint, you need to provide both the name of the Key Vault containing the secrets and the names of the secrets themselves.
+
+ First, select **Key vault** to expand the menu and select the name of the Key Vault containing your secrets. You can supply a value with which to filter the list of Key Vaults if necessary.
+
+ :::image type="content" source="media/endpoint-manage/key-vault.png" alt-text="Screenshot of the Create Source pane showing the drop-down list containing a resource group's Key Vaults.":::
+
+ After you've selected the appropriate Key Vault, you can supply values for the required **Select secret for username** and **Select secret for password** fields. These values can be supplied by providing the URI to the secrets, or by selecting the secrets from a list. Select the **Select secret** button to enable the menu and select the username and password values. Alternatively, you can enable the **Enter secret from URI** option and supply the appropriate URI to the username and password secret.
+
+ The values for host and share name are concatenated to form the full migration source path. The path value is displayed in the **Full source path** field. Copy the path provided and verify that you're able to access it before committing your changes. Finally, when you've confirmed that all values are correct and that you can access the source path, select **Create** to add your new endpoint.
+
+ :::image type="content" source="media/endpoint-manage/secrets.png" alt-text="Screenshot of the Create Endpoint pane showing the location of the Secrets options." lightbox="media/endpoint-manage/secrets-lrg.png":::
+
+ Your new endpoint is deployed and now appears within the list of source endpoints as shown in the following sample image.
+
+ :::image type="content" source="media/endpoint-manage/endpoint-added.png" alt-text="Screenshot of the Endpoint Overview page with the newly created endpoint displayed." lightbox="media/endpoint-manage/endpoint-added-lrg.png":::
+
+### [PowerShell](#tab/powershell)
+
+ The `New-AzStorageMoverSmbEndpoint` and `New-AzStorageMoverNfsEndpoint` cmdlets are used to create a new endpoint within a [storage mover resource](storage-mover-create.md) you previously deployed.
+
+ If you haven't yet installed the `Az.StorageMover` module:
+
+ ```powershell
+ ## Ensure you are running the latest version of PowerShell 7
+ $PSVersionTable.PSVersion
+
+ ## Your local execution policy must be set to at least remote signed or less restrictive
+ Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser
+
+ ## If you don't have the general Az PowerShell module, install it first
+ Install-Module -Name Az -Scope CurrentUser -Repository PSGallery -Force
+
+ ## Lastly, the Az.StorageMover module is not installed by default and must be manually requested.
+ Install-Module -Name Az.StorageMover -Scope CurrentUser -Repository PSGallery -Force
+ ```
+
+ See the [Install Azure PowerShell](/powershell/azure/install-azure-powershell) article for more help installing Azure PowerShell.
+
+ > [!CAUTION]
+ > Renaming endpoint resources is not supported. It's a good idea to ensure that you've named the endpoint appropriately, since you won't be able to change its name after it's provisioned. You may, however, create a new endpoint with the same properties and a different name as shown in a later section. Refer to the [resource naming convention](../azure-resource-manager/management/resource-name-rules.md#microsoftstoragesync) to choose a supported name.
+
+ 1. It's always a good idea to create and use variables to store lengthy or potentially complex strings. Copy the sample code block and supply values for the required parameters. The `-Description` parameter is optional and is added in the [View and edit an endpoint's properties](#view-and-edit-an-endpoints-properties) section.
+
+ ```powershell
+
+ ## Set variables
+ $subscriptionID = "[Subscription GUID]"
+ $resourceGroupName = "[Resource group name]"
+ $storageMoverName = "[Storage mover resource's name]"
+ $sourceHost = "[Source share's host name or IP address]"
+ $sourceShare = "[Source share's name]"
+    $targetResourceID = "/subscriptions/[GUID]/resourceGroups/demoResourceGroup/"
+ $targetResourceID += "providers/Microsoft.Storage/storageAccounts/demoAccount"
+
+ ## For SMB endpoints
+ $smbFileshare = "[Target fileshare name]"
+ $usernameURI = "https://demo.vault.azure.net/secrets/demoUser"
+ $passwordURI = "https://demo.vault.azure.net/secrets/demoPassword"
+
+ ## For NFS endpoints
+ $nfsContainer = "[Blob container target]"
+
+ ```
+
+ 1. Connect to your Azure account by using the `Connect-AzAccount` cmdlet. Specify the ID for your subscription by providing a value for the `-Subscription` parameter as shown in the example.
+
+ ```powershell
+
+ Connect-AzAccount -Subscription $subscriptionID
+
+ ```
+
+ 1. After you've successfully connected, you can create new source endpoint resources. Depending on your requirement, you can use the `New-AzStorageMoverSmbEndpoint` cmdlet to create an SMB endpoint as shown in the following example.
+
+ ```powershell
+
+ New-AzStorageMoverSmbEndpoint `
+ -Name "smbSourceEndpoint"
+ -ResourceGroupName $resourceGroupName `
+ -StorageMoverName $storageMoverName `
+ -Host $sourceHost `
+ -ShareName $sourceShare `
+ -CredentialsUsernameUri $usernameURI `
+ -CredentialsPasswordUri $passwordURI
+
+ ```
+
+    Alternatively, you can create a new NFS source endpoint by using the `New-AzStorageMoverNfsEndpoint` cmdlet as shown.
+
+ ```powershell
+
+ New-AzStorageMoverNfsEndpoint `
+ -Name "nfsSourceEndpoint" `
+ -ResourceGroupName $resourceGroupName `
+ -StorageMoverName $storageMoverName `
+ -Host $sourceHost `
+    -Export $sourceShare
+
+ ```
+
+ The following sample response contains the `ProvisioningState` property, which indicates that the endpoint was successfully created.
+
+ ```Response
+
+ Id : /subscriptions/<GUID>/resourceGroups/
+ demoResourceGroup/providers/Microsoft.StorageMover/
+ storageMovers/demoMover/endpoints/smbTargetEndpoint
+ Name : demoTarget
+ Property : {
+ "endpointType": "AzureStorageSmbFileShare",
+ "description": "",
+ "provisioningState": "Succeeded",
+ "storageAccountResourceId": "/subscriptions/[GUID]/
+ resourceGroups/demoResourceGroup/providers/Microsoft.Storage/
+ storageAccounts/contosoeuap",
+ "fileShareName": "demoFileshare"
+ }
+ SystemDataCreatedAt : 6/22/2023 1:19:00 AM
+ SystemDataCreatedBy : user@contoso.com
+ SystemDataCreatedByType : User
+ SystemDataLastModifiedAt : 6/22/2023 1:19:00 AM
+ SystemDataLastModifiedBy : user@contoso.com
+ SystemDataLastModifiedByType : User
+ Type : microsoft.storagemover/storagemovers/endpoints
+
+ ```
+++
+### Create a target endpoint
+
+Target endpoints identify locations to which your data is migrated. Azure offers various types of cloud storage. A fundamental aspect of file migrations to Azure is determining which Azure storage option is right for your data. The number of files and folders, their directory structure, access protocol, file fidelity and other aspects are important inputs into a complete cloud solution design.
+
+If you need help with choosing the right Azure target storage for your cloud solution design, refer to the [Cloud migration basics](migration-basics.md) article.
+
+The following steps describe the process of creating a target endpoint.
+
+### [Azure portal](#tab/portal)
+
+ 1. In the [Azure portal](https://portal.azure.com), navigate to your **Storage mover** resource page. Select **Storage endpoints** from within the navigation pane to access your endpoints.
+
+ :::image type="content" source="media/endpoint-manage/storage-mover.png" alt-text="Screenshot of the Storage Mover resource page within the Azure portal showing the location of the Storage Endpoints links." lightbox="media/endpoint-manage/storage-mover-lrg.png":::
+
+ On the **Storage endpoints** page, the default **Storage endpoints** view displays the names of any provisioned source endpoints and a summary of their associated properties. Select **Target endpoints** to view the existing destination endpoints. You can filter the results further by selecting the **Storage account** filter and the appropriate option.
+
+ :::image type="content" source="media/endpoint-manage/endpoint-target-filter.png" alt-text="Screenshot of the Storage Endpoints page within the Azure portal showing the location of the target endpoint filters." lightbox="media/endpoint-manage/endpoint-target-filter-lrg.png":::
+
+ 1. Select **Create endpoint** to expand the **Endpoint type** menu. Select **Create target endpoint** to open the **Create target endpoint** pane as shown in the following image.
+
+ :::image type="content" source="media/endpoint-manage/endpoint-target-pane.png" alt-text="Screenshot of the Endpoint Overview page highlighting the location of the Create Endpoint list." lightbox="media/endpoint-manage/endpoint-target-pane-lrg.png":::
+
+ 1. Within the **Create target endpoint** pane, select your subscription and destination storage account from within the **Subscription** and **Storage account** lists, respectively. Next, select the appropriate **Target type** option corresponding to your target endpoint.
+
+ [!INCLUDE [protocol-endpoint-agent](includes/protocol-endpoint-agent.md)]
+
+ Depending on the target type you choose, select either your **Blob container** or your **File share** from the corresponding drop-down list. Finally, you may add an optional **Description** value for your target of up to 1024 characters in length and select **Create** to deploy your endpoint.
+
+ :::image type="content" source="media/endpoint-manage/endpoint-target-create.png" alt-text="Screenshot of the Create Endpoint pane showing the location of the required fields and Create button." lightbox="media/endpoint-manage/endpoint-target-create-lrg.png":::
+
+ Your new endpoint is deployed and now appears within your list of endpoints as shown in the following example image.
+
+ :::image type="content" source="media/endpoint-manage/endpoint-added.png" alt-text="Screenshot of the Endpoint Overview page with the newly created endpoint displayed." lightbox="media/endpoint-manage/endpoint-added-lrg.png":::
+
+### [PowerShell](#tab/powershell)
+
+ 1. To create a new SMB file share target endpoint, use the `New-AzStorageMoverAzStorageSmbFileShareEndpoint` cmdlet as shown in the following example.
+
+ ```powershell
+
+ New-AzStorageMoverAzStorageSmbFileShareEndpoint `
+ -Name "smbTargetEndpoint" `
+ -ResourceGroupName $resourceGroupName `
+ -StorageMoverName $storageMoverName `
+ -StorageAccountResourceId $targetResourceID `
+    -FileShareName $smbFileshare
+
+ ```
+
+    Use the `New-AzStorageMoverAzStorageContainerEndpoint` PowerShell cmdlet to create a new blob container target endpoint for an NFS source. The following example provides provisioning guidance.
+
+ ```powershell
+
+ New-AzStorageMoverAzStorageContainerEndpoint `
+ -Name "nfsTargetEndpoint" `
+ -ResourceGroupName $resourceGroupName `
+ -StorageMoverName $storageMoverName `
+    -BlobContainerName $nfsContainer `
+    -StorageAccountResourceId $targetResourceID
+
+ ```
+
+ The following sample response contains the `ProvisioningState` property, which indicates that the endpoint was successfully created.
+
+ ```Response
+
+ Id : /subscriptions/<GUID>/resourceGroups/
+ demoResourceGroup/providers/Microsoft.StorageMover/
+ storageMovers/demoMover/endpoints/smbTargetEndpoint
+ Name : demoTarget
+ Property : {
+ "endpointType": "AzureStorageSmbFileShare",
+ "description": "",
+ "provisioningState": "Succeeded",
+ "storageAccountResourceId": "/subscriptions/[GUID]/
+ resourceGroups/demoResourceGroup/providers/Microsoft.Storage/
+ storageAccounts/contosoeuap",
+ "fileShareName": "demoFileshare"
+ }
+ SystemDataCreatedAt : 6/22/2023 1:19:00 AM
+ SystemDataCreatedBy : user@contoso.com
+ SystemDataCreatedByType : User
+ SystemDataLastModifiedAt : 6/22/2023 1:19:00 AM
+ SystemDataLastModifiedBy : user@contoso.com
+ SystemDataLastModifiedByType : User
+ Type : microsoft.storagemover/storagemovers/endpoints
+
+ ```
+++
+## View and edit an endpoint's properties
+
+Depending on your use case, you may need to retrieve either a specific endpoint, or a complete list of all your endpoint resources. You may also need to add or edit an endpoint's description.
+
+Follow the steps in this section to view endpoints accessible to your Storage Mover resource.
+
+### [Azure portal](#tab/portal)
+
+ 1. In the [Azure portal](https://portal.azure.com), navigate to the **Storage mover** resource page. Select **Storage endpoints** from within the navigation pane to access your endpoints as shown in the sample image.
+
+ :::image type="content" source="media/endpoint-manage/storage-mover.png" alt-text="Screenshot of the Storage Mover resource page within the Azure portal showing the location of the Storage Endpoints link." lightbox="media/endpoint-manage/storage-mover-lrg.png":::
+
+ 1. On the **Storage endpoints** page, the default **Storage endpoints** view displays the names of any provisioned source endpoints and a summary of their associated properties. To view provisioned destination endpoints, select **Target endpoints**. You can filter the results further by selecting the **Protocol** or **Host** filters and the relevant option.
+
+ :::image type="content" source="media/endpoint-manage/endpoint-filter.png" alt-text="Screenshot of the Storage Endpoints page within the Azure portal showing the endpoint details and the location of the target endpoint filters." lightbox="media/endpoint-manage/endpoint-filter-lrg.png":::
+
+ At this time, the Azure portal doesn't provide the ability to directly modify provisioned endpoints. An endpoint's description, however, can be modified using Azure PowerShell by following [this example](endpoint-manage.md?tabs=powershell#view-and-edit-an-endpoints-properties). Endpoint resources that require updating within the Azure portal should be deleted and recreated.
+
+### [PowerShell](#tab/powershell)
+
+1. Use the `Get-AzStorageMoverEndpoint` cmdlet to retrieve a list of endpoint resources. Optionally, you can supply a `-Name` parameter value to retrieve a specific resource. Calling the cmdlet without the optional parameter returns a list of all provisioned endpoints associated with your storage mover resource.
+
+    The following example retrieves a specific endpoint resource by specifying the **demoTarget** name value.
+
+ ```powershell
+
+ Get-AzStorageMoverEndpoint `
+ -ResourceGroupName $resourceGroupName `
+ -StorageMoverName $storageMoverName `
+ -Name "demoTarget"
+
+ ```
+
+    The sample response contains the specified endpoint's properties, including the empty `Description` property.
+
+ ```Response
+
+ Id : /subscriptions/<GUID>/resourceGroups/
+ demoResourceGroup/providers/Microsoft.StorageMover/
+ storageMovers/demoMover/endpoints/smbTargetEndpoint
+ Name : demoTarget
+ Property : {
+ "endpointType": "AzureStorageSmbFileShare",
+ "description": "",
+ "provisioningState": "Succeeded",
+ "storageAccountResourceId": "/subscriptions/[GUID]/
+ resourceGroups/demoResourceGroup/providers/Microsoft.Storage/
+ storageAccounts/contosoeuap",
+ "fileShareName": "demoFileshare"
+ }
+ SystemDataCreatedAt : 6/22/2023 1:19:00 AM
+ SystemDataCreatedBy : user@contoso.com
+ SystemDataCreatedByType : User
+ SystemDataLastModifiedAt : 6/22/2023 1:19:00 AM
+ SystemDataLastModifiedBy : user@contoso.com
+ SystemDataLastModifiedByType : User
+ Type : microsoft.storagemover/storagemovers/endpoints
+
+ ```
+
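+    As noted in the previous step, omitting the optional `-Name` parameter returns every endpoint associated with the storage mover resource:
+
+    ```powershell
+
+    ## List all endpoints by omitting the -Name parameter
+    Get-AzStorageMoverEndpoint `
+        -ResourceGroupName $resourceGroupName `
+        -StorageMoverName $storageMoverName
+
+    ```
+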
+1. You can add the missing description to the endpoint returned in the previous example by including a pipeline operator and the `Update-AzStorageMoverAzStorageSmbFileShareEndpoint` cmdlet. The following example supplies the missing description value, **SMB fileshare endpoint**, with the `-Description` parameter.
+
+ ```powershell
+
+ Get-AzStorageMoverEndpoint `
+ -ResourceGroupName $resourceGroupName `
+ -StorageMoverName $storageMoverName `
+ -Name "demoTarget" | `
+ Update-AzStorageMoverAzStorageSmbFileShareEndpoint `
+ -Description "SMB fileshare endpoint"
+
+ ```
+
+ The response now contains the updated `Description` property value.
+
+ ```Response
+
+ Id : /subscriptions/<GUID>/resourceGroups/
+ demoResourceGroup/providers/Microsoft.StorageMover/
+ storageMovers/demoMover/endpoints/smbTargetEndpoint
+ Name : demoTarget
+ Property : {
+ "endpointType": "AzureStorageSmbFileShare",
+ "description": "SMB fileshare endpoint",
+ "provisioningState": "Succeeded",
+ "storageAccountResourceId": "/subscriptions/[GUID]/
+ resourceGroups/demoResourceGroup/providers/Microsoft.Storage/
+ storageAccounts/contosoeuap",
+ "fileShareName": "demoFileshare"
+ }
+ SystemDataCreatedAt : 6/22/2023 1:19:00 AM
+ SystemDataCreatedBy : user@contoso.com
+ SystemDataCreatedByType : User
+ SystemDataLastModifiedAt : 6/22/2023 1:19:00 AM
+ SystemDataLastModifiedBy : user@contoso.com
+ SystemDataLastModifiedByType : User
+ Type : microsoft.storagemover/storagemovers/endpoints
+
+ ```
+++
+## Delete an endpoint
+
+The removal of an endpoint resource should be a relatively rare occurrence in your production environment, though there may be occasions when it's necessary. To delete a Storage Mover endpoint resource, follow the provided example.
+
+> [!WARNING]
+> Deleting an endpoint is a permanent action and cannot be undone. It's a good idea to ensure that you're prepared to delete the endpoint since you will not be able to restore it at a later time.
+
+# [Azure portal](#tab/portal)
+
+ 1. To delete an endpoint using the [Azure portal](https://portal.azure.com), navigate to the **Storage mover** resource page. Select **Storage endpoints** from within the navigation pane to access your endpoints as indicated in the following image.
+
+ :::image type="content" source="media/endpoint-manage/storage-mover.png" alt-text="Screenshot of the Storage Mover resource page within the Azure portal showing the location of the Storage Endpoints link." lightbox="media/endpoint-manage/storage-mover-lrg.png":::
+
+ 1. The default **Source endpoints** view displays the names of any provisioned source endpoints and a summary of their associated data. You can select the **Destination endpoints** filter to view the corresponding destination endpoints.
+
+ Locate the name of the endpoint you want to delete and select the corresponding checkbox. After verifying that you've selected the appropriate endpoint, select **Delete** as shown in the following image.
+
+ :::image type="content" source="media/endpoint-manage/endpoint-delete.png" alt-text="Screenshot of the Storage Mover resource page within the Azure portal showing the location of the Delete button." lightbox="media/endpoint-manage/endpoint-delete-lrg.png":::
+
+    The endpoint is deleted and no longer appears within your list of endpoints, as shown in the following example image.
+
+ :::image type="content" source="media/endpoint-manage/endpoint-without.png" alt-text="Screenshot of the Endpoint Overview page inferring a newly deleted endpoint." lightbox="media/endpoint-manage/endpoint-without-lrg.png":::
+
+# [PowerShell](#tab/powershell)
+
+Use the `Remove-AzStorageMoverEndpoint` cmdlet to permanently delete an endpoint resource. Provide the endpoint's name with the `-Name` parameter, and the resource group and storage mover resource names with the `-ResourceGroupName` and `-StorageMoverName` parameters, respectively.
+
+```powershell
+
+Remove-AzStorageMoverEndpoint `
+ -ResourceGroupName $resourceGroupName `
+ -StorageMoverName $storageMoverName `
+ -Name "demoTarget"
+
+```
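If you want to confirm that the endpoint was removed, a minimal sketch such as the following lists the endpoints that remain in the storage mover. It assumes the same variables as the preceding example and that `Get-AzStorageMoverEndpoint` returns all endpoints when the `-Name` parameter is omitted.

```powershell
# List the endpoints that remain after the deletion. The endpoint removed in
# the preceding example ("demoTarget") should no longer appear in the output.
Get-AzStorageMoverEndpoint `
    -ResourceGroupName $resourceGroupName `
    -StorageMoverName $storageMoverName
```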
+++
+## Next steps
+
+Create a project to collate the different source shares that need to be migrated together.
+> [!div class="nextstepaction"]
+> [Create and manage a project](project-manage.md)
storage-mover Job Definition Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-mover/job-definition-create.md
Previously updated : 09/26/2022 Last updated : 08/04/2023 <!--
REVIEW Stephen/Fabian: Reviewed - Stephen
REVIEW Engineering: not reviewed EDIT PASS: started
-Initial doc score: 100 (1532 words and 0 issues)
+Initial doc score: 100 (1725 words and 0 issues)
!######################################################## -->
Before you begin following the examples in this article, it's important that you
There are three prerequisites to defining the migration of your source shares:
-1. You need to have deployed a storage mover resource.
- Follow the steps in the *[Create a storage mover resource](storage-mover-create.md)* article to deploy a storage mover resource to the desired region within your Azure subscription.
-1. You need to deploy and register an Azure Storage Mover agent virtual machine (VM).
- Follow the steps in the [Azure Storage Mover agent VM deployment](agent-deploy.md) and [agent registration](agent-register.md) articles to deploy at least one agent.
-1. Finally, to define a migration, you need to create a job definition.
- Job definitions are organized in a migration project. You need at least one migration project in your storage mover resource. If you haven't already, follow the deployment steps in the [manage projects](project-manage.md) article to create a migration project.
+- **An existing storage mover resource.**<br/>
+ If you haven't deployed a storage mover resource, follow the steps in the *[Create a storage mover resource](storage-mover-create.md)* article. These steps help you deploy a storage mover resource to the desired region within your Azure subscription.
+- **At least one existing Azure Storage Mover agent virtual machine (VM).**<br/>
+ The steps in the [Azure Storage Mover agent VM deployment](agent-deploy.md) and [agent registration](agent-register.md) articles guide you through the deployment and registration process.
+- **At least one migration project in your storage mover resource.**<br/>
+  Job definitions, which define a migration, are organized within migration projects. If you haven't already done so, follow the deployment steps in the [manage projects](project-manage.md) article to create a migration project.
## Create and start a job definition
-A job definition is created within a project resource. Creating a job definition requires you to select or configure a project, a source and target storage endpoint, and a job name. If you've followed the examples contained in previous articles, you may have an existing project within a previously deployed storage mover resource. Follow the steps below to add a job definition to a project.
+A job definition is created within a project resource. Creating a job definition requires you to select or configure a project, a source and target storage endpoint, and a job name. If you've followed the examples contained in previous articles, you may have an existing project within a previously deployed storage mover resource. Follow the steps in this section to add a job definition to a project.
-Storage endpoints are separate resources in your storage mover. Endpoints must be created before they can be referenced by a job definition.
+Storage endpoints are separate resources in your storage mover. You need to create a source and target endpoint before you can reference them within a job definition. The examples in this section describe the process of creating endpoints.
-Refer to the [resource naming convention](../azure-resource-manager/management/resource-name-rules.md#microsoftstoragesync) for help with choosing supported resource names.
+Refer to the [resource naming convention](../azure-resource-manager/management/resource-name-rules.md#microsoftstoragesync) article for help with choosing supported resource names.
### [Azure portal](#tab/portal)
Refer to the [resource naming convention](../azure-resource-manager/management/r
:::image type="content" source="media/job-definition-create/project-explorer-sml.png" alt-text="Screen capture of the Project Explorer's Overview tab within the Azure portal." lightbox="media/job-definition-create/project-explorer-lrg.png":::
- From within the project explorer pane or the results list, select the name of an available project. The project's properties and job summary data are displayed in the **details** pane. Any existing job definitions defined for the project will also be displayed. The status of any deployed jobs will also be shown.
+ From within the project explorer pane or the results list, select the name of an available project. The project's properties and job summary data are displayed in the **details** pane. Any existing job definitions and deployed jobs defined for the project are also shown.
- In the actions menu within the project's details pane, select **Create job definition** to open the **Create a migration job** window. If no job definitions exist within the project, you can also select **Create a job definition** near the bottom of the pane, as shown in the example below.
+ In the actions menu within the project's details pane, select **Create job definition** to open the **Create a migration job** window. If no job definitions exist within the project, you can also select **Create a job definition** near the bottom of the pane, as shown in the following example.
:::image type="content" source="media/job-definition-create/project-selected-sml.png" alt-text="Screen capture of the Project Explorer's Overview tab within the Azure portal highlighting the use of filters." lightbox="media/job-definition-create/project-selected-lrg.png":::
Refer to the [resource naming convention](../azure-resource-manager/management/r
:::image type="content" source="media/job-definition-create/tab-basics-sml.png" alt-text="Screen capture of the migration job's Basics tab, showing the location of the data fields." lightbox="media/job-definition-create/tab-basics-lrg.png":::
-1. In the **Source** tab, select an option within the **Source endpoint** field.
+1. In the **Source** tab, select an option within the **Source endpoint** field. You can choose to either use an existing source endpoint or create a new endpoint resource.
- If you want to use a source endpoint you've previously defined, choose the **Select an existing endpoint** option. Next, select the **Select an existing endpoint as a source** link to open the source endpoint pane. This pane displays a detailed list of your previously defined endpoints. Select the appropriate endpoint and select **Select** to return to the **Source** tab and populate the **Existing source endpoint** field.
+ If you want to use an existing source endpoint that you've previously defined, choose the **Select an existing endpoint** option. Next, select the **Select an existing endpoint as a source** link to open the source endpoint pane. This pane displays a detailed list of your previously defined endpoints. Select the appropriate endpoint and select **Select** to return to the **Source** tab and populate the **Existing source endpoint** field.
:::image type="content" source="media/job-definition-create/endpoint-source-existing-sml.png" alt-text="Screen capture of the Source tab illustrating the location of the Existing Source Endpoint field." border="false" lightbox="media/job-definition-create/endpoint-source-existing-lrg.png":::
- To define a new source endpoint from which to migrate, select the **Create a new endpoint** option. Next, provide values for the required **Host name or IP**, **Share name**, and **Protocol version** fields. You may also add an optional description value of less than 1024 characters.
+ To define a new source endpoint from which to migrate your data, select the **Create a new endpoint** option. Next, provide values for the required **Host name or IP**, **Share name**, and **Protocol version** fields. You may also add an optional description value of less than 1024 characters.
:::image type="content" source="media/job-definition-create/endpoint-source-new-sml.png" alt-text="Screen capture of the Source tab illustrating the location of the New Source Endpoint fields." lightbox="media/job-definition-create/endpoint-source-new-lrg.png":::
+    Only certain types of endpoints can be used as a source or as a target. The steps to create different endpoint types are similar, as are their corresponding data fields. The key differentiator between the creation of NFS- and SMB-enabled endpoints is the use of Azure Key Vault to store the shared credential for SMB resources. When you create an endpoint resource that supports the SMB protocol, you must provide the Key Vault name and the names of the username and password secrets.
+
+    Select the name of the Key Vault from the **Key Vault** drop-down list. You can provide values for the **Secret for username** and **Secret for password** by selecting the relevant secret from the corresponding drop-down list. Alternatively, you can provide the URI to the secret as shown in the following screen capture.
+
+ For more details on endpoint resources, see the [Managing Storage Mover endpoints](endpoint-manage.md) article.
+
+ :::image type="content" source="media/job-definition-create/endpoint-smb-new-sml.png" alt-text="Screen capture of the fields required to create a new SMB source endpoint resource." lightbox="media/job-definition-create/endpoint-smb-new-lrg.png":::
+ <a name="sub-path"></a>
- By default, migration jobs start from the root of your share. However, if your use case involves copying data from a specific path within your source share, you can provide the path in the **Sub-path** field. Supplying this value will start the data migration from the location you've specified. If the sub path you've specified isn't found, no data will be copied.
+ By default, migration jobs start from the root of your share. However, if your use case involves copying data from a specific path within your source share, you can provide the path in the **Sub-path** field. Supplying this value starts the data migration from the location you've specified. If the sub path you've specified isn't found, no data is copied.
Prior to creating an endpoint and a job resource, it's important to verify that the path you've provided is correct and that the data is accessible. You're unable to modify endpoints or job resources after they're created. If the specified path is wrong, your only option is to delete the resources and re-create them.
Refer to the [resource naming convention](../azure-resource-manager/management/r
1. In the **Target** tab, select an option for the **Target endpoint** field.
- As with the source endpoint, choose the **Select an existing endpoint reference** option if you want to use a previously defined endpoint. Next, select the **Select an existing endpoint as a target** link to open the target endpoint pane. A detailed list of your previously defined endpoints is displayed. First, select the desired endpoint, then **Select** to populate the **Existing source endpoint** field and return to the **Source** tab.
+    As with the source endpoint, choose the **Select an existing endpoint reference** option if you want to use a previously defined endpoint. Next, select the **Select an existing endpoint as a target** link to open the target endpoint pane. A detailed list of your previously defined endpoints is displayed. From the endpoint list, select the desired endpoint, then **Select** to populate the **Existing target endpoint** field and return to the **Target** tab.
:::image type="content" source="media/job-definition-create/endpoint-target-existing-sml.png" alt-text="Screen capture of the Target tab illustrating the location of the Existing Target Endpoint field." border="false" lightbox="media/job-definition-create/endpoint-target-existing-lrg.png":::
- Similarly, to define a new target endpoint, choose the **Create a new endpoint** option. Next, select values from the drop-down lists for the required **Subscription**, **Storage account**, and **Container** fields. You may also add an optional description value of less than 1024 characters.
+    Similarly, to define a new target endpoint, choose the **Create a new endpoint** option. Next, select values from the drop-down lists for the required **Subscription** and **Storage account** fields. You may also add an optional description value of less than 1024 characters. Depending on your use case, select the appropriate **Target type**.
+
+    Recall that only certain endpoint types can be used as a source or as a target.
+
+ [!INCLUDE [protocol-endpoint-agent](includes/protocol-endpoint-agent.md)]
+
+ > [!IMPORTANT]
+ > Support for the SMB protocol is currently in public preview and some functionality may not yet be available. Currently, the only supported migration path consists of an SMB mount source to an Azure file share destination.
:::image type="content" source="media/job-definition-create/endpoint-target-new-sml.png" alt-text="Screen capture of the Target tab illustrating the location of the New Target Endpoint fields." lightbox="media/job-definition-create/endpoint-target-new-lrg.png":::
- A target subpath value can be used to specify a location within the target container where your migrated data will be copied. The subpath value is relative to the container's root. Omitting the subpath value results in the data being copied to the root, while providing a unique value will generate a new subfolder.
+    A target subpath value can be used to specify a location within the target container to which your migrated data is copied. The subpath value is relative to the container's root. You can provide a unique value to generate a new subfolder. If you omit the subpath value, data is copied to the root.
After ensuring the accuracy of your settings, select **Next** to continue.
Refer to the [resource naming convention](../azure-resource-manager/management/r
<a name="copy-modes"></a> **Merge source into target:**
- - Files will be kept in the target, even if they donΓÇÖt exist in the source.
- - Files with matching names and paths will be updated to match the source.
- - File or folder renames between copies lead to duplicate content in the target.
-
+   - Files are kept in the target, even if they don't exist in the source.
+ - Files with matching names and paths are updated to match the source.
+ - File or folder renames between copies create duplicate content in the target.
+ **Mirror source to target:**
- - Files in the target will be deleted if they donΓÇÖt exist in the source.
- - Files and folders in the target will be updated to match the source.
- - File or folder renames between copies won't lead to duplicate content. A renamed item on the source side leads to the deletion of the item with the original name in the target. Additionally, the renamed item is also uploaded to the target. If the renamed item is a folder, the described behavior of delete and reupload applies to all files and folders contained in it. Avoid renaming folders during a migration, especially near the root level of your source data.
+   - Files in the target are deleted if they don't exist in the source.
+ - Files and folders in the target are updated to match the source.
+ - File or folder renames between copies don't generate duplicate content. A renamed item on the source side leads to the deletion of the item with the original name in the target. Additionally, the renamed item is also uploaded to the target. If the renamed item is a folder, the described behavior of delete and reupload applies to all files and folders contained in it. Avoid renaming folders during a migration, especially near the root level of your source data.
- **Migration outcomes** are based upon the specific storage types of the source and target endpoints. For example, because blob storage only supports "virtual" folders, source files in folders will have their paths prepended to their names and placed in a flat list within a blob container. Empty folders will be represented as an empty blob in the target. Source folder metadata is persisted in the custom metadata field of a blob, as they are with files.
+    **Migration outcomes** are based upon the specific storage types of the source and target endpoints. For example, because blob storage only supports "virtual" folders, source files in folders have their paths prepended to their names and placed in a flat list within a blob container. A source file `reports/2023/summary.csv`, for instance, becomes a single blob with that full path as its name. Empty folders are represented as an empty blob in the target. Source folder metadata is persisted in the custom metadata field of a blob, as it is with files.
After viewing the effects of the copy mode and migration outcomes, select **Next** to review the values from the previous tabs.
Refer to the [resource naming convention](../azure-resource-manager/management/r
### [PowerShell](#tab/powershell)
-You need to use several cmdlets to create a new job definition.
+You need to use several cmdlets to create a new job definition. As previously mentioned, source and target endpoints must be created before they're referenced by a job definition.
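If your migration source is an SMB share, the agent retrieves the share credentials from Azure Key Vault, as described in the portal steps. The following is a minimal, hedged sketch of staging those credentials as secrets before you create the SMB source endpoint. The vault name `myMigrationVault` and the secret names are placeholders, and the sketch assumes the vault already exists and that you have permission to set secrets.

```powershell
# Store the SMB share's username and password as Key Vault secrets.
# The vault name and secret names are examples only.
$vaultName = "myMigrationVault"

$username = ConvertTo-SecureString -String "CONTOSO\migrationUser" -AsPlainText -Force
$password = Read-Host -Prompt "Enter the SMB share password" -AsSecureString

Set-AzKeyVaultSecret -VaultName $vaultName -Name "smb-share-username" -SecretValue $username
Set-AzKeyVaultSecret -VaultName $vaultName -Name "smb-share-password" -SecretValue $password
```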
-Use the `New-AzStorageMoverJobDefinition` cmdlet to create new job definition resource in a project. The following example assumes that you aren't reusing *storage endpoints* you've previously created.
+Use the `New-AzStorageMoverJobDefinition` cmdlet to create a new job definition resource in a project. The following examples assume that you aren't reusing *storage endpoints* you've previously created.
```powershell
$sourceEpName = "Your source endpoint name could be the name of the share
$sourceEpDescription = "Optional, up to 1024 characters" $sourceEpHost = "The IP address or DNS name of the source share NAS or server" $sourceEpExport = "The name of your source share"
-## Note that Host and Export will be concatenated to Host:/Export to form the full path
+## Note that Host and Export will be concatenated to [Host]:/[Export] to form the full path
## to the source NFS share New-AzStorageMoverNfsEndpoint `
storage-mover Migration Basics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-mover/migration-basics.md
Previously updated : 06/21/2022 Last updated : 08/07/2023 <!--
REVIEW Stephen/Fabian: not reviewed
REVIEW Engineering: not reviewed EDIT PASS: not started
+Initial doc score: 83 (1761 words and 29 issues)
+Current doc score: 100 (1749 words and 0 issues)
+ !######################################################## --> # Cloud migration basics for file and folder storage
-Every migration starts with a business need. A certain workload will be transformed by a cloud migration of the files and folders it depends on. A workload can be either an application or direct user access. In either case, the workload has a dependency on storage that you'll move to the cloud. The workload might also move to the cloud, or remain where it's and will need to be instructed to point to the new cloud storage location. These details are recorded in your *cloud solution design* that has a storage section.
+Every migration starts with a business need. A cloud migration transforms a workload by moving the files and folders on which it depends. A workload can be either an application or direct user access. In either case, the workload has a dependency on storage that you move to the cloud. The workload might also move to the cloud, or remain in place but require a configuration change in order to point to the new cloud storage location. These details are recorded in the storage section of your *cloud solution design*.
The purpose of this article is to provide insight into how you can achieve a storage migration to Azure, such that you can realize your cloud solution design for storage. :::image type="content" source="media/migration-basics/migration-phases.png" alt-text="Summary illustration showing migration phases: Discover, Assess, Plan, Deploy, Migrate, Post-Migrate to illustrate the sections to come in this article." lightbox="media/migration-basics/migration-phases-large.png":::
-Migrating files and folders to the cloud requires careful planning and many considerations along the way to achieve an optimal result. Azure Storage Mover provides a growing list of features and migration scenarios that support you on your journey. In this article we'll break common tasks of a migration into phases that each have their own section.
+Migrating files and folders to the cloud requires careful planning and many considerations along the way to achieve an optimal result. Azure Storage Mover provides a growing list of features and migration scenarios that support you on your journey. In this article, we break common tasks of a migration into phases that each have their own section.
## Phase 1: Discovery
-In the discovery phase, you decide which source locations are part of your migration project. Azure Storage Mover handles source locations in form of file shares. These locations could reside on Network Attached Storage (NAS), a server, or even on a workstation. Common protocols for file shares are SMB and NFS.
+In the discovery phase, you decide which source locations are part of your migration project. Azure Storage Mover handles source locations in the form of file shares. These locations could reside on Network Attached Storage (NAS), a server, or even on a workstation. Common protocols for file shares are SMB (Server Message Block) and NFS (Network File System).
-If your workload uses Direct Attached Storage (DAS), then most likely Azure Storage Mover can still assist with your cloud migration. You may be able to create a file share on the local folder path and then share out the location over the local network. With proper permissions and networking considerations, you'll now be able to migrate this location to Azure, even if your application uses the local path.
+If your workload uses Direct Attached Storage (DAS), then most likely Azure Storage Mover can still assist with your cloud migration. You may be able to create a file share on the local folder path and then share out the location over the local network. With proper permissions and networking considerations, you're now able to migrate this location to Azure, even if your application uses the local path.
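As a hypothetical illustration, if your workload's data lives on a local folder of a Windows machine, you might publish that folder as an SMB share so the migration agent can reach it over the network. The share name, path, and account in the following sketch are placeholders.

```powershell
# Publish a local data folder as an SMB share so that it can serve as a
# migration source. Run this on the Windows machine that holds the data.
New-SmbShare -Name "AppData" -Path "D:\AppData" -ReadAccess "CONTOSO\MigrationUser"
```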
-Start by making a list of all the shares your workload depends on. Refer to your cloud solution design to see which shares remain where they are and which are in scope of the cloud migration. Narrow the scope of your migration project as much as possible. Ultimately, your workload will need to fail over to the cloud locations. The smaller the number of source locations, the easier the failover of your workload will be.
+Start by making a list of all the shares on which your workload depends. Refer to your cloud solution design to see which shares remain on-premises and which are in scope for the cloud migration. Narrow the scope of your migration project as much as possible. Ultimately, your workload needs to fail over to the cloud locations. The smaller the number of source locations, the easier the failover of your workload.
If you need to migrate storage for multiple workloads at roughly the same time, you should split them into individual migration projects.
Azure Storage Mover offers [migration projects](resource-hierarchy.md#migration-
## Phase 2: Assessment
-Azure offers various types of cloud storage. A fundamental aspect of file migrations to Azure is determining which Azure storage option is right for your data. The number of files and folders, their directory structure, file fidelity and other aspects are important inputs into a complete cloud solution design.
+Azure offers various types of cloud storage. A fundamental aspect of file migrations to Azure is determining which Azure storage option is right for your data. The number of files and folders, their directory structure, access protocol, file fidelity and other aspects are important inputs into a complete cloud solution design.
-In the assessment phase, you'll investigate your discovered and short-listed shares to ensure you've picked the right Azure target storage for your cloud solution design.
+In the assessment phase, you investigate your discovered and short-listed shares to ensure you've picked the right Azure target storage for your cloud solution design.
A key part of any migration is to capture the required file fidelity when moving your files from their current storage location to Azure. Different file systems and storage devices record an array of file fidelity information, and fully preserving or keeping that information in Azure isn't always necessary. The file fidelity required by your scenario, and the degree of fidelity supported by the storage offering in Azure, also helps you to pick the right storage solution in Azure. General-purpose file data traditionally depends on at least some file metadata. App data might not.
Here are the two basic components of a file:
- **Data stream:** The data stream of a file stores the file content. - **File metadata:** The file metadata has these subcomponents:
- - file attributes like read-only
- - file permissions, for instance NTFS permissions or file and folder ACLs
- - timestamps, most notably the creation, and last-modified timestamps
- - an alternate data stream, which is a space to store larger amounts of non-standard properties
+ - file attributes, such as *read-only*
+ - file permissions, such as NTFS permissions or file and folder access control lists (ACLs)
+ - timestamps, most notably the *creation* and *last-modified* timestamps
+ - an alternate data stream, which is a space to store larger amounts of nonstandard properties
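As a rough illustration of the metadata components in the preceding list, the following sketch inspects a single file on a Windows system. The file path is a placeholder.

```powershell
# Inspect a file's attributes, timestamps, permissions, and alternate data
# streams to understand which fidelity aspects your migration must preserve.
$file = Get-Item -Path "D:\AppData\report.docx"

$file.Attributes                             # file attributes, such as ReadOnly
$file.CreationTime                           # creation timestamp
$file.LastWriteTime                          # last-modified timestamp
Get-Acl -Path $file.FullName | Format-List   # NTFS permissions (ACLs)
Get-Item -Path $file.FullName -Stream *      # alternate data streams
```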
File fidelity in a migration can be defined as the ability to:
File fidelity in a migration can be defined as the ability to:
- Transfer files with the migration service or tool. - Store files in the target storage of the migration.
-The output of the assessment phase is a list of aspects found in the source share. For example: share size, number of namespace items (combined count of files and folders), fidelity that needs to be preserved in the Azure storage target, fidelity that must remain natively working in the Azure storage target.
+The output of the assessment phase is a list of aspects found in the source share. These aspects may include data such as:
+
+- Share size.
+- The number of namespace items, or the combined count of files and folders.
+- The level of fidelity that needs to be preserved in the Azure storage target.
+- The level of fidelity that must remain natively working in the Azure storage target.
This insight is an important input into your cloud solutions design for storage. ## Phase 3: Planning
-In the planning phase, you're combining your discovered source shares with your target locations in Azure.
+In the planning phase, you combine your discovered source shares with your target locations in Azure.
-The planning phase maps each source share to a concrete destination. For instance an Azure blob container. To do that, you must plan and record which Azure subscription and storage account will contain your target container.
+The planning phase maps each source share to a specific destination, such as an Azure blob container or an Azure file share. To do that, you must plan and record which Azure subscription and storage accounts contain your target resources.
-In the Azure Storage Mover service, you can record each source/target pair as a [job definition](resource-hierarchy.md#job-definition). A job definition is nested in the migration project you've previously created. You'll need a new, distinct job definition for each source/target pair.
+In the Azure Storage Mover service, you can record each source/target pair as a [job definition](resource-hierarchy.md#job-definition). A job definition is nested in the migration project you've previously created. You need a new, distinct job definition for each source/target pair.
> [!NOTE]
-> In this release of Azure Storage Mover, your target storage must exist, before you can create a job definition. For instance if your target is an Azure blob container, you'll need to deploy that first before making a new job definition.
+> In this release of Azure Storage Mover, your target storage must exist before you can create a job definition. For example, if your target is an Azure blob container, you need to deploy it before you create a new job definition.
The outcome of the planning phase is a mapping of source shares to Azure target locations. If your targets don't already exist, you'll have to complete the next phase "Deploy" before you can record your migration plan in the Azure Storage Mover service. ## Phase 4: Deployment
-When you have a completed migration plan, you'll need to deploy the target Azure Storage resources, like storage accounts, containers, etc. before you can record your migration plan in Azure Storage Mover as a job definition for each source/target pair.
+After you complete a migration plan, you need to ensure that the target Azure Storage resources such as storage accounts and containers are deployed. You need to complete this deployment before you can record your migration plan as a job definition for each source/target pair within Azure Storage Mover.
-Azure Storage Mover currently can't help with the target resource deployment. To deploy Azure storage, you can use the Azure portal, Az PowerShell, Az CLI, or a [Bicep template](../azure-resource-manager/bicep/overview.md).
+Azure Storage Mover currently can't help with the target resource deployment. To deploy Azure storage, you can use the Azure portal, Azure PowerShell, Azure CLI, or a [Bicep template](../azure-resource-manager/bicep/overview.md).
> [!IMPORTANT]
-> When deploying Azure Storage, [review the support source / target combinations](service-overview.md#supported-sources-and-targets) for Azure Storage Mover and ensure that you don't configure advanced storage settings like private links.
+> When deploying Azure Storage, [review the support source/target pair combinations](service-overview.md#supported-sources-and-targets) for Azure Storage Mover and ensure that you don't configure unsupported scenarios.
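As a hedged example of that deployment step, the following Azure PowerShell sketch creates a storage account and a blob container to serve as a migration target. The resource names and region are placeholders and should match your own cloud solution design.

```powershell
# Deploy a target storage account and a blob container for the migration.
# The account is created without the hierarchical namespace feature, in line
# with the supported-targets guidance.
$resourceGroupName  = "demoResourceGroup"
$storageAccountName = "demomigrationtarget"

$account = New-AzStorageAccount `
    -ResourceGroupName $resourceGroupName `
    -Name $storageAccountName `
    -Location "westus2" `
    -SkuName Standard_LRS `
    -Kind StorageV2

New-AzStorageContainer -Name "migrated-data" -Context $account.Context
```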
## Phase 5: Migration
-The migration phase is where the actual copy of your files and folders to the Azure target location occurs.
+The work of copying your files and folders to an Azure target location occurs within the migration phase.
There are two main considerations for the migration phase:
-1. Minimize downtime of your workload.
-1. Determine the correct migration mode.
+
+- Minimize the downtime of your workload.
+- Determine the correct migration mode.
### Minimize downtime
-When migrating workloads, it's often a requirement to minimize the time the workload can't access the storage it depends on. This section discusses a common strategy to minimize workload downtime:
+During a migration, there may be periods of time during which a workload is unable to access the storage on which it depends. Minimizing these periods of time is often a requirement. This section discusses a common strategy to minimize workload downtime.
-**Convergent, n-pass migration**
+#### Convergent, n-pass migration
-In this strategy, you copy from source to target several times. During these copy iterations, the source remains available for read and write to the workload. Just before the final copy iteration, you take the source offline. It's expected that the final copy finishes faster than say the first copy you've ever made. After the final copy, the workload is failed over to use the new target storage in Azure.
+In this strategy, you copy data from source to target several times. During these copy iterations, the source remains available for read and write to the workload. Just before the final copy iteration, you take the source offline. It's expected that the final copy finishes faster than the initial copy. After the final copy, the workload is failed over to use the new target storage in Azure.
-Azure Storage Mover supports copying from source to target as often as you require. A job definition stores your source, your target and migrations settings. You can instruct a migration agent to execute your job definition. That results in a job run. In this linked article, you can learn more about the [Storage Mover resource hierarchy](resource-hierarchy.md).
+Azure Storage Mover supports copying from source to target as often as you require. A job definition stores your source, target, and migration settings. You can instruct a migration agent to execute your job definition, which results in a job run. You can learn more in the [Storage Mover resource hierarchy](resource-hierarchy.md) article.
<!-- Needs a video in the future <!!!!!!!!! VIDEO !!!!!!!!!!!!> --> ### Migration modes
-How your files are copied from source to target matters just as much as from where to where the copy occurs. Different migration scenarios need different settings. During a migration, youΓÇÖll likely copy from source to target several times - to [minimize downtime](#minimize-downtime). When files or folders change between copy iterations, the *copy mode* will determine the behavior of the migration engine. Carefully select the correct mode, based on the expected changes to your namespace during the migration.
+*How* your files are copied from source to target is equally important as *where* the files are copied to and from. Different migration scenarios require different settings. During a migration, you're likely to copy from source to target several times in order to [minimize downtime](#minimize-downtime). When files or folders change between copy iterations, the *copy mode* determines the behavior of the migration engine. Carefully select the correct mode, based on the expected changes to your namespace during the migration.
There are two copy modes: | Copy mode | Migration behavior | |--|--|
-|**Mirror**<br/>The target will look like the source. | *- Files in the target will be deleted if they donΓÇÖt exist in the source.*<br/>*- Files and folders in the target will be updated to match the source.* |
-|**Merge**<br/>The target has more content than the source, and you keep adding to it. | *- Files will be kept in the target, even if they donΓÇÖt exist in the source.*<br/>*- Files with matching names and paths will be updated to match the source.*<br/>*- Folder renames between copies may lead to duplicate content in the target.*|
-
-> [!NOTE]
-> The current release of Azure Storage Mover only supports the **Merge** mode.
+|**Mirror**<br/>The target looks like the source. | - *Files in the target are deleted if they don't exist in the source.*<br/>- *Files and folders in the target are updated to match the source.* |
+|**Merge**<br/>The target has more content than the source, and you keep adding to it. | - *Files are kept in the target, even if they don't exist in the source.*<br/>- *Files with matching names and paths are updated to match the source.*<br/>- *Folder renames between copies may lead to duplicate content in the target.*|
## Phase 6: Post-migration tasks
-In this phase of the migration you need to think about other configurations and services that enable you to fail over your workload and to safeguard your data.
+In this phase of the migration, you need to think about other configurations and services that enable you to fail over your workload and to safeguard your data.
-For instance, failing-over your workload requires a network path to safely access Azure storage. The public endpoint of an Azure storage account is currently required for migration, but now that your migration is complete, you may think about configuring [private endpoints for your storage account](../storage/common/storage-private-endpoints.md) and [enable firewall rules to disable data requests over the public endpoint](../storage/common/storage-network-security.md).
+For instance, failing-over your workload requires a network path to safely access Azure storage. If you used the public endpoint of an Azure storage account during migration, consider configuring [private endpoints for your storage account](../storage/common/storage-private-endpoints.md) and [enable firewall rules to disable data requests over the public endpoint](../storage/common/storage-network-security.md).
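For example, a minimal sketch of tightening the storage account's network rules after the migration completes might look like the following. The resource names are placeholders, and you should validate the settings against your own networking design before applying them.

```powershell
# Deny data requests over the storage account's public endpoint by default.
# Access then flows through private endpoints or explicitly allowed networks.
Update-AzStorageAccountNetworkRuleSet `
    -ResourceGroupName "demoResourceGroup" `
    -Name "demomigrationtarget" `
    -DefaultAction Deny
```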
Here are a few more recommendations: - [Data protection](../storage/blobs/security-recommendations.md#data-protection) - [Identity and access management](../storage/blobs/security-recommendations.md#identity-and-access-management)-- [Networking](../storage/blobs/security-recommendations.md#networking)
+- [Networking](../storage/blobs/security-recommendations.md#networking)
## Next steps
storage-mover Performance Targets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-mover/performance-targets.md
Previously updated : 03/27/2023 Last updated : 08/07/2023 <!--
CONTENT: final
REVIEW Stephen/Fabian: not reviewed REVIEW Engineering: not reviewed
+Initial doc score: 83
+Current doc score: 93 (1201 words and 10 false-positive issues)
+ !######################################################## -->
Azure Storage Mover is tested with 100 million namespace items (files and folder
Azure Storage Mover is a hybrid cloud service. Hybrid services have a cloud service component and an infrastructure component that the administrator of the service runs in their corporate environment. For Storage Mover, that hybrid component is a migration agent. Agents are virtual machines that run on a host near the source storage.
-Only the agent is a relevant part of the service for performance testing. To omit privacy and performance concerns, data travels directly from the Storage Mover agent to the target storage in Azure. Only control and telemetry messages are sent to the cloud service.
- :::image type="content" source="media/across-articles/data-vs-management-path.png" alt-text="A diagram illustrating a migration's path by showing two arrows. The first arrow represents data traveling to a storage account from the source or agent and a second arrow represents only the management or control info to the storage mover resource or service." lightbox="media/across-articles/data-vs-management-path-large.png":::
-The following table describes the characteristics of the test environment that produced the performance test results shared later in this article.
-
-|Test | Result |
-|-|-|
-|Test namespace | 19% files 0 KiB - 1 KiB <br />57% files 1 KiB - 16 KiB <br />16% files 16 KiB - 1 MiB <br />6% folders |
-|Test source device | Linux server VM <br />16 virtual CPU cores<br />64-GiB RAM |
-|Test source share | NFS v3.0 share <br /> Warm cache: Data set in memory (baseline test). In real-world scenarios, add disk recall times. |
-|Network | Dedicated, over-provisioned configuration, negligible latency. No bottle neck between source - agent - target Azure storage.) |
+Only the agent is a relevant part of the service for performance testing. To avoid privacy and performance concerns, data travels directly from the Storage Mover agent to the target storage in Azure. Only control and telemetry messages are sent to the cloud service.
## Performance baselines These test results are created under ideal conditions. They're meant as a baseline of the components the Storage Mover service and agent can directly influence. Differences in source devices, disks, and network connections aren't considered in this test. Real-world performance varies.
-Different agent resource configurations are tested:
+### [SMB mount : Azure file share](#tab/smb)
+
+The following table describes the characteristics of the test environments that produced the performance results for migrations from an SMB mount to an Azure file share.
+
+|Test No. |No. of files |Total data size |File size |Folder structure |
+|-|-|-|-|-|
+|**1** |12 million |12 GB |1 KB each |12 folders, each with 100 sub-folders containing 10,000 files |
+|**2** |30 |20 GB | |1 folder |
+|**3** |1 million |100 GB |100 KB each |1,000 folders, each with 1,000 files |
+|**4** |1 | |4 TB | |
+|**5** |117 million |117 GB |1 KB each |117 folders, each with 100 sub-folders containing 10,000 files |
+|**6** |1 | |1 TB | |
+|**7** |3.3 million |45 GB |13 KB each |200,000 folders, each contains 16 or 17 files |
+|**8** |50 million |1 TB |20 KB each |2,940,000 folders, each contains 17 files |
+|**9** |100 million |2 TB |20 KB each |5,880,000 folders, each contains 17 files |
+
+Different agent resource configurations are tested on SMB endpoints:
+
+1. **Minspec: 4 CPU / 8 GB RAM**
+ 4 virtual CPU cores at 2.7 GHz each and 8 GiB of memory (RAM) is the minimum specification for an Azure Storage Mover agent.
+
+ |Test No. |Execution time |Scanning time |
+ ||-|--|
+ |**6** |16 min, 42 sec | 1.2 sec |
+ |**7** |55 min, 4 sec | 1 min, 17 sec |
+ |**8** | | |
+ |**9** | | |
-### [4 CPU / 8-GiB RAM](#tab/minspec)
+1. **Boostspec: 8 CPU / 16 GiB RAM**
+   8 virtual CPU cores at 2.7 GHz each and 16 GiB of memory (RAM) is a boosted specification that exceeds the minimum requirements for an Azure Storage Mover agent.
-4 virtual CPU cores at 2.7 GHz each and 8 GiB of memory (RAM) is the minimum specification for an Azure Storage Mover agent.
+ *Results: Standard storage account*
-|Test | Single file, 1 TiB|&tilde;3.3M files, &tilde;200-K folders, &tilde;45 GiB |&tilde;50M files, &tilde;3M folders, &tilde;1 TiB |
-|--|-||--|
-|Elapsed time | 16 Min, 42 Sec | 15 Min, 18 Sec | 5 Hours, 28 Min |
-|Items* per Second | - | 3548 | 2860 |
-|Memory (RAM) usage | 400 MiB | 1.4 GiB | 1.4 GiB |
-|Disk usage (for logs) | 28 KiB | 1.4 GiB | *result missing* |
+ |Test No. |Execution time |Scanning time |
+ ||||
+ |**1** |15 hr, 59 min |2 hr, 36 min, 34 sec |
+ |**2** |1 min, 54 sec |3.34 sec |
+ |**3** |1 hr, 19 min, 27 sec |57.62 sec |
+ |**4** |1 hr, 5 min, 57 sec |2.89 sec |
-*A namespace item is either a file or a folder.
+    *Results: Standard storage account with large file shares enabled*
-### [8 CPU / 16 GiB RAM](#tab/boostspec)
+ |Test No. |Execution time |Scanning time |
+ ||||
+ |**1** |3 hr, 51 min, 31 sec |41 min and 45 sec |
+ |**5** |25 hr, 47 min |23 hr, 35 min |
+ |**6** |11 min, 11 sec |0.7 sec |
+ |**7** |55 min, 10 sec |1 min, 3 sec |
+ |**8** | | |
+ |**9** | | |
-8 virtual CPU cores at 2.7 GHz each and 16 GiB of memory (RAM) is the minimum specification for an Azure Storage Mover agent.
+ *Results: Premium storage account*
-|Test | Single file, 1 TiB| &tilde;3.3M files, &tilde;200 K folders, &tilde;45 GiB |
-|--|-|--|
-|Elapsed time | 14 Min, 36 Sec | 8 Min, 30 Sec |
-|Items* per Second | - | 6298 |
-|Memory (RAM) usage | 400 MiB | 1.4 GiB |
-|Disk usage (for logs) | 28 KiB | 1.4 GiB |
+ |Test No. |Execution time |Scanning time |
+ ||||
+ |**1** |2 hr, 35 min, 14 sec |24 min, 46 sec |
+ |**5** |23 hr, 34 min |21 hr, 34 min |
-*A namespace item is either a file or a folder.
+### [NFS mount : Azure blob container](#tab/nfs)
+
+The following table describes the characteristics of the test environment that produced the performance test results.
+
+|Test | Result |
+|-|-|
+|Test namespace | 19% files 0 KiB - 1 KiB <br />57% files 1 KiB - 16 KiB <br />16% files 16 KiB - 1 MiB <br />6% folders |
+|Test source device | Linux server VM <br />16 virtual CPU cores<br />64-GiB RAM |
+|Test source share | NFS v3.0 share <br /> Warm cache: Data set in memory (baseline test). In real-world scenarios, add disk recall times. |
+|Network | Dedicated, over-provisioned configuration, negligible latency. No bottleneck between source, agent, and target Azure storage. |
+
+Different agent resource configurations are tested on NFS endpoints:
+
+1. **Minspec: 4 CPU / 8 GB RAM**<br/>
+ 4 virtual CPU cores at 2.7 GHz each and 8 GiB of memory (RAM) is the minimum specification for an Azure Storage Mover agent.
+
+ |Test | Single file, 1 TiB|&tilde;3.3M files, &tilde;200 K folders, &tilde;45 GiB |&tilde;50M files, &tilde;3M folders, &tilde;1 TiB |
+ |--|-||--|
+ |Elapsed time | 16 Min, 42 Sec | 15 Min, 18 Sec | 5 hr, 28 Min |
+ |Items* per Second | - | 3548 | 2860 |
+ |Memory (RAM) usage | 400 MiB | 1.4 GiB | 1.4 GiB |
+ |Disk usage (for logs) | 28 KiB | 1.4 GiB | *result missing* |
+
+ *A namespace item is either a file or a folder.
+
+1. **Boost spec: 8 CPU / 16 GiB RAM**
+   8 virtual CPU cores at 2.7 GHz each and 16 GiB of memory (RAM) is a boosted specification that exceeds the minimum requirements for an Azure Storage Mover agent.
+
+ |Test | Single file, 1 TiB| &tilde;3.3M files, &tilde;200-K folders, &tilde;45 GiB |
+ |--|-|--|
+ |Elapsed time | 14 Min, 36 Sec | 8 Min, 30 Sec |
+ |Items* per Second | - | 6298 |
+ |Memory (RAM) usage | 400 MiB | 1.4 GiB |
+ |Disk usage (for logs) | 28 KiB | 1.4 GiB |
+
+ *A namespace item is either a file or a folder.
+ [Review recommended agent resources](agent-deploy.md#recommended-compute-and-memory-resources) for your migration scope in the [agent deployment article](agent-deploy.md). ## Why migration performance varies
storage-mover Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-mover/release-notes.md
Previously updated : 4/14/2022 Last updated : 08/04/2023 # Release notes for the Azure Storage Mover service
The following Azure Storage Mover agent versions are supported:
| Milestone | Version number | Release date | Status | ||-|--|-|
+| Refresh release | 2.0.287 | August 5, 2023 | Supported |
| Refresh release | 1.1.256 | June 14, 2023 | Supported | | General availability release | 1.0.229 | April 17, 2023 | Supported | | Public preview release | 0.1.116 | September 15, 2022 | Functioning. No longer supported by Microsoft Azure Support teams.|
Azure Storage Mover is a hybrid service, which continuously introduces new featu
- Major versions are supported for at least six months from the date of initial release. - We guarantee there's an overlap of at least three months between the support of major agent versions.-- The [Supported agent versions](#supported-agent-versions) table lists expiration dates. Agent versions that have expired, might still be able to update themselves to a supported version but there are no guarantees.
+- The [Supported agent versions](#supported-agent-versions) table lists expiration dates. Agent versions that have expired might still be able to update themselves to a supported version, but there are no guarantees.
> [!IMPORTANT] > Preview versions of the Storage Mover agent cannot update themselves. You must replace them manually by deploying the [latest available agent](https://aka.ms/StorageMover/agent).
+## 2023 August 5
+
+Refresh release notes for:
+
+- Service version: August 5, 2023
+- Agent version: 2.0.287
+
+### Migration scenarios
+Azure Storage Mover can migrate your SMB shares to Azure file shares (in public preview).
+
+### Service
+
+- [Two new endpoints](endpoint-manage.md) have been introduced.
+- [Error messages](status-code.md) have been improved.
+
+### Agent
+
+- Added handling of SMB sources and the data plane transfer to Azure Files.
+- Added handling of SMB credentials via Azure Key Vault.
+
+### Limitations
+
+- Folder ACLs are not updated on incremental transfers.
+- Last modified dates on folders are not preserved.
+ ## 2023 June 14 Refresh release notes for:
Existing migration scenarios from the GA release remain unchanged. This release
- When moving the storage mover resource in your resource group, an issue was fixed where some properties may have been left behind. - Error messages have been improved.
-## Agent
+### Agent
- Fixed an issue with registration failing sometimes when a proxy server connection and a private link scope were configured at the same time. - Improved the security stance by omitting to transmit a specific user input to the service that is no longer necessary.
storage-mover Resource Hierarchy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-mover/resource-hierarchy.md
Previously updated : 03/27/2023 Last updated : 07/25/2023 <!--
REVIEW Engineering: not reviewed
EDIT PASS: not started Initial doc score: 86
-Current doc score: 96 (1915 words and 0 issues)
+Current doc score: 100 (1570 words and 2 false positive issues)
!######################################################## -->
A storage mover resource is the name of the top-level service resource that you
You're better able to utilize your agents and manage your migrations if all resources find their home in the same storage mover instance.
-An agent can only be registered to one storage mover.
+A migration agent can only be registered to one storage mover.
-When you deploy the resource, your subscription is registered with the *Microsoft.StorageMover* and *Microsoft.HybridCompute* resource providers. You also assign the region in which control messages and metadata about your migration is stored. The region assignment doesn't
-
-<! A storage mover resource has a region you'll assign at the time of it's deployment. The region you select is only determining where control messages and metadata about your migration is stored. The data that is migrated, is sent directly from the agent to the target in Azure Storage. Your files never travel through the Storage Mover service. That means the proximity between source, agent, and target storage is more important for migration performance than the location of your storage mover resource.-->
+When you deploy the resource, your subscription is registered with the *Microsoft.StorageMover* and *Microsoft.HybridCompute* resource providers. You also assign the region in which control messages and metadata about your migration is stored. The Storage Mover resource itself isn't directly responsible for migrating your data. Instead, a migration agent copies your data from the source and sends it directly to the target in Azure Storage. Because the agent performs most of the work, the proximity between source, agent, and target storage is more important for migration performance than your storage mover resource's location.
:::image type="content" source="media/across-articles/data-vs-management-path.png" alt-text="A diagram illustrating the data flow by showing two arrows. The first arrow represents data traveling to a storage account from the source or agent and a second arrow represents only the management or control info to the storage mover resource or service." lightbox="media/across-articles/data-vs-management-path-large.png"::: ## Migration agent
-Storage Mover is a hybrid service and utilizes one or more migration agents to facilitate migrations. The agent is a virtual machine you are running in your network. It's also the name of a resource, parented to the storage mover resource you've deployed in your resource group.
-
-Your agents appear in your storage mover after they've been registered. Registration creates the trust relationship to the storage mover resource you've selected during registration and enables you to manage all migration related aspects from the cloud service. (Azure portal, Azure PowerShell/CLI)
+Storage Mover is a hybrid service and utilizes one or more migration agents to facilitate migrations. The agent is a virtual machine that runs within your network. It's also the name of a resource, parented to the storage mover resource you've deployed in your resource group.
You can deploy several migration agent VMs and register each with a unique name to the same storage mover resource. If you have migration needs in different locations, it's best to have a migration agent very close to the source storage you'd like to migrate.
+Your agents appear in your storage mover after they've been registered. Registration creates a trust relationship with the storage mover resource you select. This trust enables you to manage all migration-related aspects from the cloud service through the Azure portal, Azure PowerShell, or the Azure CLI.
+ > [!TIP] > The proximity and network quality between your migration agent and the target storage in Azure determine migration velocity in early stages of your migration. The region of the storage mover resource you've deployed doesn't play a role for performance.
You can deploy several migration agent VMs and register each with a unique name
## Migration project
-A project allows you to organize your larger scale cloud migration into smaller, more manageable units that make sense for your situation.
+A project allows you to organize your larger scale cloud migrations into smaller, more manageable units that make sense for your situation.
-The smallest unit of a migration can be defined as the contents of one source moving into one target. But data center migrations are rarely that simple. Often multiple sources support one workload and must be migrated together for timely failover of the workload to the new cloud storage locations in Azure.
+The smallest unit of a migration can be defined as the contents of one source moving into one target, but data center migrations are rarely that simple. Often multiple sources support one workload and must be migrated together for timely failover of the workload to the new cloud storage locations in Azure.
In a different example, one source may even need to be split into multiple target locations. The reverse is also possible, where you need to combine multiple sources into subpaths of the same target location in Azure. :::image type="content" source="media/resource-hierarchy/project-illustration.png" alt-text="an image showing the nested relationship of a project into a storage mover resource. It also shows child objects of the resource, called job definitions, described later in this article." lightbox="media/resource-hierarchy/project-illustration-large.png":::
-Grouping sources into a project doesn't mean you have to migrate all of them in parallel. You have control over what to run and when to run it. The remaining paragraphs in this article describe more resources that allow for such fine-grained control.
+Grouping sources into a project doesn't mean you have to migrate all of them in parallel. You have control over what to run and when to run it. The remaining sections in this article describe more resources that allow for such fine-grained control.
> [!TIP] > You can optionally add a description to your project. A description can help to keep track of additional information for your project. If you've already created a migration plan elsewhere, the description field can be used to link this project to your plan. You can also use it to record information a colleague might need later on. You can add descriptions to all storage mover resources and each description can contain up to 1024 characters. ## Job definition
-A Job definition is contained in a project. The job definition describes a source, a target, and the migration settings you want to use the next time you start a copy from the defined source to the defined target in Azure.
+A job definition is contained within a project. The job definition describes a source, a target, and the migration settings you want to use the next time you start a copy from the defined source to the defined target in Azure.
> [!IMPORTANT]
-> Once a job definition was created, source and target information cannot be changed. However, migration settings can be changed any time. A change won't affect a running migration job but take effect the next time you start a migration job.
+> After a job definition is created, source and target information cannot be changed. However, migration settings can be changed any time. A change won't affect a running migration job, but will take effect the next time you start a migration job.
-Here's an example for why changing source and target information is prohibited in a job definition. Let's say you define *Share A* as the source and copy to your target a few times. Let's also assume the source can be changed in a job definition, and you change it to *Share B*. This change could have potentially dangerous consequences.
+It may not seem immediately logical that changing source and target information in an existing job definition isn't permitted. By way of example, imagine you define *Share A* as the migration source and then run several copy operations. Imagine also that you later change the migration source to *Share B*. This change could have potentially dangerous consequences.
-A common migration setting is to mirrors source to target. If that is applied to our example, files from *Share A* might get deleted in the target, as soon as you start copying files from *Share B*. To prevent mistakes and maintain the integrity of a job run history, you can't edit the source or target in a job definition. Source, target, and their optional subpath information are locked when a job definition is created. If you want to reuse the same target but use a different source (or vice versa), you have to create a new job definition.
+*Mirroring* is a common migration setting that creates a "mirror" image of a source within a target. If this setting is applied to our example, files from *Share A* might get deleted in the target when the copy operation begins migrating files from *Share B*. To prevent mistakes and maintain the integrity of a job run history, you can't edit a provisioned job definition's source or target. Source, target, and their optional subpath information are locked when a job definition is created. If you want to reuse the same target but use a different source (or vice versa), you're required to create a new job definition.
The job definition also keeps a historic record of past copy runs and their results.
Learn more about telemetry, metrics and logs in the job definition monitoring ar
Migrations require well-defined source and target locations. While the term *endpoint* is often used in networking, here it describes a storage location to a high level of detail. An endpoint contains the path to the storage location and additional information.
-While there's a single endpoint resource, the properties of each endpoint may vary, based on the type of endpoint. For example, an NFS share endpoint needs fundamentally different information as compared to an Azure Storage blob container endpoint.
+While there's a single endpoint resource, the properties of each endpoint may vary, based on the type of endpoint. For example, NFS shares, SMB shares, and Azure Storage blob container endpoints each require fundamentally different information.
Endpoints are used in the creation of a job definition. Only certain types of endpoints may be used as a source or a target, respectively. Refer to the [Supported sources and targets](service-overview.md#supported-sources-and-targets) section in the Azure Storage Mover overview article.
storage-mover Service Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-mover/service-overview.md
Previously updated : 03/27/2023 Last updated : 08/04/2023
CONTENT: final
REVIEW Stephen/Fabian: COMPLETE EDIT PASS: not started
-Document score: 100 (393 words and 0 issues)
+Document score: 100 (505 words and 0 issues)
!######################################################## -->
Azure Storage Mover is a new, fully managed migration service that enables you t
## Supported sources and targets
-The current Azure Storage Mover release supports migrations from NFS shares on a NAS or server device within your network to an Azure blob container.
> [!IMPORTANT] > Storage accounts with the [hierarchical namespace service (HNS)](../storage/blobs/data-lake-storage-namespace.md) feature enabled are not supported at this time. An Azure blob container without the hierarchical namespace service feature doesn't have a traditional file system. A standard blob container supports "virtual" folders. Files in folders on the source get their path prepended to their name and placed in a flat list in the target blob container.
-The Storage Mover service represents empty folders as an empty blob in the target. The metadata of the source folder is persisted in the custom metadata field of this blob, just as they are with files.
+When migrating data from a source endpoint using the SMB protocol, Storage Mover supports the same level of file fidelity as the underlying Azure file share. Folder structure and metadata values such as file and folder timestamps, ACLs, and file attributes are maintained. When migrating data from an NFS source, the Storage Mover service represents each empty folder as an empty blob in the target. The metadata of the source folder is persisted in the custom metadata field of this blob, just as it is with files.
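If you want to verify how folder metadata lands in the target after an NFS migration, the following minimal sketch uses the Azure Blob Storage client library for Java to read the custom metadata of such a placeholder blob. The account, container, and blob names are illustrative assumptions, not values produced by Storage Mover.

```java
import com.azure.identity.DefaultAzureCredentialBuilder;
import com.azure.storage.blob.BlobClient;
import com.azure.storage.blob.BlobContainerClient;
import com.azure.storage.blob.BlobServiceClientBuilder;

import java.util.Map;

public class InspectFolderBlobMetadata {
    public static void main(String[] args) {
        // Placeholder account and container names, used for illustration only.
        BlobContainerClient container = new BlobServiceClientBuilder()
            .endpoint("https://<storage-account>.blob.core.windows.net")
            .credential(new DefaultAzureCredentialBuilder().build())
            .buildClient()
            .getBlobContainerClient("<container-name>");

        // The empty blob that represents a migrated (empty) source folder.
        BlobClient folderBlob = container.getBlobClient("path/to/empty-folder");

        // The blob's custom metadata carries the source folder's metadata values.
        Map<String, String> metadata = folderBlob.getProperties().getMetadata();
        metadata.forEach((key, value) -> System.out.println(key + " = " + value));
    }
}
```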
## Fully managed migrations
storage-mover Status Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-mover/status-code.md
Previously updated : 03/20/2023 Last updated : 08/07/2023
REVIEW Engineering: not reviewed
EDIT PASS: completed Initial doc score: 79
-Current doc score: 100 (552, 0)
+Current doc score: 100 (648, 0)
!######################################################## -->
Current doc score: 100 (552, 0)
An Azure Storage Mover agent uses string status codes for statuses that are conveyed to the end user. All status codes have the prefix *AZSM* followed by four decimal digits. The first decimal digit indicates the high-level scope of the status. Each status code should belong to one of the following scopes:
-- Status that applies to the entire agent.<br />These codes use the scope digit '0', and therefore and have the prefix "AZSM0".
-- Status that applies to a specific job run by the agent.<br />These codes use the scope digit '1' and therefore have the prefix "AZSM1".
-- Status that applies to a specific file or directory transferred by a job run by the agent.<br />These codes use the scope digit '2' and therefore have the prefix "AZSM2".
+- Status that applies to the entire agent.<br />These codes use the scope digit `0`, and therefore and have the prefix `AZSM0`.
+- Status that applies to a specific job run by the agent.<br />These codes use the scope digit `1` and therefore have the prefix `AZSM1`.
+- Status that may be set by the agent on a specific file or directory that is transferred by a job run by the agent.<br />These codes use the scope digit `2` and therefore have the prefix `AZSM2`.
Each of these scopes further divides statuses into categories and subcategories. Each subcategory typically reserves 20 status codes to accommodate future expansion.
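If you process agent output programmatically, a small helper like the following hypothetical sketch (not part of any Storage Mover SDK) can classify a status code by its scope digit:

```java
public class StatusCodeScope {

    // Classifies an AZSM status code by its scope digit (the first digit after "AZSM").
    static String scopeOf(String statusCode) {
        if (statusCode == null || !statusCode.startsWith("AZSM") || statusCode.length() != 8) {
            return "not an AZSM status code";
        }
        switch (statusCode.charAt(4)) {
            case '0': return "applies to the entire agent";
            case '1': return "applies to a specific job run";
            case '2': return "applies to a specific file or directory in a job run";
            default:  return "unknown scope";
        }
    }

    public static void main(String[] args) {
        System.out.println("AZSM1001: " + scopeOf("AZSM1001")); // job-run scope
        System.out.println("AZSM2080: " + scopeOf("AZSM2080")); // file/directory scope
    }
}
```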
Each of these scopes further divides statuses into categories and subcategories.
|Error Code |Error Message | Details/Troubleshooting steps/Mitigation |
|--|--|--|
-| <a name="AZSM1001"></a>AZSM1001 |Failed to mount source path | Verify the provided server information, name or IP-address, is valid, or the source location is correct. |
+| <a name="AZSM1001"></a>AZSM1001 |Failed to mount source path | Verify that the provided server name or IP address is valid and that the source location is correct. If using SMB, verify that the provided username and password are correct. |
| <a name="AZSM1002"></a>AZSM1002 |Encountered an error while scanning the source | Retry or create a support ticket. |
-| <a name="AZSM1003"></a>AZSM1003 |Failed to access source folder due to permission issues | Check if the agent has been granted permissions correctly to the source file share. |
+| <a name="AZSM1003"></a>AZSM1003 |Failed to access source folder due to permission issues | Verify that the agent has been granted permissions to the source file share. |
| <a name="AZSM1004"></a>AZSM1004 |Source path provided is invalid | Create a new endpoint with a valid source share path and update the job definition and retry. |
| <a name="AZSM1020"></a>AZSM1020 |Miscellaneous error while accessing source | Retry or create a support ticket. |
| <a name="AZSM1021"></a>AZSM1021 |Failed to access target folder due to permission issues | Retry or create a support ticket. |
| <a name="AZSM1022"></a>AZSM1022 |Target path provided is invalid | Create a new endpoint with a valid target container and path and update the job definition and retry. |
| <a name="AZSM1023"></a>AZSM1023 |Lease expired for this agent on the target container | Retry or create a support ticket. |
-| <a name="AZSM1024"></a>AZSM1024 |Authorization failure on claiming the target container | The agent doesn't have the permission to access the target container. The role assignment is performed automatically while running jobs from the portal. If you're using the APIs/Powershell cmdlets/SDKs, then manually create a 'Storage Blob Data Contributor' role assignment for the agent to access the target storage account blob container. The [Assign an Azure role for access to blob data](/azure/storage/blobs/assign-azure-role-data-access) article may help resolve this issue. |
-| <a name="AZSM1025"></a>AZSM1025 |Authentication failure on claiming the target container | Retry or create a support ticket. |
-| <a name="AZSM1026"></a>AZSM1026 |Blob type in the target container not supported by the agent | This blob type is unsupported by the current Storage Mover agent. |
-| <a name="AZSM1040"></a>AZSM1040 |Miscellaneous error while accessing target | Retry or create a support ticket. |
-| <a name="AZSM1041"></a>AZSM1041 |Failed to send job progress | Retry or create a support ticket. |
-| <a name="AZSM1042"></a>AZSM1042 |Failed to create job | Retry or create a support ticket. |
-| <a name="AZSM1043"></a>AZSM1043 |Failed to resume job | Retry or create a support ticket. |
-| <a name="AZSM1044"></a>AZSM1044 |Failed to finalize the job | Retry or create a support ticket. |
-| <a name="AZSM1045"></a>AZSM1045 |Job was aborted while it was still running | Retry or create a support ticket. |
-| <a name="AZSM1060"></a>AZSM1060 |Miscellaneous error during job execution | Retry or create a support ticket. |
+| <a name="AZSM1024"></a>AZSM1024 |Authorization failure accessing the target location | The agent doesn't have sufficient permission to access the target location. RBAC (role-based access control) role assignments are performed automatically when resources are created using the Azure portal. If you're using the APIs, PowerShell cmdlets, or SDKs, manually create a role assignment for the agent's managed identity to access the target location. For NFS, use the *Storage Blob Data Contributor* role assignment. For SMB, use *Storage File Data Privileged Contributor*. The [Assign an Azure role for access to blob data](/azure/storage/blobs/assign-azure-role-data-access) article may help resolve this issue. |
+| <a name="AZSM1025"></a>**AZSM1025** |Authentication failure accessing the source location | Verify that the agent has been granted permissions to the source location. |
+| <a name="AZSM1026"></a>**AZSM1026** |Target type is not supported by the agent | This target type is unsupported by the current Storage Mover agent. |
+| <a name="AZSM1027"></a>**AZSM1027** |The target location is busy | The agent can't access the target location because an existing lease is active. This error may be caused by another agent writing to the location. Ensure no other job is running against the target. Retry or create a support ticket. |
+| <a name="AZSM1028"></a>**AZSM1028** |Key Vault access failure | Verify that the agent has been granted permissions to the relevant Key Vault. |
+| <a name="AZSM1040"></a>**AZSM1040** |Miscellaneous error while accessing target | It's likely that this error is temporary. Retry the migration job again. If the issue persists, please create a support ticket for further assistance. |
+| <a name="AZSM1041"></a>**AZSM1041** |Failed to send job progress | It's likely that this error is temporary. Retry the migration job again. If the issue persists, please create a support ticket for further assistance. |
+| <a name="AZSM1042"></a>**AZSM1042** |Failed to create job | It's likely that this error is temporary. Retry the migration job again. If the issue persists, please create a support ticket for further assistance. |
+| <a name="AZSM1043"></a>**AZSM1043** |Failed to resume job | Retry or create a support ticket. |
+| <a name="AZSM1044"></a>**AZSM1044** |Failed to finalize the job | Retry or create a support ticket. |
+| <a name="AZSM1045"></a>**AZSM1045** |Job was aborted while it was still running | Retry or create a support ticket. |
+| <a name="AZSM1060"></a>**AZSM1060** |Miscellaneous error during job execution | Retry or create a support ticket. |
+| <a name="AZSM2021"></a>**AZSM2021** |File type not supported by target. | This target type does not support files of this type. Reference scalability and performance targets for [Azure Files](/azure/storage/files/storage-files-scale-targets) and [Azure Blob Storage](/azure/storage/blobs/scalability-targets) for additional information. |
+| <a name="AZSM2024"></a>**AZSM2024** |Source path length longer than max supported by target. | Refer to guidance within the [Naming and referencing shares, directories, files, and metadata](/rest/api/storageservices/naming-and-referencing-shares--directories--files--and-metadata) article. |
+| <a name="AZSM2026"></a>**AZSM2026** |Source file has size larger than max supported by target. | Refer to guidance within the [Naming and referencing shares, directories, files, and metadata](/rest/api/storageservices/naming-and-referencing-shares--directories--files--and-metadata) article. |
+| <a name="AZSM2061"></a>**AZSM2061** |Unknown Error encountered when scanning the source. | This is probably a transient error. Rerun the migration job. |
+| <a name="AZSM2062"></a>**AZSM2062** |Failed to read source file due to permission issues. | Verify that the agent has been granted permissions to the source location. |
+| <a name="AZSM2063"></a>**AZSM2063** |Encountered I/O error while reading source file. | It's likely that this error is temporary. Retry the migration job again. If the issue persists, please create a support ticket for further assistance. |
+| <a name="AZSM2069"></a>**AZSM2069** |Failed to read target file due to permission issues. | Verify that the agent has been granted permissions to the target location. |
+| <a name="AZSM2070"></a>**AZSM2070** |Cannot write blob because it has an active lease. | This error may be caused by another agent writing to the location. Ensure no other job is running against the target. Retry or create a support ticket. |
+| <a name="AZSM2080"></a>**AZSM2080** |Copy failed due to an unknown error. | It's likely that this error is temporary. Retry the migration job again. If the issue persists, please create a support ticket for further assistance. |
storage-mover Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-mover/troubleshooting.md
Previously updated : 10/06/2022 Last updated : 08/04/2023
EDIT PASS: not started
Your organization's migration project utilizes the Azure Storage Mover to do the bulk of the migration-specific work. An unexpected issue within one of the components has the potential to bring the migration to a standstill. Storage Mover agents are capable of generating a support bundle to help resolve such issues.
-This article will help you through the process of creating the support bundle on the agent, retrieving the compressed log bundle, and accessing the logs it contains. This article assumes that you're using the virtual machine (VM) host operating system (OS), and that the host is able to connect to the guest VM.
+This article helps you through the process of creating the support bundle on the agent, retrieving the compressed log bundle, and accessing the logs it contains. This article assumes that you're using the virtual machine (VM) host operating system (OS), and that the host is able to connect to the guest VM.
-You'll need a secure FTP client on the host if you want to transfer the bundle. A secure FTP client is installed on most typical Windows instances.
+You need a secure FTP client on the host if you want to transfer the bundle. A secure FTP client is installed on most typical Windows instances.
-You'll also need a Zstandard (ZSTD) archive tool such as WinRAR to extract the compressed logs if you want to review them yourself.
+You also need a Zstandard (ZSTD) archive tool such as WinRAR to extract the compressed logs if you want to review them yourself.
## About the agent support bundle
A support bundle is a set of logs that can help determine the underlying cause o
After the logs within the bundle are extracted, they can be used to locate and diagnose issues that have occurred during the migration.
-Extracting the logs from the Zstd compressed tar file will create the following file structure:
+Extracting the logs from the Zstd compressed tar file creates the following file structure:
- ***misc***
- - df.txt - Filesystem usage
- - dmesg.txt - Kernel messages
- - files.txt - Directory listings
- - ifconfig.txt - Network interface settings
- - meminfo.txt - Memory usage
- - netstat.txt - Network connections
- - top.txt - Process memory and CPU usage
+ - df.txt - Filesystem usage
+ - dmesg.txt - Kernel messages
+ - files.txt - Directory listings
+ - free.txt - Display amount of free and used memory in the system
+ - ifconfig.txt - Network interface settings
+ - meminfo.txt - Memory usage
+ - netstat.txt - Network connections
+ - top.txt - Process memory and CPU usage
- ***root***
- - **xdmdata**
- - archive - Archived job logs
- - azcopy - AzCopy logs
- - kv - Agent persisted data
- - xdmsh - Restricted shell logs
+ - **xdmdata**
+ - archive - Archived job logs
+ - azcopy - AzCopy logs
+ - copy log - Copy logs
+ - kv - Agent persisted data
+ - xdmsh - Restricted shell logs
- ***run***
- - **xdatamoved**
- - datadir - Location of data directory
- - kv - Agent persisted data
- - pid - Agent process ID
- - watchdog
+ - **xdatamoved**
+ - datadir - Location of data directory
+ - kv - Agent persisted data
+ - pid - Agent process ID
+ - watchdog
- ***var***
- - **log** - Various agent and system logs
- - xdatamoved.err - Agent error log
- - xdatamoved.log - Agent log
- - xdatamoved.warn - Agent warning log
- - xdmreg.log - Registration service log
+ - **log** - Various agent and system logs
+ - xdatamoved.err - Agent error log
+ - xdatamoved.log - Agent log
+ - xdatamoved.warn - Agent warning log
+ - xdmreg.log - Registration service log
## Generate the agent support bundle
-The first step to identifying the root cause of the error is to collect the support bundle from the agent. To retrieve the bundle, complete the steps listed below.
+The first step to identifying the root cause of the error is to collect the support bundle from the agent. To retrieve the bundle, complete the following steps.
-1. Connect to the agent using the administrative credentials. The default password for agents `admin`, though you'll need to supply the updated password if it was changed. In the example provided, the agent maintains the default password.
-1. From the root menu, choose option `6`, the **Collect support bundle** command, to generate the bundle with a unique filename. The support bundle will be created and stored in a share, locally on the agent. A confirmation message containing the name of the support bundle is displayed. The commands necessary to retrieve the bundle are also displayed as shown in the example provided. These commands should be copied and are utilized in the [Retrieve the agent support bundle](#retrieve-the-agent-support-bundle) section.
+1. Connect to the agent using the administrative credentials. The default password for agents is `admin`, though you need to supply the updated password if it was changed. In the example provided, the agent maintains the default password.
+
+1. From the root menu, choose option `6`, the **Collect support bundle** command, to generate the bundle with a unique filename. The support bundle is created in a share, locally on the agent. A confirmation message containing the name of the support bundle is displayed. The commands necessary to retrieve the bundle are also displayed as shown in the example provided. These commands should be copied and are utilized in the [Retrieve the agent support bundle](#retrieve-the-agent-support-bundle) section.
:::image type="content" source="media/troubleshooting/bundle-collect-sml.png" alt-text="Screen capture of the agent menu showing the results of the Collect Support Bundle command." lightbox="media/troubleshooting/bundle-collect-lrg.png":::
storage Data Lake Storage Directory File Acl Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-directory-file-acl-dotnet.md
A container acts as a file system for your files. You can create a container by
- [DataLakeServiceClient.CreateFileSystem](/dotnet/api/azure.storage.files.datalake.datalakeserviceclient.createfilesystemasync)
-This example creates a container and returns a [DataLakeFileSystemClient](/dotnet/api/azure.storage.files.datalake.datalakefilesystemclient) object for later use:
+The following code example creates a container and returns a [DataLakeFileSystemClient](/dotnet/api/azure.storage.files.datalake.datalakefilesystemclient) object for later use:
:::code language="csharp" source="~/azure-storage-snippets/blobs/howto/dotnet/dotnet-v12/CRUD_DataLake.cs" id="Snippet_CreateContainer":::
You can rename or move a directory by using the following method:
- [DataLakeDirectoryClient.RenameAsync](/dotnet/api/azure.storage.files.datalake.datalakedirectoryclient.renameasync)
-Pass the path of the desire directory as a parameter. The following code example shows how to rename a subdirectory:
+Pass the path of the desired directory as a parameter. The following code example shows how to rename a subdirectory:
:::code language="csharp" source="~/azure-storage-snippets/blobs/howto/dotnet/dotnet-v12/CRUD_DataLake.cs" id="Snippet_RenameDirectory":::
storage Data Lake Storage Directory File Acl Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-directory-file-acl-java.md
Previously updated : 02/07/2023 Last updated : 08/08/2023 ms.devlang: java
To learn about how to get, set, and update the access control lists (ACL) of dir
- An Azure subscription. For more information, see [Get Azure free trial](https://azure.microsoft.com/pricing/free-trial/). -- A storage account that has hierarchical namespace enabled. Follow [these](create-data-lake-storage-account.md) instructions to create one.
+- A storage account that has hierarchical namespace enabled. Follow [these instructions](create-data-lake-storage-account.md) to create one.
## Set up your project To get started, open [this page](https://search.maven.org/artifact/com.azure/azure-storage-file-datalake) and find the latest version of the Java library. Then, open the *pom.xml* file in your text editor. Add a dependency element that references that version.
-If you plan to authenticate your client application by using Azure Active Directory (Azure AD), then add a dependency to the Azure Secret Client Library. For more information, see [Adding the Secret Client Library package to your project](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/identity/azure-identity#adding-the-package-to-your-project).
+If you plan to authenticate your client application by using Azure Active Directory (Azure AD), then add a dependency to the Azure Identity library. For more information, see [Azure Identity client library for Java](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/identity/azure-identity#adding-the-package-to-your-project).
Next, add these imports statements to your code file. ```java
+import com.azure.identity.*;
import com.azure.storage.common.StorageSharedKeyCredential;
-import com.azure.storage.file.datalake.DataLakeDirectoryClient;
-import com.azure.storage.file.datalake.DataLakeFileClient;
-import com.azure.storage.file.datalake.DataLakeFileSystemClient;
-import com.azure.storage.file.datalake.DataLakeServiceClient;
-import com.azure.storage.file.datalake.DataLakeServiceClientBuilder;
-import com.azure.storage.file.datalake.models.ListPathsOptions;
-import com.azure.storage.file.datalake.models.PathItem;
-import com.azure.storage.file.datalake.models.AccessControlChangeCounters;
-import com.azure.storage.file.datalake.models.AccessControlChangeResult;
-import com.azure.storage.file.datalake.models.AccessControlType;
-import com.azure.storage.file.datalake.models.PathAccessControl;
-import com.azure.storage.file.datalake.models.PathAccessControlEntry;
-import com.azure.storage.file.datalake.models.PathPermissions;
-import com.azure.storage.file.datalake.models.PathRemoveAccessControlEntry;
-import com.azure.storage.file.datalake.models.RolePermissions;
-import com.azure.storage.file.datalake.options.PathSetAccessControlRecursiveOptions;
+import com.azure.core.http.rest.PagedIterable;
+import com.azure.core.util.BinaryData;
+import com.azure.storage.file.datalake.*;
+import com.azure.storage.file.datalake.models.*;
+import com.azure.storage.file.datalake.options.*;
```
-## Connect to the account
+## Authorize access and connect to data resources
-To use the snippets in this article, you'll need to create a **DataLakeServiceClient** instance that represents the storage account.
+To work with the code examples in this article, you need to create an authorized [DataLakeServiceClient](/java/api/com.azure.storage.file.datalake.datalakeserviceclient) instance that represents the storage account. You can authorize a `DataLakeServiceClient` object using Azure Active Directory (Azure AD), an account access key, or a shared access signature (SAS).
-### Connect by using Azure Active Directory (Azure AD)
+### [Azure AD](#tab/azure-ad)
You can use the [Azure identity client library for Java](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/identity/azure-identity) to authenticate your application with Azure AD.
Create a [DataLakeServiceClient](/java/api/com.azure.storage.file.datalake.datal
:::code language="java" source="~/azure-storage-snippets/blobs/howto/Java/Java-v12/src/main/java/com/datalake/manage/Authorize_DataLake.java" id="Snippet_AuthorizeWithAzureAD":::
-To learn more about using **DefaultAzureCredential** to authorize access to data, see [Azure Identity client library for Java](/java/api/overview/azure/identity-readme).
+To learn more about using `DefaultAzureCredential` to authorize access to data, see [Azure Identity client library for Java](/java/api/overview/azure/identity-readme).
-### Connect by using an account key
+### [SAS token](#tab/sas-token)
+
+To use a shared access signature (SAS) token, provide the token as a string and initialize a [DataLakeServiceClient](/java/api/com.azure.storage.file.datalake.datalakeserviceclient) object. If your account URL includes the SAS token, omit the credential parameter.
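A minimal sketch of this approach, with placeholder values for the account endpoint and SAS token, might look like the following:

```java
import com.azure.storage.file.datalake.DataLakeServiceClient;
import com.azure.storage.file.datalake.DataLakeServiceClientBuilder;

public class ConnectWithSas {
    public static void main(String[] args) {
        // Placeholder values; supply your own account endpoint and SAS token.
        String endpoint = "https://<storage-account>.dfs.core.windows.net";
        String sasToken = "<sas-token>";

        DataLakeServiceClient serviceClient = new DataLakeServiceClientBuilder()
            .endpoint(endpoint)
            .sasToken(sasToken)
            .buildClient();

        System.out.println("Connected to: " + serviceClient.getAccountUrl());
    }
}
```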
++
+To learn more about generating and managing SAS tokens, see the following article:
+
+- [Grant limited access to Azure Storage resources using shared access signatures (SAS)](../common/storage-sas-overview.md?toc=/azure/storage/blobs/toc.json)
+
+### [Account key](#tab/account-key)
You can authorize access to data using your account access keys (Shared Key). This example creates a [DataLakeServiceClient](/java/api/com.azure.storage.file.datalake.datalakeserviceclient) instance that is authorized with the account key.
You can authorize access to data using your account access keys (Shared Key). Th
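As a rough sketch, assuming placeholder values for the account name and key, the Shared Key approach typically looks like this:

```java
import com.azure.storage.common.StorageSharedKeyCredential;
import com.azure.storage.file.datalake.DataLakeServiceClient;
import com.azure.storage.file.datalake.DataLakeServiceClientBuilder;

public class ConnectWithAccountKey {
    public static void main(String[] args) {
        // Placeholder values; supply your own account name and key.
        String accountName = "<storage-account>";
        String accountKey = "<account-key>";

        StorageSharedKeyCredential credential =
            new StorageSharedKeyCredential(accountName, accountKey);

        DataLakeServiceClient serviceClient = new DataLakeServiceClientBuilder()
            .endpoint("https://" + accountName + ".dfs.core.windows.net")
            .credential(credential)
            .buildClient();

        System.out.println("Connected to: " + serviceClient.getAccountUrl());
    }
}
```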
[!INCLUDE [storage-shared-key-caution](../../../includes/storage-shared-key-caution.md)] ++ ## Create a container
-A container acts as a file system for your files. You can create one by calling the **DataLakeServiceClient.createFileSystem** method.
+A container acts as a file system for your files. You can create a container by using the following method:
-This example creates a container named `my-file-system`.
+- [DataLakeServiceClient.createFileSystem](/java/api/com.azure.storage.file.datalake.datalakeserviceclient#method-details)
+
+The following code example creates a container and returns a [DataLakeFileSystemClient](/java/api/com.azure.storage.file.datalake.datalakefilesystemclient) object for later use:
:::code language="java" source="~/azure-storage-snippets/blobs/howto/Java/Java-v12/src/main/java/com/datalake/manage/CRUD_DataLake.java" id="Snippet_CreateFileSystem"::: ## Create a directory
-Create a directory reference by calling the **DataLakeFileSystemClient.createDirectory** method.
+You can create a directory reference in the container by using the following method:
+
+- [DataLakeFileSystemClient.createDirectory](/java/api/com.azure.storage.file.datalake.datalakefilesystemclient#method-details)
-This example adds a directory named `my-directory` to a container, and then adds a subdirectory named `my-subdirectory`.
+The following code example adds a directory to a container, then adds a subdirectory and returns a [DataLakeDirectoryClient](/java/api/com.azure.storage.file.datalake.datalakedirectoryclient) object for later use:
:::code language="java" source="~/azure-storage-snippets/blobs/howto/Java/Java-v12/src/main/java/com/datalake/manage/CRUD_DataLake.java" id="Snippet_CreateDirectory"::: ## Rename or move a directory
-Rename or move a directory by calling the **DataLakeDirectoryClient.rename** method. Pass the path of the desired directory a parameter.
+You can rename or move a directory by using the following method:
-This example renames a subdirectory to the name `my-subdirectory-renamed`.
+- [DataLakeDirectoryClient.rename](/java/api/com.azure.storage.file.datalake.datalakedirectoryclient#method-details)
+
+Pass the path of the desired directory as a parameter. The following code example shows how to rename a subdirectory:
:::code language="java" source="~/azure-storage-snippets/blobs/howto/Java/Java-v12/src/main/java/com/datalake/manage/CRUD_DataLake.java" id="Snippet_RenameDirectory":::
-This example moves a directory named `my-subdirectory-renamed` to a subdirectory of a directory named `my-directory-2`.
+The following code example shows how to move a subdirectory from one directory to a different directory:
:::code language="java" source="~/azure-storage-snippets/blobs/howto/Java/Java-v12/src/main/java/com/datalake/manage/CRUD_DataLake.java" id="Snippet_MoveDirectory":::
-## Delete a directory
+## Upload a file to a directory
-Delete a directory by calling the **DataLakeDirectoryClient.deleteWithResponse** method.
+You can upload content to a new or existing file by using the following method:
-This example deletes a directory named `my-directory`.
+- [DataLakeFileClient.upload](/java/api/com.azure.storage.file.datalake.datalakefileclient#method-summary)
+- [DataLakeFileClient.uploadFromFile](/java/api/com.azure.storage.file.datalake.datalakefileclient#method-summary)
+The following code example shows how to upload a local file to a directory using the `uploadFromFile` method:
-## Upload a file to a directory
-First, create a file reference in the target directory by creating an instance of the **DataLakeFileClient** class. Upload a file by calling the **DataLakeFileClient.append** method. Make sure to complete the upload by calling the **DataLakeFileClient.FlushAsync** method.
+You can use this method to create and upload content to a new file, or you can set the `overwrite` parameter to `true` to overwrite an existing file.
-This example uploads a text file to a directory named `my-directory`.
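As a brief illustration, the following sketch uploads a local file into a directory; the account, file system, directory, and path names are placeholders:

```java
import com.azure.identity.DefaultAzureCredentialBuilder;
import com.azure.storage.file.datalake.DataLakeDirectoryClient;
import com.azure.storage.file.datalake.DataLakeFileClient;
import com.azure.storage.file.datalake.DataLakeServiceClientBuilder;

public class UploadFromFileExample {
    public static void main(String[] args) {
        // Placeholder names used for illustration.
        DataLakeDirectoryClient directoryClient = new DataLakeServiceClientBuilder()
            .endpoint("https://<storage-account>.dfs.core.windows.net")
            .credential(new DefaultAzureCredentialBuilder().build())
            .buildClient()
            .getFileSystemClient("<file-system>")
            .getDirectoryClient("my-directory");

        // Create (or overwrite) the file in the directory and upload the local file's contents.
        DataLakeFileClient fileClient = directoryClient.getFileClient("uploaded-file.txt");
        fileClient.uploadFromFile("data/uploaded-file.txt", true); // overwrite = true
    }
}
```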
+## Append data to a file
+You can upload data to be appended to a file by using the following method:
-> [!TIP]
-> If your file size is large, your code will have to make multiple calls to the **DataLakeFileClient.append** method. Consider using the **DataLakeFileClient.uploadFromFile** method instead. That way, you can upload the entire file in a single call.
->
-> See the next section for an example.
+- [DataLakeFileClient.append](/java/api/com.azure.storage.file.datalake.datalakefileclient#method-summary)
-## Upload a large file to a directory
+The following code example shows how to append data to the end of a file using these steps:
-Use the **DataLakeFileClient.uploadFromFile** method to upload large files without having to make multiple calls to the **DataLakeFileClient.append** method.
+- Create a `DataLakeFileClient` object to represent the file resource you're working with.
+- Upload data to the file using the `DataLakeFileClient.append` method.
+- Complete the upload by calling the `DataLakeFileClient.flush` method to write the previously uploaded data to the file.
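A minimal sketch of these steps, with placeholder account, file system, and file names, might look like the following:

```java
import com.azure.core.util.BinaryData;
import com.azure.identity.DefaultAzureCredentialBuilder;
import com.azure.storage.file.datalake.DataLakeFileClient;
import com.azure.storage.file.datalake.DataLakeServiceClientBuilder;

public class AppendToFileExample {
    public static void main(String[] args) {
        // Placeholder names used for illustration.
        DataLakeFileClient fileClient = new DataLakeServiceClientBuilder()
            .endpoint("https://<storage-account>.dfs.core.windows.net")
            .credential(new DefaultAzureCredentialBuilder().build())
            .buildClient()
            .getFileSystemClient("<file-system>")
            .getDirectoryClient("my-directory")
            .getFileClient("log.txt");

        long fileSize = fileClient.getProperties().getFileSize();
        BinaryData data = BinaryData.fromString("Data appended to end of file");

        // Stage the new bytes at the current end of the file, then flush to commit them.
        fileClient.append(data, fileSize);
        fileClient.flush(fileSize + data.getLength(), true);
    }
}
```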
## Download from a directory
-First, create a **DataLakeFileClient** instance that represents the file that you want to download. Use the **DataLakeFileClient.read** method to read the file. Use any Java file processing API to save bytes from the stream to a file.
+The following code example shows how to download a file from a directory to a local file using these steps:
+
+- Create a `DataLakeFileClient` object to represent the file that you want to download.
+- Use the `DataLakeFileClient.readToFile` method to read the file. This example sets the `overwrite` parameter to `true`, which overwrites an existing file.
:::code language="java" source="~/azure-storage-snippets/blobs/howto/Java/Java-v12/src/main/java/com/datalake/manage/CRUD_DataLake.java" id="Snippet_DownloadFile"::: ## List directory contents
-This example prints the names of each file that is located in a directory named `my-directory`.
+You can list directory contents by using the following method and enumerating the result:
+
+- [DataLakeDirectoryClient.listPaths](/java/api/com.azure.storage.file.datalake.datalakedirectoryclient#method-summary)
+
+Enumerating the paths in the result may make multiple requests to the service while fetching the values.
+
+The following code example prints the names of each file that is located in a directory:
:::code language="java" source="~/azure-storage-snippets/blobs/howto/Java/Java-v12/src/main/java/com/datalake/manage/CRUD_DataLake.java" id="Snippet_ListFilesInDirectory":::
+## Delete a directory
+
+You can delete a directory by using one of the following methods:
+
+- [DataLakeDirectoryClient.delete](/java/api/com.azure.storage.file.datalake.datalakedirectoryclient#method-summary)
+- [DataLakeDirectoryClient.deleteIfExists](/java/api/com.azure.storage.file.datalake.datalakedirectoryclient#method-summary)
+- [DataLakeDirectoryClient.deleteWithResponse](/java/api/com.azure.storage.file.datalake.datalakedirectoryclient#method-summary)
+
+The following code example uses `deleteWithResponse` to delete a nonempty directory and all paths beneath the directory:
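As an illustrative sketch (the account, file system, and directory names are placeholders), such a call might look like this:

```java
import com.azure.core.util.Context;
import com.azure.identity.DefaultAzureCredentialBuilder;
import com.azure.storage.file.datalake.DataLakeDirectoryClient;
import com.azure.storage.file.datalake.DataLakeServiceClientBuilder;

import java.time.Duration;

public class DeleteDirectoryExample {
    public static void main(String[] args) {
        // Placeholder names used for illustration.
        DataLakeDirectoryClient directoryClient = new DataLakeServiceClientBuilder()
            .endpoint("https://<storage-account>.dfs.core.windows.net")
            .credential(new DefaultAzureCredentialBuilder().build())
            .buildClient()
            .getFileSystemClient("<file-system>")
            .getDirectoryClient("my-directory");

        // recursive = true deletes the directory and all paths beneath it.
        directoryClient.deleteWithResponse(true, null, Duration.ofSeconds(30), Context.NONE);
    }
}
```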
++ ## See also - [API reference documentation](/java/api/overview/azure/storage-file-datalake-readme)
storage Lifecycle Management Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/lifecycle-management-overview.md
The run conditions are based on age. Current versions use the last modified time
## Lifecycle policy runs
-The platform runs the lifecycle policy once a day. Once you configure a policy, it can take up to 24 hours to go into effect. Once the policy is in effect, it could take up to 24 hours for some actions to run for the first time.
-
-An updated policy takes up to 24 hours to go into effect. Once the policy is in effect, it could take up to 24 hours for the actions to run. Therefore, the policy actions may take up to 48 hours to complete.
+The platform runs the lifecycle policy once a day. Once you configure or edit a policy, it can take up to 24 hours for changes to go into effect. Once the policy is in effect, it could take up to 24 hours for some actions to run. Therefore, the policy actions may take up to 48 hours to complete.
If you disable a policy, then no new policy runs will be scheduled, but if a run is already in progress, that run will continue until it completes.
storage Monitor Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/monitor-blob-storage.md
Previously updated : 03/13/2023 Last updated : 08/08/2023 ms.devlang: csharp
To collect resource logs, you must create a diagnostic setting. When you create
| StorageWrite | Write operations on objects. | | StorageDelete | Delete operations on objects. |
+The **audit** resource log category group allows you to collect the baseline of resource logs that Microsoft deems necessary for auditing your resource. What's collected is dynamic, and Microsoft may change it over time as new resource log categories become available. If you choose the **audit** category group, you can't specify any other resource categories, because the system will decide which logs to collect. For more information, see [Diagnostic settings in Azure Monitor: Resource logs](../../azure-monitor/essentials/diagnostic-settings.md#resource-logs).
+ > [!NOTE] > Data Lake Storage Gen2 doesn't appear as a storage type. That's because Data Lake Storage Gen2 is a set of capabilities available to Blob storage.
storage Storage Blob Delete Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-delete-java.md
This method restores the content and metadata of a soft-deleted blob and any ass
3. The following snippet restores a soft-deleted file named `my-file`.
- This method assumes that you've created a **DataLakeServiceClient** instance. To learn how to create a **DataLakeServiceClient** instance, see [Connect to the account](data-lake-storage-directory-file-acl-java.md#connect-to-the-account).
+ This method assumes that you've created a **DataLakeServiceClient** instance. To learn how to create a **DataLakeServiceClient** instance, see [Connect to the account](data-lake-storage-directory-file-acl-java.md#authorize-access-and-connect-to-data-resources).
```java
storage Storage How To Mount Container Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-how-to-mount-container-linux.md
For example, suppose you are authorizing with the account access keys and storin
accountName myaccount accountKey storageaccesskey containerName mycontainer
+authType Key
```
-The `accountName` is the name of your storage account, and not the full URL.
+The `accountName` is the name of your storage account, and not the full URL. You need to update `myaccount`, `storageaccesskey`, and `mycontainer` with your storage information.
Create this file using:
storage Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/policy-reference.md
Title: Built-in policy definitions for Azure Storage description: Lists Azure Policy built-in policy definitions for Azure Storage. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/03/2023 Last updated : 08/08/2023
storage Storage Use Azcopy Blobs Copy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-azcopy-blobs-copy.md
You can copy blobs, directories, and containers between storage accounts by usin
To see examples for other types of tasks such as uploading files, downloading blobs, and synchronizing with Blob storage, see the links presented in the [Next Steps](#next-steps) section of this article.
-AzCopy uses [server-to-server](/rest/api/storageservices/put-block-from-url) [APIs](/rest/api/storageservices/put-page-from-url), so data is copied directly between storage servers. These copy operations don't use the network bandwidth of your computer.
+AzCopy uses [server-to-server](/rest/api/storageservices/put-block-from-url) [APIs](/rest/api/storageservices/put-page-from-url), so data is copied directly between storage servers.
## Get started
storage Storage Use Azcopy Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-azcopy-files.md
azcopy copy 'https://mystorageaccount.file.core.windows.net/myfileshare/myFileSh
You can use AzCopy to copy files to other storage accounts. The copy operation is synchronous so all files are copied when the command returns.
-AzCopy uses [server-to-server](/rest/api/storageservices/put-block-from-url) [APIs](/rest/api/storageservices/put-page-from-url), so data is copied directly between storage servers. These copy operations don't use the network bandwidth of your computer. You can increase the throughput of these operations by setting the value of the `AZCOPY_CONCURRENCY_VALUE` environment variable. To learn more, see [Increase Concurrency](storage-use-azcopy-optimize.md#increase-concurrency).
+AzCopy uses [server-to-server](/rest/api/storageservices/put-block-from-url) [APIs](/rest/api/storageservices/put-page-from-url), so data is copied directly between storage servers. You can increase the throughput of these operations by setting the value of the `AZCOPY_CONCURRENCY_VALUE` environment variable. To learn more, see [Increase Concurrency](storage-use-azcopy-optimize.md#increase-concurrency).
You can also copy specific versions of a file by referencing the **DateTime** value of a share snapshot. To learn more about share snapshots, see [Overview of share snapshots for Azure Files](../files/storage-snapshots-files.md).
storage Geo Redundant Storage For Large File Shares https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/geo-redundant-storage-for-large-file-shares.md
Azure Files geo-redundancy for large file shares preview is currently available
- Australia Central 2 - Australia East - Australia Southeast
+- Brazil South
+- Brazil Southeast
+- Canada Central
+- Canada East
- Central India - Central US - China East 2
storage Storage Files Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-monitoring.md
Title: Monitoring Azure Files description: Learn how to monitor the performance and availability of Azure Files. Monitor Azure Files data, learn about configuration, and analyze metric and log data.-+ Previously updated : 07/11/2023- Last updated : 08/07/2023+ ms.devlang: csharp
To collect resource logs, you must create a diagnostic setting. When you create
| StorageWrite | Write operations on objects. | | StorageDelete | Delete operations on objects. |
+The **audit** resource log category group allows you to collect the baseline of resource logs that Microsoft deems necessary for auditing your resource. What's collected is dynamic, and Microsoft may change it over time as new resource log categories become available. If you choose the **audit** category group, you can't specify any other resource categories, because the system will decide which logs to collect. For more information, see [Diagnostic settings in Azure Monitor: Resource logs](../../azure-monitor/essentials/diagnostic-settings.md#resource-logs).
+ To get the list of SMB and REST operations that are logged, see [Storage logged operations and status messages](/rest/api/storageservices/storage-analytics-logged-operations-and-status-messages) and [Azure Files monitoring data reference](storage-files-monitoring-reference.md). See [Create diagnostic setting to collect platform logs and metrics in Azure](../../azure-monitor/platform/diagnostic-settings.md) for the detailed process for creating a diagnostic setting using the Azure portal, CLI, and PowerShell. You can also find links to information about how to create a diagnostic setting by using an Azure Resource Manager template or an Azure Policy definition.
storage Monitor Queue Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/monitor-queue-storage.md
Previously updated : 10/06/2022 Last updated : 08/08/2023 ms.devlang: csharp, powershell, azurecli
To collect resource logs, you must create a diagnostic setting. When you create
| **StorageWrite** | Write operations on objects. | | **StorageDelete** | Delete operations on objects. |
+The **audit** resource log category group allows you to collect the baseline of resource logs that Microsoft deems necessary for auditing your resource. What's collected is dynamic, and Microsoft may change it over time as new resource log categories become available. If you choose the **audit** category group, you can't specify any other resource categories, because the system will decide which logs to collect. For more information, see [Diagnostic settings in Azure Monitor: Resource logs](../../azure-monitor/essentials/diagnostic-settings.md#resource-logs).
+ See [Create diagnostic setting to collect platform logs and metrics in Azure](../../azure-monitor/platform/diagnostic-settings.md) for the detailed process for creating a diagnostic setting using the Azure portal, CLI, and PowerShell. You can also find links to information about how to create a diagnostic setting by using an Azure Resource Manager template or an Azure Policy definition. ## Destination limitations
storage Monitor Table Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/tables/monitor-table-storage.md
Previously updated : 10/06/2022 Last updated : 08/08/2023 ms.devlang: csharp
To collect resource logs, you must create a diagnostic setting. When you create
| StorageWrite | Write operations on objects. | | StorageDelete | Delete operations on objects. |
+The **audit** resource log category group allows you to collect the baseline of resource logs that Microsoft deems necessary for auditing your resource. What's collected is dynamic, and Microsoft may change it over time as new resource log categories become available. If you choose the **audit** category group, you can't specify any other resource categories, because the system will decide which logs to collect. For more information, see [Diagnostic settings in Azure Monitor: Resource logs](../../azure-monitor/essentials/diagnostic-settings.md#resource-logs).
+ See [Create diagnostic setting to collect platform logs and metrics in Azure](../../azure-monitor/platform/diagnostic-settings.md) for the detailed process for creating a diagnostic setting using the Azure portal, CLI, and PowerShell. You can also find links to information about how to create a diagnostic setting by using an Azure Resource Manager template or an Azure Policy definition. ## Destination limitations
stream-analytics Connect Job To Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/connect-job-to-vnet.md
Previously updated : 01/04/2021 Last updated : 08/08/2023
-# Connect Stream Analytics jobs to resources in an Azure Virtual Network (VNet)
+# Connect Stream Analytics jobs to resources in an Azure Virtual Network (VNET)
-Your Stream Analytics jobs make outbound connections to your input and output Azure resources to process data in real time and produce results. These input and output resources (for example, Azure Event Hubs and Azure SQL Database) could be behind an Azure firewall or in an Azure Virtual Network (VNet). Stream Analytics service operates from networks that can't be directly included in your network rules.
+Your Stream Analytics jobs make outbound connections to your input and output Azure resources to process data in real time and produce results. These input and output resources (for example, Azure Event Hubs and Azure SQL Database) could be behind an Azure firewall or in an Azure Virtual Network (VNET). Stream Analytics service operates from networks that can't be directly included in your network rules.
-However, there are two ways to securely connect your Stream Analytics jobs to your input and output resources in such scenarios.
-* Using private endpoints in Stream Analytics clusters.
-* Using Managed Identity authentication mode coupled with 'Allow trusted services' networking setting.
+However, there are several ways to securely connect your Stream Analytics jobs to your input and output resources in such scenarios.
+* [Run your Azure Stream Analytics job in an Azure Virtual Network (Public preview)](../stream-analytics/run-job-in-virtual-network.md)
+* Use private endpoints in Stream Analytics clusters.
+* Use Managed Identity authentication mode coupled with 'Allow trusted services' networking setting.
Your Stream Analytics job does not accept any inbound connection.
+## Run your Azure Stream Analytics job in an Azure Virtual Network (Public preview)
+Virtual network (VNET) support enables you to lock down access to Azure Stream Analytics to your virtual network infrastructure. This capability provides you with the benefits of network isolation and can be accomplished by [deploying a containerized instance of your ASA job inside your Virtual Network](../virtual-network/virtual-network-for-azure-services.md). Your VNET injected ASA job can then privately access your resources within the virtual network via:
+
+- [Private endpoints](../private-link/private-endpoint-overview.md), which connect your VNet injected ASA job to your data sources over private links powered by Azure Private Link.
+- [Service endpoints](../virtual-network/virtual-network-service-endpoints-overview.md), which connect your data sources to your VNet injected ASA job.
+- [Service tags](../virtual-network/service-tags-overview.md), which allow or deny traffic to Azure Stream Analytics.
+
+Currently, VNET integration is only available in **select regions**. Visit [this page](../stream-analytics/run-job-in-virtual-network.md) for the most recent list of VNET-enabled regions and to learn how to request it in your region.
+ ## Private endpoints in Stream Analytics clusters. [Stream Analytics clusters](./cluster-overview.md) is a single tenant dedicated compute cluster where you can run your Stream Analytics jobs. You can create managed private endpoints in your Stream Analytics cluster, which allows any jobs running on your cluster to make a secure outbound connection to your input and output resources.
-The creation of private endpoints in your Stream Analytics cluster is a [two step operation](./private-endpoints.md). This option is best suited for medium to large streaming workloads as the minimum size of a Stream Analytics cluster is 36 SUs (although the 36 SUs can be shared by different jobs in various subscriptions or environments like development, test, and production).
+The creation of private endpoints in your Stream Analytics cluster is a [two-step operation](./private-endpoints.md). This option is best suited for medium to large streaming workloads as the minimum size of a Stream Analytics cluster is 12 SU V2 or 36 SU V1s (SUs can be shared by different jobs in various subscriptions or environments like development, test, and production). See [Azure Stream Analytics cluster](../stream-analytics/cluster-overview.md) for more information.
## Managed identity authentication with 'Allow trusted services' configuration Some Azure services provide **Allow trusted Microsoft services** networking setting, which when enabled, allows your Stream Analytics jobs to securely connect to your resource using strong authentication. This option allows you to connect your jobs to your input and output resources without requiring a Stream Analytics cluster and private endpoints. Configuring your job to use this technique is a 2-step operation:
stream-analytics Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/policy-reference.md
Title: Built-in policy definitions for Azure Stream Analytics description: Lists Azure Policy built-in policy definitions for Azure Stream Analytics. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/03/2023 Last updated : 08/08/2023
stream-analytics Stream Analytics Streaming Unit Consumption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-streaming-unit-consumption.md
The underlying compute power for V1 and V2 streaming units is as follows:
![SU V1 and SU V2 mapping.](./media/stream-analytics-scale-jobs/su-conversion-suv2.png)
-> [!Note]
-> If you notice that the SU count in your [Activity log](stream-analytics-job-diagnostic-logs.md) appears to be diffrent than the value that you see on the UI for a particular job, do not be alarmed as long as the mapping is as follows: 1/3 SU V2 = 3, 2/3 SU V2 = 7, 1 SU V2 = 10, 2 SU V2= 20, 3 SU V2 = 30, and so on. This conversion is automatic and has no impact on your job's performance.
- For information on SU pricing, visit the [Azure Stream Analytics Pricing Page](https://azure.microsoft.com/pricing/details/stream-analytics/).
-To achieve low latency stream processing, Azure Stream Analytics jobs perform all processing in memory. When running out of memory, the streaming job fails. As a result, for a production job, it's important to monitor a streaming job's resource usage, and make sure there's enough resource allocated to keep the jobs running 24/7.
+## Understand streaming unit conversions and where they apply
+There's an automatic conversion of streaming units from the REST API layer to the UI. You may also notice this conversion in your [Activity log](stream-analytics-job-diagnostic-logs.md), where the SU count appears different from the value specified on the UI for a particular job. This behavior is by design: REST API fields are limited to integer values, while ASA jobs support fractional nodes (1/3 SU V2 and 2/3 SU V2). To support this, an automatic conversion is applied between the Azure portal and the backend. On the portal, you see 1/3, 2/3, 1, 2, 3, and so on. In activity logs, the REST API, and similar surfaces, the SU V2 values are 3, 7, 10, 20, 30, and so on. The backend value is the portal value multiplied by 10 (rounding up in some cases). This allows the same granularity to be conveyed while eliminating the decimal point at the API layer. The conversion is automatic and has no impact on your job's performance.
+
+| Standard | Standard V2 (UI) | Standard V2 (Backend) |
+| - | - | - |
+| 1 | 1/3 | 3 |
+| 3 | 2/3 | 7 |
+| 6 | 1 | 10 |
+| 12 | 2 | 20 |
+| 18 | 3 | 30 |
+| ... | ... | ... |
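To make the mapping concrete, the following small, hypothetical helper (not part of any Stream Analytics SDK or API) reproduces the conversion described above:

```java
public class StreamingUnitConversion {

    // Converts a Standard V2 SU value as shown in the portal (for example 1/3, 2/3, 1, 2, 3)
    // to the integer value surfaced by the REST API and activity logs.
    static long toBackendValue(double uiSuV2) {
        return Math.round(uiSuV2 * 10); // 1/3 -> 3, 2/3 -> 7, 1 -> 10, 2 -> 20, 3 -> 30
    }

    public static void main(String[] args) {
        System.out.println(toBackendValue(1.0 / 3)); // 3
        System.out.println(toBackendValue(2.0 / 3)); // 7
        System.out.println(toBackendValue(1));       // 10
        System.out.println(toBackendValue(3));       // 30
    }
}
```

For example, calling `toBackendValue(2.0 / 3)` returns 7, which matches the table above.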
++
+## Understanding consumption and memory utilization
+To achieve low latency stream processing, Azure Stream Analytics jobs perform all processing in memory. When running out of memory, the streaming job fails. As a result, for a production job, it's important to monitor a streaming job's resource usage and make sure there are enough resources allocated to keep the jobs running 24/7.
The SU % utilization metric, which ranges from 0% to 100%, describes the memory consumption of your workload. For a streaming job with minimal footprint, this metric is usually between 10% to 20%. If SU% utilization is high (above 80%), or if input events get backlogged (even with a low SU% utilization since it doesn't show CPU usage), your workload likely requires more compute resources, which requires you to increase the number of streaming units. It's best to keep the SU metric below 80% to account for occasional spikes. To react to increased workloads and increase streaming units, consider setting an alert of 80% on the SU Utilization metric. Also, you can use watermark delay and backlogged events metrics to see if there's an impact.
synapse-analytics Continuous Integration Delivery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/cicd/continuous-integration-delivery.md
Last updated 10/08/2021
-ms.search.keywords:
- - CICD
- - Synapse
- - source control
-ms.search.form:
- - CICD
- - source control 1
-ms.search.features:
- - CICD
- - source control 2
-searchScope:
- - Deployment
- - CICD
- - Azure
-tags: CICD, source control 1
+ # Continuous integration and delivery for an Azure Synapse Analytics workspace
To automate the deployment of an Azure Synapse workspace to multiple environment
- Prepare an Azure DevOps project for running the release pipeline. - [Grant any users who will check in code Basic access at the organization level](/azure/devops/organizations/accounts/add-organization-users?view=azure-devops&tabs=preview-page&preserve-view=true), so they can see the repository. - Grant Owner permission to the Azure Synapse repository.-- Make sure that you've created a self-hosted Azure DevOps VM agent or use an Azure DevOps hosted agent.
+- Make sure that you've created a self-hosted Azure DevOps VM agent or use an Azure DevOps hosted agent.
- Grant permissions to [create an Azure Resource Manager service connection for the resource group](/azure/devops/pipelines/library/service-endpoints?view=azure-devops&tabs=yaml&preserve-view=true). - An Azure Active Directory (Azure AD) administrator must [install the Azure DevOps Synapse Workspace Deployment Agent extension in the Azure DevOps organization](/azure/devops/marketplace/install-extension). - Create or nominate an existing service account for the pipeline to run as. You can use a personal access token instead of a service account, but your pipelines won't work after the user account is deleted.
The deployment task supports three types of operations: validate only, deploy, and validate and deploy.
TargetWorkspaceName: '<target workspace name>' ```
-**Deploy** The inputs of the operation deploy include Synapse workspace template and parameter template, which can be created after publishing in the workspace publish branch or after the validation. It is same as the version 1.x.
- **Validate and deploy** can be used to directly deploy the workspace from non-publish branch with the artifact root folder.
+ > [!NOTE]
+ > The deployment task needs to download dependency JavaScript files from the endpoint **web.azuresynapse.net** when the operation type is **Validate** or **Validate and deploy**. Make sure **web.azuresynapse.net** is allowed if network policies are enabled on the VM. A quick connectivity check is sketched after this note.
+
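As a quick sanity check (a sketch, assuming a Windows-based agent VM), you can verify that the agent machine can reach this endpoint before running the pipeline:

```azurepowershell
# Sketch only: confirm the agent VM can reach web.azuresynapse.net on port 443.
# Test-NetConnection is available on Windows; on Linux agents, use a tool such as curl instead.
Test-NetConnection -ComputerName "web.azuresynapse.net" -Port 443
```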
+**Deploy**: The inputs of the deploy operation include the Synapse workspace template and parameter template, which can be created after publishing in the workspace publish branch or after validation. This is the same as in version 1.x.
+ You can choose the operation type based on your use case. The following part is an example of the deploy operation. 1. In the task, select **Deploy** as the operation type.
synapse-analytics Source Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/cicd/source-control.md
Last updated 11/20/2020
-ms.search.keywords: CICD, Synapse, source control
-ms.search.form: CICD, source control 1
-ms.search.features: CICD, source control 2
-tags: CICD, source control 1
-searchScope:
- - Source control
- - CICD
- - Azure
+
# Source control in Synapse Studio By default, Synapse Studio authors directly against the Synapse service. If you have a need for collaboration using Git for source control, Synapse Studio allows you to associate your workspace with a Git repository, Azure DevOps, or GitHub.
-This article will outline how to configure and work in a Synapse workspace with git repository enabled. And we also highlight some best practices and a troubleshooting guide.
+This article outlines how to configure and work in a Synapse workspace with a Git repository enabled. It also highlights best practices and includes a troubleshooting guide.
> [!NOTE] >To use GitHub in Azure Gov and Microsoft Azure operated by 21Vianet, you can bring your own GitHub OAuth application in Synapse Studio for Git integration. The configuration experience is the same as in Azure Data Factory (ADF). For more information, see the [announcement blog](https://techcommunity.microsoft.com/t5/azure-data-factory/cicd-improvements-with-github-support-in-azure-government-and/ba-p/2686918).
For more info about connecting Azure Repos to your organization's Active Directo
### Use a cross tenant Azure DevOps account
-When your Azure DevOps is not in the same tenant as the Synapse workspace, you can configure the workspace with your cross tenant Azure DevOps account with the guide below.
+When your Azure DevOps organization isn't in the same tenant as the Synapse workspace, you can configure the workspace with a cross tenant Azure DevOps account.
1. Select the **Cross tenant sign in** option and click **Continue**
Connecting to a GitHub organization requires the organization to grant permissio
If you're connecting to GitHub from Synapse Studio for the first time, follow these steps to connect to a GitHub organization.
-1. In the Git configuration pane, enter the organization name in the *GitHub Account* field. A prompt to login into GitHub will appear.
+1. In the Git configuration pane, enter the organization name in the *GitHub Account* field. A prompt to sign in to GitHub appears.
1. Sign in with your user credentials.
-1. You'll be asked to authorize Synapse as an application called *Azure Synapse*. On this screen, you will see an option to grant permission for Synapse to access the organization. If you don't see the option to grant permission, ask an admin to manually grant the permission through GitHub.
+1. You are asked to authorize Synapse as an application called *Azure Synapse*. On this screen, you see an option to grant permission for Synapse to access the organization. If you don't see the option to grant permission, ask an admin to manually grant the permission through GitHub.
-Once you follow these steps, your workspace will be able to connect to both public and private repositories within your organization. If you are unable to connect, try clearing the browser cache and retrying.
+After you follow these steps, your workspace can connect to both public and private repositories within your organization. If you can't connect, clear your browser cache and try again.
#### Already connected to GitHub using a personal account
If you have already connected to GitHub and only granted permission to access a
![Grant organization permission](media/grant-organization-permission.png)
-Once you complete these steps, your workspace will be able to connect to both public and private repositories within your organization.
+After you complete these steps, your workspace can connect to both public and private repositories within your organization.
## Version control
-Version control systems (also known as _source control_) allows developers to collaborate on code and track changes.Source control is an essential tool for multi-developer projects.
+Version control systems (also known as _source control_) allow developers to collaborate on code and track changes. Source control is an essential tool for multi-developer projects.
### Creating feature branches
Once the new branch pane appears, enter the name of your feature branch and sele
![Create branch based on private branch ](media/create-branch-from-private-branch.png)
-When you are ready to merge the changes from your feature branch to your collaboration branch, click on the branch dropdown and select **Create pull request**. This action takes you to Git provider where you can raise pull requests, do code reviews, and merge changes to your collaboration branch. You are only allowed to publish to the Synapse service from your collaboration branch.
+When you're ready to merge the changes from your feature branch to your collaboration branch, click the branch dropdown and select **Create pull request**. This action takes you to your Git provider, where you can raise pull requests, do code reviews, and merge changes to your collaboration branch. You can publish to the Synapse service only from your collaboration branch.
![Create a new pull request](media/create-pull-request.png) ### Configure publishing settings
-By default, Synapse Studio generates the workspace templates and saves them into a branch called `workspace_publish`. To configure a custom publish branch, add a `publish_config.json` file to the root folder in the collaboration branch. When publishing, Synapse Studio reads this file, looks for the field `publishBranch`, and saves workspace template files to the specified location. If the branch doesn't exist, Synapse Studio will automatically create it. And example of what this file looks like is below:
+By default, Synapse Studio generates the workspace templates and saves them into a branch called `workspace_publish`. To configure a custom publish branch, add a `publish_config.json` file to the root folder of the collaboration branch. When you publish, Synapse Studio reads this file, looks for the `publishBranch` field, and saves the workspace template files to the specified location. If the branch doesn't exist, Synapse Studio automatically creates it. An example of what this file looks like is below, followed by a short sketch of how to create the file:
```json {
By default, Synapse Studio generates the workspace templates and saves them into
} ```
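As a sketch (the custom branch name is a placeholder), the file can be created at the root of the collaboration branch like this:

```azurepowershell
# Sketch only: write a publish_config.json that points publishing at a custom branch.
@{ publishBranch = 'workspace_publish_custom' } | ConvertTo-Json | Set-Content -Path '.\publish_config.json'
```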
-Synapse Studio can only have one publish branch at a time. When you specify a new publish branch, the previous publish branch would not been deleted. If you want to remove the previous publish branch, delete it manually.
+Synapse Studio can have only one publish branch at a time. When you specify a new publish branch, the original publish branch isn't deleted. If you want to remove the previous publish branch, delete it manually.
### Publish code changes
-After merging changes to the collaboration branch , click **Publish** to manually publish your code changes in the collaboration branch to the Synapse service.
+After merging changes to the collaboration branch, click **Publish** to manually publish your code changes in the collaboration branch to the Synapse service.
![Publish changes](media/gitmode-publish.png)
-A side pane will open where you confirm that the publish branch and pending changes are correct. Once you verify your changes, click **OK** to confirm the publish.
+A side pane opens where you confirm that the publish branch and pending changes are correct. Once you verify your changes, click **OK** to confirm the publish.
![Confirm the correct publish branch](media/publish-change.png)
Enter your workspace name and click **Disconnect** to remove the Git repository
After you remove the association with the current repo, you can configure your Git settings to use a different repo and then import existing resources to the new repo. > [!IMPORTANT]
-> Removing Git configuration from a workspace doesn't delete anything from the repository. Synapse workspace will contain all published resources. You can continue to edit the workspace directly against the service.
+> Removing Git configuration from a workspace doesn't delete anything from the repository. The Synapse workspace contains all published resources. You can continue to edit the workspace directly against the service.
## Best practices for Git integration -- **Permissions**. After you have a git repository connected to your workspace, anyone who can access to your git repo with any role in your workspace will be able to update artifacts, like sql script, notebook,spark job definition, dataset, dataflow and pipeline in git mode. Typically you don't want every team member to have permissions to update workspace.
+- **Permissions**. After you have a Git repository connected to your workspace, anyone who can access your Git repo, regardless of their role in your workspace, can update artifacts such as SQL scripts, notebooks, Spark job definitions, datasets, dataflows, and pipelines in Git mode. Typically, you don't want every team member to have permissions to update the workspace.
Only grant git repository permission to Synapse workspace artifact authors. -- **Collaboration**. It's recommended to not allow direct check-ins to the collaboration branch. This restriction can help prevent bugs as every check-in will go through a pull request review process described in [Creating feature branches](source-control.md#creating-feature-branches).-- **Synapse live mode**. After publishing in git mode, all changes will be reflected in Synapse live mode. In Synapse live mode, publishing is disabled. And you can view, run artifacts in live mode if you have been granted the right permission. -- **Edit artifacts in Studio**. Synapse studio is the only place you can enable workspace source control and sync changes to git automatically. Any change via SDK, PowerShell, will not be synced to git. We recommend you always edit artifact in Studio when git is enabled.
+- **Collaboration**. We recommend that you don't allow direct check-ins to the collaboration branch. This restriction helps prevent bugs, because every check-in goes through the pull request review process described in [Creating feature branches](source-control.md#creating-feature-branches).
+- **Synapse live mode**. After publishing in Git mode, all changes are reflected in Synapse live mode. In Synapse live mode, publishing is disabled. You can view and run artifacts in live mode if you've been granted the right permissions.
+- **Edit artifacts in Studio**. Synapse Studio is the only place where you can enable workspace source control and sync changes to Git automatically. Changes made via the SDK or PowerShell aren't synced to Git. We recommend that you always edit artifacts in Studio when Git is enabled.
## Troubleshooting git integration ### Access to git mode
-If you have been granted the permission to the GitHub git repository linked with your workspace, but you can not access to Git mode:
+If you've been granted permission to the GitHub repository that's linked to your workspace, but you can't access Git mode:
1. Clear your cache and refresh the page.
If the publish branch is out of sync with the collaboration branch and contains
## Unsupported features - Synapse Studio doesn't allow cherry-picking of commits or selective publishing of resources. -- Synapse Studio doesn't support customize commit message.-- By design, delete action in Studio will be committed to git directly
+- Synapse Studio doesn't support customized commit messages.
+- By design, a delete action in Studio is committed to Git directly.
## Next steps
synapse-analytics Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/policy-reference.md
Title: Built-in policy definitions description: Lists Azure Policy built-in policy definitions for Azure Synapse Analytics. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/03/2023 Last updated : 08/08/2023
synapse-analytics Develop Tables Statistics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-tables-statistics.md
Serverless SQL pool analyzes incoming user queries for missing statistics. If st
The SELECT statement will trigger automatic creation of statistics. > [!NOTE]
-> Automatic creation of statistics is turned on for Parquet files. For CSV files, statistics will be automatically created if you use OPENROWSET. You need to create statistics manually you use CSV external tables.
+> Automatic creation of statistics uses sampling, and in most cases the sampling percentage is less than 100%. This flow is the same for every file format. Keep in mind that sampling isn't supported when you read CSV files with parser version 1.0, so automatic creation of statistics doesn't happen with a sampling percentage less than 100%. For small tables with estimated low cardinality (number of rows), automatic statistics creation is triggered with a sampling percentage of 100%. That means a full scan is triggered and automatic statistics are created even for CSV with parser version 1.0.
Automatic creation of statistics is done synchronously so you may incur slightly degraded query performance if your columns are missing statistics. The time to create statistics for a single column depends on the size of the files targeted. ### Manual creation of statistics
-Serverless SQL pool lets you create statistics manually. For CSV external tables, you have to create statistics manually because automatic creation of statistics isn't turned on for CSV external tables.
+Serverless SQL pool lets you create statistics manually. If you're using parser version 1.0 with CSV, you'll probably have to create statistics manually, because this parser version doesn't support sampling. Automatic creation of statistics with parser version 1.0 happens only when the sampling percentage is 100%.
See the following examples for instructions on how to manually create statistics.
When statistics are stale, new ones will be created. The algorithm goes through
Manual stats are never declared stale. > [!NOTE]
-> Automatic recreation of statistics is turned on for Parquet files. For CSV files, statistics will be recreated if you use OPENROWSET. You need to drop and create statistics manually for CSV external tables. Check the examples below on how to drop and create statistics.
+> Automatic recreation of statistics uses sampling, and in most cases the sampling percentage is less than 100%. This flow is the same for every file format. Keep in mind that sampling isn't supported when you read CSV files with parser version 1.0, so automatic recreation of statistics doesn't happen with a sampling percentage less than 100%. In that case, you need to drop and recreate statistics manually. Check the examples below on how to drop and create statistics. For small tables with estimated low cardinality (number of rows), automatic statistics recreation is triggered with a sampling percentage of 100%. That means a full scan is triggered and automatic statistics are created even for CSV with parser version 1.0.
One of the first questions to ask when you're troubleshooting a query is, **"Are the statistics up to date?"**
Specifies a Transact-SQL statement that will return column values to be used for
``` > [!NOTE]
-> CSV sampling does not work at this time, only FULLSCAN is supported for CSV.
+> CSV sampling doesn't work if you're using parser version 1.0; only FULLSCAN is supported for CSV with parser version 1.0.
#### Create single-column statistics by examining every row
Specifies the approximate percentage or number of rows in the table or indexed v
SAMPLE can't be used with the FULLSCAN option. > [!NOTE]
-> CSV sampling does not work at this time, only FULLSCAN is supported for CSV.
+> CSV sampling doesn't work if you're using parser version 1.0; only FULLSCAN is supported for CSV with parser version 1.0.
#### Create single-column statistics by examining every row
virtual-desktop Licensing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/remote-app-streaming/licensing.md
Here's a summary of the two types of licenses for Azure Virtual Desktop you can
- Includes use rights to leverage [FSlogix](/fslogix/overview-what-is-fslogix) > [!IMPORTANT]
-> Per-user access pricing only supports Windows 10 Enterprise multi-session and Windows 11 Enterprise multi-session. Per-user access pricing currently doesn't support Windows Server session hosts.
+> Per-user access pricing only supports Windows Enterprise and Windows Enterprise multi-session client operating systems for session hosts. Windows Server session hosts are not supported with per-user access pricing.
## Licensing other products and services for use with Azure Virtual Desktop
virtual-desktop Whats New Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-agent.md
Title: What's new in the Azure Virtual Desktop Agent? - Azure
description: New features and product updates for the Azure Virtual Desktop Agent. Previously updated : 07/21/2023 Last updated : 08/08/2023
New versions of the Azure Virtual Desktop Agent are installed automatically. Whe
A rollout may take several weeks before the agent is available in all environments. Some agent versions may not reach non-validation environments, so you may see multiple versions of the agent deployed across your environments.
+## Version 1.0.7255.800
+
+This update was released at the end of July 2023 and includes the following changes:
+
+- Fixed an issue that would disable the Traversal Using Relay NAT (TURN) health check when a user disabled the User Datagram Protocol (UDP).
+- Security improvements and bug fixes.
+
+## Version 1.0.7033.1401
+
+This update was released at the end of July 2023 and includes the following changes:
+
+- Security improvements and bug fixes.
+ ## Version 1.0.6713.1603 This update was released at the end of July 2023 and includes the following changes:
virtual-machine-scale-sets Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/policy-reference.md
Previously updated : 08/03/2023 Last updated : 08/08/2023 # Azure Policy built-in definitions for Azure Virtual Machine Scale Sets
virtual-machines Dcv2 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dcv2-series.md
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
-The DCsv2-series virtual machines help protect the confidentiality and integrity of your data and code while it’s processed in the public cloud. DCsv2-series leverage Intel® Software Guard Extensions, which enable customers to use secure enclaves for protection.
+The DCsv2-series virtual machines help protect the confidentiality and integrity of your data and code while it’s processed in the public cloud. DCsv2-series VMs use Intel® Software Guard Extensions (SGX), which enables customers to use [secure enclaves](../confidential-computing/confidential-computing-enclaves.md) for protection.
These machines are backed by 3.7 GHz Intel® Xeon E-2288G (Coffee Lake) with SGX technology. With Intel® Turbo Boost Max Technology 3.0 these machines can go up to 5.0 GHz.
virtual-machines Disks Convert Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-convert-types.md
Previously updated : 08/01/2023 Last updated : 08/08/2023
Because conversion requires a restart of the virtual machine (VM), schedule the
## Restrictions -- You can only change disk type once per day.
+- You can change a disk's type only twice per day.
- You can only change the disk type of managed disks. If your disk is unmanaged, convert it to a managed disk with [CLI](linux/convert-unmanaged-to-managed-disks.md) or [PowerShell](windows/convert-unmanaged-to-managed-disks.md) to switch between disk types. ## Switch all managed disks of a VM from one account to another
Start-AzVM -ResourceGroupName $vm.ResourceGroupName -Name $vm.Name
# [Azure CLI](#tab/azure-cli) + ```azurecli #resource group that contains the managed disk
The following steps assume you already have a snapshot. To learn how to create o
1. Continue to the **Advanced** tab. 1. Select **512** for **Logical sector size (bytes)**. 1. Select **Review+Create** and then **Create**.++ ## Next steps Make a read-only copy of a VM by using a [snapshot](snapshot-copy-managed-disk.md).+
virtual-machines Disks Enable Ultra Ssd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-enable-ultra-ssd.md
description: Learn about ultra disks for Azure VMs
Previously updated : 06/07/2023 Last updated : 08/07/2023
virtual-machines Disks Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-metrics.md
All metrics are emitted every minute, except for the bursting credit percentage
The following metrics are available to get insight on VM and Disk IO, throughput, and queue depth performance: - **OS Disk Queue Depth**: The number of current outstanding IO requests that are waiting to be read from or written to the OS disk.-- **OS Disk Read Bytes/Sec**: The number of bytes that are read in a second from the OS disk.-- **OS Disk Read Operations/Sec**: The number of input operations that are read in a second from the OS disk.
+- **OS Disk Read Bytes/Sec**: The number of bytes that are read in a second from the OS disk. If Read-only or Read/write [disk caching](premium-storage-performance.md#disk-caching) is enabled, this metric is inclusive of bytes read from the cache.
+- **OS Disk Read Operations/Sec**: The number of input operations that are read in a second from the OS disk. If Read-only or Read/write [disk caching](premium-storage-performance.md#disk-caching) is enabled, this metric is inclusive of IOPs read from the cache.
- **OS Disk Write Bytes/Sec**: The number of bytes that are written in a second from the OS disk. - **OS Disk Write Operations/Sec**: The number of output operations that are written in a second from the OS disk. - **Data Disk Queue Depth**: The number of current outstanding IO requests that are waiting to be read from or written to the data disk(s).-- **Data Disk Read Bytes/Sec**: The number of bytes that are read in a second from the data disk(s).-- **Data Disk Read Operations/Sec**: The number of input operations that are read in a second from data disk(s).
+- **Data Disk Read Bytes/Sec**: The number of bytes that are read in a second from the data disk(s). If Read-only or Read/write [disk caching](premium-storage-performance.md#disk-caching) is enabled, this metric is inclusive of bytes read from the cache.
+- **Data Disk Read Operations/Sec**: The number of input operations that are read in a second from data disk(s). If Read-only or Read/write [disk caching](premium-storage-performance.md#disk-caching) is enabled, this metric is inclusive of IOPs read from the cache.
- **Data Disk Write Bytes/Sec**: The number of bytes that are written in a second from the data disk(s). - **Data Disk Write Operations/Sec**: The number of output operations that are written in a second from data disk(s).-- **Disk Read Bytes/Sec**: The number of total bytes that are read in a second from all disks attached to a VM.-- **Disk Read Operations/Sec**: The number of input operations that are read in a second from all disks attached to a VM.-- **Disk Write Bytes/Sec**: The number of bytes that are written in a second from all disks attached to a VM.
+- **Disk Read Bytes**: The total number of bytes that are read in a minute from all disks attached to a VM. If Read-only or Read/write [disk caching](premium-storage-performance.md#disk-caching) is enabled, this metric is inclusive of bytes read from the cache.
+- **Disk Read Operations/Sec**: The number of input operations that are read in a second from all disks attached to a VM. If Read-only or Read/write [disk caching](premium-storage-performance.md#disk-caching) is enabled, this metric is inclusive of IOPs read from the cache.
+- **Disk Write Bytes**: The number of bytes that are written in a minute from all disks attached to a VM.
- **Disk Write Operations/Sec**: The number of output operations that are written in a second from all disks attached to a VM. ## Bursting metrics
The following metrics help with observability into our [bursting](disk-bursting.
- **OS Disk Used Burst BPS Credits Percentage**: The accumulated percentage of the throughput burst used for the OS disk. Emitted on a 5 minute interval. - **Data Disk Used Burst IO Credits Percentage**: The accumulated percentage of the IOPS burst used for the data disk(s). Emitted on a 5 minute interval. - **OS Disk Used Burst IO Credits Percentage**: The accumulated percentage of the IOPS burst used for the OS disk. Emitted on a 5 minute interval.
+- **Disk On-demand Burst Operations**: The accumulated count of burst transactions for disks with on-demand bursting enabled. Emitted on an hourly interval. (A sketch of how to query these metrics follows this list.)
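As a sketch (resource names are placeholders; confirm the exact metric name with `Get-AzMetricDefinition`), these disk and bursting metrics can be queried with Azure PowerShell:

```azurepowershell
# Sketch only: read a disk bursting metric for a VM over the last hour, in 5-minute grains.
$vm = Get-AzVM -ResourceGroupName "myRG" -Name "myVM"
Get-AzMetric -ResourceId $vm.Id -MetricName "OS Disk Used Burst IO Credits Percentage" `
    -StartTime (Get-Date).AddHours(-1) -EndTime (Get-Date) `
    -TimeGrain 00:05:00 -AggregationType Average
```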
## VM Bursting metrics The following metrics provide insight on VM-level bursting:
virtual-machines N Series Driver Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/n-series-driver-setup.md
sudo reboot
With Secure Boot enabled, all Linux kernel modules are required to be signed by the key trusted by the system.
-1. Install pre-built Azure Linux kernel based NVIDIA modules and drivers
+1. Install pre-built Azure Linux kernel based NVIDIA modules and CUDA drivers
```bash sudo apt-get update
With Secure Boot enabled, all Linux kernel modules are required to be signed by
sudo reboot ```
-8. Verify NVIDIA drivers are installed and loaded
+8. Verify NVIDIA CUDA drivers are installed and loaded
```bash dpkg -l | grep -i nvidia
virtual-machines Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/policy-reference.md
Title: Built-in policy definitions for Azure Virtual Machines description: Lists Azure Policy built-in policy definitions for Azure Virtual Machines. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/03/2023 Last updated : 08/08/2023
virtual-machines Tutorial Secure Web Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/tutorial-secure-web-server.md
$certURL=(Get-AzKeyVaultSecret -VaultName $keyvaultName -Name "mycert").id
$vm=Get-AzVM -ResourceGroupName $resourceGroup -Name "myVM" $vaultId=(Get-AzKeyVault -ResourceGroupName $resourceGroup -VaultName $keyVaultName).ResourceId
-$vm = Add-AzVMSecret -VM $vm -SourceVaultId $vaultId -CertificateStore "My" -CertificateUrl $certURL
-
-Update-AzVM -ResourceGroupName $resourceGroup -VM $vm
+$vm = Add-AzVMSecret -VM $vm -SourceVaultId $vaultId -CertificateStore "My" -CertificateUrl $certURL | Update-AzVM
``` ## Configure IIS to use the certificate
virtual-network Public Ip Upgrade Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/public-ip-upgrade-vm.md
The module logs all upgrade activity to a file named `PublicIPUpgrade.log`, crea
## Download the script
-Download the migration script from the [PowerShell Gallery](https://www.powershellgallery.com/packages/AzureVMPublicIPUpgrade/1.0.0).
+Download the migration script from the [PowerShell Gallery](https://www.powershellgallery.com/packages/AzureVMPublicIPUpgrade).
```powershell PS C:\> Install-Module -Name AzureVMPublicIPUpgrade -Scope CurrentUser -Repository PSGallery -Force
virtual-network Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/policy-reference.md
Title: Built-in policy definitions for Azure Virtual Network description: Lists Azure Policy built-in policy definitions for Azure Virtual Network. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/03/2023 Last updated : 08/08/2023
virtual-wan How To Routing Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/how-to-routing-policies.md
When routing intent is enabled on the hub, static routes corresponding to the co
| Route Name | Prefixes | Next Hop Resource| |--|--|--| | _policy_PrivateTraffic | 10.0.0.0/8, 192.168.0.0/16, 172.16.0.0/12| Azure Firewall |
- _policy_InternetTraffic| 0.0.0.0/0| Azure Firewall |
+ | _policy_PublicTraffic| 0.0.0.0/0| Azure Firewall |
> [!NOTE] > Any static routes in the defaultRouteTable containing prefixes that aren't exact matches with 0.0.0.0/0 or the RFC1918 super-nets (10.0.0.0/8, 192.168.0.0/16 and 172.16.0.0/12) are automatically consolidated into a single static route, named **private_traffic**. Prefixes in the defaultRouteTable that match RFC1918 supernets or 0.0.0.0/0 are always automatically removed once routing intent is configured, regardless of the policy type.
Enabling routing intent on this hub would result in the following end state of t
| Route Name | Prefixes | Next Hop Resource| |--|--|--| | _policy_PrivateTraffic | 10.0.0.0/8, 192.168.0.0/16, 172.16.0.0/12| Azure Firewall |
- _policy_InternetTraffic| 0.0.0.0/0| Azure Firewall |
+ | _policy_PublicTraffic| 0.0.0.0/0| Azure Firewall |
| private_traffic | 40.0.0.0/24, 10.0.0.0/24, 50.0.0.0/24| Azure Firewall | #### Other methods (PowerShell, REST, CLI)
For example, consider the scenario where the defaultRouteTable has the following
| firewall_route_ 1 | 10.0.0.0/8|Azure Firewall | | firewall_route_2 | 192.168.0.0/16, 10.0.0.0/24 | Azure Firewall| | firewall_route_3 | 40.0.0.0/24| Azure Firewall|
- to_internet | 0.0.0.0/0| Azure Firewall |
+| to_internet | 0.0.0.0/0| Azure Firewall |
The following table represents the final state of the defaultRouteTable after routing intent creation succeeds. Note that firewall_route_1 and to_internet were automatically removed, because the only prefixes in those routes were 10.0.0.0/8 and 0.0.0.0/0. firewall_route_2 was modified to remove 192.168.0.0/16, because that prefix is an RFC1918 aggregate prefix. (A sketch for inspecting the resulting route table with PowerShell follows the table.) | Route Name | Prefixes | Next Hop Resource| |--|--|--| | _policy_PrivateTraffic | 10.0.0.0/8, 192.168.0.0/16, 172.16.0.0/12| Azure Firewall |
-| _policy_InternetTraffic| 0.0.0.0/0| Azure Firewall |
+| _policy_PublicTraffic| 0.0.0.0/0| Azure Firewall |
| firewall_route_2 | 10.0.0.0/24 | Azure Firewall| | firewall_route_3 | 40.0.0.0/24| Azure Firewall|
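As a sketch (resource names are placeholders), you can inspect the resulting route table with Azure PowerShell to confirm the `_policy_` and consolidated static routes:

```azurepowershell
# Sketch only: list the hub's defaultRouteTable after routing intent has been configured.
Get-AzVHubRouteTable -ResourceGroupName "myRG" -VirtualHubName "myHub" -Name "defaultRouteTable" |
    Format-List
```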
virtual-wan How To Virtual Hub Routing Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/how-to-virtual-hub-routing-powershell.md
The steps in this section help you set up routing configuration for a virtual ne
$updatedRoutingConfiguration= New-AzRoutingConfiguration -AssociatedRouteTable $associatedTable.Id -Label @("testLabel") -Id @($propagatedTable.Id) -StaticRoute @($staticRoute) ```
+> [!NOTE]
+> For updates, when you use `New-AzRoutingConfiguration`, you must provide all existing configuration, such as associated route tables, labels, and static routes.
+> This command creates a new configuration, which overwrites the existing configuration when `Update-AzVirtualHubVnetConnection` is executed.
++ 1. Update the existing virtual network connection. ```azurepowershell-interactive