Updates from: 02/16/2023 02:09:47
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Phone Based Mfa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/phone-based-mfa.md
# Securing phone-based multi-factor authentication (MFA) - With Azure Active Directory (Azure AD) Multi-Factor Authentication (MFA), users can choose to receive an automated voice call at a phone number they register for verification. Malicious users could take advantage of this method by creating multiple accounts and placing phone calls without completing the MFA registration process. These numerous failed sign-ups could exhaust the allowed sign-up attempts, preventing other users from signing up for new accounts in your Azure AD B2C tenant. To help protect against these attacks, you can use Azure Monitor to monitor phone authentication failures and mitigate fraudulent sign-ups. ## Prerequisites
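As a hedged illustration of the Azure Monitor guidance in the snippet above, the sketch below runs a sign-in log query against a Log Analytics workspace with the `@azure/monitor-query` client. The workspace ID and the Kusto query are placeholders for illustration only, not the query from the article.

```javascript
import { DefaultAzureCredential } from "@azure/identity";
import { LogsQueryClient, LogsQueryResultStatus } from "@azure/monitor-query";

// Placeholder workspace ID and an illustrative query; the article's own Kusto
// query for detecting fraudulent phone-authentication sign-ups will differ.
const workspaceId = "<log-analytics-workspace-id>";
const query = `
  SigninLogs
  | where TimeGenerated > ago(1d)
  | where AuthenticationRequirement == "multiFactorAuthentication"
  | where ResultType != "0"
  | summarize FailureCount = count() by ResultType, ResultDescription`;

const client = new LogsQueryClient(new DefaultAzureCredential());
const result = await client.queryWorkspace(workspaceId, query, { duration: "P1D" });
if (result.status === LogsQueryResultStatus.Success) {
  console.table(result.tables[0].rows); // inspect failure counts by result type
}
```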
active-directory User Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/user-provisioning.md
Previously updated : 02/14/2023 Last updated : 02/15/2023
App provisioning lets you:
## What is SCIM?
-To help automate provisioning and deprovisioning, apps expose proprietary user and group APIs. But anyone who's tried to manage users in more than one app will tell you that every app tries to perform the same actions, such as creating or updating users, adding users to groups, or deprovisioning users. Yet, all these actions are implemented slightly differently by using different endpoint paths, different methods to specify user information, and a different schema to represent each element of information.
+To help automate provisioning and deprovisioning, apps expose proprietary user and group APIs. User management in more than one app is a challenge because every app tries to perform the same actions. For example, creating or updating users, adding users to groups, or deprovisioning users. Yet, all these actions are implemented slightly differently by using different endpoint paths, different methods to specify user information, and a different schema to represent each element of information.
To address these challenges, the System for Cross-domain Identity Management (SCIM) specification provides a common user schema to help users move into, out of, and around apps. SCIM is becoming the de facto standard for provisioning and, when used with federation standards like Security Assertions Markup Language (SAML) or OpenID Connect (OIDC), provides administrators an end-to-end standards-based solution for access management.
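For readers new to SCIM, here's a minimal sketch of what a standards-based create-user call looks like. The endpoint URL, bearer token, and user values are hypothetical placeholders, not part of the Azure AD provisioning service itself; only the payload shape follows the SCIM 2.0 core user schema.

```javascript
// Hypothetical SCIM 2.0 endpoint and token; the payload shape follows the core
// user schema (RFC 7643/7644) that SCIM-based provisioning relies on.
const response = await fetch("https://app.example.com/scim/v2/Users", {
  method: "POST",
  headers: {
    "Content-Type": "application/scim+json",
    Authorization: `Bearer ${process.env.SCIM_BEARER_TOKEN}`,
  },
  body: JSON.stringify({
    schemas: ["urn:ietf:params:scim:schemas:core:2.0:User"],
    userName: "kim@contoso.com",
    name: { givenName: "Kim", familyName: "Akers" },
    active: true,
  }),
});
console.log(response.status); // 201 when the user is created
```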
The provisioning mode supported by an application is also visible on the **Provi
## Benefits of automatic provisioning
-As the number of applications used in modern organizations continues to grow, IT admins are tasked with access management at scale. Standards such as SAML or OIDC allow admins to quickly set up single sign-on (SSO), but access also requires users to be provisioned into the app. To many admins, provisioning means manually creating every user account or uploading CSV files each week. These processes are time-consuming, expensive, and error prone. Solutions such as SAML just-in-time (JIT) have been adopted to automate provisioning. Enterprises also need a solution to deprovision users when they leave the organization or no longer require access to certain apps based on role change.
+The number of applications used in modern organizations continues to grow. IT admins are tasked with access management at scale. Admins use standards such as SAML or OIDC for single sign-on (SSO), but access also requires users to be provisioned into the app. To many admins, provisioning means manually creating every user account or uploading CSV files each week. These processes are time-consuming, expensive, and error prone. Solutions such as SAML just-in-time (JIT) have been adopted to automate provisioning. Enterprises also need a solution to deprovision users when they leave the organization or no longer require access to certain apps based on role change.
Some common motivations for using automatic provisioning include:
active-directory How To Mfa Number Match https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-number-match.md
description: Learn how to use number matching in MFA notifications
Previously updated : 02/10/2023 Last updated : 02/15/2023
This topic covers how to enable number matching in Microsoft Authenticator push notifications to improve user sign-in security. >[!NOTE]
->Number matching is a key security upgrade to traditional second factor notifications in Microsoft Authenticator. We will remove the admin controls and enforce the number match experience tenant-wide for all users of Microsoft Authenticator push notifications starting February 27, 2023.<br>
->We highly recommend enabling number matching in the near term for improved sign-in security. Relevant services will begin deploying these changes after February 27, 2023 and users will start to see number match in approval requests. As services deploy, some may see number match while others don't. To ensure consistent behavior for all users, we highly recommend you enable number match for Microsoft Authenticator push notifications in advance.
+>Number matching is a key security upgrade to traditional second factor notifications in Microsoft Authenticator. We will remove the admin controls and enforce the number match experience tenant-wide for all users of Microsoft Authenticator push notifications starting May 8, 2023.<br>
+>We highly recommend enabling number matching in the near term for improved sign-in security. Relevant services will begin deploying these changes after May 8, 2023 and users will start to see number match in approval requests. As services deploy, some may see number match while others don't. To ensure consistent behavior for all users, we highly recommend you enable number match for Microsoft Authenticator push notifications in advance.
## Prerequisites
AD FS adapter will require number matching on supported versions of Windows Serv
Although NPS doesn't support number matching, the latest NPS extension does support One-Time Password (OTP) methods such as the OTP available in Microsoft Authenticator, other software tokens, and hardware FOBs. OTP sign-in provides better security than the alternative **Approve**/**Deny** experience. Make sure you run the latest version of the [NPS extension](https://www.microsoft.com/download/details.aspx?id=54688).
-After Feb 27, 2023, when number matching is enabled for all users, anyone who performs a RADIUS connection with NPS extension version 1.2.2216.1 or later will be prompted to sign in with an OTP method instead.
+After May 8, 2023, when number matching is enabled for all users, anyone who performs a RADIUS connection with NPS extension version 1.2.2216.1 or later will be prompted to sign in with an OTP method instead.
Users must have an OTP authentication method registered to see this behavior. Without an OTP method registered, users continue to see **Approve**/**Deny**.
-Prior to the release of NPS extension version 1.2.2216.1 after February 27, 2023, organizations that run any of these earlier versions of NPS extension can modify the registry to require users to enter an OTP:
+Prior to the release of NPS extension version 1.2.2216.1 after May 8, 2023, organizations that run any of these earlier versions of NPS extension can modify the registry to require users to enter an OTP:
- 1.2.2131.2 - 1.2.1959.1
GET https://graph.microsoft.com/beta/authenticationMethodsPolicy/authenticationM
### When will my tenant see number matching if I don't use the Azure portal or Graph API to roll out the change?
-Number match will be enabled for all users of Microsoft Authenticator push notifications after February 27, 2023. Relevant services will begin deploying these changes after February 27, 2023 and users will start to see number match in approval requests. As services deploy, some may see number match while others don't. To ensure consistent behavior for all your users, we highly recommend you use the Azure portal or Graph API to roll out number match for all Microsoft Authenticator users.
+Number match will be enabled for all users of Microsoft Authenticator push notifications after May 8, 2023. We had previously announced that we would remove the admin controls and enforce the number match experience tenant-wide for all users of Microsoft Authenticator push notifications starting May 8, 2023. After listening to customers, we will extend the availability of the rollout controls for a few more weeks.
-### Will the changes after February 27th, 2023, override number matching settings that are configured for a group in the Authentication methods policy?
+Relevant services will begin deploying these changes after May 8, 2023 and users will start to see number match in approval requests. As services deploy, some may see number match while others don't. To ensure consistent behavior for all your users, we highly recommend you use the Azure portal or Graph API to roll out number match for all Microsoft Authenticator users.
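As a hedged sketch of the Graph API rollout mentioned above: the property names here follow the beta `microsoftAuthenticatorAuthenticationMethodConfiguration` resource and its `featureSettings.numberMatchingRequiredState` setting, and should be confirmed against the article's own Graph examples; the access token is a placeholder.

```javascript
// Enable number matching for all Microsoft Authenticator users through the
// beta authentication methods policy (token needs Policy.ReadWrite.AuthenticationMethod).
const res = await fetch(
  "https://graph.microsoft.com/beta/authenticationMethodsPolicy/authenticationMethodConfigurations/microsoftAuthenticator",
  {
    method: "PATCH",
    headers: {
      Authorization: `Bearer ${accessToken}`, // placeholder access token
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      "@odata.type": "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration",
      featureSettings: {
        numberMatchingRequiredState: {
          state: "enabled",
          includeTarget: { targetType: "group", id: "all_users" },
        },
      },
    }),
  }
);
console.log(res.status); // 204 No Content on success
```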
-No, the changes after February 27th won't affect the **Enable and Target** tab for Microsoft Authenticator in the Authentication methods policy. Administrators can continue to target specific users and groups or **All Users** for Microsoft Authenticator **Push** or **Any** authentication mode.
+### Will the changes after May 8th, 2023, override number matching settings that are configured for a group in the Authentication methods policy?
-When Microsoft begins protecting all organizations by enabling number matching after February 27th, 2023, administrators will see the **Require number matching for push notifications** setting on the **Configure** tab of the Microsoft Authenticator policy is set to **Enabled** for **All users** and can't be disabled. In addition, the **Exclude** option for this setting will be removed.
+No, the changes after May 8th won't affect the **Enable and Target** tab for Microsoft Authenticator in the Authentication methods policy. Administrators can continue to target specific users and groups or **All Users** for Microsoft Authenticator **Push** or **Any** authentication mode.
+
+When Microsoft begins protecting all organizations by enabling number matching after May 8th, 2023, administrators will see the **Require number matching for push notifications** setting on the **Configure** tab of the Microsoft Authenticator policy is set to **Enabled** for **All users** and can't be disabled. In addition, the **Exclude** option for this setting will be removed.
### What happens for users who aren't specified in the Authentication methods policy but they are enabled for Notifications through mobile app in the legacy MFA tenant-wide policy?
-Users who are enabled for MFA push notifications in the legacy MFA policy will also see number match after February 27th, 2023. If the legacy MFA policy has enabled **Notifications through mobile app**, users will see number matching regardless of whether or not it's enabled on the **Enable and Target** tab for Microsoft Authenticator in the Authentication methods policy.
+Users who are enabled for MFA push notifications in the legacy MFA policy will also see number match after May 8th, 2023. If the legacy MFA policy has enabled **Notifications through mobile app**, users will see number matching regardless of whether or not it's enabled on the **Enable and Target** tab for Microsoft Authenticator in the Authentication methods policy.
:::image type="content" border="true" source="./media/how-to-mfa-number-match/notifications-through-mobile-app.png" alt-text="Screenshot of Notifications through mobile app setting.":::
They'll see a prompt to supply a verification code. They must select their accou
### Can I opt out of number matching?
-Yes, currently you can disable number matching. We highly recommend that you enable number matching for all users in your tenant to protect yourself from MFA fatigue attacks. To protect the ecosystem and mitigate these threats, Microsoft will enable number matching for all tenants starting February 27, 2023. After protection is enabled by default, users can't opt out of number matching in Microsoft Authenticator push notifications.
+Yes, currently you can disable number matching. We highly recommend that you enable number matching for all users in your tenant to protect yourself from MFA fatigue attacks. To protect the ecosystem and mitigate these threats, Microsoft will enable number matching for all tenants starting May 8, 2023. After protection is enabled by default, users can't opt out of number matching in Microsoft Authenticator push notifications.
-Relevant services will begin deploying these changes after February 27, 2023 and users will start to see number match in approval requests. As services deploy, some may see number match while others don't. To ensure consistent behavior for all users, we highly recommend you enable number match for Microsoft Authenticator push notifications in advance.
+Relevant services will begin deploying these changes after May 8, 2023 and users will start to see number match in approval requests. As services deploy, some may see number match while others don't. To ensure consistent behavior for all users, we highly recommend you enable number match for Microsoft Authenticator push notifications in advance.
### Does number matching only apply if Microsoft Authenticator is set as the default authentication method?
-If the user has a different default authentication method, there won't be any change to their default sign-in. If the default method is Microsoft Authenticator and the user is specified in either of the following policies, they'll start to receive number matching approval after February 27th, 2023:
+If the user has a different default authentication method, there won't be any change to their default sign-in. If the default method is Microsoft Authenticator and the user is specified in either of the following policies, they'll start to receive number matching approval after May 8th, 2023:
- Authentication methods policy (in the portal, click **Security** > **Authentication methods** > **Policies**) - Legacy MFA tenant-wide policy (in the portal, click **Security** > **Multifactor Authentication** > **Additional cloud-based multifactor authentication settings**)
-Regardless of their default method, any user who is prompted to sign-in with Authenticator push notifications will see number match after February 27th, 2023. If the user is prompted for another method, they won't see any change.
+Regardless of their default method, any user who is prompted to sign-in with Authenticator push notifications will see number match after May 8th, 2023. If the user is prompted for another method, they won't see any change.
### Is number matching supported with MFA Server?
active-directory Reference Aadsts Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reference-aadsts-error-codes.md
The `error` field has several possible values - review the protocol documentatio
| AADSTS75008 | RequestDeniedError - The request from the app was denied since the SAML request had an unexpected destination. | | AADSTS75011 | NoMatchedAuthnContextInOutputClaims - The authentication method by which the user authenticated with the service doesn't match requested authentication method. To learn more, see the troubleshooting article for error [AADSTS75011](/troubleshoot/azure/active-directory/error-code-aadsts75011-auth-method-mismatch). | | AADSTS75016 | Saml2AuthenticationRequestInvalidNameIDPolicy - SAML2 Authentication Request has invalid NameIdPolicy. |
+| AADSTS76026 | RequestIssueTimeExpired - IssueTime in an SAML2 Authentication Request is expired. |
| AADSTS80001 | OnPremiseStoreIsNotAvailable - The Authentication Agent is unable to connect to Active Directory. Make sure that agent servers are members of the same AD forest as the users whose passwords need to be validated and they are able to connect to Active Directory. | | AADSTS80002 | OnPremisePasswordValidatorRequestTimedout - Password validation request timed out. Make sure that Active Directory is available and responding to requests from the agents. | | AADSTS80005 | OnPremisePasswordValidatorUnpredictableWebException - An unknown error occurred while processing the response from the Authentication Agent. Retry the request. If it continues to fail, [open a support ticket](../fundamentals/active-directory-troubleshooting-support-howto.md) to get more details on the error. |
The `error` field has several possible values - review the protocol documentatio
## Next steps
-* Have a question or can't find what you're looking for? Create a GitHub issue or see [Support and help options for developers](./developer-support-help-options.md) to learn about other ways you can get help and support.
+* Have a question or can't find what you're looking for? Create a GitHub issue or see [Support and help options for developers](./developer-support-help-options.md) to learn about other ways you can get help and support.
active-directory Test Automate Integration Testing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/test-automate-integration-testing.md
To exclude a test application:
## Write your application tests
-Now that you're set up, you can write your automated tests. The following .NET example code uses [Microsoft Authentication Library (MSAL)](msal-overview.md) and [xUnit](https://xunit.net/), a common testing framework.
+Now that you're set up, you can write your automated tests. The following examples show tests for:
+1. .NET example code that uses [Microsoft Authentication Library (MSAL)](msal-overview.md) and [xUnit](https://xunit.net/), a common testing framework.
+1. JavaScript example code that uses [Microsoft Authentication Library (MSAL)](msal-overview.md) and [Playwright](https://playwright.dev/), a common testing framework.
+
+## [.NET](#tab/dotnet)
### Set up your appsettings.json file
public class ApiTests : IClassFixture<ClientFixture>
} } ```+
+## [JavaScript](#tab/JavaScript)
+
+### Set up your authConfig.js file
+
+Add the client ID and the tenant ID of the test app you previously created, the key vault URI, and the secret name to the authConfig.js file of your test project.
+
+```javascript
+export const msalConfig = {
+ auth: {
+ clientId: 'Enter_the_Application_Id_Here',
+ authority: 'https://login.microsoftonline.com/Enter_the_Tenant_Id_Here',
+ },
+};
+
+export const keyVaultConfig = {
+ keyVaultUri: 'https://<your-unique-keyvault-name>.vault.azure.net',
+ secretName: 'Enter_the_Secret_Name',
+};
+```
+
+### Initialize MSAL.js and fetch the user credentials from Key Vault
+
+Initialize the MSAL.js authentication context by instantiating a [PublicClientApplication](https://azuread.github.io/microsoft-authentication-library-for-js/ref/classes/_azure_msal_browser.publicclientapplication.html) with a [Configuration](https://azuread.github.io/microsoft-authentication-library-for-js/ref/modules/_azure_msal.html#configuration) object. The minimum required configuration property is the `clientID` of the application.
+
+Use [SecretClient()](/javascript/api/@azure/keyvault-secrets/secretclient) to get the test username and password secrets from Azure Key Vault.
+
+[DefaultAzureCredential()](/javascript/api/@azure/identity/defaultazurecredential) authenticates with Azure Key Vault by getting an access token from a service principal configured by environment variables or a managed identity (if the code is running on an Azure resource with a managed identity). If the code is running locally, `DefaultAzureCredential` uses the local user's credentials. Read more in the [Azure Identity client library](/javascript/api/@azure/identity/defaultazurecredential) content.
+
+Use Microsoft Authentication Library (MSAL) to authenticate using the ROPC flow and get an access token. The access token is passed along as a bearer token in the HTTP request.
++
+```javascript
+import { test, expect } from '@playwright/test';
+import { DefaultAzureCredential } from '@azure/identity';
+import { SecretClient } from '@azure/keyvault-secrets';
+import { PublicClientApplication, CacheKVStore } from '@azure/msal-node';
+import { msalConfig, keyVaultConfig } from '../authConfig';
+
+let tokenCache;
+const KVUri = keyVaultConfig.keyVaultUri;
+const secretName = keyVaultConfig.secretName;
+
+async function getCredentials() {
+ try {
+ const credential = new DefaultAzureCredential();
+ const secretClient = new SecretClient(KVUri, credential);
+ const secret = await secretClient.getSecret(keyVaultConfig.secretName);
+ const password = secret.value;
+ return [secretName, password];
+ } catch (error) {
+ console.log(error);
+ }
+}
+
+test.beforeAll(async () => {
+ const pca = new PublicClientApplication(msalConfig);
+ const [username, password] = await getCredentials();
+ const usernamePasswordRequest = {
+ scopes: ['user.read', 'User.ReadBasic.All'],
+ username: username,
+ password: password,
+ };
+ await pca.acquireTokenByUsernamePassword(usernamePasswordRequest);
+ tokenCache = pca.getTokenCache().getKVStore();
+});
+```
+
+### Run the test suite
+
+In the same file, add the tests as shown below:
+
+```javascript
+/**
+ * Stores the token in the session storage and reloads the page
+ */
+async function setSessionStorage(page, tokens) {
+ const cacheKeys = Object.keys(tokens);
+ for (let key of cacheKeys) {
+ const value = JSON.stringify(tokenCache[key]);
+ await page.context().addInitScript(
+ (arr) => {
+ window.sessionStorage.setItem(arr[0], arr[1]);
+ },
+ [key, value]
+ );
+ }
+ await page.reload();
+}
+
+test.describe('Testing Authentication with MSAL.js ', () => {
+ test('Test user has signed in successfully', async ({ page }) => {
+ await page.goto('http://localhost:<port>/');
+ let signInButton = page.getByRole('button', { name: /Sign In/i });
+ let signOutButton = page.getByRole('button', { name: /Sign Out/i });
+ let welcomeDev = page.getByTestId('WelcomeMessage');
+ expect(await signInButton.count()).toBeGreaterThan(0);
+ expect(await signOutButton.count()).toBeLessThanOrEqual(0);
+ expect(await welcomeDev.innerHTML()).toEqual('Please sign-in to see your profile and read your mails');
+ await setSessionStorage(page, tokenCache);
+ expect(await signInButton.count()).toBeLessThanOrEqual(0);
+ expect(await signOutButton.count()).toBeGreaterThan(0);
+ expect(await welcomeDev.innerHTML()).toContain(`Welcome`);
+ });
+});
+
+```
+
+For more information, please check the following code sample [MSAL.js Testing Example](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/samples/msal-browser-samples/TestingSample).
active-directory V2 Protocols Oidc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-protocols-oidc.md
description: Sign in Azure AD users by using the Microsoft identity platform's i
Previously updated : 08/26/2022 Last updated : 02/14/2023
# OpenID Connect on the Microsoft identity platform
-OpenID Connect (OIDC) extends the OAuth 2.0 authorization protocol for use also as an authentication protocol. You can use OIDC to enable single sign-on (SSO) between your OAuth-enabled applications by using a security token called an *ID token*.
+OpenID Connect (OIDC) extends the OAuth 2.0 authorization protocol for use as an additional authentication protocol. You can use OIDC to enable single sign-on (SSO) between your OAuth-enabled applications by using a security token called an *ID token*.
The full specification for OIDC is available on the OpenID Foundation's website at [OpenID Connect Core 1.0 specification](https://openid.net/specs/openid-connect-core-1_0.html). ## Protocol flow: Sign-in
-This diagram shows the basic OpenID Connect sign-in flow. The steps in the flow are described in more detail in later sections of the article.
+The following diagram shows the basic OpenID Connect sign-in flow. The steps in the flow are described in more detail in later sections of the article.
![Swim-lane diagram showing the OpenID Connect protocol's sign-in flow.](./media/v2-protocols-oidc/convergence-scenarios-webapp.svg)
This diagram shows the basic OpenID Connect sign-in flow. The steps in the flow
## Enable ID tokens
-The *ID token* introduced by OpenID Connect is issued by the authorization server (the Microsoft identity platform) when the client application requests one during user authentication. The ID token enables a client application to verify the identity of the user and to get other information (claims) about them.
+The *ID token* introduced by OpenID Connect is issued by the authorization server, the Microsoft identity platform, when the client application requests one during user authentication. The ID token enables a client application to verify the identity of the user and to get other information (claims) about them.
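To illustrate the verification an ID token enables, here's a minimal sketch assuming the third-party `jose` library; the token, tenant ID, client ID, and expected nonce are placeholders your app supplies.

```javascript
import { createRemoteJWKSet, jwtVerify } from "jose";

// The Microsoft identity platform publishes its signing keys at the jwks_uri
// listed in the OpenID configuration document (common v2.0 keys endpoint here).
async function validateIdToken(idToken, tenantId, clientId, expectedNonce) {
  const jwks = createRemoteJWKSet(
    new URL("https://login.microsoftonline.com/common/discovery/v2.0/keys")
  );
  const { payload } = await jwtVerify(idToken, jwks, {
    issuer: `https://login.microsoftonline.com/${tenantId}/v2.0`, // your tenant's v2.0 issuer
    audience: clientId,                                           // your app's client ID
  });
  if (payload.nonce !== expectedNonce) {
    throw new Error("ID token nonce does not match the value sent in the request");
  }
  return payload; // verified claims about the signed-in user
}
```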
-ID tokens aren't issued by default for an application registered with the Microsoft identity platform. Enable ID tokens for an app by using one of the following methods.
+ID tokens aren't issued by default for an application registered with the Microsoft identity platform. ID tokens for an application are enabled by using one of the following methods:
-To enable ID tokens for your app, navigate to the [Azure portal](https://portal.azure.com) and then:
-
-1. Select **Azure Active Directory** > **App registrations** > *\<your application\>* > **Authentication**.
+1. Navigate to the [Azure portal](https://portal.azure.com) and select **Azure Active Directory** > **App registrations** > *\<your application\>* > **Authentication**.
1. Under **Implicit grant and hybrid flows**, select the **ID tokens (used for implicit and hybrid flows)** checkbox. Or:
Or:
1. Select **Azure Active Directory** > **App registrations** > *\<your application\>* > **Manifest**. 1. Set `oauth2AllowIdTokenImplicitFlow` to `true` in the app registration's [application manifest](reference-app-manifest.md).
-If you forget to enable ID tokens for your app and you request one, the Microsoft identity platform returns an `unsupported_response` error similar to:
+If ID tokens are not enabled for your app and one is requested, the Microsoft identity platform returns an `unsupported_response` error similar to:
> *The provided value for the input parameter 'response_type' isn't allowed for this client. Expected value is 'code'*.
-Requesting an ID token by specifying a `response_type` of `id_token` is explained in [Send the sign-in request](#send-the-sign-in-request) later in the article.
+Requesting an ID token by specifying a `response_type` of `code` is explained in [Send the sign-in request](#send-the-sign-in-request) later in the article.
## Fetch the OpenID configuration document
-OpenID providers like the Microsoft identity platform provide an [OpenID Provider Configuration Document](https://openid.net/specs/openid-connect-discovery-1_0.html) at a publicly accessible endpoint containing the provider's OIDC endpoints, supported claims, and other metadata. Client applications can use the metadata to discover the URLs to use for authentication and the authentication service's public signing keys, among other things.
+OpenID providers like the Microsoft identity platform provide an [OpenID Provider Configuration Document](https://openid.net/specs/openid-connect-discovery-1_0.html) at a publicly accessible endpoint containing the provider's OIDC endpoints, supported claims, and other metadata. Client applications can use the metadata to discover the URLs to use for authentication and the authentication service's public signing keys.
-Authentication libraries are the most common consumers of the OpenID configuration document, which they use for discovery of authentication URLs, the provider's public signing keys, and other service metadata. If you use an authentication library in your app (recommended), you likely won't need to hand-code requests to and responses from the OpenID configuration document endpoint.
+Authentication libraries are the most common consumers of the OpenID configuration document, which they use for discovery of authentication URLs, the provider's public signing keys, and other service metadata. If an authentication library is used in your app, you likely won't need to hand-code requests to and responses from the OpenID configuration document endpoint.
### Find your app's OpenID configuration document URI
The value of `{tenant}` varies based on the application's sign-in audience as sh
> [!TIP] > Note that when using the `common` or `consumers` authority for personal Microsoft accounts, the consuming resource application must be configured to support such type of accounts in accordance with [signInAudience](./supported-accounts-validation.md).
-You can also find your app's OpenID configuration document URI in its app registration in the Azure portal.
-
-To find the OIDC configuration document for your app, navigate to the [Azure portal](https://portal.azure.com) and then:
+To find the OIDC configuration document in the Azure portal, navigate to the [Azure portal](https://portal.azure.com) and then:
1. Select **Azure Active Directory** > **App registrations** > *\<your application\>* > **Endpoints**. 1. Locate the URI under **OpenID Connect metadata document**. ### Sample request
-This request gets the OpenID configuration metadata from the `common` authority's OpenID configuration document endpoint on the Azure public cloud:
+The following request gets the OpenID configuration metadata from the `common` authority's OpenID configuration document endpoint on the Azure public cloud:
```http GET /common/v2.0/.well-known/openid-configuration
The configuration metadata is returned in JSON format as shown in the following
... } ```-
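As a small illustration (not from the article), the metadata can also be fetched directly and a few commonly used values read from it:

```javascript
// Fetch the OpenID configuration document for the `common` authority and read
// the endpoints an app (or its auth library) uses during sign-in.
const res = await fetch(
  "https://login.microsoftonline.com/common/v2.0/.well-known/openid-configuration"
);
const metadata = await res.json();
console.log(metadata.authorization_endpoint); // where to send the sign-in request
console.log(metadata.token_endpoint);         // where to redeem an authorization code
console.log(metadata.jwks_uri);               // public signing keys for token validation
```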
-<!-- UNCOMMENT WHEN THE EXAMPLE APP REGISTRATION IS RE-ENABLED -->
-<!-- If your app has custom signing keys as a result of using [claims mapping](active-directory-claims-mapping.md), append the `appid` query parameter to include the `jwks_uri` claim that includes your app's signing key information. For example, `https://login.microsoftonline.com/{tenant}/v2.0/.well-known/openid-configuration?appid=6731de76-14a6-49ae-97bc-6eba6914391e` includes a `jwks_uri` of `https://login.microsoftonline.com/{tenant}/discovery/v2.0/keys?appid=6731de76-14a6-49ae-97bc-6eba6914391e`. -->
- ## Send the sign-in request To authenticate a user and request an ID token for use in your application, direct their user-agent to the Microsoft identity platform's _/authorize_ endpoint. The request is similar to the first leg of the [OAuth 2.0 authorization code flow](v2-oauth2-auth-code-flow.md) but with these distinctions: * Include the `openid` scope in the `scope` parameter.
-* Specify `id_token` or `code+id_token` in the `response_type` parameter.
+* Specify `code` in the `response_type` parameter.
* Include the `nonce` parameter. Example sign-in request (line breaks included only for readability):
client_id=6731de76-14a6-49ae-97bc-6eba6914391e
| | | | | `tenant` | Required | You can use the `{tenant}` value in the path of the request to control who can sign in to the application. The allowed values are `common`, `organizations`, `consumers`, and tenant identifiers. For more information, see [protocol basics](active-directory-v2-protocols.md#endpoints). Critically, for guest scenarios where you sign a user from one tenant into another tenant, you *must* provide the tenant identifier to correctly sign them into the resource tenant.| | `client_id` | Required | The **Application (client) ID** that the [Azure portal – App registrations](https://go.microsoft.com/fwlink/?linkid=2083908) experience assigned to your app. |
-| `response_type` | Required | Must include `id_token` for OpenID Connect sign-in. It might also include other `response_type` values, such as `code`. |
+| `response_type` | Required | Must include `code` for OpenID Connect sign-in. |
| `redirect_uri` | Recommended | The redirect URI of your app, where authentication responses can be sent and received by your app. It must exactly match one of the redirect URIs you registered in the portal, except that it must be URL-encoded. If not present, the endpoint will pick one registered `redirect_uri` at random to send the user back to. | | `scope` | Required | A space-separated list of scopes. For OpenID Connect, it must include the scope `openid`, which translates to the **Sign you in** permission in the consent UI. You might also include other scopes in this request for requesting consent. | | `nonce` | Required | A value generated and sent by your app in its request for an ID token. The same `nonce` value is included in the ID token returned to your app by the Microsoft identity platform. To mitigate token replay attacks, your app should verify the `nonce` value in the ID token is the same value it sent when requesting the token. The value is typically a unique, random string. |
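A minimal sketch of assembling the sign-in request described above; the client ID, redirect URI, and scopes are placeholders, and `crypto.randomUUID()` assumes a browser or recent Node.js runtime.

```javascript
// Build the /authorize URL for the OpenID Connect code flow and send the
// user-agent there; the returned code is later redeemed at the /token endpoint.
const params = new URLSearchParams({
  client_id: "00000000-0000-0000-0000-000000000000", // placeholder application (client) ID
  response_type: "code",
  redirect_uri: "http://localhost:3000/auth/callback",
  response_mode: "query",
  scope: "openid profile email",
  state: crypto.randomUUID(), // returned unchanged; lets the app correlate the response
  nonce: crypto.randomUUID(), // must match the nonce claim in the returned ID token
});
const signInUrl = `https://login.microsoftonline.com/common/oauth2/v2.0/authorize?${params}`;
window.location.assign(signInUrl); // in a browser app; server apps send a 302 instead
```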
active-directory Workload Identity Federation Create Trust https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/workload-identity-federation-create-trust.md
az ad app federated-credential create --id f6475511-fd81-4965-a00e-41e7792b7b9c
("credential.json" contains the following content) { "name": "Testing",
- "issuer": "https://token.actions.githubusercontent.com/",
+ "issuer": "https://token.actions.githubusercontent.com",
"subject": "repo:octo-org/octo-repo:environment:Production", "description": "Testing", "audiences": [
az rest -m DELETE -u 'https://graph.microsoft.com/applications/f6475511-fd81-49
- To learn how to use workload identity federation for GitHub Actions, see [Configure a GitHub Actions workflow to get an access token](/azure/developer/github/connect-from-azure). - Read the [GitHub Actions documentation](https://docs.github.com/actions/deployment/security-hardening-your-deployments/configuring-openid-connect-in-azure) to learn more about configuring your GitHub Actions workflow to get an access token from Microsoft identity provider and access Azure resources. - For more information, read about how Azure AD uses the [OAuth 2.0 client credentials grant](v2-oauth2-client-creds-grant-flow.md#third-case-access-token-request-with-a-federated-credential) and a client assertion issued by another IdP to get a token.-- For information about the required format of JWTs created by external identity providers, read about the [assertion format](active-directory-certificate-credentials.md#assertion-format).
+- For information about the required format of JWTs created by external identity providers, read about the [assertion format](active-directory-certificate-credentials.md#assertion-format).
active-directory Domains Verify Custom Subdomain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/domains-verify-custom-subdomain.md
Because subdomains inherit the authentication type of the root domain by default
1. Use the following example to GET the domain. Because the domain isn't a root domain, it inherits the root domain authentication type. Your command and results might look as follows, using your own tenant ID:
+> [!Note]
+> Issuing this request can be performed directly in [Graph Explorer](https://aka.ms/ge).
+ ```http GET https://graph.microsoft.com/v1.0/domains/foo.contoso.com/
Because subdomains inherit the authentication type of the root domain by default
Use the following command to promote the subdomain: ```http
-POST https://graph.windows.net/{tenant-id}/domains/foo.contoso.com/promote
+POST https://graph.microsoft.com/{tenant-id}/domains/foo.contoso.com/promote
``` ### Promote command error conditions
active-directory Scenario Azure First Sap Identity Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/scenario-azure-first-sap-identity-integration.md
Consider building automation to execute the entire certificate rollover process.
## Using Azure AD B2C as the Identity Provider
-[Azure Active Directory B2C](../../active-directory-b2c/overview.md) provides business-to-customer identity as a service. Given that the integration with Azure AD B2C is similar to how you would allow enterprise users to sign in with Azure AD, the recommendations above still mostly apply when you want to use Azure AD B2C for your customers, consumers or citizens and allow them to use their preferred social, enterprise, or local account identities. There are a few important differences, however.
+[Azure Active Directory B2C](../../active-directory-b2c/overview.md) provides business-to-customer identity as a service. Given that the integration with Azure AD B2C is similar to how you would allow enterprise users to sign in with Azure AD, the recommendations above still mostly apply when you want to use Azure AD B2C for your customers, consumers or citizens and allow them to use their preferred social, enterprise, or local account identities.
+
+There are a few important differences, however. Setting up Azure AD B2C as a corporate identity provider in IAS and configuring federation between both tenants is described in more detail in [this blog post](https://blogs.sap.com/2023/02/08/identity-federation-between-azure-ad-b2c-and-sap-cloud-identity-services-using-custom-policies/).
### Registering a SAML application in Azure AD B2C
active-directory Lifecycle Workflow Tasks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/lifecycle-workflow-tasks.md
For Microsoft Graph the parameters for the **Generate Temporary Access Pass and
### Add user to groups
-Allows users to be added to Microsoft 365 and cloud-only security groups. Mail-enabled, distribution, dynamic and PIM for Groups are not supported. To control access to on-premises applications and resources, you need to enable group writeback. For more information, see [Azure AD Connect group writeback](../hybrid/how-to-connect-group-writeback-v2.md).
+Allows users to be added to Microsoft 365 and cloud-only security groups. Mail-enabled, distribution, dynamic and role-assignable groups are not supported. To control access to on-premises applications and resources, you need to enable group writeback. For more information, see [Azure AD Connect group writeback](../hybrid/how-to-connect-group-writeback-v2.md).
You're able to customize the task name and description for this task. :::image type="content" source="media/lifecycle-workflow-task/add-group-task.png" alt-text="Screenshot of Workflows task: Add user to group task.":::
For Microsoft Graph the parameters for the **Add user to teams** task are as fol
### Enable user account
-Allows cloud-only user accounts to be enabled. You're able to customize the task name and description for this task in the Azure portal.
+Allows cloud-only user accounts to be enabled. Users with Azure AD role assignments are not supported, nor are users with membership or ownership of role-assignable groups. You're able to customize the task name and description for this task in the Azure portal.
:::image type="content" source="media/lifecycle-workflow-task/enable-task.png" alt-text="Screenshot of Workflows task: enable user account.":::
For more information on setting up a Logic app to run with Lifecycle Workflows,
### Disable user account
-Allows cloud-only user accounts to be disabled. You're able to customize the task name and description for this task in the Azure portal.
+Allows cloud-only user accounts to be disabled. Users with Azure AD role assignments are not supported, nor are users with membership or ownership of role-assignable groups. You're able to customize the task name and description for this task in the Azure portal.
:::image type="content" source="media/lifecycle-workflow-task/disable-task.png" alt-text="Screenshot of Workflows task: disable user account.":::
For Microsoft Graph the parameters for the **Disable user account** task are as
### Remove user from selected groups
-Allows users to be removed from Microsoft 365 and cloud-only security groups. Mail-enabled, distribution, dynamic and PIM for Groups are not supported. To control access to on-premises applications and resources, you need to enable group writeback. For more information, see [Azure AD Connect group writeback](../hybrid/how-to-connect-group-writeback-v2.md).
+Allows users to be removed from Microsoft 365 and cloud-only security groups. Mail-enabled, distribution, dynamic and role-assignable groups are not supported. To control access to on-premises applications and resources, you need to enable group writeback. For more information, see [Azure AD Connect group writeback](../hybrid/how-to-connect-group-writeback-v2.md).
You're able to customize the task name and description for this task in the Azure portal. :::image type="content" source="media/lifecycle-workflow-task/remove-group-task.png" alt-text="Screenshot of Workflows task: Remove user from select groups.":::
For Microsoft Graph the parameters for the **Remove user from selected groups**
### Remove users from all groups
-Allows users to be removed from every Microsoft 365 and cloud-only security group they're a member of. Mail-enabled, distribution, dynamic and PIM for Groups are not supported. To control access to on-premises applications and resources, you need to enable group writeback. For more information, see [Azure AD Connect group writeback](../hybrid/how-to-connect-group-writeback-v2.md).
+Allows users to be removed from every Microsoft 365 and cloud-only security group they're a member of. Mail-enabled, distribution, dynamic and role-assignable groups are not supported. To control access to on-premises applications and resources, you need to enable group writeback. For more information, see [Azure AD Connect group writeback](../hybrid/how-to-connect-group-writeback-v2.md).
You're able to customize the task name and description for this task in the Azure portal.
For Microsoft Graph the parameters for the **Remove all license assignment from
### Delete User
-Allows cloud-only user accounts to be deleted. You're able to customize the task name and description for this task in the Azure portal.
+Allows cloud-only user accounts to be deleted. Users with Azure AD role assignments are not supported, nor are users with membership or ownership of role-assignable groups. You're able to customize the task name and description for this task in the Azure portal.
:::image type="content" source="media/lifecycle-workflow-task/delete-user-task.png" alt-text="Screenshot of Workflows task: Delete user account.":::
active-directory How To Connect Sync Configure Filtering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-sync-configure-filtering.md
This article covers how to configure the different filtering methods.
## Basics and important notes In Azure AD Connect sync, you can enable filtering at any time. If you start with a default configuration of directory synchronization and then configure filtering, the objects that are filtered out are no longer synchronized to Azure AD. Because of this change, any objects in Azure AD that were previously synchronized but were then filtered are deleted in Azure AD.
-Before you start making changes to filtering, make sure that you [disable the scheduled task](#disable-the-scheduled-task) so you don't accidentally export changes that you haven't yet verified to be correct.
+Before you start making changes to filtering, make sure that you [disable the built-in scheduler](#disable-the-synchronization-scheduler) so you don't accidentally export changes that you haven't yet verified to be correct.
Because filtering can remove many objects at the same time, you want to make sure that your new filters are correct before you start exporting any changes to Azure AD. After you've completed the configuration steps, we strongly recommend that you follow the [verification steps](#apply-and-verify-changes) before you export and make changes to Azure AD.
active-directory How To Use Vm Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-to-use-vm-token.md
class GetMSIToken {
## Get a token using Go
-```
+```go
package main import (
active-directory How To View Associated Resources For An Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-to-view-associated-resources-for-an-identity.md
description: Step-by-step instructions for viewing the Azure resources that are
documentationcenter: '' -+ editor: ''
ms.devlang: na
na Previously updated : 06/20/2022 Last updated : 01/18/2023
Being able to quickly see which Azure resources are associated with a user-assig
Select the resource name to be brought to its summary page. #### Filtering and sorting by resource type+ Filter the resources by typing in the filter box at the top of the summary page. You can filter by the name, type, resource group, and subscription ID. Select the column title to sort alphabetically, ascending or descending.
https://management.azure.com/subscriptions/{resourceID of user-assigned identity
| $skip | 50 | The number of items you want to skip while paging through the results. | | $top | 10 | The number of resources to return. 0 will return only a count of the resources. |
-Below is a sample request to the REST API:
+The following is a sample request to the REST API:
+ ```http POST https://management.azure.com/subscriptions/aab111d1-1111-43e2-8d11-3bfc47ab8111/resourceGroups/devrg/providers/Microsoft.ManagedIdentity/userAssignedIdentities/devIdentity/listAssociatedResources?$filter={filter}&$orderby={orderby}&$skip={skip}&$top={top}&skipToken={skipToken}&api-version=2021-09-30-preview ```
-Below is a sample response from the REST API:
+The following is a sample response from the REST API:
+ ```json { "totalCount": 2,
Below is a sample response from the REST API:
``` ### Command Line Interface+ To view the associated resources for a user-assigned managed identity, run the following command:+ ```azurecli az identity list-resources --resource-group <ResourceGroupName> --name <ManagedIdentityName> ``` The response will look like this:+ ```json [ {
The response will look like this:
``` ### REST API using PowerShell+ There's no specific PowerShell command for returning the associated resources of a managed identity, but you can use the REST API in PowerShell by using the following command: ```PowerShell
Invoke-AzRestMethod -Path "/subscriptions/XXX-XXX-XXX-XXX/resourceGroups/test-rg
> All resources associated with an identity will be returned, regardless of the user's permissions. The user only needs to have access to read the managed identity. This means that more resources may be visible than the user can see elsewhere in the portal. This is to provide full visibility of the identity's usage. If the user doesn't have access to an associated resource, an error will be displayed if they try to access it from the list. ## Delete a user-assigned managed identity+ When you select the delete button for a user-assigned managed identity, you'll see a list of up to 10 associated resources for that identity. The full count will be displayed at the top of the pane. This list allows you to see which resources will be affected by deleting the identity. You'll be asked to confirm your decision. :::image type="content" source="media/viewing-associated-resources/associated-resources-delete.png" alt-text="Screenshot showing the delete confirmation screen for a user-assigned managed identity.":::
When you select the delete button for a user-assigned managed identity, you'll s
This confirmation process is only available in the portal. To view an identity's resources before deleting it using the REST API, retrieve the list of resources manually in advance. ## Limitations+ - This functionality is available in all public regions, and will be available in USGov and China in the coming weeks. - API requests for associated resources are limited to one per second per tenant. If you exceed this limit, you may receive a `HTTP 429` error. This limit doesn't apply to retrieving a list of user-assigned managed identities. - Azure Resources types that are in preview, or their support for Managed identities is in preview, may not appear in the associated resources list until fully generally available. This list includes Service Fabric clusters, Blueprints, and Machine learning services.
active-directory Admin Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/admin-api.md
You are able to [search](how-to-issuer-revoke.md) for verifiable credentials wit
} ```
-The following request shows how to add the calculated value to the filter parameter of the request. At this moment only the filter=indexclaim eq format is supported.
+The following request shows how to add the calculated value to the filter parameter of the request. At this moment, only the `filter=indexclaimhash eq` format is supported.
### HTTP request
-`GET /v1.0/verifiableCredentials/authorities/:authorityId/contracts/:contractId/credentials?filter=indexclaim eq {hashedsearchclaimvalue}`
+`GET /v1.0/verifiableCredentials/authorities/:authorityId/contracts/:contractId/credentials?filter=indexclaimhash eq {hashedsearchclaimvalue}`
#### Request headers
OK
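A hedged sketch of issuing that search request from code; the Admin API base URL, authority and contract IDs, hashed claim value, and access token are all placeholders you supply.

```javascript
// Note the space in "indexclaimhash eq <hash>" must be URL-encoded.
async function searchCredentials(adminApiBase, authorityId, contractId, hashedClaim, accessToken) {
  const filter = encodeURIComponent(`indexclaimhash eq ${hashedClaim}`);
  const url = `${adminApiBase}/v1.0/verifiableCredentials/authorities/${authorityId}` +
              `/contracts/${contractId}/credentials?filter=${filter}`;
  const res = await fetch(url, { headers: { Authorization: `Bearer ${accessToken}` } });
  return res.json(); // matching credential records, if any
}
```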
## Next steps - [Specify the request service REST API issuance request](issuance-request-api.md)-- [Entra Verified ID Network API](issuance-request-api.md)
+- [Entra Verified ID Network API](issuance-request-api.md)
aks Auto Upgrade Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/auto-upgrade-cluster.md
Part of the AKS cluster lifecycle involves performing periodic upgrades to the l
Auto-upgrade provides a set once and forget mechanism that yields tangible time and operational cost benefits. By enabling auto-upgrade, you can ensure your clusters are up to date and don't miss the latest AKS features or patches from AKS and upstream Kubernetes.
-AKS follows a strict versioning window with regard to supportability. With properly selected auto-upgrade channels, you can avoid clusters falling into an unsupported version. For more on the AKS support window, see [Supported Kubernetes versions][supported-kubernetes-versions].
+AKS follows a strict versioning window with regard to supportability. With properly selected auto-upgrade channels, you can avoid clusters falling into an unsupported version. For more on the AKS support window, see [Alias minor versions][supported-kubernetes-versions].
Even if using node image auto upgrade (which won't change the Kubernetes version), it still requires MC to be in a supported version
The following upgrade channels are available:
> [!NOTE] > Cluster auto-upgrade only updates to GA versions of Kubernetes and will not update to preview versions.
+> [!NOTE]
+> With AKS, you can create a cluster without specifying the exact patch version. When you create a cluster without designating a patch, the cluster runs the minor version's latest GA patch. To learn more, see [AKS support window][supported-kubernetes-versions].
+ > [!NOTE] > Auto-upgrade requires the cluster's Kubernetes version to be within the [AKS support window][supported-kubernetes-versions], even if using the `node-image` channel.
aks Csi Secrets Store Identity Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/csi-secrets-store-identity-access.md
Azure AD workload identity (preview) is supported on both Windows and Linux clus
usePodIdentity: "false" useVMManagedIdentity: "false" clientID: "${USER_ASSIGNED_CLIENT_ID}" # Setting this to use workload identity
- keyvaultName: ${$KEYVAULT_NAME} # Set to the name of your key vault
+ keyvaultName: ${KEYVAULT_NAME} # Set to the name of your key vault
cloudName: "" # [OPTIONAL for Azure] if not provided, the Azure environment defaults to AzurePublicCloud objects: | array:
aks Managed Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/managed-aad.md
Operation failed with status: 'Bad Request'. Details: Getting static credential
### Disable local accounts on an existing cluster
-To disable local accounts on an existing AKS cluster, use the [`az aks update`][az-aks-update] command with the `disable-local-accounts` parameter.
+To disable local accounts on an existing Azure AD integration enabled AKS cluster, use the [`az aks update`][az-aks-update] command with the `disable-local-accounts` parameter.
```azurecli-interactive
-az aks update -g <resource-group> -n <cluster-name> --enable-aad --aad-admin-group-object-ids <aad-group-id> --disable-local-accounts
+az aks update -g <resource-group> -n <cluster-name> --disable-local-accounts
``` In the output, confirm local accounts have been disabled by checking the field `properties.disableLocalAccounts` is set to `true`.
Operation failed with status: 'Bad Request'. Details: Getting static credential
### Re-enable local accounts on an existing cluster
-AKS supports enabling a disabled local account on an existing cluster with the `enable-local` parameter.
+AKS supports enabling a disabled local account on an existing cluster with the `enable-local-accounts` parameter.
```azurecli-interactive
-az aks update -g <resource-group> -n <cluster-name> --enable-aad --aad-admin-group-object-ids <aad-group-id> --enable-local
+az aks update -g <resource-group> -n <cluster-name> --enable-local-accounts
``` In the output, confirm local accounts have been re-enabled by checking the field `properties.disableLocalAccounts` is set to `false`.
app-service Configure Language Dotnetcore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-dotnetcore.md
namespace SomeNamespace
} ```
-If you configure an app setting with the same name in App Service and in *appsettings.json*, for example, the App Service value takes precedence over the *appsettings.json* value. The local *appsettings.json* value lets you debug the app locally, but the App Service value lets your run the app in production with production settings. Connection strings work in the same way. This way, you can keep your application secrets outside of your code repository and access the appropriate values without changing your code.
+If you configure an app setting with the same name in App Service and in *appsettings.json*, for example, the App Service value takes precedence over the *appsettings.json* value. The local *appsettings.json* value lets you debug the app locally, but the App Service value lets you run the app in production with production settings. Connection strings work in the same way. This way, you can keep your application secrets outside of your code repository and access the appropriate values without changing your code.
::: zone pivot="platform-linux" > [!NOTE]
app-service Quickstart Arc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-arc.md
Get a sample Node.js app using Git and deploy it using [ZIP deploy](deploy-zip.m
git clone https://github.com/Azure-Samples/nodejs-docs-hello-world cd nodejs-docs-hello-world zip -r package.zip .
+az webapp config appsettings set --resource-group myResourceGroup --name <app-name> --settings SCM_DO_BUILD_DURING_DEPLOYMENT=true
az webapp deployment source config-zip --resource-group myResourceGroup --name <app-name> --src package.zip ```
application-gateway Application Gateway Backend Health Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-backend-health-troubleshooting.md
Previously updated : 09/13/2022 Last updated : 02/14/2023
Learn more about [Application Gateway probe matching](./application-gateway-prob
1. Sign in to the machine where your application is hosted. 2. Select Win+R or right-click the **Start** button, and then select **Run**.
-3. Enter `certmgr.msc` and select Enter. You can also search for Certificate Manager on the **Start** menu.
-4. Locate the certificate, typically in `\Certificates - Current User\\Personal\\Certificates\`, and open it.
+3. Enter `certlm.msc` and select Enter. You can also search for Certificate Manager on the **Start** menu.
+4. Locate the certificate, typically in `Certificates - Local Computer\Personal\Certificates`, and open it.
5. Select the root certificate and then select **View Certificate**. 6. In the Certificate properties, select the **Details** tab. 7. On the **Details** tab, select the **Copy to File** option and save the file in the Base-64 encoded X.509 (.CER) format.
For Windows:
1. Sign in to the machine where your application is hosted. 2. Select Win+R or right-click the **Start** button and select **Run**.
-3. Enter **certmgr.msc** and select Enter. You can also search for Certificate Manager on the **Start** menu.
-4. Locate the certificate (typically in `\Certificates - Current User\\Personal\\Certificates`), and open the certificate.
+3. Enter **certlm.msc** and select Enter. You can also search for Certificate Manager on the **Start** menu.
+4. Locate the certificate (typically in `Certificates - Local Computer\Personal\Certificates`), and open the certificate.
5. On the **Details** tab, check the certificate **Subject**. 6. Verify the CN of the certificate from the details and enter the same in the host name field of the custom probe or in the HTTP settings (if **Pick hostname from backend HTTP settings** is selected). If that's not the desired host name for your website, you must get a certificate for that domain or enter the correct host name in the custom probe or HTTP setting configuration.
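If you manage the gateway with the Azure CLI, a sketch like the following could update the custom probe's host name to match the certificate CN; the gateway, probe, and host values are placeholders, not from the article:

```azurecli
# Point the custom health probe at the host name that matches the certificate's CN.
az network application-gateway probe update \
  --resource-group <resource-group> \
  --gateway-name <gateway-name> \
  --name <probe-name> \
  --host www.contoso.com
```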
application-gateway Application Gateway Websocket https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-websocket.md
A BackendAddressPool is used to define a backend pool with WebSocket enabled ser
}] ```
+> [!NOTE]
+> Ensure that your **timeout value** is greater than your server-defined ping/pong interval so that the gateway doesn't time out before the client sends a ping. A typical WebSocket ping/pong interval is 20 seconds, so a timeout value of 40 seconds, for example, ensures that the gateway doesn't send a timeout error before the client sends a ping; otherwise, the client receives a 1006 error.
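As a sketch of how the timeout might be raised with the Azure CLI (resource names are placeholders, and the 40-second value follows the example in the note above):

```azurecli
# Raise the backend HTTP settings request timeout to 40 seconds.
az network application-gateway http-settings update \
  --resource-group <resource-group> \
  --gateway-name <gateway-name> \
  --name <http-settings-name> \
  --timeout 40
```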
+ ## WebSocket enabled backend Your backend must have an HTTP/HTTPS web server running on the configured port (usually 80/443) for WebSocket to work. This requirement exists because the WebSocket protocol requires the initial handshake to be HTTP, with an upgrade to the WebSocket protocol sent as a header field. The following is an example of a header:
applied-ai-services Studio Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/studio-overview.md
+
+ Title: What is Form Recognizer Studio?
+
+description: Learn how to set up and use Form Recognizer Studio to test features of Azure Form Recognizer on the web.
+++++ Last updated : 02/14/2023+
+monikerRange: 'form-recog-3.0.0'
+recommendations: false
++
+<!-- markdownlint-disable MD033 -->
+# What is Form Recognizer Studio?
+
+**This article applies to:** ![Form Recognizer v3.0 checkmark](media/yes-icon.png) **Form Recognizer v3.0**.
+
+Form Recognizer Studio is an online tool to visually explore, understand, train, and integrate features from the Form Recognizer service into your applications. The studio provides a platform for you to experiment with the different Form Recognizer models and sample their returned data in an interactive manner without the need to write code.
+
+The studio supports Form Recognizer v3.0 models and v3.0 model training. Previously trained v2.1 models with labeled data are supported, but not v2.1 model training. Refer to the [REST API migration guide](v3-migration-guide.md) for detailed information about migrating from v2.1 to v3.0.
+
+## Get started using Form Recognizer Studio
+
+1. To use Form Recognizer Studio, you need the following assets:
+
+ * **Azure subscription** - [Create one for free](https://azure.microsoft.com/free/cognitive-services/).
+
+ * **Cognitive Services or Form Recognizer resource**. Once you have your Azure subscription, create a [single-service](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) or [multi-service](https://portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne) resource in the Azure portal to get your key and endpoint. Use the free pricing tier (`F0`) to try the service, and upgrade later to a paid tier for production. (A CLI sketch for creating the resource appears after the model table later in this section.)
+
+1. Navigate to the [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/). If it's your first time logging in, a popup window appears prompting you to configure your service resource. You have two options:
+
+ **a. Access by Resource**.
+
+ * Choose your existing subscription.
+ * Select an existing resource group within your subscription or create a new one.
+ * Select your existing Form Recognizer or Cognitive services resource.
+
+ :::image type="content" source="media/studio/welcome-to-studio.png" alt-text="Screenshot of the configure service resource window.":::
+
+ **b. Access by API endpoint and key**.
+
+ * Retrieve your endpoint and key from the Azure portal.
+ * Go to the overview page for your resource and select **Keys and Endpoint** from the left navigation bar.
+ * Enter the values in the appropriate fields.
+
+ :::image type="content" source="media/containers/keys-and-endpoint.png" alt-text="Screenshot: keys and endpoint location in the Azure portal.":::
+
+1. Once you've configured your resource, you can try the different models offered by Form Recognizer Studio. From the front page, select any Form Recognizer model to try it with a no-code approach.
+
+ :::image type="content" source="media/studio/form-recognizer-studio-front.png" alt-text="Screenshot of Form Recognizer Studio front page.":::
+
+1. After you've tried Form Recognizer Studio, use the [**C#**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true), [**Java**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true), [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) or [**Python**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) client libraries or the [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) to get started incorporating Form Recognizer models into your own applications.
+
+ To learn more about each model, *see* the concept pages in the following table.
+
+ | Model type| Models |
+ |--|--|
+ |**Document analysis models**| <ul><li>[**Read model**](concept-read.md)</li><li>[**Layout model**](concept-layout.md)</li><li>[**General document model**](concept-general-document.md)</li></ul>|
+ |**Prebuilt models**|<ul><li>[**W-2 form model**](concept-w2.md)</li><li>[**Invoice model**](concept-invoice.md)</li><li>[**Receipt model**](concept-receipt.md)</li><li>[**ID document model**](concept-id-document.md)</li><li>[**Business card model**](concept-business-card.md)</li></ul>|
+ |**Custom models**|<ul><li>[**Custom model**](concept-custom.md)</li><ul><li>[**Template model**](concept-custom-template.md)</li><li>[**Neural model**](concept-custom-template.md)</li></ul><li>[**Composed model**](concept-model-overview.md)</li></ul>|
+
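If you'd rather create the Form Recognizer resource from the command line than in the portal (see the prerequisites earlier in this section), a sketch like the following might work; the resource name, resource group, and region are placeholders:

```azurecli
# Create a single-service Form Recognizer resource on the free tier.
az cognitiveservices account create --name <resource-name> --resource-group <resource-group> \
  --kind FormRecognizer --sku F0 --location eastus --yes

# Retrieve the key and endpoint to enter in the studio's configuration dialog.
az cognitiveservices account keys list --name <resource-name> --resource-group <resource-group>
az cognitiveservices account show --name <resource-name> --resource-group <resource-group> --query properties.endpoint
```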
+### Manage your resource
+
+ To view resource details such as name and pricing tier, select the **Settings** icon in the top-right corner of the Form Recognizer Studio home page and select the **Resource** tab. If you have access to other resources, you can switch resources as well.
++
+With Form Recognizer, you can quickly automate your data processing in applications and workflows, easily enhance data-driven strategies, and skillfully enrich document search capabilities.
+
+## Next steps
+
+* Visit [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio) to begin using the models presented by the service.
+
+* For more information on Form Recognizer capabilities, see [Azure Form Recognizer overview](overview.md).
automation Automation Managed Identity Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-managed-identity-faq.md
Title: Azure Automation migration to managed identity FAQ
+ Title: Azure Automation migration to managed identities FAQ
description: This article gives answers to frequently asked questions when you're migrating from a Run As account to a managed identity.
#Customer intent: As an implementer, I want answers to various questions.
-# FAQ for migrating from a Run As account to a managed identity
+# FAQ for migrating from a Run As account to managed identities
-The following FAQ can help you migrate from a Run As account to a managed identity in Azure Automation. If you have any other questions about the capabilities, post them on the [discussion forum](https://aka.ms/retirement-announcement-automation-runbook-start-using-managed-identities). When a question is frequently asked, we add it to this article so that it benefits everyone.
+The following FAQ can help you migrate from a Run As account to a managed identity in Azure Automation. If you have any other questions about the capabilities, post them on the [discussion forum](https://aka.ms/retirement-announcement-automation-runbook-start-using-managed-identities). When a question is frequently asked, we add it to this article so that it benefits everyone.
## How long will you support a Run As account?
-Automation Run As accounts will be supported until *September 30, 2023*. Although we continue to support existing users, we recommend that all new users use managed identities for runbook authentication.
-
-Existing users can still create a Run As account. You can go to the account properties and renew a certificate upon expiration until *January 30, 2023*. After that date, you won't be able to create a Run As account from the Azure portal.
-
-You'll still be able to create a Run As account through a [PowerShell script](./create-run-as-account.md#create-account-using-powershell) until support ends. You can [use this script](https://github.com/azureautomation/runbooks/blob/master/Utility/AzRunAs/RunAsAccountAssessAndRenew.ps1) to renew the certificate after *January 30, 2023*, until *September 30, 2023*. This script will assess the Automation account that has configured Run As accounts and renew the certificate if you choose to do so. On confirmation, the script will renew the key credentials of the Azure Active Directory (Azure AD) app and upload new a self-signed certificate to the Azure AD app.
+Automation Run As accounts will be supported until *September 30, 2023*. Starting April 1, 2023, you won't be able to create **new** Run As accounts in Azure Automation. You can renew certificates for existing Run As accounts only until the end of support.
## Will existing runbooks that use the Run As account be able to authenticate?
-Yes, they'll be able to authenticate. There will be no impact to existing runbooks that use a Run As account.
-
-## How can I renew an existing Run As account after January 30, 2023, when portal support to renew the account is removed?
-You can [use this script](https://github.com/azureautomation/runbooks/blob/master/Utility/AzRunAs/RunAsAccountAssessAndRenew.ps1) to renew the Run As account certificate after January 30, 2023, until September 30, 2023.
+Yes, they'll be able to authenticate. There will be no impact to existing runbooks that use a Run As account. However, after September 30, 2023, runbook executions that use Run As accounts, including Classic Run As accounts, won't be supported. You must therefore migrate all runbooks to use managed identities before that date.
-## Can Run As accounts still be created after September 30, 2023, when Run As accounts will retire?
-Yes, you can still create Run As accounts by using the [PowerShell script](../automation/create-run-as-account.md#create-account-using-powershell). However, this will be an unsupported scenario.
-
-## Can Run As accounts still be renewed after September 30, 2023, when Run As account will retire?
-You can use [this script](https://github.com/azureautomation/runbooks/blob/master/Utility/AzRunAs/RunAsAccountAssessAndRenew.ps1) to renew the Run As account certificate after September 30, 2023, when Run As accounts will retire. However, it will be an unsupported scenario.
+## My Run As account will expire soon. How can I renew it?
+If your Run As account certificate is about to expire, consider switching to managed identities for authentication instead of renewing the certificate. If you still want to renew it, you can do so through the portal only until September 30, 2023.
+## Can I create new Run As accounts?
+Starting April 1, 2023, you won't be able to create new Run As accounts. We strongly recommend that you use managed identities for authentication instead.
+
## Will runbooks that still use the Run As account be able to authenticate after September 30, 2023?
-Yes, the runbooks will be able to authenticate until the Run As account certificate expires.
+Yes, the runbooks will be able to authenticate until the Run As account certificate expires. However, after September 30, 2023, runbook executions that use Run As accounts won't be supported.
## What is a managed identity? Applications use managed identities in Azure AD when they're connecting to resources that support Azure AD authentication. Applications can use managed identities to obtain Azure AD tokens without managing credentials, secrets, certificates, or keys.
Run As accounts also have a management overhead that involves creating a service
## Can a managed identity be used for both cloud and hybrid jobs? Azure Automation supports [system-assigned managed identities](./automation-security-overview.md#managed-identities) for both cloud and hybrid jobs. Currently, Azure Automation [user-assigned managed identities](./automation-security-overview.md) can be used for cloud jobs only and can't be used for jobs that run on a hybrid worker.
-## Can I use a Run As account for new Automation account?
-Yes, but only in a scenario where managed identities aren't supported for specific on-premises resources. We'll allow the creation of a Run As account through a [PowerShell script](./create-run-as-account.md#create-account-using-powershell).
- ## How can I migrate from an existing Run As account to a managed identity? Follow the steps in [Migrate an existing Run As account to a managed identity](./migrate-run-as-accounts-managed-identity.md).
automation Migrate Run As Accounts Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/migrate-run-as-accounts-managed-identity.md
Title: Migrate from a Run As account to a managed identity
-description: This article describes how to migrate from a Run As account to a managed identity in Azure Automation.
+ Title: Migrate from a Run As account to managed identities
+description: This article describes how to migrate from a Run As account to managed identities in Azure Automation.
Previously updated : 02/11/2023 Last updated : 02/15/2023
-# Migrate from an existing Run As account to a managed identity
+# Migrate from an existing Run As account to managed identities
> [!IMPORTANT]
-> Azure Automation Run As accounts will retire on *September 30, 2023*. Microsoft won't provide support beyond that date. From now through *September 30, 2023*, you can continue to use Azure Automation Run As accounts. However, we recommend that you transition to [managed identities](../automation/automation-security-overview.md#managed-identities) before *September 30, 2023*.
->
-> For more information about migration cadence and the support timeline for Run As account creation and certificate renewal, see the [frequently asked questions](automation-managed-identity-faq.md).
+> Azure Automation Run As accounts will retire on *September 30, 2023*, and will be replaced entirely by [managed identities](automation-security-overview.md#managed-identities). Runbook executions that use Run As accounts, including Classic Run As accounts, won't be supported after that date. Starting April 1, 2023, you won't be able to create **new** Run As accounts in Azure Automation.
+
+For more information about migration cadence and the support timeline for Run As account creation and certificate renewal, see the [frequently asked questions](automation-managed-identity-faq.md).
Run As accounts in Azure Automation provide authentication for managing resources deployed through Azure Resource Manager or the classic deployment model. Whenever a Run As account is created, an Azure AD application is registered, and a self-signed certificate is generated. The certificate is valid for one year. Renewing the certificate every year before it expires keeps the Automation account working but adds overhead.
Before you migrate from a Run As account or Classic Run As account to a managed
For example, if the Automation account is required only to start or stop an Azure VM, then the permissions assigned to the Run As account need to be only for starting or stopping the VM. Similarly, assign read-only permissions if a runbook is reading from Azure Blob Storage. For more information, see [Azure Automation security guidelines](../automation/automation-security-guidelines.md#authentication-certificate-and-identities).
-1. If you are using Classic Run As accounts, ensure that you have [migrated](../virtual-machines/classic-vm-deprecation.md) resources deployed through classic deployment model to Azure Resource Manager.
+1. If you're using Classic Run As accounts, ensure that you have [migrated](../virtual-machines/classic-vm-deprecation.md) resources deployed through classic deployment model to Azure Resource Manager.
+1. Use [this script](https://github.com/azureautomation/runbooks/blob/master/Utility/AzRunAs/Check-AutomationRunAsAccountRoleAssignments.ps1) to find out which Automation accounts use a Run As account. By default, a Run As account has the built-in Contributor role assigned to it. The script checks your Azure Automation Run As accounts and determines whether their role assignment is the default one or has been changed to a different role definition.
## Migrate from an Automation Run As account to a managed identity
To migrate from an Automation Run As account or Classic Run As account to a mana
For managed identity support, use the `Connect-AzAccount` cmdlet. To learn more about this cmdlet, see [Connect-AzAccount](/powershell/module/az.accounts/Connect-AzAccount?branch=main&view=azps-8.3.0) in the PowerShell reference.
- - If you're using Az modules, update to the latest version by following the steps in the [Update Azure PowerShell modules](./automation-update-azure-modules.md?branch=main#update-az-modules) article.
+ - If you're using `Az` modules, update to the latest version by following the steps in the [Update Azure PowerShell modules](./automation-update-azure-modules.md?branch=main#update-az-modules) article.
 - If you're using AzureRM modules, update `AzureRM.Profile` to the latest version and replace the `Add-AzureRMAccount` cmdlet with `Connect-AzureRMAccount -Identity`. To understand the changes to the runbook code that are required before you can use managed identities, use the [sample scripts](#sample-scripts).
To migrate from an Automation Run As account or Classic Run As account to a mana
## Sample scripts
-The following examples of runbook scripts fetch the Resource Manager resources by using the Run As account (service principal) and the managed identity.
-
-# [Run As account](#tab/run-as-account)
-
-```powershell-interactive
- $connectionName = "AzureRunAsConnection"
- try
- {
- # Get the connection "AzureRunAsConnection"
- $servicePrincipalConnection=Get-AutomationConnection -Name $connectionName
-
- "Logging in to Azure..."
- Add-AzureRmAccount `
- -ServicePrincipal `
- -TenantId $servicePrincipalConnection.TenantId `
- -ApplicationId $servicePrincipalConnection.ApplicationId `
- -CertificateThumbprint $servicePrincipalConnection.CertificateThumbprint
- }
- catch {
- if (!$servicePrincipalConnection)
- {
- $ErrorMessage = "Connection $connectionName not found."
- throw $ErrorMessage
- } else{
- Write-Error -Message $_.Exception
- throw $_.Exception
- }
- }
-
- #Get all Resource Manager resources from all resource groups
- $ResourceGroups = Get-AzureRmResourceGroup
-
- foreach ($ResourceGroup in $ResourceGroups)
- {
- Write-Output ("Showing resources in resource group " + $ResourceGroup.ResourceGroupName)
- $Resources = Find-AzureRmResource -ResourceGroupNameContains $ResourceGroup.ResourceGroupName | Select ResourceName, ResourceType
- ForEach ($Resource in $Resources)
- {
- Write-Output ($Resource.ResourceName + " of type " + $Resource.ResourceType)
- }
- Write-Output ("")
- }
- ```
+The following example runbook scripts fetch Resource Manager resources by using the Run As account (service principal) and the managed identity. Notice the difference in the runbook code at the beginning of each runbook, where it authenticates against the resource.
# [System-assigned managed identity](#tab/sa-managed-identity)
foreach ($ResourceGroup in $ResourceGroups)
Write-Output ("") } ```
+# [Run As account](#tab/run-as-account)
+
+```powershell-interactive
+ $connectionName = "AzureRunAsConnection"
+ try
+ {
+ # Get the connection "AzureRunAsConnection"
+ $servicePrincipalConnection=Get-AutomationConnection -Name $connectionName
+
+ "Logging in to Azure..."
+ Add-AzureRmAccount `
+ -ServicePrincipal `
+ -TenantId $servicePrincipalConnection.TenantId `
+ -ApplicationId $servicePrincipalConnection.ApplicationId `
+ -CertificateThumbprint $servicePrincipalConnection.CertificateThumbprint
+ }
+ catch {
+ if (!$servicePrincipalConnection)
+ {
+ $ErrorMessage = "Connection $connectionName not found."
+ throw $ErrorMessage
+ } else{
+ Write-Error -Message $_.Exception
+ throw $_.Exception
+ }
+ }
+
+ #Get all Resource Manager resources from all resource groups
+ $ResourceGroups = Get-AzureRmResourceGroup
+
+ foreach ($ResourceGroup in $ResourceGroups)
+ {
+ Write-Output ("Showing resources in resource group " + $ResourceGroup.ResourceGroupName)
+ $Resources = Find-AzureRmResource -ResourceGroupNameContains $ResourceGroup.ResourceGroupName | Select ResourceName, ResourceType
+ ForEach ($Resource in $Resources)
+ {
+ Write-Output ($Resource.ResourceName + " of type " + $Resource.ResourceType)
+ }
+ Write-Output ("")
+ }
+ ```
+ ## Graphical runbooks
foreach ($ResourceGroup in $ResourceGroups)
:::image type="content" source="./medilet.":::
- For use with the Run As account, the cmdlet will use the `ServicePrinicipalCertificate` parameter set to `ApplicationId`. `CertificateThumbprint` will be from `RunAsAccountConnection`.
+ For use with the Run As account, the cmdlet uses the `ServicePrincipalCertificate` parameter set to `ApplicationId`. `CertificateThumbprint` comes from `RunAsAccountConnection`.
:::image type="content" source="./media/migrate-run-as-account-managed-identity/parameter-sets-inline.png" alt-text="Screenshot that shows parameter sets." lightbox="./media/migrate-run-as-account-managed-identity/parameter-sets-expanded.png":::
automation Source Control Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/source-control-integration.md
Currently, you can't use the Azure portal to update the PAT in source control. W
## Next steps
-* For integrating source control in Azure Automation, see [Azure Automation: Source Control Integration in Azure Automation](https://azure.microsoft.com/blog/azure-automation-source-control-13/).
-* For integrating runbook source control with Visual Studio Codespaces, see [Azure Automation: Integrating Runbook Source Control using Visual Studio Codespaces](https://azure.microsoft.com/blog/azure-automation-integrating-runbook-source-control-using-visual-studio-online/).
+* For integrating runbook source control with Visual Studio Codespaces, see [Azure Automation: Integrating Runbook Source Control using Visual Studio Codespaces](https://azure.microsoft.com/blog/azure-automation-integrating-runbook-source-control-using-visual-studio-online/).
azure-arc Privacy Data Collection And Reporting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/privacy-data-collection-and-reporting.md
The following JSON document is an example of the SQL Server database - Azure Arc
- Last uploaded date from on-premises cluster. - `System.DateTime: LastUploadedDate` - Data controller state
- - `string: ProvisioningState`
+- `string: ProvisioningState`
+
+
+The following JSON document is an example of the Azure Arc Data Controller resource.
++
+```json
+{
+ "id": "/subscriptions/7894901a-dfga-rf4d-85r4-cc1234459df2/resourceGroups/contoso-rg/providers/Microsoft.AzureArcData/dataControllers/contosodc",
+ "name": "contosodc",
+ "type": "microsoft.azurearcdata/datacontrollers",
+ "location": "eastus",
+ "extendedLocation": {
+ "name": "/subscriptions/7894901a-dfga-rf4d-85r4-cc1234459df2/resourceGroups/contoso-rg/providers/Microsoft.ExtendedLocation/customLocations/contoso",
+ "type": "CustomLocation"
+ },
+ "tags": {},
+ "systemData": {
+ "createdBy": "contosouser@contoso.com",
+ "createdByType": "User",
+ "createdAt": "2023-01-03T21:35:36.8412132Z",
+ "lastModifiedBy": "319f651f-7ddb-4fc6-9857-7aef9250bd05",
+ "lastModifiedByType": "Application",
+ "lastModifiedAt": "2023-02-15T17:13:26.6429039Z"
+ },
+ "properties": {
+ "infrastructure": "azure",
+ "onPremiseProperty": {
+ "id": "4eb0a7a5-5ed6-4463-af71-12590b2fad5d",
+ "publicSigningKey": "MIIDWzCCAkOgAwIBAgIIA8OmTJKpD8AwDQYJKoZIhvcNAQELBQAwKDEmMCQGA1UEAxMdQ2x1c3RlciBDZXJ0aWZpY2F0ZSBBdXRob3JpdHkwHhcNMjMwMTAzMjEzNzUxWhcNMjgwMTAyMjEzNzUxWjAaMRgwFgYDVQQDEw9iaWxsaW5nLXNpZ25pbmcwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC3rAuXaXIeaipFiqGW5rtkdq/1+S58CRMEkANHvwFnimXEWIt8VnbG9foIm20r0RK+6XeRpn5r92jrOl/3R4Q9AAiF3Tgzy3NF9Dg9OsKo1bnrfWHMxmyX2w8TxyZSvWKEUVpVhjhqyhy/cqSJA5ASjEtthMx4Q1HTVcEDSTfnPHPz9EhfZqZ6ES3Yqun2D9MIatkSUpjHJbqYwRTzzrsPG84hJX7EGAWntvEzzCjmTUsouShEwUhi8c05CLBwzF5bxDNLhTdy+tj2ZyUzL7R+BmifwPR9jvOziYPlrbgIIs77sPbNlZjZvMeeBaJHktWZ0s8/UpUpV1W69m7hT2gbAgMBAAGjgZYwgZMwIAYDVR0lAQH/BBYwFAYIKwYBBQUHAwIGCCsGAQUFBwMBMA4GA1UdDwEB/wQEAwIFoDBfBgNVHREEWDBWgg5jb250cm9sbGVyLXN2Y4IoY29udHJvbGxlci1zdmMuY29udG9zby5zdmMuY2x1c3Rlci5sb2NhbIIaY29udHJvbGxlci1zdmMuY29udG9zby5zdmMwDQYJKoZIhvcNAQELBQADggEBADcZNIZcDDUC79ElbRrXdbHo9bUUv/NJfY7Dx226jc8j0AdDq8MbHAnt+JiMH6+GDb88avleA448yZ9ujBP9zC8v8IyaWu4vQpPT7MagzlsAhb6VEWU0FQfM6R14WwbATWSOIwDlMn4I33mZULyJdZhk4TqzqTQ8F0I3TavHh8TWBbjnwg1IhR/8TQ9HfgceoI80SBE3BDI5at/CzYgoWcWS2pzfd3QYwD8DIPVLCdcx1LNSDjdlQCQTKal0yKMauGIzMuYpCF1M6Z0LunPU/Ns96T9mqLXJHu+wmAoJ2CwdXa4FruwTSgrQlY3pokjTMwGaP3uzpnCSI7ykvi5kp4Q=",
+ "signingCertificateThumbprint": "8FB48D0DD44DCFB25ECC13B9CB5F493F5438D38C"
+ },
+ "k8sRaw": {
+ "kind": "DataController",
+ "spec": {
+ "credentials": {
+ "dockerRegistry": "arc-private-registry",
+ "domainServiceAccount": "domain-service-account-secret",
+ "serviceAccount": "sa-arc-controller"
+ },
+ "security": {
+ "allowDumps": true,
+ "allowNodeMetricsCollection": true,
+ "allowPodMetricsCollection": true
+ },
+ "services": [
+ {
+ "name": "controller",
+ "port": 30080,
+ "serviceType": "LoadBalancer"
+ }
+ ],
+ "settings": {
+ "ElasticSearch": {
+ "vm.max_map_count": "-1"
+ },
+ "azure": {
+ "autoUploadMetrics": "true",
+ "autoUploadLogs": "false",
+ "subscription": "7894901a-dfga-rf4d-85r4-cc1234459df2",
+ "resourceGroup": "contoso-rg",
+ "location": "eastus",
+ "connectionMode": "direct"
+ },
+ "controller": {
+ "logs.rotation.days": "7",
+ "logs.rotation.size": "5000",
+ "displayName": "contosodc"
+ }
+ },
+ "storage": {
+ "data": {
+ "accessMode": "ReadWriteOnce",
+ "className": "managed-premium",
+ "size": "15Gi"
+ },
+ "logs": {
+ "accessMode": "ReadWriteOnce",
+ "className": "managed-premium",
+ "size": "10Gi"
+ }
+ },
+ "infrastructure": "azure",
+ "docker": {
+ "registry": "mcr.microsoft.com",
+ "imageTag": "v1.14.0_2022-12-13",
+ "repository": "arcdata",
+ "imagePullPolicy": "Always"
+ }
+ },
+ "metadata": {
+ "namespace": "contoso",
+ "name": "contosodc",
+ "annotations": {
+ "management.azure.com/apiVersion": "2022-03-01-preview",
+ "management.azure.com/cloudEnvironment": "AzureCloud",
+ "management.azure.com/correlationId": "aa531c88-6dfb-46c3-af5b-d93f7eaaf0f6",
+ "management.azure.com/customLocation": "/subscriptions/7894901a-dfga-rf4d-85r4-cc1234459df2/resourceGroups/contoso-rg/providers/Microsoft.ExtendedLocation/customLocations/contoso",
+ "management.azure.com/location": "eastus",
+ "management.azure.com/operationId": "265b98a7-0fc2-4dce-9cef-26f9b6dd000c*705EDFCA81D01028EFA1C3E9CB3CEC2BF472F25894ACB2FFDF955711236F486D",
+ "management.azure.com/resourceId": "/subscriptions/7894901a-dfga-rf4d-85r4-cc1234459df2/resourceGroups/contoso-rg/providers/Microsoft.AzureArcData/dataControllers/contosodc",
+ "management.azure.com/systemData": "{\"createdBy\":\"9c1a17be-338f-4b3c-90e9-55eb526c5aef\",\"createdByType\":\"User\",\"createdAt\":\"2023-01-03T21:35:36.8412132Z\",\"resourceUID\":\"74087467-4f98-4a23-bacf-a1e40404457f\"}",
+ "management.azure.com/tenantId": "123488bf-8asd-41wf-91ab-211kl345db47",
+ "traceparent": "00-197d885376f938d6138babf8ed4d809c-1a584b84b3c8f5df-01"
+ },
+ "creationTimestamp": "2023-01-03T21:35:42Z",
+ "generation": 2,
+ "resourceVersion": "15446366",
+ "uid": "4eb0a7a5-5ed6-4463-af71-12590b2fad5d"
+ },
+ "apiVersion": "arcdata.microsoft.com/v5",
+ "status": {
+ "observedGeneration": 2,
+ "state": "Ready",
+ "azure": {
+ "uploadStatus": {
+ "logs": {
+ "lastUploadTime": "0001-01-01T00:00:00Z",
+ "message": "Automatic upload of logs is disabled. Execution time: 02/15/2023 17:07:57"
+ },
+ "metrics": {
+ "lastUploadTime": "2023-02-15T17:00:57.047934Z",
+ "message": "Success"
+ },
+ "usage": {
+ "lastUploadTime": "2023-02-15T17:07:53.843439Z",
+ "message": "Success. Records uploaded: 1."
+ }
+ }
+ },
+ "lastUpdateTime": "2023-02-15T17:07:57.587925Z",
+ "runningVersion": "v1.14.0_2022-12-13",
+ "arcDataServicesK8sExtensionLatestVersion": "v1.16.0",
+ "registryVersions": {
+ "available": [
+ "v1.16.0_2023-02-14",
+ "v1.15.0_2023-01-10"
+ ],
+ "behind": 2,
+ "current": "v1.14.0_2022-12-13",
+ "latest": "v1.16.0_2023-02-14",
+ "next": "v1.15.0_2023-01-10",
+ "previous": "v1.13.0_2022-11-08"
+ }
+ }
+ },
+ "provisioningState": "Succeeded"
+ }
+}
+```
### PostgreSQL server - Azure Arc
The following JSON document is an example of the SQL Server database - Azure Arc
- Username and password for basic authentication - `public: BasicLoginInformation BasicLoginInformation` - The raw Kubernetes information (`kubectl get postgres12`)
- - `object: K8sRaw` [Details](https://github.com/microsoft/azure_arc/tree/main/arc_data_services/crds)
+- `object: K8sRaw` [Details](https://github.com/microsoft/azure_arc/tree/main/arc_data_services/crds)
- Last uploaded date from on premises cluster. - `System.DateTime: LastUploadedDate` - Group provisioning state
The following JSON document is an example of the SQL managed instance - Azure Ar
- Last uploaded date from on-premises cluster. - `public: System.DateTime LastUploadedDate` - SQL managed instance provisioning state
- - `public string: ProvisioningState`
+- `public string: ProvisioningState`
+
+The following JSON document is an example of the SQL Managed Instance - Azure Arc resource.
+++
+```json
+
+{
+ "id": "/subscriptions/7894901a-dfga-rf4d-85r4-cc1234459df2/resourceGroups/contoso-rg/providers/Microsoft.AzureArcData/sqlManagedInstances/sqlmi1",
+ "name": "sqlmi1",
+ "type": "microsoft.azurearcdata/sqlmanagedinstances",
+ "sku": {
+ "name": "vCore",
+ "tier": "BusinessCritical"
+ },
+ "location": "eastus",
+ "extendedLocation": {
+ "name": "/subscriptions/7894901a-dfga-rf4d-85r4-cc1234459df2/resourcegroups/contoso-rg/providers/microsoft.extendedlocation/customlocations/contoso",
+ "type": "CustomLocation"
+ },
+ "tags": {},
+ "systemData": {
+ "createdBy": "contosouser@contoso.com",
+ "createdByType": "User",
+ "createdAt": "2023-01-04T01:33:57.5232885Z",
+ "lastModifiedBy": "319f651f-7ddb-4fc6-9857-7aef9250bd05",
+ "lastModifiedByType": "Application",
+ "lastModifiedAt": "2023-02-15T01:39:11.6582399Z"
+ },
+ "properties": {
+ "dataControllerId": "/subscriptions/7894901a-dfga-rf4d-85r4-cc1234459df2/resourceGroups/contoso-rg/providers/Microsoft.AzureArcData/dataControllers/contosodc",
+ "admin": "sqladmin",
+ "k8sRaw": {
+ "spec": {
+ "scheduling": {
+ "default": {
+ "resources": {
+ "requests": {
+ "cpu": "2",
+ "memory": "4Gi"
+ },
+ "limits": {
+ "cpu": "2",
+ "memory": "4Gi"
+ }
+ }
+ }
+ },
+ "replicas": 2,
+ "dev": true,
+ "services": {
+ "primary": {
+ "type": "LoadBalancer"
+ },
+ "readableSecondaries": {}
+ },
+ "readableSecondaries": 1,
+ "syncSecondaryToCommit": 0,
+ "storage": {
+ "data": {
+ "volumes": [
+ {
+ "size": "5Gi"
+ }
+ ]
+ },
+ "logs": {
+ "volumes": [
+ {
+ "size": "5Gi"
+ }
+ ]
+ },
+ "datalogs": {
+ "volumes": [
+ {
+ "size": "5Gi"
+ }
+ ]
+ },
+ "backups": {
+ "volumes": [
+ {
+ "className": "azurefile",
+ "size": "5Gi"
+ }
+ ]
+ }
+ },
+ "security": {
+ "adminLoginSecret": "sqlmi1-login-secret"
+ },
+ "tier": "BusinessCritical",
+ "update": {},
+ "backup": {
+ "retentionPeriodInDays": 7
+ },
+ "licenseType": "LicenseIncluded",
+ "orchestratorReplicas": 1,
+ "parentResource": {
+ "apiGroup": "arcdata.microsoft.com",
+ "kind": "DataController",
+ "name": "contosodc",
+ "namespace": "contoso"
+ },
+ "settings": {
+ "collation": "SQL_Latin1_General_CP1_CI_AS",
+ "language": {
+ "lcid": 1033
+ },
+ "network": {
+ "forceencryption": 0,
+ "tlsciphers": "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA384",
+ "tlsprotocols": "1.2"
+ },
+ "sqlagent": {
+ "enabled": false
+ },
+ "timezone": "UTC"
+ }
+ },
+ "metadata": {
+ "annotations": {
+ "management.azure.com/apiVersion": "2022-03-01-preview",
+ "management.azure.com/cloudEnvironment": "AzureCloud",
+ "management.azure.com/correlationId": "3a49178d-a09f-48d3-9292-3133f6591743",
+ "management.azure.com/customLocation": "/subscriptions/7894901a-dfga-rf4d-85r4-cc1234459df2/resourceGroups/contoso-rg/providers/microsoft.extendedlocation/customlocations/contoso",
+ "management.azure.com/location": "eastus",
+ "management.azure.com/operationId": "dbf2e708-78da-4762-8fd5-75ba43721b24*4C234309E6735F28E751F5734D64E8F98A910A88E54A1AD35C6469BCD0E6EA84",
+ "management.azure.com/resourceId": "/subscriptions/7894901a-dfga-rf4d-85r4-cc1234459df2/resourceGroups/contoso-rg/providers/Microsoft.AzureArcData/sqlManagedInstances/sqlmi1",
+ "management.azure.com/systemData": "{\"createdBy\":\"9c1a17be-338f-4b3c-90e9-55eb526c5aef\",\"createdByType\":\"User\",\"createdAt\":\"2023-01-04T01:33:57.5232885Z\",\"resourceUID\":\"40fa8b55-4b7d-4d6a-b783-043169d7fd03\"}",
+ "management.azure.com/tenantId": "123488bf-8asd-41wf-91ab-211kl345db47",
+ "traceparent": "00-3c07cf4caa8b4778591b02b1bf3979ef-f2ee2c890c21ea8a-01"
+ },
+ "creationTimestamp": "2023-01-04T01:34:03Z",
+ "generation": 1,
+ "labels": {
+ "management.azure.com/resourceProvider": "Microsoft.AzureArcData"
+ },
+ "name": "sqlmi1",
+ "namespace": "contoso",
+ "resourceVersion": "15215035",
+ "uid": "6d653cd8-f17e-437a-b0dc-48154164c1ad"
+ },
+ "status": {
+ "lastUpdateTime": "2023-02-15T01:39:07.691211Z",
+ "observedGeneration": 1,
+ "readyReplicas": "2/2",
+ "roles": {
+ "sql": {
+ "replicas": 2,
+ "lastUpdateTime": "2023-02-14T11:37:14.875705Z",
+ "readyReplicas": 2
+ }
+ },
+ "state": "Ready",
+ "endpoints": {
+ "logSearchDashboard": "https://230.41.13.18:5601/app/kibana#/discover?_a=(query:(language:kuery,query:'custom_resource_name:sqlmi1'))",
+ "metricsDashboard": "https://230.41.13.18:3000/d/40q72HnGk/sql-managed-instance-metrics?var-hostname=sqlmi1-0",
+ "mirroring": "230.41.13.18:5022",
+ "primary": "230.41.13.18,1433",
+ "secondary": "230.41.13.18,1433"
+ },
+ "highAvailability": {
+ "lastUpdateTime": "2023-02-14T11:47:42.208708Z",
+ "mirroringCertificate": "--BEGIN CERTIFICATE--\nMIIDQzCCAiugAwIBAgIISqqmfCPaolkwDQYJKoZIhvcNAQELBQAwKDEmMCQGA1UEAxMdQ2x1c3Rl\r\nciBDZXJ0aWZpDEzNDA2WhcNMjgwMTAzMDEzNDA2WjAO\r\nMQwwCgYDVQQDEwNkYm0wggEiMA0GCSqgEKAoIBAQDEXj2nm2cGkyfu\r\npXWQ4s6G//AI1rbH4JStZOAHwJNYmBuESSHz0i6znjnQQloFe+g2KM+1m4TN1T39Lz+/ufEYQQX9\r\nx9WuGP2IALgH1LXc/0DGuOB16QXqN7ZWULQ4ovW4Aaz5NxTSDXWYPK+zpb1c8adsQyamLHwmSPs4\r\nMpsgfOR9EUCqdnuKjSHbWCtkJTYogpAFyZb5HOgY1TMICrTkXG6VYoCPS/EDNmtPOyVuykdjjsxx\r\nIC5KkVgHWTaYIDjim7L44FPh4HUIVM/OFScRijCZTJogN/Fe94+kGDWfgWIG36Jlz127BbWV3HNJ\r\nkH2oLchIABvgTXsdKnjK3i2TAgMBAAGjgYowgYcwIAYDVR0lAQH/BBYwFAYIKwYBBQUHAwIGCCsG\r\nAQUFBwMBMA4GA1UdDwEB/wQEAwIFoDBTBgNVHREETDBKggpzcWxtaTEtc3ZjgiRzcWxtaTEtc3Zj\r\nLmNvbnRvc28uc3ZjLmNsdXN0ZXIubG9jYWyCFnNxbG1pMS1zdmMuY29udG9zby5zdmMwDQYJKoZI\r\nhvcNAQELBQADggEBAA+Wj6WK9NgX4szxT7zQxPVIn+0iviO/2dFxHmjmvj+lrAffsgNdfeX5095f\r\natxIO+no6VW2eoHze2f6AECh4/KefyAzd+GL9MIksJcMLqSqAemXju3pUfGBS1SAW8Rh361D8tmA\r\nEFpPMwZG3uMidYMso0GqO0tpejz2+5Q4NpweHBGoq6jk+9ApTLD+s5qetZHrxGD6tS1Z/Lvt24lE\r\nKtSKEDw5O2qnqbsOe6xxtPAuIfTmpwIzIv2WiGC3aGuXSr0bNyPHzh5RL1MCIpwLMrnruFwVzB25\r\nA0xRalcXVZRZ1H0zbznGsecyBRJiA+7uxNB7/V6i+SjB/qxj2xKh4s8=\n--END CERTIFICATE--\n",
+ "healthState": "Error",
+ "replicas": []
+ },
+ "logSearchDashboard": "https://230.41.13.18:5601/app/kibana#/discover?_a=(query:(language:kuery,query:'custom_resource_name:sqlmi1'))",
+ "metricsDashboard": "https://230.41.13.18:3000/d/40q72HnGk/sql-managed-instance-metrics?var-hostname=sqlmi1-0",
+ "primaryEndpoint": "230.41.13.18,1433",
+ "runningVersion": "v1.14.0_2022-12-13",
+ "registryVersions": {
+ "available": [],
+ "behind": 0,
+ "current": "v1.14.0_2022-12-13",
+ "latest": "v1.14.0_2022-12-13",
+ "previous": "v1.13.0_2022-11-08"
+ }
+ }
+ },
+ "provisioningState": "Succeeded",
+ "licenseType": "LicenseIncluded"
+ }
+}
+```
## Examples
In support situations, you may be asked to provide database instance logs, Kuber
## Next steps [Upload usage data to Azure Monitor](upload-usage-data.md) +
azure-arc Quickstart Connect Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/quickstart-connect-cluster.md
az connectedk8s delete --name AzureArcTest1 --resource-group AzureArcTest
If the deletion process fails, use the following command to force deletion (adding `-y` if you want to bypass the confirmation prompt): ```azurecli
-az connectedk8s delete -g AzureArcTest1 -n AzureArcTest --force
+az connectedk8s delete -n AzureArcTest1 -g AzureArcTest --force
``` This command can also be used if you experience issues when creating a new cluster deployment (due to previously created resources not being completely removed).
azure-arc Network Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/network-requirements.md
Static configuration is recommended for Arc resource bridge because the resource
The subnet of the IP addresses for Arc resource bridge must lie in the IP address prefix that is passed in the `ipaddressprefix` parameter of the `createconfig` command. The IP address prefix is the IP prefix that is exposed by the network to which Arc resource bridge is connected. It is entered as the subnet's IP address range for the virtual network and subnet mask (IP Mask) in CIDR notation, for example `192.168.7.1/24`. Consult your system or network administrator to obtain the IP address prefix in CIDR notation. An IP Subnet CIDR calculator may be used to obtain this value.
-### Gateway
-
-The gateway address provided in the `createconfig` command must be in the same subnet specified in the IP address prefix.
- ### DNS Server DNS Server must have internal and external endpoint resolution. The appliance VM and control plane need to resolve the management machine and vice versa. All three must be able to reach the required URLs for deployment.
+### Configuration file example
+
+The following example highlights a couple of key requirements for Arc resource bridge when creating the configuration files. The IPs for `k8snodeippoolstart` and `k8snodeippoolend` reside in the subnet range designated in `ipaddressprefix`. The `ipaddressprefix` value is the subnet's IP address range for the virtual network, with the subnet mask expressed in CIDR notation.
+
+```yaml
+azurestackhciprovider:
+  virtualnetwork:
+    name: "mgmtvnet"
+    vswitchname: "Default Switch"
+    type: "Transparent"
+    macpoolname: ""
+    vlanid: 0
+    ipaddressprefix: 172.16.0.0/16
+    gateway: 172.16.1.1
+    dnsservers: 172.16.0.1
+    vippoolstart: 172.16.250.0
+    vippoolend: 172.16.250.254
+    k8snodeippoolstart: 172.16.30.0
+    k8snodeippoolend: 172.16.30.254
+```
+ ## General network requirements [!INCLUDE [network-requirement-principles](../includes/network-requirement-principles.md)]
azure-cache-for-redis Cache Web App Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-web-app-howto.md
Because the file *CacheSecrets.config* isn't deployed to Azure with your applica
:::image type="content" source="media/cache-web-app-howto/cache-web-config.png" alt-text="Web.config":::
-1. In the *web.config* file, you can how to set the `<appSetting>` element for running the application locally.
+1. In the *web.config* file, you can set the `<appSettings>` element for running the application locally.
`<appSettings file="C:\AppSecrets\CacheSecrets.config">`
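When the app runs in App Service, the same connection value comes from an app setting rather than the local file. A hedged sketch of creating that setting from the CLI; the setting name `CacheConnection` is assumed from the sample's local secrets file, and the names and access key are placeholders:

```azurecli
# Create the app setting that replaces the local CacheSecrets.config value when running in Azure.
az webapp config appsettings set --resource-group <resource-group> --name <app-name> \
  --settings CacheConnection="<cache-name>.redis.cache.windows.net,abortConnect=false,ssl=true,password=<access-key>"
```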
azure-monitor Azure Monitor Agent Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-migration.md
Azure Monitor Agent provides the following benefits over legacy agents: -- **Security and performance**
+- **Security**
- Enhanced security through Managed Identity and Azure Active Directory (Azure AD) tokens (for clients).
- - Same events-per-second (EPS) upload rate with less resource utilization.
+- **Performance**
+ - Azure Monitor Agent provides 25% better event throughput than the legacy Log Analytics (MMA) agent.
- **Cost savings** by [using data collection rules](data-collection-rule-azure-monitor-agent.md). Using DCRs is one of the most useful advantages of using Azure Monitor Agent: - DCRs let you configure data collection for specific machines connected to a workspace as compared to the "all or nothing" approach of legacy agents. - With DCRs, you can define which data to ingest and which data to filter out to reduce workspace clutter and save on costs.
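As a hedged illustration of the per-machine scoping that DCRs enable, the following associates an existing data collection rule with a single virtual machine; it assumes the `monitor-control-service` CLI extension is installed, and all resource IDs are placeholders:

```azurecli
# Associate an existing data collection rule (DCR) with one virtual machine.
az monitor data-collection rule association create \
  --name "my-vm-dcr-association" \
  --rule-id "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Insights/dataCollectionRules/<dcr-name>" \
  --resource "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Compute/virtualMachines/<vm-name>"
```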
For more information, see:
- [Azure Monitor Agent overview](agents-overview.md) - [Azure Monitor Agent migration for Microsoft Sentinel](../../sentinel/ama-migrate.md)-- [Frequently asked questions for Azure Monitor Agent migration](/azure/azure-monitor/faq#azure-monitor-agent)
+- [Frequently asked questions for Azure Monitor Agent migration](/azure/azure-monitor/faq#azure-monitor-agent)
azure-monitor App Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-insights-overview.md
Title: Application Insights overview description: Learn how Application Insights in Azure Monitor provides performance management and usage tracking of your live web application. Previously updated : 01/24/2023 Last updated : 02/14/2023 # Application Insights overview
azure-monitor Asp Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net.md
Title: Configure monitoring for ASP.NET with Azure Application Insights | Microsoft Docs description: Configure performance, availability, and user behavior analytics tools for your ASP.NET website hosted on-premises or in Azure. Previously updated : 11/15/2022 Last updated : 02/14/2023 ms.devlang: csharp
azure-monitor Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-ad-authentication.md
Title: Azure AD authentication for Application Insights description: Learn how to enable Azure Active Directory (Azure AD) authentication to ensure that only authenticated telemetry is ingested in your Application Insights resources. Previously updated : 01/10/2023 Last updated : 02/14/2023 ms.devlang: csharp, java, javascript, python
azure-monitor Azure Web Apps Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps-java.md
Title: Monitor Azure app services performance Java | Microsoft Docs description: Application performance monitoring for Azure app services using Java. Chart load and response time, dependency information, and set alerts on performance. Previously updated : 11/15/2022 Last updated : 02/14/2023 ms.devlang: java
# Application Monitoring for Azure App Service and Java
-Monitoring of your Java web applications running on [Azure App Services](../../app-service/index.yml) does not require any modifications to the code. This article will walk you through enabling Azure Monitor Application Insights monitoring as well as provide preliminary guidance for automating the process for large-scale deployments.
+Monitoring of your Java web applications running on [Azure App Services](../../app-service/index.yml) doesn't require any modifications to the code. This article walks you through enabling Azure Monitor Application Insights monitoring and provides preliminary guidance for automating the process for large-scale deployments.
## Enable Application Insights The recommended way to enable application monitoring for Java applications running on Azure App Services is through Azure portal. Turning on application monitoring in Azure portal will automatically instrument your application with Application Insights, and doesn't require any code changes.
-You can apply additional configurations, and then based on your specific scenario you [add your own custom telemetry](./opentelemetry-enable.md?tabs=java#modify-telemetry) if needed.
+You can apply extra configurations, and then based on your specific scenario you [add your own custom telemetry](./opentelemetry-enable.md?tabs=java#modify-telemetry) if needed.
### Auto-instrumentation through Azure portal
-You can turn on monitoring for your Java apps running in Azure App Service just with one click, no code change required. The integration adds [Application Insights Java 3.x](./opentelemetry-enable.md?tabs=java) and you will get the telemetry auto-collected.
+You can turn on monitoring for your Java apps running in Azure App Service with just one selection; no code change is required. The integration adds [Application Insights Java 3.x](./opentelemetry-enable.md?tabs=java) and auto-collects telemetry.
For a complete list of supported auto-instrumentation scenarios, see [Supported environments, languages, and resource providers](codeless-overview.md#supported-environments-languages-and-resource-providers).
For a complete list of supported auto-instrumentation scenarios, see [Supported
:::image type="content"source="./media/azure-web-apps/change-resource.png" alt-text="Screenshot of Change your resource dropdown.":::
-3. This last step is optional. After specifying which resource to use, you can configure the Java agent. If you do not configure the Java agent, default configurations will apply.
+3. This last step is optional. After specifying which resource to use, you can configure the Java agent. If you don't configure the Java agent, default configurations apply.
- The full [set of configurations](./java-standalone-config.md) is available, you just need to paste a valid [json file](./java-standalone-config.md#an-example). **Exclude the connection string and any configurations that are in preview** - you will be able to add the items that are currently in preview as they become generally available.
+ The full [set of configurations](./java-standalone-config.md) is available; you just need to paste a valid [JSON file](./java-standalone-config.md#an-example). **Exclude the connection string and any configurations that are in preview** - you can add the items that are currently in preview as they become generally available.
- Once you modify the configurations through Azure portal, APPLICATIONINSIGHTS_CONFIGURATION_FILE environment variable will automatically be populated and will appear in App Service settings panel. This variable will contain the full json content that you have pasted in Azure portal configuration text box for your Java app.
+ Once you modify the configurations through the Azure portal, the APPLICATIONINSIGHTS_CONFIGURATION_FILE environment variable is automatically populated and appears in the App Service settings panel. This variable contains the full JSON content that you've pasted in the Azure portal configuration text box for your Java app.
:::image type="content"source="./media/azure-web-apps-java/create-app-service-ai.png" alt-text="Screenshot of instrument your application.":::
In order to enable telemetry collection with Application Insights, only the foll
## Troubleshooting
-Below is our step-by-step troubleshooting guide for Java-based applications running on Azure App Services.
+Use our step-by-step guide to troubleshoot Java-based applications running on Azure App Services.
1. Check that `ApplicationInsightsAgent_EXTENSION_VERSION` app setting is set to a value of "~2" on Windows, "~3" on Linux 1. Examine the log file to see that the agent has started successfully: browse to `https://yoursitename.scm.azurewebsites.net/, under SSH change to the root directory, the log file is located under LogFiles/ApplicationInsights. :::image type="content"source="./media/azure-web-apps-java/app-insights-java-status.png" alt-text="Screenshot of the link above results page.":::
-1. After enabling application monitoring for your Java app, you can validate that the agent is working by looking at the live metrics - even before you deploy and app to App Service you will see some requests from the environment. Remember that the full set of telemetry will only be available when you have your app deployed and running.
-1. Set APPLICATIONINSIGHTS_SELF_DIAGNOSTICS_LEVEL environment variable to 'debug' if you do not see any errors and there is no telemetry
-1. Sometimes the latest version of the Application Insights Java agent is not available in App Service - it takes a couple of months for the latest versions to roll out to all regions. In case you need the latest version of Java agent to monitor your app in App Service, you can upload the agent manually:
- * Upload the Java agent jar file to App Service
- * Get the latest version of [Azure CLI](/cli/azure/install-azure-cli-windows?tabs=azure-cli)
- * Get the latest version of [Application Insights Java agent](./opentelemetry-enable.md?tabs=java)
- * Deploy Java agent to App Service - a sample command to deploy the Java agent jar: `az webapp deploy --src-path applicationinsights-agent-{VERSION_NUMBER}.jar --target-path jav?tabs=javase&pivots=platform-linux#3configure-the-maven-plugin) to deploy through Maven plugin
- * Once the agent jar file is uploaded, go to App Service configurations and add a new environment variable, JAVA_OPTS, and set its value to `-javaagent:D:/home/{PATH_TO_THE_AGENT_JAR}/applicationinsights-agent-{VERSION_NUMBER}.jar`
- * Disable Application Insights via Application Insights tab
- * Restart the app
-
- > [!NOTE]
- > If you set the JAVA_OPTS environment variable, you will have to disable Application Insights in the portal. Alternatively, if you prefer to enable Application Insights from the portal, make sure that you don't set the JAVA_OPTS variable in App Service configurations settings.
+1. After enabling application monitoring for your Java app, you can validate that the agent is working by looking at the live metrics - even before you deploy an app to App Service, you'll see some requests from the environment. Remember that the full set of telemetry is only available when you have your app deployed and running.
+1. Set the APPLICATIONINSIGHTS_SELF_DIAGNOSTICS_LEVEL environment variable to `debug` if you don't see any errors and there's no telemetry.
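To check the app settings referenced in these steps from the command line, a sketch like the following may help; the resource group and app name are placeholders:

```azurecli
# List the Application Insights-related app settings on the App Service app.
az webapp config appsettings list --resource-group <resource-group> --name <app-name> \
  --query "[?name=='ApplicationInsightsAgent_EXTENSION_VERSION' || name=='APPLICATIONINSIGHTS_SELF_DIAGNOSTICS_LEVEL']"
```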
[!INCLUDE [azure-web-apps-troubleshoot](../../../includes/azure-monitor-app-insights-azure-web-apps-troubleshoot.md)] [!INCLUDE [azure-monitor-app-insights-test-connectivity](../../../includes/azure-monitor-app-insights-test-connectivity.md)]
+## Manually deploy the latest Application Insights Java version
+
+The Application Insights Java version is updated automatically as part of App Services updates.
+
+If you encounter an issue that's fixed in the latest version of Application Insights Java, you can update it manually.
+
+To manually update, follow these steps:
+
+1. Upload the Java agent jar file to App Service
+
+ > a. First, get the latest version of Azure CLI by following the instructions [here](/cli/azure/install-azure-cli-windows?tabs=azure-cli).
+
+ > b. Next, get the latest version of the Application Insights Java agent by following the instructions [here](./opentelemetry-enable.md?tabs=java).
+
+ > c. Then, deploy the Java agent jar file to App Service using the following command: `az webapp deploy --src-path applicationinsights-agent-{VERSION_NUMBER}.jar --target-path jav?tabs=javase&pivots=platform-linux#3configure-the-maven-plugin) to deploy the agent through the Maven plugin.
+
+2. Disable Application Insights via the Application Insights tab in the Azure portal.
+
+3. Once the agent jar file is uploaded, go to App Service configurations and add a new environment variable, `JAVA_OPTS`, with the value `-javaagent:{PATH_TO_THE_AGENT_JAR}/applicationinsights-agent-{VERSION_NUMBER}.jar`.
+
+4. Restart the app, leaving the **Startup Command** field blank, to apply the changes.
+
+> [!NOTE]
+> If you set the JAVA_OPTS environment variable, you will have to disable Application Insights in the portal. Alternatively, if you prefer to enable Application Insights from the portal, make sure that you don't set the `JAVA_OPTS` variable in App Service configurations settings.
+ ## Release notes For the latest updates and bug fixes, [consult the release notes](web-app-extension-release-notes.md).
azure-monitor Codeless Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/codeless-overview.md
Title: Auto-instrumentation for Azure Monitor Application Insights description: Overview of auto-instrumentation for Azure Monitor Application Insights codeless application performance management. Previously updated : 01/06/2023 Last updated : 02/14/2023
azure-monitor Convert Classic Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/convert-classic-resource.md
Title: Migrate an Application Insights classic resource to a workspace-based resource - Azure Monitor | Microsoft Docs description: Learn how to upgrade your Application Insights classic resource to the new workspace-based model. Previously updated : 11/15/2022 Last updated : 02/14/2023
azure-monitor Deprecated Java 2X https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/deprecated-java-2x.md
Title: Use Application Insights Java 2.x description: Learn how to use Application Insights Java 2.x so that you can send trace logs, monitor dependencies, filter telemetry, and measure metrics. Previously updated : 12/07/2022 Last updated : 02/14/2023 ms.devlang: java
azure-monitor Export Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/export-telemetry.md
Title: Continuous export of telemetry from Application Insights | Microsoft Docs description: Export diagnostic and usage data to storage in Azure and download it from there. Previously updated : 11/14/2022 Last updated : 02/14/2023
azure-monitor Java Spring Boot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-spring-boot.md
Title: Configure Azure Monitor Application Insights for Spring Boot description: How to configure Azure Monitor Application Insights for Spring Boot applications Previously updated : 01/18/2023 Last updated : 02/14/2023 ms.devlang: java
azure-monitor Java Standalone Arguments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-arguments.md
Title: Add the JVM arg - Application Insights for Java description: Learn how to add the JVM arg that enables Application Insights for Java. Previously updated : 01/18/2023 Last updated : 02/14/2023 ms.devlang: java
azure-monitor Java Standalone Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-config.md
Title: Configuration options - Azure Monitor Application Insights for Java description: This article shows you how to configure Azure Monitor Application Insights for Java. Previously updated : 01/18/2023 Last updated : 02/14/2023 ms.devlang: java
azure-monitor Live Stream https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/live-stream.md
Title: Diagnose with Live Metrics - Application Insights - Azure Monitor description: Monitor your web app in real time with custom metrics, and diagnose issues with a live feed of failures, traces, and events. Previously updated : 01/06/2023 Last updated : 02/14/2023 ms.devlang: csharp
azure-monitor Sampling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sampling.md
Title: Telemetry sampling in Azure Application Insights | Microsoft Docs description: How to keep the volume of telemetry under control. Previously updated : 01/06/2023 Last updated : 02/14/2023
azure-monitor Usage Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage-overview.md
Title: Usage analysis with Application Insights | Azure Monitor description: Understand your users and what they do with your app. Previously updated : 07/30/2021 Last updated : 02/14/2023
azure-monitor Best Practices Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-analysis.md
description: Guidance and recommendations for customizing visualizations beyond
Previously updated : 10/18/2021 Last updated : 02/14/2023
This table describes Azure Monitor features that provide analysis of collected d
|Component |Description | Required training and/or configuration| |||--| |Overview page|Most Azure services have an **Overview** page in the Azure portal that includes a **Monitor** section with charts that show recent critical metrics. This information is intended for owners of individual services to quickly assess the performance of the resource. |This page is based on platform metrics that are collected automatically. No configuration is required. |
-|Metrics Explorer|You can use Metrics Explorer to interactively work with metric data and create metric alerts. You need minimal training to use Metrics Explorer, but you must be familiar with the metrics you want to analyze. |- Once data collection is configured, no another configuration is required.<br>- Platform metrics for Azure resources are automatically available.<br>- Guest metrics for virtual machines are available after an Azure Monitor agent is deployed to the virtual machine.<br>- Application metrics are available after Application Insights is configured. |
-|Log Analytics|With Log Analytics, you can create log queries to interactively work with log data and create log query alerts.| Some training is required for you to become familiar with the query language, although you can use prebuilt queries for common requirements. You can also add [query packs](logs/query-packs.md) with queries that are unique to your organization. Then if you're familiar with the query language, you can build queries for others in your organization. |
+|[Metrics Explorer](essentials/metrics-getting-started.md)|You can use Metrics Explorer to interactively work with metric data and create metric alerts. You need minimal training to use Metrics Explorer, but you must be familiar with the metrics you want to analyze. |- Once data collection is configured, no other configuration is required.<br>- Platform metrics for Azure resources are automatically available.<br>- Guest metrics for virtual machines are available after an Azure Monitor agent is deployed to the virtual machine.<br>- Application metrics are available after Application Insights is configured. |
+|[Log Analytics](logs/log-analytics-overview.md)|With Log Analytics, you can create log queries to interactively work with log data and create log query alerts.| Some training is required for you to become familiar with the query language, although you can use prebuilt queries for common requirements. You can also add [query packs](logs/query-packs.md) with queries that are unique to your organization. Then if you're familiar with the query language, you can build queries for others in your organization. |
## Built-in visualization tools
+### Azure workbooks
+
+ [Azure Workbooks](./visualize/workbooks-overview.md) provide a flexible canvas for data analysis and the creation of rich visual reports. You can use workbooks to tap into multiple data sources from across Azure and combine them into unified interactive experiences. They're especially useful to prepare end-to-end monitoring views across multiple Azure resources. Insights use prebuilt workbooks to present you with critical health and performance information for a particular service. You can access a gallery of workbooks on the **Workbooks** tab of the Azure Monitor menu and create custom workbooks to meet the requirements of your different users.
+
+![Diagram that shows screenshots of three pages from a workbook, including Analysis of Page Views, Usage, and Time Spent on Page.](media/visualizations/workbook.png)
+ ### Azure dashboards [Azure dashboards](../azure-portal/azure-portal-dashboards.md) are useful in providing a "single pane of glass" of your Azure infrastructure and services. While a workbook provides richer functionality, a dashboard can combine Azure Monitor data with data from other Azure services.
This table describes Azure Monitor features that provide analysis of collected d
Here's a video about how to create dashboards: > [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE4AslH]-
-### Azure workbooks
-
- [Workbooks](./visualize/workbooks-overview.md) provide a flexible canvas for data analysis and the creation of rich visual reports. You can use workbooks to tap into multiple data sources from across Azure and combine them into unified interactive experiences. They're especially useful to prepare end-to-end monitoring views across multiple Azure resources. Insights use prebuilt workbooks to present you with critical health and performance information for a particular service. You can access a gallery of workbooks on the **Workbooks** tab of the Azure Monitor menu and create custom workbooks to meet the requirements of your different users.
-
-![Diagram that shows screenshots of three pages from a workbook, including Analysis of Page Views, Usage, and Time Spent on Page.](media/visualizations/workbook.png)
- ### Grafana [Grafana](https://grafana.com/) is an open platform that excels in operational dashboards. It's useful for:
All versions of Grafana include the [Azure Monitor datasource plug-in](visualize
|Visualization tool|Benefits|Common use cases|Good fit for| |:|:|:|:|
-|Azure Workbooks|- Native dashboarding platform in Azure.<br>- Designed for collaborating and troubleshooting.<br>- Out-of-the-box templates and reports.<br>- Fully customizable. |- Create an interactive report with parameters where selecting an element in a table dynamically updates associated charts and visualizations.<br>- Share a report with other users in your organization.<br>- Collaborate with other workbook authors in your organization by using a public GitHub-based template gallery. | |
-|Azure Dashboards|- Native dashboarding platform in Azure.<br>- Supports at scale deployments.<br>- Supports RBAC.<br>- No added cost|- Create a dashboard that combines a metrics graph and the results of a log query with operational data for related services.<br>- Share a dashboard with service owners through integration with [Azure role-based access control](../role-based-access-control/overview.md). |Azure/Arc exclusive environments|
-|Grafana |- Multi-platform, multicloud single pane of glass visualizations.<br>- Out-of-the-box plugins from most monitoring tools and platforms.<br>- Dashboard templates with focus on operations.<br>- Supports portability, multi-tenancy, and flexible RBAC.<br>- Azure managed Grafana provides seamless integration with Azure. |- Combine time-series and event data in a single visualization panel.<br>- Create a dynamic dashboard based on user selection of dynamic variables.<br>- Create a dashboard from a community-created and community-supported template.<br>- Create a vendor-agnostic business continuity and disaster scenario that runs on any cloud provider or on-premises. |- Cloud Native CNCF monitoring.<br>- Best with Prometheus.<br>- Multicloud environments.<br>- Combining with 3rd party monitoring tools.|
-|Power BI |- Helps design business centric KPI dashboards for long term trends.<br>- Supports BI analytics with extensive slicing and dicing. <br>- Create rich visualizations.<br>- Benefit from extensive interactivity, including zoom-in and cross-filtering.<br>- Share easily throughout your organization.<br>- Integrate data from multiple data sources.<br>- Experience better performance with results cached in a cube. |Dashboarding for long term trends.|
+|[Azure Workbooks](./visualize/workbooks-overview.md)|- Native dashboarding platform in Azure.<br>- Designed for collaborating and troubleshooting.<br>- Out-of-the-box templates and reports.<br>- Fully customizable. |- Create an interactive report with parameters where selecting an element in a table dynamically updates associated charts and visualizations.<br>- Share a report with other users in your organization.<br>- Collaborate with other workbook authors in your organization by using a public GitHub-based template gallery. | |
+|[Azure dashboards](../azure-portal/azure-portal-dashboards.md)|- Native dashboarding platform in Azure.<br>- Supports at scale deployments.<br>- Supports RBAC.<br>- No added cost|- Create a dashboard that combines a metrics graph and the results of a log query with operational data for related services.<br>- Share a dashboard with service owners through integration with [Azure role-based access control](../role-based-access-control/overview.md). |Azure/Arc exclusive environments|
+|[Azure Managed Grafana](../managed-grafan)|- Multi-platform, multicloud single pane of glass visualizations.<br>- Out-of-the-box plugins from most monitoring tools and platforms.<br>- Dashboard templates with focus on operations.<br>- Supports portability, multi-tenancy, and flexible RBAC.<br>- Azure managed Grafana provides seamless integration with Azure. |- Combine time-series and event data in a single visualization panel.<br>- Create a dynamic dashboard based on user selection of dynamic variables.<br>- Create a dashboard from a community-created and community-supported template.<br>- Create a vendor-agnostic business continuity and disaster scenario that runs on any cloud provider or on-premises. |- Cloud Native CNCF monitoring.<br>- Best with Prometheus.<br>- Multicloud environments.<br>- Combining with 3rd party monitoring tools.|
+|[Power BI](https://powerbi.microsoft.com/documentation/powerbi-service-get-started/) |- Helps design business centric KPI dashboards for long term trends.<br>- Supports BI analytics with extensive slicing and dicing. <br>- Create rich visualizations.<br>- Benefit from extensive interactivity, including zoom-in and cross-filtering.<br>- Share easily throughout your organization.<br>- Integrate data from multiple data sources.<br>- Experience better performance with results cached in a cube. |Dashboarding for long term trends.|
## Other options Some Azure Monitor partners provide visualization functionality. For a list of partners that Microsoft has evaluated, see [Azure Monitor partner integrations](./partners.md). An Azure Monitor partner might provide out-of-the-box visualizations to save you time, although these solutions might have an extra cost.
-You can also build your own custom websites and applications using metric and log data in Azure Monitor accessed through a REST API. This approach gives you complete flexibility in UI, visualization, interactivity, and features.
+You can also build your own custom websites and applications by using metric and log data in Azure Monitor accessed through the REST API. The REST API gives you flexibility in UI, visualization, interactivity, and features.
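As a rough sketch of that approach, the following PowerShell retrieves platform metric values for a single resource through the REST API. The resource ID and metric name are placeholders, and the `2018-01-01` api-version is an assumption; adjust both for your own resources.

```powershell
# Sketch: query the Azure Monitor metrics REST API for the last hour of a platform metric.
$resourceId = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Compute/virtualMachines/<vm-name>"
$token      = (Get-AzAccessToken -ResourceUrl "https://management.azure.com/").Token

$end      = (Get-Date).ToUniversalTime()
$timespan = "{0:o}/{1:o}" -f $end.AddHours(-1), $end

$uri = "https://management.azure.com$resourceId/providers/Microsoft.Insights/metrics" +
       "?metricnames=Percentage%20CPU&timespan=$timespan&api-version=2018-01-01"

# The response contains one time series per requested metric, with time-stamped values.
$response = Invoke-RestMethod -Uri $uri -Headers @{ Authorization = "Bearer $token" }
$response.value.timeseries.data
```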
## Next steps - [Deploy Azure Monitor: Alerts and automated actions](best-practices-alerts.md)
azure-monitor Metrics Custom Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-custom-overview.md
# Custom metrics in Azure Monitor (preview)
-As you deploy resources and applications in Azure, you'll want to start collecting telemetry to gain insights into their performance and health. Azure makes some metrics available to you out of the box. These metrics are called [standard or platform](./metrics-supported.md). However, they're limited.
+As you deploy resources and applications in Azure, start collecting telemetry to gain insights into their performance and health. Azure makes some metrics available to you out of the box. These metrics are called [standard or platform metrics](./metrics-supported.md).
-You might want to collect some custom performance indicators or business-specific metrics to provide deeper insights. These *custom* metrics can be collected via your application telemetry, an agent that runs on your Azure resources, or even an outside-in monitoring system. They can then be submitted directly to Azure Monitor. After custom metrics are published to Azure Monitor, you can browse, query, and alert on custom metrics for your Azure resources and applications side by side with the standard Azure metrics.
+Collect custom performance indicators or business-specific metrics to provide deeper insights. These *custom* metrics can be collected via your application telemetry, an agent that runs on your Azure resources, or even an outside-in monitoring system. They can then be submitted directly to Azure Monitor. Once custom metrics are published to Azure Monitor, you can browse, query, and alert on them for your Azure resources and applications alongside the standard Azure metrics.
Azure Monitor custom metrics are currently in public preview.
Custom metrics can be sent to Azure Monitor via several methods:
- Instrument your application by using the Azure Application Insights SDK and send custom telemetry to Azure Monitor. - Install the Azure Monitor agent (preview) on your [Windows or Linux Azure VM](../agents/azure-monitor-agent-overview.md). Use a [data collection rule](../agents/data-collection-rule-azure-monitor-agent.md) to send performance counters to Azure Monitor metrics.-- Install the Azure Diagnostics extension on your [Azure VM](../essentials/collect-custom-metrics-guestos-resource-manager-vm.md), [virtual machine scale set](../essentials/collect-custom-metrics-guestos-resource-manager-vmss.md), [classic VM](../essentials/collect-custom-metrics-guestos-vm-classic.md), or [classic cloud service](../essentials/collect-custom-metrics-guestos-vm-cloud-service-classic.md). Then send performance counters to Azure Monitor.
+- Install the Azure Diagnostics extension on your [Azure VM](../essentials/collect-custom-metrics-guestos-resource-manager-vm.md), [Virtual Machine Scale Set](../essentials/collect-custom-metrics-guestos-resource-manager-vmss.md), [classic VM](../essentials/collect-custom-metrics-guestos-vm-classic.md), or [classic cloud service](../essentials/collect-custom-metrics-guestos-vm-cloud-service-classic.md). Then send performance counters to Azure Monitor.
- Install the [InfluxData Telegraf agent](../essentials/collect-custom-metrics-linux-telegraf.md) on your Azure Linux VM. Send metrics by using the Azure Monitor output plug-in. - Send custom metrics [directly to the Azure Monitor REST API](./metrics-store-custom-rest-api.md), `https://<azureregion>.monitoring.azure.com/<AzureResourceID>/metrics`. ## Pricing model and retention
-For details on when billing will be enabled for custom metrics and metrics queries, check the [Azure Monitor pricing page](https://azure.microsoft.com/pricing/details/monitor/). In summary, there's no cost to ingest standard metrics (platform metrics) into an Azure Monitor metrics store, but custom metrics will incur costs when they enter general availability. Queries to the metrics API do incur costs.
+For details on when billing is enabled for custom metrics and metrics queries, check the [Azure Monitor pricing page](https://azure.microsoft.com/pricing/details/monitor/). In summary, there's no cost to ingest standard metrics (platform metrics) into an Azure Monitor metrics store, but custom metrics incur costs when they enter general availability. Queries to the metrics API do incur costs.
Custom metrics are retained for the [same amount of time as platform metrics](../essentials/data-platform-metrics.md#retention-of-metrics).
To submit custom metrics to Azure Monitor, the entity that submits the metric ne
### Subject
-The subject property captures which Azure resource ID the custom metric is reported for. This information will be encoded in the URL of the API call. Each API can submit metric values for only a single Azure resource.
+The subject property captures which Azure resource ID the custom metric is reported for. This information is encoded in the URL of the API call. Each API can submit metric values for only a single Azure resource.
> [!NOTE] > You can't emit custom metrics against the resource ID of a resource group or subscription.
Although dimensions are optional, if a metric post defines dimension keys, corre
Azure Monitor stores all metrics at 1-minute granularity intervals. During a given minute, a metric might need to be sampled several times. An example is CPU utilization. Or a metric might need to be measured for many discrete events, such as sign-in transaction latencies.
-To limit the number of raw values that you have to emit and pay for in Azure Monitor, you can locally pre-aggregate and emit the values:
+To limit the number of raw values that you have to emit and pay for in Azure Monitor, locally pre-aggregate and emit the aggregated values:
* **Min**: The minimum observed value from all the samples and measurements during the minute. * **Max**: The maximum observed value from all the samples and measurements during the minute.
In the following example, you create a custom metric called **Memory Bytes in Us
## Custom metric definitions
-There's no need to predefine a custom metric in Azure Monitor before it's emitted. Each metric data point published contains namespace, name, and dimension information. So, the first time a custom metric is emitted to Azure Monitor, a metric definition is automatically created. This metric definition is then discoverable on any resource that the metric is emitted against via the metric definitions.
+Each metric data point published contains a namespace, name, and dimension information. The first time a custom metric is emitted to Azure Monitor, a metric definition is automatically created. This new metric definition is then discoverable on any resource that the metric is emitted from via the metric definitions. There's no need to predefine a custom metric in Azure Monitor before it's emitted.
> [!NOTE]
-> Azure Monitor doesn't yet support defining **Units** for a custom metric.
+> Azure Monitor doesn't support defining **Units** for a custom metric.
## Using custom metrics
Azure Monitor imposes the following usage limits on custom metrics:
|Category|Limit| |||
-|Total active time series in a subscription across all regions you've deployed to|50,000|
+|Total active time series in a subscription per region|50,000|
|Dimension keys per metric|10| |String length for metric namespaces, metric names, dimension keys, and dimension values|256 characters|
+|The combined length of all custom metric names, using UTF-8 encoding|64 KB|
An active time series is defined as any unique combination of metric, dimension key, or dimension value that has had metric values published in the past 12 hours.
Follow the steps below to see your current total active time series metrics, and
1. Select the **Apply** button. 1. Choose either **Active Time Series**, **Active Time Series Limit**, or **Throttled Time Series**.
+There's a limit of 64 KB on the combined length of all custom metric names, assuming UTF-8 encoding, or 1 byte per character. If the 64-KB limit is exceeded, metadata for additional metrics won't be available. The metric names for additional custom metrics won't appear in the Azure portal in selection fields, and won't be returned by the API in requests for metric definitions. The metric data is still available and can be queried.
+
+If the limit is exceeded, reduce the number of metrics you're sending or shorten their names. It then takes up to two days for the new metric names to appear.
+
+To avoid reaching the limit, don't include variable or dimensional aspects in your metric names.
+For example, the metrics for server CPU usage, `CPU_server_12345678-319d-4a50-b27e-1234567890ab` and `CPU_server_abcdef01-319d-4a50-b27e-abcdef012345`, should be defined as the metric `CPU` with a `Server` dimension.
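As a hedged illustration of that pattern, the following PowerShell posts one pre-aggregated `CPU` value with a `Server` dimension to the custom metrics REST endpoint shown earlier. The region, resource ID, namespace, and sample values are placeholders, and the `https://monitoring.azure.com/` token audience is an assumption.

```powershell
# Sketch: emit a locally pre-aggregated custom metric with a dimension (placeholder values).
$region     = "eastus"
$resourceId = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Compute/virtualMachines/<vm-name>"
$token      = (Get-AzAccessToken -ResourceUrl "https://monitoring.azure.com/").Token

$body = @{
    time = (Get-Date).ToUniversalTime().ToString("o")
    data = @{
        baseData = @{
            metric    = "CPU"                 # one metric name ...
            namespace = "ServerMetrics"       # ... in an assumed custom namespace
            dimNames  = @("Server")           # ... with the server as a dimension
            series    = @(
                @{
                    dimValues = @("server-01")
                    min = 12; max = 87; sum = 190; count = 4   # pre-aggregated for the minute
                }
            )
        }
    }
} | ConvertTo-Json -Depth 6

Invoke-RestMethod -Method Post -ContentType "application/json" `
    -Uri "https://$region.monitoring.azure.com$resourceId/metrics" `
    -Headers @{ Authorization = "Bearer $token" } `
    -Body $body
```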
+ ## Design limitations and considerations **Using Application Insights for the purpose of auditing.** The Application Insights telemetry pipeline is optimized for minimizing the performance impact and limiting the network traffic from monitoring your application. As such, it throttles or samples (takes only a percentage of your telemetry and ignores the rest) if the initial dataset becomes too large. Because of this behavior, you can't use it for auditing purposes because some records are likely to be dropped.
But if high cardinality is essential for your scenario, the aggregated metrics a
Use custom metrics from various - [Virtual machine](../essentials/collect-custom-metrics-guestos-resource-manager-vm.md)
+ - [Virtual Machine Scale Set](../essentials/collect-custom-metrics-guestos-resource-manager-vmss.md)
- [Azure virtual machine (classic)](../essentials/collect-custom-metrics-guestos-vm-classic.md) - [Linux virtual machine using the Telegraf agent](../essentials/collect-custom-metrics-linux-telegraf.md) - [REST API](./metrics-store-custom-rest-api.md)
azure-monitor Azure Networking Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/azure-networking-analytics.md
Title: Azure Networking Analytics solution in Azure Monitor | Microsoft Docs
-description: You can use the Azure Networking Analytics solution in Azure Monitor to review Azure network security group logs and Azure Application Gateway logs.
+ Title: Azure networking analytics solution in Azure Monitor | Microsoft Docs
+description: You can use the Azure networking analytics solution in Azure Monitor to review Azure network security group logs and Azure Application Gateway logs.
[!INCLUDE [updated-for-az](../../../includes/updated-for-az.md)] Azure Monitor offers the following solutions for monitoring your networks:
-* Network Performance Monitor (NPM) to
- * Monitor the health of your network
-* Azure Application Gateway analytics to review
- * Azure Application Gateway logs
- * Azure Application Gateway metrics
-* Solutions to monitor and audit network activity on your cloud network
- * [Traffic Analytics](../../networking/network-monitoring-overview.md#traffic-analytics)
+* Network Performance Monitor to:
+ * Monitor the health of your network.
+* Azure Application Gateway analytics to review:
+ * Application Gateway logs.
+ * Application Gateway metrics.
+* Solutions to monitor and audit network activity on your cloud network:
+ * [Traffic analytics](../../networking/network-monitoring-overview.md#traffic-analytics).
-## Network Performance Monitor (NPM)
+## Network Performance Monitor
-The [Network Performance Monitor](../../networking/network-monitoring-overview.md) management solution is a network monitoring solution that monitors the health, availability and reachability of networks. It is used to monitor connectivity between:
+The [Network Performance Monitor](../../networking/network-monitoring-overview.md) management solution is a network monitoring solution that monitors the health, availability, and reachability of networks. It's used to monitor connectivity between:
-* Public cloud and on-premises
-* Data centers and user locations (branch offices)
-* Subnets hosting various tiers of a multi-tiered application.
+* Public cloud and on-premises.
+* Datacenters and user locations like branch offices.
+* Subnets that host various tiers of a multi-tiered application.
For more information, see [Network Performance Monitor](../../networking/network-monitoring-overview.md). -
-## Azure Application Gateway analytics
+## Application Gateway analytics
1. Enable diagnostics to direct the diagnostic logs to a Log Analytics workspace in Azure Monitor.
-2. Consume the detailed summary for your resource using the workbook template for Application Gateway.
-
-If diagnostic logs are not enabled for Application Gateway, only the default metric data would be populated within the workbook.
+1. Consume the detailed summary for your resource by using the workbook template for Application Gateway.
+If diagnostic logs aren't enabled for Application Gateway, only the default metric data is populated within the workbook.
## Review Azure networking data collection details
-The Azure Application Gateway analytics and the Network Security Group analytics management solutions collect diagnostics logs directly from Azure Application Gateways and Network Security Groups. It is not necessary to write the logs to Azure Blob storage and no agent is required for data collection.
+The Application Gateway analytics and the network security group analytics management solutions collect diagnostics logs directly from Application Gateway and network security groups. It isn't necessary to write the logs to Azure Blob Storage, and no agent is required for data collection.
-The following table shows data collection methods and other details about how data is collected for Azure Application Gateway analytics and the Network Security Group analytics.
+The following table shows data collection methods and other details about how data is collected for Application Gateway analytics and the network security group analytics.
| Platform | Direct agent | Systems Center Operations Manager agent | Azure | Operations Manager required? | Operations Manager agent data sent via management group | Collection frequency | | | | | | | | |
-| Azure | | |&#8226; | | |when logged |
-
+| Azure | | |&#8226; | | |When logged |
-### Enable Azure Application Gateway diagnostics in the portal
+### Enable Application Gateway diagnostics in the portal
-1. In the Azure portal, navigate to the Application Gateway resource to monitor.
-2. Select *Diagnostics Settings* to open the following page.
+1. In the Azure portal, go to the Application Gateway resource to monitor.
+1. Select **Diagnostic settings** to open the following page.
- ![Screenshot of the Diagnostics Settings config for Application Gateway resource.](media/azure-networking-analytics/diagnostic-settings-1.png)
+ ![Screenshot that shows the Diagnostic settings config for an Application Gateway resource.](media/azure-networking-analytics/diagnostic-settings-1.png)
- [![Screenshot of the page for configuring Diagnostics settings.](media/azure-networking-analytics/diagnostic-settings-2.png)](media/azure-networking-analytics/application-gateway-diagnostics-2.png#lightbox)
+ [![Screenshot that shows the page for configuring diagnostic settings.](media/azure-networking-analytics/diagnostic-settings-2.png)](media/azure-networking-analytics/application-gateway-diagnostics-2.png#lightbox)
-5. Click the checkbox for *Send to Log Analytics*.
-6. Select an existing Log Analytics workspace, or create a workspace.
-7. Click the checkbox under **Log** for each of the log types to collect.
-8. Click *Save* to enable the logging of diagnostics to Azure Monitor.
+1. Select the **Send to Log Analytics workspace** checkbox.
+1. Select an existing Log Analytics workspace or create a workspace.
+1. Select the checkbox under **log** for each of the log types to collect.
+1. Select **Save** to enable the logging of diagnostics to Azure Monitor.
-#### Enable Azure network diagnostics using PowerShell
+#### Enable Azure network diagnostics by using PowerShell
-The following PowerShell script provides an example of how to enable resource logging for application gateways.
+The following PowerShell script provides an example of how to enable resource logging for application gateways:
```powershell
# Resource ID of the Log Analytics workspace that receives the Application Gateway logs
$workspaceId = "/subscriptions/d2e37fee-1234-40b2-5678-0b2199de3b50/resourcegroups/oi-default-east-us/providers/microsoft.operationalinsights/workspaces/rollingbaskets"

# Get the application gateway to monitor
$gateway = Get-AzApplicationGateway -Name 'ContosoGateway'

# Send the gateway's diagnostic logs to the workspace
Set-AzDiagnosticSetting -ResourceId $gateway.ResourceId -WorkspaceId $workspaceId -Enabled $true
```
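To confirm the setting was applied, a quick check like the following can help; it assumes the same `$gateway` variable from the previous script.

```powershell
# Sketch: list the diagnostic settings currently configured on the gateway.
Get-AzDiagnosticSetting -ResourceId $gateway.ResourceId
```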
-#### Accessing Azure Application Gateway analytics via Azure Monitor Network insights
+#### Access Application Gateway analytics via Azure Monitor Network Insights
-Application insights can be accessed via the insights tab within your Application Gateway resource.
+Application Gateway insights can be accessed via the **Insights** tab in your Application Gateway resource.
-![Screenshot of Application Gateway insights](media/azure-networking-analytics/azure-appgw-insights.png)
+![Screenshot that shows Application Gateway insights.](media/azure-networking-analytics/azure-appgw-insights.png)
-The "view detailed metrics" tab will open up the pre-populated workbook summarizing the data from your Application Gateway.
+The **View detailed metrics** tab opens the pre-populated workbook that summarizes the data from your Application Gateway resource.
-[![Screenshot of Application Gateway workbook](media/azure-networking-analytics/azure-appgw-workbook.png)](media/azure-networking-analytics/application-gateway-workbook.png#lightbox)
+[![Screenshot that shows an Application Gateway workbook.](media/azure-networking-analytics/azure-appgw-workbook.png)](media/azure-networking-analytics/application-gateway-workbook.png#lightbox)
-### New capabilities with Azure Monitor Network Insights workbook
+### New capabilities with an Azure Monitor Network Insights workbook
> [!NOTE]
-> There are no additional costs associated with Azure Monitor Insights workbook. Log Analytics workspace will continue to be billed as per usage.
+> No other costs are associated with an Azure Monitor Network Insights workbook. The Log Analytics workspace will continue to be billed per usage.
-The Network Insights workbook allows you to take advantage of the latest capabilities of Azure Monitor and Log Analytics including:
+The Network Insights workbook allows you to take advantage of the latest capabilities of Azure Monitor and Log Analytics, including:
* Centralized console for monitoring and troubleshooting with both [metric](../../network-watcher/network-insights-overview.md#resource-health-and-metrics) and log data.
+* Flexible canvas to support creation of custom-rich [visualizations](../visualize/workbooks-overview.md#visualizations).
+* Ability to consume and [share workbook templates](../visualize/workbooks-templates.md) with a wider community.
-* Flexible canvas to support creation of custom rich [visualizations](../visualize/workbooks-overview.md#visualizations).
-
-* Ability to consume and [share workbook templates](../visualize/workbooks-templates.md) with wider community.
-
-To find more information about the capabilities of the new workbook solution check out [Workbooks-overview](../visualize/workbooks-overview.md)
+For more information about the capabilities of the new workbook solution, see [Workbooks overview](../visualize/workbooks-overview.md).
-## Migrating from Azure Gateway analytics solution to Azure Monitor workbooks
+## Migrate from the Azure Gateway analytics solution to Azure Monitor workbooks
> [!NOTE]
-> Azure Monitor Network Insights workbook is the recommended solution for accessing metric and log analytics for your Application Gateway resources.
+> We recommend the Azure Monitor Network Insights workbook solution for accessing metric and log analytics for your Application Gateway resources.
-1. Ensure [diagnostics settings are enabled](#enable-azure-application-gateway-diagnostics-in-the-portal) to store logs into a Log Analytics workspace. If it is already configured, Azure Monitor Network Insights workbook will be able to consume data from the same location and no more changes are required.
+1. Ensure that [diagnostics settings are enabled](#enable-application-gateway-diagnostics-in-the-portal) to store logs in a Log Analytics workspace. If it's already configured, the Azure Monitor Network Insights workbook will be able to consume data from the same location. No more changes are required.
-> [!NOTE]
-> All past data is already available within the workbook from the point diagnostic settings were originally enabled. There is no data transfer required.
-
-2. Access the [default insights workbook](#accessing-azure-application-gateway-analytics-via-azure-monitor-network-insights) for your Application Gateway resource. All existing insights supported by the Application Gateway analytics solution will be already present in the workbook. You can extend this by adding custom [visualizations](../visualize/workbooks-overview.md#visualizations) based on metric and log data.
+ > [!NOTE]
+ > All past data is already available within the workbook from the point when diagnostic settings were originally enabled. No data transfer is required.
-3. After you are able to see all your metric and log insights, to clean up the Azure Gateway analytics solution from your workspace, you can delete the solution from the solution resource page.
+1. Access the [default insights workbook](#access-application-gateway-analytics-via-azure-monitor-network-insights) for your Application Gateway resource. All existing insights supported by the Application Gateway analytics solution will be already present in the workbook. You can add custom [visualizations](../visualize/workbooks-overview.md#visualizations) based on metric and log data.
-[![Screenshot of the delete option for Azure Application Gateway analytics solution.](media/azure-networking-analytics/azure-appgw-analytics-delete.png)](media/azure-networking-analytics/application-gateway-analytics-delete.png#lightbox)
+1. After you see all your metric and log insights, to clean up the Azure Gateway analytics solution from your workspace, delete the solution from the **Solution Resources** pane.
+ [![Screenshot that shows the delete option for the Application Gateway analytics solution.](media/azure-networking-analytics/azure-appgw-analytics-delete.png)](media/azure-networking-analytics/application-gateway-analytics-delete.png#lightbox)
## Troubleshooting+
+Follow the steps here to troubleshoot Azure Diagnostics.
+ [!INCLUDE [log-analytics-troubleshoot-azure-diagnostics](../../../includes/log-analytics-troubleshoot-azure-diagnostics.md)] ## Next steps
-* Use [Log queries in Azure Monitor](../logs/log-query-overview.md) to view detailed Azure diagnostics data.
-
+Use [log queries in Azure Monitor](../logs/log-query-overview.md) to view detailed Azure Diagnostics data.
azure-monitor Dns Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/dns-analytics.md
-# Gather insights about your DNS infrastructure with the DNS Analytics Preview solution
+# Gather insights about your DNS infrastructure with the DNS Analytics preview solution
-![DNS Analytics symbol](./media/dns-analytics/dns-analytics-symbol.png)
+![The DNS Analytics symbol.](./media/dns-analytics/dns-analytics-symbol.png)
This article describes how to set up and use the Azure DNS Analytics solution in Azure Monitor to gather insights into DNS infrastructure on security, performance, and operations.
DNS Analytics helps you to:
The solution collects, analyzes, and correlates Windows DNS analytic and audit logs and other related data from your DNS servers. > [!IMPORTANT]
-> The Log Analytics agent will be **retired on 31 August, 2024**. If you are using the Log Analytics agent in your Microsoft Sentinel deployment, we recommend that you start planning your migration to the AMA. For more information, see [AMA migration for Microsoft Sentinel](../..//sentinel/ama-migrate.md).
+> The Log Analytics agent will be **retired on August 31, 2024**. If you're using the Log Analytics agent in your Microsoft Sentinel deployment, we recommend that you start planning your migration to the Azure Monitor Agent. For more information, see [Azure Monitor Agent migration for Microsoft Sentinel](../..//sentinel/ama-migrate.md).
## Connected sources The following table describes the connected sources that are supported by this solution:
-| **Connected source** | **Support** | **Description** |
+| Connected source | Support | Description |
| | | | | [Windows agents](../agents/agent-windows.md) | Yes | The solution collects DNS information from Windows agents. |
-| [Linux agents](../vm/monitor-virtual-machine.md) | No | The solution does not collect DNS information from direct Linux agents. |
-| [System Center Operations Manager management group](../agents/om-agents.md) | Yes | The solution collects DNS information from agents in a connected Operations Manager management group. A direct connection from the Operations Manager agent to Azure Monitor is not required. Data is forwarded from the management group to the Log Analytics workspace. |
-| [Azure storage account](../essentials/resource-logs.md#send-to-log-analytics-workspace) | No | Azure storage isn't used by the solution. |
+| [Linux agents](../vm/monitor-virtual-machine.md) | No | The solution doesn't collect DNS information from direct Linux agents. |
+| [System Center Operations Manager management group](../agents/om-agents.md) | Yes | The solution collects DNS information from agents in a connected Operations Manager management group. A direct connection from the Operations Manager agent to Azure Monitor isn't required. Data is forwarded from the management group to the Log Analytics workspace. |
+| [Azure Storage account](../essentials/resource-logs.md#send-to-log-analytics-workspace) | No | Azure Storage isn't used by the solution. |
### Data collection details
The solution collects DNS inventory and DNS event-related data from the DNS serv
Use the following information to configure the solution: - You must have a [Windows](../agents/agent-windows.md) or [Operations Manager](../agents/om-agents.md) agent on each DNS server that you want to monitor.-- You can add the DNS Analytics solution to your Log Analytics workspace from the [Azure Marketplace](https://aka.ms/dnsanalyticsazuremarketplace). You can also use the process described in [Add Azure Monitor solutions from the Solutions Gallery](solutions.md).
+- You can add the DNS Analytics solution to your Log Analytics workspace from [Azure Marketplace](https://aka.ms/dnsanalyticsazuremarketplace). You can also use the process described in [Add Azure Monitor solutions from the Solutions Gallery](solutions.md).
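If you prefer scripting over the portal, the Az.MonitoringSolutions module can add a solution to a workspace. This is only a sketch: the `DnsAnalytics` type name, resource group, region, and workspace are assumptions and placeholders.

```powershell
# Sketch: add a monitoring solution to a Log Analytics workspace (placeholder names).
New-AzMonitorLogAnalyticsSolution -Type "DnsAnalytics" `
    -ResourceGroupName "<resource-group>" `
    -Location "<workspace-region>" `
    -WorkspaceResourceId "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>"
```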
The solution starts collecting data without the need for further configuration. However, you can use the following settings to customize data collection. ### Configure the solution
-From the Log Analytics workspace in the Azure portal, select **Workspace summary** and then click on the **DNS Analytics** tile. On the solution dashboard, click **Configuration** to open the DNS Analytics Configuration page. There are two types of configuration changes that you can make:
+From the Log Analytics workspace in the Azure portal, select **Workspace summary**. Then select the **DNS Analytics** tile. On the solution dashboard, select **Configuration** to open the **DNS Analytics Configuration** page. There are two types of configuration changes that you can make:
-- **Allowlisted Domain Names**. The solution does not process all the lookup queries. It maintains an allowlist of domain name suffixes. The lookup queries that resolve to the domain names that match domain name suffixes in this allowlist are not processed by the solution. Not processing allowlisted domain names helps to optimize the data sent to Azure Monitor. The default allowlist includes popular public domain names, such as www.google.com and www.facebook.com. You can view the complete default list by scrolling.
+- **Allowlisted Domain Names**: The solution doesn't process all the lookup queries. It maintains an allowlist of domain name suffixes. The lookup queries that resolve to the domain names that match domain name suffixes in this allowlist aren't processed by the solution. Not processing allowlisted domain names helps to optimize the data sent to Azure Monitor. The default allowlist includes popular public domain names, such as www.google.com and www.facebook.com. You can view the complete default list by scrolling.
You can modify the list to add any domain name suffix that you want to view lookup insights for. You can also remove any domain name suffix that you don't want to view lookup insights for. -- **Talkative Client Threshold**. DNS clients that exceed the threshold for the number of lookup requests are highlighted in the **DNS Clients** pane. The default threshold is 1,000. You can edit the threshold.
+- **Talkative Client Threshold**: DNS clients that exceed the threshold for the number of lookup requests are highlighted in the **DNS Clients** pane. The default threshold is 1,000. You can edit the threshold.
- ![Allowlisted domain names](./media/dns-analytics/dns-config.png)
+ ![Screenshot that shows the Allowlisted domain names.](./media/dns-analytics/dns-config.png)
## Management packs
-If you are using the Microsoft Monitoring Agent to connect to your Log Analytics workspace, the following management pack is installed:
+If you're using the Microsoft Monitoring Agent to connect to your Log Analytics workspace, the following management pack is installed:
- Microsoft DNS Data Collector Intelligence Pack (Microsoft.IntelligencePacks.Dns)
-If your Operations Manager management group is connected to your Log Analytics workspace, the following management packs are installed in Operations Manager when you add this solution. There is no required configuration or maintenance of these management packs:
+If your Operations Manager management group is connected to your Log Analytics workspace, the following management packs are installed in Operations Manager when you add this solution. There's no required configuration or maintenance of these management packs:
- Microsoft DNS Data Collector Intelligence Pack (Microsoft.IntelligencePacks.Dns) - Microsoft System Center Advisor DNS Analytics Configuration (Microsoft.IntelligencePack.Dns.Configuration)
For more information on how solution management packs are updated, see [Connect
[!INCLUDE [azure-monitor-solutions-overview-page](../../../includes/azure-monitor-solutions-overview-page.md)]
+The DNS tile includes the number of DNS servers where the data is being collected. It also includes the number of requests made by clients to resolve malicious domains in the past 24 hours. When you select a tile, the solution dashboard opens.
-The DNS tile includes the number of DNS servers where the data is being collected. It also includes the number of requests made by clients to resolve malicious domains in the past 24 hours. When you click the tile, the solution dashboard opens.
-
-![DNS Analytics tile](./media/dns-analytics/dns-tile.png)
+![Screenshot that shows the DNS Analytics tile.](./media/dns-analytics/dns-tile.png)
### Solution dashboard The solution dashboard shows summarized information for the various features of the solution. It also includes links to the detailed view for forensic analysis and diagnosis. By default, the data is shown for the last seven days. You can change the date and time range by using the **date-time selection control**, as shown in the following image:
-![Time selection control](./media/dns-analytics/dns-time.png)
+![Screenshot that shows the time selection control.](./media/dns-analytics/dns-time.png)
The solution dashboard shows the following sections:
-**DNS Security**. Reports the DNS clients that are trying to communicate with malicious domains. By using Microsoft threat intelligence feeds, DNS Analytics can detect client IPs that are trying to access malicious domains. In many cases, malware-infected devices "dial out" to the "command and control" center of the malicious domain by resolving the malware domain name.
+**DNS Security**: Reports the DNS clients that are trying to communicate with malicious domains. By using Microsoft threat intelligence feeds, DNS Analytics can detect client IPs that are trying to access malicious domains. In many cases, malware-infected devices "dial out" to the "command and control" center of the malicious domain by resolving the malware domain name.
-![DNS Security section](./media/dns-analytics/dns-security-blade.png)
+![Screenshot that shows the DNS Security section.](./media/dns-analytics/dns-security-blade.png)
-When you click a client IP in the list, Log Search opens and shows the lookup details of the respective query. In the following example, DNS Analytics detected that the communication was done with an [IRCbot](https://www.microsoft.com/en-us/wdsi/threats/malware-encyclopedia-description?Name=Backdoor:Win32/IRCbot&threatId=2621):
+When you select a client IP in the list, Log Search opens and shows the lookup details of the respective query. In the following example, DNS Analytics detected that the communication was done with an [IRCbot](https://www.microsoft.com/en-us/wdsi/threats/malware-encyclopedia-description?Name=Backdoor:Win32/IRCbot&threatId=2621):
-![Log search results showing ircbot](./media/dns-analytics/ircbot.png)
+![Screenshot that shows the log search results showing ircbot.](./media/dns-analytics/ircbot.png)
The information helps you to identify the:
The information helps you to identify the:
- Reason for blocklisting the malicious IP. - Detection time.
-**Domains Queried**. Provides the most frequent domain names being queried by the DNS clients in your environment. You can view the list of all the domain names queried. You can also drill down into the lookup request details of a specific domain name in Log Search.
+**Domains Queried**: Provides the most frequent domain names being queried by the DNS clients in your environment. You can view the list of all the domain names queried. You can also drill down into the lookup request details of a specific domain name in **Log Search**.
-![Domains Queried section](./media/dns-analytics/domains-queried-blade.png)
+![Screenshot that shows the Domains Queried section.](./media/dns-analytics/domains-queried-blade.png)
-**DNS Clients**. Reports the clients *breaching the threshold* for number of queries in the chosen time period. You can view the list of all the DNS clients and the details of the queries made by them in Log Search.
+**DNS Clients**: Reports the clients *breaching the threshold* for number of queries in the chosen time period. You can view the list of all the DNS clients and the details of the queries made by them in **Log Search**.
-![DNS Clients section](./media/dns-analytics/dns-clients-blade.png)
+![Screenshot that shows the DNS Clients section.](./media/dns-analytics/dns-clients-blade.png)
-**Dynamic DNS Registrations**. Reports name registration failures. All registration failures for address [resource records](https://en.wikipedia.org/wiki/List_of_DNS_record_types) (Type A and AAAA) are highlighted along with the client IPs that made the registration requests. You can then use this information to find the root cause of the registration failure by following these steps:
+**Dynamic DNS Registrations**: Reports name registration failures. All registration failures for address [resource records](https://en.wikipedia.org/wiki/List_of_DNS_record_types) (Type A and AAAA) are highlighted along with the client IPs that made the registration requests. You can then use this information to find the root cause of the registration failure by following these steps:
-1. Find the zone that is authoritative for the name that the client is trying to update.
+1. Find the zone that's authoritative for the name that the client is trying to update.
1. Use the solution to check the inventory information of that zone.
The information helps you to identify the:
1. Check whether the zone is configured for secure dynamic update or not.
- ![Dynamic DNS Registrations section](./media/dns-analytics/dynamic-dns-reg-blade.png)
-
-**Name registration requests**. The upper tile shows a trendline of successful and failed DNS dynamic update requests. The lower tile lists the top 10 clients that are sending failed DNS update requests to the DNS servers, sorted by the number of failures.
+ ![Screenshot that shows the Dynamic DNS Registrations section.](./media/dns-analytics/dynamic-dns-reg-blade.png)
-![Name registration requests section](./media/dns-analytics/name-reg-req-blade.png)
+**Name registration requests**: The upper tile shows a trendline of successful and failed DNS dynamic update requests. The lower tile lists the top 10 clients that are sending failed DNS update requests to the DNS servers, sorted by the number of failures.
-**Sample DDI Analytics Queries**. Contains a list of the most common search queries that fetch raw analytics data directly.
+![Screenshot that shows the Name registration requests section.](./media/dns-analytics/name-reg-req-blade.png)
+**Sample DDI Analytics Queries**: Contains a list of the most common search queries that fetch raw analytics data directly.
-![Sample queries](./media/dns-analytics/queries.png)
+![Screenshot that shows the Sample queries.](./media/dns-analytics/queries.png)
-You can use these queries as a starting point for creating your own queries for customized reporting. The queries link to the DNS Analytics Log Search page where results are displayed:
+You can use these queries as a starting point for creating your own queries for customized reporting. The queries link to the **DNS Analytics Log Search** page where results are displayed:
-- **List of DNS Servers**. Shows a list of all DNS servers with their associated FQDN, domain name, forest name, and server IPs.-- **List of DNS Zones**. Shows a list of all DNS zones with the associated zone name, dynamic update status, name servers, and DNSSEC signing status.-- **Unused Resource Records**. Shows a list of all the unused/stale resource records. This list contains the resource record name, resource record type, the associated DNS server, record generation time, and zone name. You can use this list to identify the DNS resource records that are no longer in use. Based on this information, you can then remove those entries from the DNS servers.-- **DNS Servers Query Load**. Shows information so that you can get a perspective of the DNS load on your DNS servers. This information can help you plan the capacity for the servers. You can go to the **Metrics** tab to change the view to a graphical visualization. This view helps you understand how the DNS load is distributed across your DNS servers. It shows DNS query rate trends for each server.
+- **List of DNS Servers**: Shows a list of all DNS servers with their associated FQDN, domain name, forest name, and server IPs.
+- **List of DNS Zones**: Shows a list of all DNS zones with the associated zone name, dynamic update status, name servers, and DNSSEC signing status.
+- **Unused Resource Records**: Shows a list of all the unused/stale resource records. This list contains the resource record name, resource record type, the associated DNS server, record generation time, and zone name. You can use this list to identify the DNS resource records that are no longer in use. Based on this information, you can then remove those entries from the DNS servers.
+- **DNS Servers Query Load**: Shows information so that you can get a perspective of the DNS load on your DNS servers. This information can help you plan the capacity for the servers. You can go to the **Metrics** tab to change the view to a graphical visualization. This view helps you understand how the DNS load is distributed across your DNS servers. It shows DNS query rate trends for each server.
- ![DNS servers query log search results](./media/dns-analytics/dns-servers-query-load.png)
+ ![Screenshot that shows the DNS servers query log search results.](./media/dns-analytics/dns-servers-query-load.png)
-- **DNS Zones Query Load**. Shows the DNS zone-query-per-second statistics of all the zones on the DNS servers being managed by the solution. Click the **Metrics** tab to change the view from detailed records to a graphical visualization of the results.-- **Configuration Events**. Shows all the DNS configuration change events and associated messages. You can then filter these events based on time of the event, event ID, DNS server, or task category. The data can help you audit changes made to specific DNS servers at specific times.-- **DNS Analytical Log**. Shows all the analytic events on all the DNS servers managed by the solution. You can then filter these events based on time of the event, event ID, DNS server, client IP that made the lookup query, and query type task category. DNS server analytic events enable activity tracking on the DNS server. An analytic event is logged each time the server sends or receives DNS information.
+- **DNS Zones Query Load**: Shows the DNS zone-query-per-second statistics of all the zones on the DNS servers being managed by the solution. Select the **Metrics** tab to change the view from detailed records to a graphical visualization of the results.
+- **Configuration Events**: Shows all the DNS configuration change events and associated messages. You can then filter these events based on time of the event, event ID, DNS server, or task category. The data can help you audit changes made to specific DNS servers at specific times.
+- **DNS Analytical Log**: Shows all the analytic events on all the DNS servers managed by the solution. You can then filter these events based on time of the event, event ID, DNS server, client IP that made the lookup query, and query type task category. DNS server analytic events enable activity tracking on the DNS server. An analytic event is logged each time the server sends or receives DNS information.
### Search by using DNS Analytics Log Search
-On the Log Search page, you can create a query. You can filter your search results by using facet controls. You can also create advanced queries to transform, filter, and report on your results. Start by using the following queries:
+On the **Log Search** page, you can create a query. You can filter your search results by using facet controls. You can also create advanced queries to transform, filter, and report on your results. A PowerShell sketch that runs the same searches outside the portal appears after these steps. Start by using the following queries:
+
+1. In the search query box, enter `DnsEvents` to view all the DNS events generated by the DNS servers managed by the solution. The results list the log data for all events related to lookup queries, dynamic registrations, and configuration changes.
-1. In the **search query box**, type `DnsEvents` to view all the DNS events generated by the DNS servers managed by the solution. The results list the log data for all events related to lookup queries, dynamic registrations, and configuration changes.
+ ![Screenshot that shows the DnsEvents log search.](./media/dns-analytics/log-search-dnsevents.png)
- ![DnsEvents log search](./media/dns-analytics/log-search-dnsevents.png)
+ 1. To view the log data for lookup queries, select **LookUpQuery** as the **Subtype** filter from the facet control on the left. A table that lists all the lookup query events for the selected time period appears.
- a. To view the log data for lookup queries, select **LookUpQuery** as the **Subtype** filter from the facet control on the left. A table that lists all the lookup query events for the selected time period is displayed.
+ 1. To view the log data for dynamic registrations, select **DynamicRegistration** as the **Subtype** filter from the facet control on the left. A table that lists all the dynamic registration events for the selected time period appears.
- b. To view the log data for dynamic registrations, select **DynamicRegistration** as the **Subtype** filter from the facet control on the left. A table that lists all the dynamic registration events for the selected time period is displayed.
+ 1. To view the log data for configuration changes, select **ConfigurationChange** as the **Subtype** filter from the facet control on the left. A table that lists all the configuration change events for the selected time period appears.
- c. To view the log data for configuration changes, select **ConfigurationChange** as the **Subtype** filter from the facet control on the left. A table that lists all the configuration change events for the selected time period is displayed.
+1. In the search query box, enter `DnsInventory` to view all the DNS inventory-related data for the DNS servers managed by the solution. The results list the log data for DNS servers, DNS zones, and resource records.
-1. In the **search query box**, type `DnsInventory` to view all the DNS inventory-related data for the DNS servers managed by the solution. The results list the log data for DNS servers, DNS zones, and resource records.
+ ![Screenshot that shows the DnsInventory log search.](./media/dns-analytics/log-search-dnsinventory.png)
- ![DnsInventory log search](./media/dns-analytics/log-search-dnsinventory.png)
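If you prefer to start from a query rather than the facet controls, the following Kusto sketch reproduces the **Subtype** filtering from the preceding steps. It's a minimal example only: the `DnsEvents` table is the one the solution populates, but the column and value names follow the facet labels shown in the portal and might differ slightly in your workspace, so the case-insensitive `=~` operator is used.

```kusto
// Minimal sketch: count DNS lookup-query events per server, per hour.
// Swap the SubType value to "DynamicRegistration" or "ConfigurationChange"
// to inspect the other event categories.
DnsEvents
| where SubType =~ "LookUpQuery"
| summarize EventCount = count() by Computer, bin(TimeGenerated, 1h)
| order by TimeGenerated desc
```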
-
## Troubleshooting Common troubleshooting steps:
-1. Missing DNS Lookups Data - To troubleshoot this issue, try resetting the config or just loading the configuration page once in portal. For resetting, just change a setting to another value, then change it back to to the original value, and save the config.
+* **Missing DNS Lookups Data**: To troubleshoot this issue, try resetting the config or loading the configuration page once in the portal. For resetting, change a setting to another value, change it back to the original value, and save the config.
## Suggestions
-To provide feedback, visit the [Log Analytics UserVoice page](https://aka.ms/dnsanalyticsuservoice) to post ideas for DNS Analytics features to work on.
+To provide feedback, see the [Log Analytics UserVoice page](https://aka.ms/dnsanalyticsuservoice) to post ideas for DNS Analytics features to work on.
## Next steps
-[Query logs](../logs/log-query-overview.md) to view detailed DNS log records.
+Review [Query logs](../logs/log-query-overview.md) to view detailed DNS log records.
azure-monitor Solution Agenthealth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/solution-agenthealth.md
The Agent Health solution in Azure helps you understand which monitoring agents
You can also use the Agent Health solution to: * Keep track of how many agents are deployed and where they're distributed geographically.
-* Perform other queries to maintain awareness of the distribution of agents deployed in Azure, in other cloud environments, or on-premises.
+* Perform other queries to maintain awareness of the distribution of agents deployed in Azure, in other cloud environments, or on-premises.
## Prerequisites Before you deploy this solution, confirm that you have supported [Windows agents](../agents/agent-windows.md) reporting to the Log Analytics workspace or reporting to an [Operations Manager management group](../agents/om-agents.md) integrated with your workspace. ## Management packs
-If your Operations Manager management group is connected to a Log Analytics workspace, the following management packs are installed in Operations Manager. These management packs are also installed on directly connected Windows computers after you add this solution.
+If your Operations Manager management group is connected to a Log Analytics workspace, the following management packs are installed in Operations Manager. These management packs are also installed on directly connected Windows computers after you add this solution:
-* Microsoft System Center Advisor HealthAssessment Direct Channel Intelligence Pack (Microsoft.IntelligencePacks.HealthAssessmentDirect)
-* Microsoft System Center Advisor HealthAssessment Server Channel Intelligence Pack (Microsoft.IntelligencePacks.HealthAssessmentViaServer).
+* Microsoft System Center Advisor HealthAssessment Direct Channel Intelligence Pack (Microsoft.IntelligencePacks.HealthAssessmentDirect)
+* Microsoft System Center Advisor HealthAssessment Server Channel Intelligence Pack (Microsoft.IntelligencePacks.HealthAssessmentViaServer)
There's nothing to configure or manage with these management packs. For more information on how solution management packs are updated, see [Connect Operations Manager to Log Analytics](../agents/om-agents.md).
The following table describes the connected sources that this solution supports.
| Connected source | Supported | Description | | | | | | Windows agents | Yes | Heartbeat events are collected from direct Windows agents.|
-| System Center Operations Manager management group | Yes | Heartbeat events are collected from agents that report to the management group every 60 seconds and then forwarded to Azure Monitor. A direct connection from Operations Manager agents to Azure Monitor is not required. Heartbeat event data is forwarded from the management group to the Log Analytics workspace.|
+| System Center Operations Manager management group | Yes | Heartbeat events are collected from agents that report to the management group every 60 seconds and are then forwarded to Azure Monitor. A direct connection from Operations Manager agents to Azure Monitor isn't required. Heartbeat event data is forwarded from the management group to the Log Analytics workspace.|
-## Using the solution
+## Use the solution
When you add the solution to your Log Analytics workspace, the **Agent Health** tile is added to your dashboard. This tile shows the total number of agents and the number of unresponsive agents in the last 24 hours. ![Screenshot that shows the Agent Health tile on the dashboard.](./media/solution-agenthealth/agenthealth-solution-tile-homepage.png)
-Select the **Agent Health** tile to open the **Agent Health** dashboard. The dashboard includes the columns in the following table. Each column lists the top 10 events by count that match that column's criteria for the specified time range. You can run a log search that provides the entire list by selecting **See all** beneath each column, or by selecting the column heading.
+Select the **Agent Health** tile to open the **Agent Health** dashboard. The dashboard includes the columns in the following table. Each column lists the top 10 events by count that match that column's criteria for the specified time range. You can run a log search that provides the entire list. Select **See all** beneath each column or select the column heading.
| Column | Description | |--|-|
Select the **Agent Health** tile to open the **Agent Health** dashboard. The da
| Geo-location of agents | A partition of the countries/regions where you have agents, and a total count of the number of agents that have been installed in each country/region| | Count of gateways installed | The number of servers that have the Log Analytics gateway installed, and a list of these servers|
-![Screenshot that shows an example of the Agent Health solution dashboard.](./media/solution-agenthealth/agenthealth-solution-dashboard.png)
+![Screenshot that shows an example of the Agent Health solution dashboard.](./media/solution-agenthealth/agenthealth-solution-dashboard.png)
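As a sketch of the kind of log search behind the tile's unresponsive-agent count, the following Kusto query lists computers whose most recent heartbeat is older than 24 hours. It relies only on the standard `Heartbeat` records described in the next section; the 24-hour window mirrors the tile and can be adjusted.

```kusto
// Minimal sketch: agents whose last heartbeat is more than 24 hours old.
Heartbeat
| summarize LastHeartbeat = max(TimeGenerated) by Computer
| where LastHeartbeat < ago(24h)
| order by LastHeartbeat asc
```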
## Azure Monitor log records
-The solution creates one type of record in the Log Analytics workspace: heartbeat. Heartbeat records have the properties in the following table.
+The solution creates one type of record in the Log Analytics workspace: heartbeat. Heartbeat records have the properties listed in the following table.
| Property | Description | | | |
The solution creates one type of record in the Log Analytics workspace: heartbea
| `RemoteIPLongitude` | Longitude of the computer's geographic location| | `RemoteIPLatitude` | Latitude of the computer's geographic location|
-Each agent that reports to an Operations Manager management server will send two heartbeats. The `SCAgentChannel` property's value will include both `Direct` and `SCManagementServer`, depending on what data sources and monitoring solutions you've enabled in your subscription.
+Each agent that reports to an Operations Manager management server will send two heartbeats. The `SCAgentChannel` property's value will include both `Direct` and `SCManagementServer`, depending on what data sources and monitoring solutions you've enabled in your subscription.
If you recall, data from solutions is sent either:
-* Directly from an Operations Manager management server to Azure Monitor
-* Directly from the agent to Azure Monitor, because of the volume of data collected on the agent
+* Directly from an Operations Manager management server to Azure Monitor.
+* Directly from the agent to Azure Monitor, because of the volume of data collected on the agent.
-For heartbeat events that have the value `SCManagementServer`, the `ComputerIP` value is the IP address of the management server because it actually uploads the data. For heartbeats where `SCAgentChannel` is set to `Direct`, it's the public IP address of the agent.
+For heartbeat events that have the value `SCManagementServer`, the `ComputerIP` value is the IP address of the management server because it actually uploads the data. For heartbeats where `SCAgentChannel` is set to `Direct`, it's the public IP address of the agent.
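The following Kusto sketch makes that distinction visible by summarizing recent heartbeats per computer and channel. `SCAgentChannel` and `ComputerIP` are the heartbeat properties listed above; the one-hour window is only an example.

```kusto
// Minimal sketch: compare heartbeats uploaded directly by agents (Direct)
// with heartbeats relayed through a management server (SCManagementServer).
Heartbeat
| where TimeGenerated > ago(1h)
| summarize HeartbeatCount = count(), SampleComputerIP = any(ComputerIP) by Computer, SCAgentChannel
```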
## Sample log searches The following table provides sample log searches for records that the solution collects.
The following table provides sample log searches for records that the solution c
## Next steps
-* Learn about [generating alerts from log queries in Azure Monitor](../alerts/alerts-overview.md).
-
+Learn about [generating alerts from log queries in Azure Monitor](../alerts/alerts-overview.md).
azure-monitor Solution Targeting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/solution-targeting.md
Title: Targeting monitoring solutions in Azure Monitor | Microsoft Docs
-description: Targeting monitoring solutions allows you to limit monitoring solutions to a specific set of agents. This article describes how to create a scope configuration and apply it to a solution.
+ Title: Target monitoring solutions in Azure Monitor | Microsoft Docs
+description: Targeting monitoring solutions allows you to limit monitoring solutions to a specific set of agents. This article describes how to create a scope configuration and apply it to a solution.
Last updated 06/08/2022
-# Targeting monitoring solutions in Azure Monitor (Preview)
+# Target monitoring solutions in Azure Monitor (preview)
> [!IMPORTANT]
-> This feature has been deprecated as the Log Analytics agent is being replaced with the Azure Monitor agent and solutions in Azure Monitor are being replaced with insights. You can continue to use it if you already have it configured, but it's being removed from regions where it is not already being used. The feature will longer be supported after August 31, 2024.
+> This feature has been deprecated because the Log Analytics agent is being replaced with the Azure Monitor Agent. Solutions in Azure Monitor are being replaced with insights. You can continue to use it if you already have it configured, but it's being removed from regions where it isn't already being used. The feature will no longer be supported after August 31, 2024.
-When you add a monitoring solution to your subscription, it's automatically deployed by default to all Windows and Linux agents connected to your Log Analytics workspace. You may want to manage your costs and limit the amount of data collected for a solution by limiting it to a particular set of agents. This article describes how to use **Solution Targeting** which is a feature that allows you to apply a scope to your solutions.
+When you add a monitoring solution to your subscription, it's automatically deployed by default to all Windows and Linux agents connected to your Log Analytics workspace. You might want to manage your costs and limit the amount of data collected for a solution by limiting it to a particular set of agents. This article describes how to use *solution targeting*, which is a feature that allows you to apply a scope to your solutions.
+## Target a solution
+There are three steps to targeting a solution, as described in the following sections.
-## How to target a solution
-There are three steps to targeting a solution as described in the following sections.
+### Create a computer group
+You specify the computers that you want to include in a scope by creating a [computer group](../logs/computer-groups.md) in Azure Monitor. The computer group can be based on a log query or imported from other sources, such as Active Directory or Windows Server Update Services groups. As described in the section [Solutions and agents that can't be targeted](#solutions-and-agents-that-cant-be-targeted), only computers that are directly connected to Azure Monitor are included in the scope.
+After you have the computer group created in your workspace, you'll include it in a scope configuration that can be applied to one or more solutions.
-### 1. Create a computer group
-You specify the computers that you want to include in a scope by creating a [computer group](../logs/computer-groups.md) in Azure Monitor. The computer group can be based on a log query or imported from other sources such as Active Directory or WSUS groups. As [described below](#solutions-and-agents-that-cant-be-targeted), only computers that are directly connected to Azure Monitor will be included in the scope.
+### Create a scope configuration
+ A *scope configuration* includes one or more computer groups and can be applied to one or more solutions.
-Once you have the computer group created in your workspace, then you'll include it in a scope configuration that can be applied to one or more solutions.
-
-
-### 2. Create a scope configuration
- A **Scope Configuration** includes one or more computer groups and can be applied to one or more solutions.
-
- Create a scope configuration using the following process.
+ To create a scope configuration:
- 1. In the Azure portal, navigate to **Log Analytics workspaces** and select your workspace.
- 2. In the properties for the workspace under **Workspace Data Sources** select **Scope Configurations**.
- 3. Click **Add** to create a new scope configuration.
- 4. Type a **Name** for the scope configuration.
- 5. Click **Select Computer Groups**.
- 6. Select the computer group that you created and optionally any other groups to add to the configuration. Click **Select**.
- 6. Click **OK** to create the scope configuration.
+ 1. In the Azure portal, go to **Log Analytics workspaces** and select your workspace.
+ 1. In the properties for the workspace under **Workspace Data Sources**, select **Scope Configurations**.
+ 1. Select **Add** to create a new scope configuration.
+ 1. Enter a name for the scope configuration.
+ 1. Select **Select Computer Groups**.
+ 1. Select the computer group that you created and optionally any other groups to add to the configuration. Then select **Select**.
+ 1. Select **OK** to create the scope configuration.
+### Apply the scope configuration to a solution
+After you have a scope configuration, you can apply it to one or more solutions. Although a single scope configuration can be used with multiple solutions, each solution can only use one scope configuration.
-### 3. Apply the scope configuration to a solution.
-Once you have a scope configuration, then you can apply it to one or more solutions. Note that while a single scope configuration can be used with multiple solutions, each solution can only use one scope configuration.
+To apply a scope configuration:
-Apply a scope configuration using the following process.
-
- 1. In the Azure portal, navigate to **Log Analytics workspaces** and select your workspace.
- 2. In the properties for the workspace select **Solutions**.
- 3. Click on the solution you want to scope.
- 4. In the properties for the solution under **Workspace Data Sources** select **Solution Targeting**. If the option is not available then [this solution cannot be targeted](#solutions-and-agents-that-cant-be-targeted).
- 5. Click **Add scope configuration**. If you already have a configuration applied to this solution then this option will be unavailable. You must remove the existing configuration before adding another one.
- 6. Click on the scope configuration that you created.
- 7. Watch the **Status** of the configuration to ensure that it shows **Succeeded**. If the status indicates an error, then click the ellipse to the right of the configuration and select **Edit scope configuration** to make changes.
+ 1. In the Azure portal, go to **Log Analytics workspaces** and select your workspace.
+ 1. In the properties for the workspace, select **Solutions**.
+ 1. Select the solution you want to scope.
+ 1. In the properties for the solution under **Workspace Data Sources**, select **Solution Targeting**. If the option isn't available, [this solution can't be targeted](#solutions-and-agents-that-cant-be-targeted).
+ 1. Select **Add scope configuration**. If you already have a configuration applied to this solution, this option is unavailable. You must remove the existing configuration before you add another one.
+ 1. Select the scope configuration that you created.
+ 1. Watch the **Status** of the configuration to ensure that it shows **Succeeded**. If the status indicates an error, select the ellipsis (**...**) to the right of the configuration and select **Edit scope configuration** to make changes.
## Solutions and agents that can't be targeted
-Following are the criteria for agents and solutions that can't be used with solution targeting.
+The following criteria are for agents and solutions that can't be used with solution targeting:
- Solution targeting only applies to solutions that deploy to agents.-- Solution targeting only applies to solutions provided by Microsoft. It does not apply to solutions [created by yourself or partners](./solutions.md).-- You can only filter out agents that connect directly to Azure Monitor. Solutions will automatically deploy to any agents that are part of a connected Operations Manager management group whether or not they're included in a scope configuration.
+- Solution targeting only applies to solutions provided by Microsoft. It doesn't apply to solutions [created by yourself or partners](./solutions.md).
+- You can only filter out agents that connect directly to Azure Monitor. Solutions automatically deploy to any agents that are part of a connected Operations Manager management group whether or not they're included in a scope configuration.
### Exceptions
-Solution targeting cannot be used with the following solutions even though they fit the stated criteria.
+Solution targeting can't be used with the following solution even though it fits the stated criteria:
- Agent Health Assessment ## Next steps-- Learn more about monitoring solutions including the solutions that are available to install in your environment at [Add Azure Log Analytics monitoring solutions to your workspace](solutions.md).-- Learn more about creating computer groups at [Computer groups in Azure Monitor log queries](../logs/computer-groups.md).
+- Learn more about monitoring solutions, including the solutions that are available to install in your environment, in [Add Azure Log Analytics monitoring solutions to your workspace](solutions.md).
+- Learn more about creating computer groups in [Computer groups in Azure Monitor log queries](../logs/computer-groups.md).
azure-monitor Data Ingestion Time https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/data-ingestion-time.md
Ingestion time might vary for different resources under different circumstances.
| Step | Property or function | Comments | |:|:|:|
-| Record created at data source | [TimeGenerated](./log-standard-columns.md#timegenerated) <br>If the data source doesn't set this value, it will be set to the same time as _TimeReceived. | If at processing time the Time Generated value is older than 3 days, the row will be dropped. |
+| Record created at data source | [TimeGenerated](./log-standard-columns.md#timegenerated) <br>If the data source doesn't set this value, it will be set to the same time as _TimeReceived. | If at processing time the Time Generated value is older than two days, the row will be dropped. |
| Record received by the data collection endpoint | [_TimeReceived](./log-standard-columns.md#_timereceived) | This field isn't optimized for mass processing and shouldn't be used to filter large datasets. | | Record stored in workspace and available for queries | [ingestion_time()](/azure/kusto/query/ingestiontimefunction) | We recommend using `ingestion_time()` if there's a need to filter only records that were ingested in a certain time window. In such cases, we recommend also adding a `TimeGenerated` filter with a larger range. |
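As a minimal sketch of the recommendation in the last row, the following query filters on `ingestion_time()` for a narrow ingestion window and pairs it with a wider `TimeGenerated` filter. The `Heartbeat` table and both window sizes are illustrative choices, not requirements.

```kusto
// Minimal sketch: records ingested in the last hour, with a wider
// TimeGenerated filter so the query stays efficient.
Heartbeat
| where TimeGenerated > ago(1d)
| where ingestion_time() > ago(1h)
| summarize IngestedRecords = count() by Computer
```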
azure-monitor Monitor Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/monitor-reference.md
+
+ Title: What is monitored by Azure Monitor
+description: Reference of all services and other resources monitored by Azure Monitor.
++++ Last updated : 09/08/2022+++
+# What is monitored by Azure Monitor?
+
+This article is a reference of the different applications and services that are monitored by Azure Monitor.
+
+Azure Monitor data is collected and stored based on resource provider namespaces. Each resource in Azure has a unique ID. The resource provider namespace is part of all unique IDs. For example, a key vault resource ID would be similar to `/subscriptions/d03b04c7-d1d4-eeee-aaaa-87b6fcb38b38/resourceGroups/KeyVaults/providers/Microsoft.KeyVault/vaults/mysafekeys`. *Microsoft.KeyVault* is the resource provider namespace. *Microsoft.KeyVault/vaults/* is the resource provider.
+
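As an illustration only, the following Kusto sketch pulls the namespace and resource provider out of a resource ID by position. The ID used here is a placeholder, and splitting by segment index is just one way to read the structure described above.

```kusto
// Minimal sketch: the resource provider namespace is the segment that
// follows "providers" in a resource ID.
print ResourceId = "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/KeyVaults/providers/Microsoft.KeyVault/vaults/mysafekeys"
| extend Namespace = tostring(split(ResourceId, "/")[6])
| extend ResourceProvider = strcat(Namespace, "/", tostring(split(ResourceId, "/")[7]))
```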
+For a list of Azure resource provider namespaces, see [Resource providers for Azure services](../azure-resource-manager/management/azure-services-resource-providers.md).
+
+For a list of resource providers that support Azure Monitor:
+
+- **Metrics** - See [Supported metrics in Azure Monitor](essentials/metrics-supported.md).
+- **Metric alerts** - See [Supported resources for metric alerts in Azure Monitor](alerts/alerts-metric-near-real-time.md).
+- **Prometheus metrics** - See [Prometheus metrics overview](essentials/prometheus-metrics-overview.md#enable).
+- **Resource logs** - See [Supported categories for Azure Monitor resource logs](essentials/resource-logs-categories.md).
+- **Activity log** - All entries in the activity log are available for query, alerting, and routing to the Azure Monitor Logs store, regardless of resource provider.
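As a sketch only, the following Kusto query summarizes recent activity log entries after they've been routed to a Log Analytics workspace. The `AzureActivity` table and its columns are the standard ones for that routing; the one-day window is illustrative.

```kusto
// Minimal sketch: recent activity log operations by caller, assuming the
// activity log is routed to the AzureActivity table.
AzureActivity
| where TimeGenerated > ago(1d)
| summarize Operations = count() by OperationNameValue, Caller
| order by Operations desc
```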
+
+## Services that require agents
+
+Azure Monitor can't see inside a service that runs its own application, operating system, or container. That type of service requires one or more agents to be installed. The agents collect metrics, logs, traces, and changes and forward them to Azure Monitor. The following services require agents for this reason.
+
+- [Azure Cloud Services](../cloud-services-extended-support/index.yml)
+- [Azure Virtual Machines](../virtual-machines/index.yml)
+- [Azure Virtual Machine Scale Sets](../virtual-machine-scale-sets/index.yml)
+- [Azure Service Fabric](../service-fabric/index.yml)
+
+In addition, applications also require either the Application Insights SDK or auto-instrumentation (via an agent) to collect information and write it to the Azure Monitor data platform.
+
+## Services with Insights
+
+Some services have curated monitoring experiences called "insights". Insights are meant to be a starting point for monitoring a service or set of services. Some insights might also automatically pull additional data that's not captured or stored in Azure Monitor. For more information on monitoring insights, see [Insights Overview](insights/insights-overview.md).
+
+## Product integrations
+
+The services and [older monitoring solutions](insights/solutions.md) in the following table store their data in Azure Monitor Logs so that it can be analyzed with other log data collected by Azure Monitor.
+
+| Product/Service | Description |
+|:|:|
+| [Azure Automation](../automation/index.yml) | Manage operating system updates and track changes on Windows and Linux computers. See [Change tracking](../automation/change-tracking/overview.md) and [Update management](../automation/update-management/overview.md). |
+| [Azure Information Protection](/azure/information-protection/) | Classify and optionally protect documents and emails. See [Central reporting for Azure Information Protection](/azure/information-protection/reports-aip#configure-a-log-analytics-workspace-for-the-reports). |
+| [Defender for the Cloud](../defender-for-cloud/defender-for-cloud-introduction.md) | Collect and analyze security events and perform threat analysis. See [Data collection in Defender for the Cloud](../defender-for-cloud/monitoring-components.md). |
+| [Microsoft Sentinel](../sentinel/index.yml) | Connect to different sources including Office 365 and Amazon Web Services Cloud Trail. See [Connect data sources](../sentinel/connect-data-sources.md). |
+| [Microsoft Intune](/intune/) | Create a diagnostic setting to send logs to Azure Monitor. See [Send log data to storage, Event Hubs, or log analytics in Intune (preview)](/intune/fundamentals/review-logs-using-azure-monitor). |
+| Network [Traffic Analytics](../network-watcher/traffic-analytics.md) | Analyze Network Watcher network security group flow logs to provide insights into traffic flow in your Azure cloud. |
+| [System Center Operations Manager](/system-center/scom) | Collect data from Operations Manager agents by connecting their management group to Azure Monitor. See [Connect Operations Manager to Azure Monitor](agents/om-agents.md).<br> Assess the risk and health of your System Center Operations Manager management group with the [Operations Manager Assessment](insights/scom-assessment.md) solution. |
+| [Microsoft Teams Rooms](/microsoftteams/room-systems/azure-monitor-deploy) | Integrated, end-to-end management of Microsoft Teams Rooms devices. |
+| [Visual Studio App Center](/appcenter/) | Build, test, and distribute applications and then monitor their status and usage. See [Start analyzing your mobile app with App Center and Application Insights](https://github.com/Microsoft/appcenter). |
+| Windows | [Windows Update Compliance](/windows/deployment/update/update-compliance-get-started) - Assess your Windows desktop upgrades.<br>[Desktop Analytics](/configmgr/desktop-analytics/overview) - Integrates with Configuration Manager to provide insight and intelligence to make more informed decisions about the update readiness of your Windows clients. |
+| **The following solutions also integrate with parts of Azure Monitor. Note that solutions, which are based on Azure Monitor Logs and Log Analytics, are no longer under active development. Use [Insights](insights/insights-overview.md) instead.** | |
+| Network - [Network Performance Monitor solution](insights/network-performance-monitor.md) | |
+| Network - [Azure Application Gateway solution](insights/azure-networking-analytics.md#application-gateway-analytics) | |
+| [Office 365 solution](insights/solution-office-365.md) | Monitor your Office 365 environment. Updated version with improved onboarding available through Microsoft Sentinel. |
+| [SQL Analytics solution](insights/azure-sql.md) | Use SQL Insights instead. |
+| [Surface Hub solution](insights/surface-hubs.md) | |
+
+## Third-party integration
+
+| Integration | Description |
+|:|:|
+| [ITSM](alerts/itsmc-overview.md) | The IT Service Management (ITSM) Connector allows you to connect Azure and a supported ITSM product/service. |
+| [Azure Monitor Partners](./partners.md) | A list of partners that integrate with Azure Monitor in some form. |
+| [Azure Monitor Partner integrations](../partner-solutions/overview.md)| Specialized integrations between Azure Monitor and other non-Microsoft monitoring platforms if you've already built on them. Examples include Datadog and Elastic.|
+
+## Resources outside of Azure
+
+Azure Monitor can collect data from resources outside of Azure by using the methods listed in the following table.
+
+| Resource | Method |
+|:|:|
+| Applications | Monitor web applications outside of Azure by using Application Insights. See [What is Application Insights?](./app/app-insights-overview.md). |
+| Virtual machines | Use agents to collect data from the guest operating system of virtual machines in other cloud environments or on-premises. See [Overview of Azure Monitor agents](agents/agents-overview.md). |
+| REST API Client | Separate APIs are available to write data to Azure Monitor Logs and Metrics from any REST API client. See [Send log data to Azure Monitor with the HTTP Data Collector API](logs/data-collector-api.md) for Logs. See [Send custom metrics for an Azure resource to the Azure Monitor metric store by using a REST API](essentials/metrics-store-custom-rest-api.md) for Metrics. |
+
+## Next steps
+
+- Read more about the [Azure Monitor data platform that stores the logs and metrics collected by insights and solutions](data-platform.md).
+- Complete a [tutorial on monitoring an Azure resource](essentials/tutorial-resource-logs.md).
+- Complete a [tutorial on writing a log query to analyze data in Azure Monitor Logs](essentials/tutorial-resource-logs.md).
+- Complete a [tutorial on creating a metrics chart to analyze data in Azure Monitor Metrics](essentials/tutorial-metrics.md).
azure-monitor Monitor Virtual Machine Analyze https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/monitor-virtual-machine-analyze.md
For instructions on how to create your own custom workbooks, see [Create interac
:::image type="content" source="media/monitor-virtual-machines/workbook-example.png" alt-text="Screenshot that shows virtual machine workbooks." lightbox="media/monitor-virtual-machines/workbook-example.png":::
-## VM availability information in Azure Resource Graph
-[Azure Resource Graph](../../governance/resource-graph/overview.md) is an Azure service that allows you to use the same KQL query language used in log queries to query your Azure resources at scale with complex filtering, grouping, and sorting by resource properties. You can use [VM health annotations](../../service-health/resource-health-vm-annotation.md) to Azure Resource Graph (ARG) for detailed failure attribution and downtime analysis including the following:
--- Query the latest snapshot of VM availability together across all your Azure subscriptions. -- Assess the impact to business SLAs and trigger decisive mitigation actions, in response to disruptions and type of failure signature.-- Set up custom dashboards to supervise the comprehensive health of applications by [joining](../../governance/resource-graph/concepts/work-with-data.md) VM availability information with additional [resource metadata](../../governance/resource-graph/samples/samples-by-table.md?tabs=azure-cli) in Resource Graph.-- Track relevant changes in VM availability across a rolling 14 days window, by using the [change tracking](../../governance/resource-graph/how-to/get-resource-changes.md) mechanism for conducting detailed investigations.-
-To get started with Resource Graph, open **Resource Graph Explorer** in the Azure portal. Select the **Table** tab and have a look at the [microsoft.resourcehealth/availabilitystatuses](#microsoftresourcehealthavailabilitystatuses) and [microsoft.resourcehealth/resourceannotations](#microsoftresourcehealthresourceannotations) tables which are described below. Click on **healthresources** to create a simple query and then click **Run** to return the records.
--
-To view the details for a record, scroll to the right and select **See details**.
--
-There will be two types of events populated in the HealthResources table:
-
-### microsoft.resourcehealth/availabilitystatuses
-This event denotes the latest availability status of a VM, based on the [health checks](../../service-health/resource-health-checks-resource-types.md#microsoftcomputevirtualmachines) performed by the underlying Azure platform. The [availability states](../../service-health/resource-health-overview.md#health-status) currently emitted for VMs are as follows:
--- **Available**: The VM is up and running as expected.-- **Unavailable**: A disruption to the normal functioning of the VM has been detected.-- **Unknown**: The platform is unable to accurately detect the health of the VM. Check back in a few minutes.-
-The availability state is in the `properties` field of the record which includes the following properties:
-
-| Field | Description |
-|:|:|
-| targetResourceType | Type of resource for which health data is flowing |
-| targetResourceId | Resource ID |
-| occurredTime | Timestamp when the latest availability state is emitted by the platform |
-| previousAvailabilityState | Previous availability state of the VM |
-| availabilityState | Current availability state of the VM |
-
-A sample `properties` value looks similar to the following:
-
-```json
-{
- "targetResourceType": "Microsoft.Compute/virtualMachines",
- "previousAvailabilityState": "Available",
-"targetResourceId": "/subscriptions/<subscriptionId>/resourceGroups/<ResourceGroupName>/providers/Microsoft.Compute/virtualMachines/<VMName>",
- "occurredTime": "2022-10-11T11:13:59.9570000Z",
- "availabilityState": "Unavailable"
-}
-
-```
-
-### microsoft.resourcehealth/resourceannotations
-This event contextualizes any changes to VM availability, by detailing necessary failure attributes to help you investigate and mitigate the disruption as needed. The full list of VM health annotations are listed at [Resource Health virtual machine Health Annotations] (../../service-health/resource-health-vm-annotation.md).
-
-These annotations can be broadly classified into the following:
--- **Downtime Annotations**: Emitted when the platform detects VM availability transitioning to Unavailable. Examples include host crashes or reboot operations.-- **Informational Annotations**: Emitted during control plane activities with no impact to VM availability. Examples include VM allocation, stop, delete, start. Usually, no additional customer action is required in response.-- **Degraded Annotations**: Emitted when VM availability is detected to be at risk. Examples include when failure prediction models predict a degraded hardware component that can cause the VM to reboot at any given time. You should redeploy by the deadline specified in the annotation message to avoid any unanticipated loss of data or downtime.-
-| Field | Description |
-|:|:|
-| targetResourceType | Type of resource for which health data is flowing |
-| targetResourceId | Resource ID |
-| occurredTime | Timestamp when the latest availability state is emitted by the platform |
-| annotationName | Name of the Annotation emitted |
-| reason | Brief overview of the availability impact observed by the customer |
-| category | Denotes whether the platform activity triggering the annotation was either planned maintenance or unplanned repair. This field is not applicable to customer/VM-initiated events.<br><br>Possible values: Planned \| Unplanned \| Not Applicable \| Null |
-| context | Denotes whether the activity triggering the annotation was due to an authorized user or process (customer initiated), or due to the Azure platform (platform initiated) or even activity in the guest OS that has resulted in availability impact (VM initiated).<br><br>Possible values: Platform-Initiated \| User-initiated \|VM-initiated \| Not Applicable \| Null |
-| summary | Statement detailing the cause for annotation emission, along with remediation steps that can be taken by users |
-
-See [Azure Resource Graph sample queries by table](../../governance/resource-graph/samples/samples-by-table.md?tabs=azure-cli#healthresources) for sample queries using this data.
- ## Next steps * [Create alerts from collected data](monitor-virtual-machine-alerts.md)
azure-relay Ip Firewall Virtual Networks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-relay/ip-firewall-virtual-networks.md
Title: Configure IP firewall for Azure Relay namespace description: This article describes how to Use firewall rules to allow connections from specific IP addresses to Azure Relay namespaces. Previously updated : 06/21/2022 Last updated : 02/15/2023 # Configure IP firewall for an Azure Relay namespace
The IP firewall rules are applied at the namespace level. Therefore, the rules a
This section shows you how to use the Azure portal to create IP firewall rules for a namespace. 1. Navigate to your **Relay namespace** in the [Azure portal](https://portal.azure.com).
-2. On the left menu, select **Networking** option. If you select the **All networks** option in the **Allow access from** section, the Relay namespace accepts connections from any IP address. This setting is equivalent to a rule that accepts the 0.0.0.0/0 IP address range.
-
- ![Screenshot shows the Networking page with the All networks option selected.](./media/ip-firewall/all-networks-selected.png)
+2. On the left menu, select **Networking**.
1. To restrict access to specific networks and IP addresses, select the **Selected networks** option. In the **Firewall** section, follow these steps: 1. Select **Add your client IP address** option to give your current client IP the access to the namespace. 2. For **address range**, enter a specific IPv4 address or a range of IPv4 address in CIDR notation. -
- ![Firewall - All networks option selected](./media/ip-firewall/selected-networks-trusted-access-disabled.png)
-3. Select **Save** on the toolbar to save the settings. Wait for a few minutes for the confirmation to show up on the portal notifications.
+ 3. If you want to allow Microsoft services trusted by the Azure Relay service to bypass this firewall, select **Yes** for **Allow trusted Microsoft services to bypass this firewall?**.
+
+ :::image type="content" source="./media/ip-firewall/selected-networks-trusted-access-disabled.png" alt-text="Screenshot showing the Public access tab of the Networking page with the Firewall enabled.":::
+1. Select **Save** on the toolbar to save the settings. Wait for a few minutes for the confirmation to show up on the portal notifications.
### Use Resource Manager template
The template takes one parameter: **ipMask**, which is a single IPv4 address or
```json {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0", "parameters": {
- "relayNamespaceName": {
- "type": "string",
- "metadata": {
- "description": "Name of the Relay namespace"
+ "namespaces_name": {
+ "defaultValue": "contosorelay0215",
+ "type": "String"
}
- },
- "location": {
- "type": "string",
- "metadata": {
- "description": "Location for Namespace"
- }
- }
- },
- "variables": {
- "namespaceNetworkRuleSetName": "[concat(parameters('relayNamespaceName'), concat('/', 'default'))]"
},
+ "variables": {},
"resources": [
- {
- "apiVersion": "2018-01-01-preview",
- "name": "[parameters('relayNamespaceName')]",
- "type": "Microsoft.Relay/namespaces",
- "location": "[parameters('location')]",
- "sku": {
- "name": "Standard",
- "tier": "Standard"
- },
- "properties": { }
- },
- {
- "apiVersion": "2018-01-01-preview",
- "name": "[variables('namespaceNetworkRuleSetName')]",
- "type": "Microsoft.Relay/namespaces/networkrulesets",
- "dependsOn": [
- "[concat('Microsoft.Relay/namespaces/', parameters('relayNamespaceName'))]"
- ],
- "properties": {
- "ipRules":
- [
- {
- "ipMask":"10.1.1.1",
- "action":"Allow"
+ {
+ "type": "Microsoft.Relay/namespaces",
+ "apiVersion": "2021-11-01",
+ "name": "[parameters('namespaces_name')]",
+ "location": "East US",
+ "sku": {
+ "name": "Standard",
+ "tier": "Standard"
},
- {
- "ipMask":"11.0.0.0/24",
- "action":"Allow"
+ "properties": {}
+ },
+ {
+ "type": "Microsoft.Relay/namespaces/authorizationrules",
+ "apiVersion": "2021-11-01",
+ "name": "[concat(parameters('namespaces_sprelayns0215_name'), '/RootManageSharedAccessKey')]",
+ "location": "eastus",
+ "dependsOn": [
+ "[resourceId('Microsoft.Relay/namespaces', parameters('namespaces_sprelayns0215_name'))]"
+ ],
+ "properties": {
+ "rights": [
+ "Listen",
+ "Manage",
+ "Send"
+ ]
+ }
+ },
+ {
+ "type": "Microsoft.Relay/namespaces/networkRuleSets",
+ "apiVersion": "2021-11-01",
+ "name": "[concat(parameters('namespaces_sprelayns0215_name'), '/default')]",
+ "location": "East US",
+ "dependsOn": [
+ "[resourceId('Microsoft.Relay/namespaces', parameters('namespaces_sprelayns0215_name'))]"
+ ],
+ "properties": {
+ "publicNetworkAccess": "Enabled",
+ "defaultAction": "Deny",
+ "ipRules": [
+ {
+ "ipMask": "172.72.157.204",
+ "action": "Allow"
+ },
+ {
+ "ipMask": "10.1.1.1",
+ "action": "Allow"
+ },
+ {
+ "ipMask": "11.0.0.0/24",
+ "action": "Allow"
+ }
+ ]
}
- ],
- "virtualNetworkRules": [],
- "trustedServiceAccessEnabled": false,
- "defaultAction": "Deny"
}
- }
- ],
- "outputs": { }
- }
+ ]
+}
``` To deploy the template, follow the instructions for [Azure Resource Manager](../azure-resource-manager/templates/deploy-powershell.md).
azure-relay Private Link Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-relay/private-link-service.md
Title: Integrate Azure Relay with Azure Private Link Service description: Learn how to integrate Azure Relay with Azure Private Link Service Previously updated : 06/21/2022 Last updated : 02/15/2023
A **private endpoint** is a network interface that allows your workloads running
## Add a private endpoint using Azure portal ### Prerequisites
-To integrate an Azure Relay namespace with Azure Private Link, you'll need the following entities or permissions:
+To integrate an Azure Relay namespace with Azure Private Link, you need the following entities or permissions:
- An Azure Relay namespace. - An Azure virtual network.
For step-by-step instructions on creating a new Azure Relay namespace and entiti
1. Sign in to the [Azure portal](https://portal.azure.com). 2. In the search bar, type in **Relays**. 3. Select the **namespace** from the list to which you want to add a private endpoint.
-4. Select the **Networking** tab under **Settings**.
+4. On the left menu, select the **Networking** tab under **Settings**.
5. Select the **Private endpoint connections** tab at the top of the page 6. Select the **+ Private Endpoint** button at the top of the page.
- ![Add private endpoint button](./media/private-link-service/add-private-endpoint-button.png)
+ :::image type="content" source="./media/private-link-service/add-private-endpoint-button.png" alt-text="Screenshot showing the selection of the Add private endpoint button on the Private endpoint connections tab of the Networking page.":::
7. On the **Basics** page, follow these steps: 1. Select the **Azure subscription** in which you want to create the private endpoint. 2. Select the **resource group** for the private endpoint resource.
- 3. Enter a **name** for the private endpoint.
- 5. Select a **region** for the private endpoint. Your private endpoint must be in the same region as your virtual network, but can be in a different region from the Azure Relay namespace that you're connecting to.
- 6. Select **Next: Resource >** button at the bottom of the page.
-
- ![Create Private Endpoint - Basics page](./media/private-link-service/create-private-endpoint-basics-page.png)
-8. On the **Resource** page, follow these steps:
- 1. For connection method, if you select **Connect to an Azure resource in my directory**, you've owner or contributor access to the namespace and that namespace is in the same directory as the private endpoint, follow these steps:
- 1. Select the **Azure subscription** in which your **Azure Relay namespace** exists.
- 2. For **Resource type**, Select **Microsoft.Relay/namespaces** for the **Resource type**.
- 3. For **Resource**, select a Relay namespace from the drop-down list.
- 4. Confirm that the **Target subresource** is set to **namespace**.
- 5. Select **Next: Configuration >** button at the bottom of the page.
-
- ![Create Private Endpoint - Resource page](./media/private-link-service/create-private-endpoint-resource-page.png)
- 2. If you select **Connect to an Azure resource by resource ID or alias** because the namespace isn't under the same directory as that of the private endpoint, follow these steps:
- 1. Enter the **resource ID** or **alias**. It can be the resource ID or alias that someone has shared with you. The easiest way to get the resource ID is to navigate to the Azure Relay namespace in the Azure portal and copy the portion of URI starting from `/subscriptions/`. Here's an example: `/subscriptions/000000000-0000-0000-0000-000000000000000/resourceGroups/myresourcegroup/providers/Microsoft.Relay/namespaces/myrelaynamespace.`
- 2. For **Target sub-resource**, enter **namespace**. It's the type of the sub-resource that your private endpoint can access.
- 3. (optional) Enter a **request message**. The resource owner sees this message while managing private endpoint connection.
- 4. Then, select **Next: Configuration >** button at the bottom of the page.
-
- ![Create Private Endpoint - Connect using resource ID](./media/private-link-service/connect-resource-id.png)
-9. On the **Configuration** page, you select the subnet in a virtual network to where you want to deploy the private endpoint.
- 1. Select a **virtual network**. Only virtual networks in the currently selected subscription and location are listed in the drop-down list.
- 2. Select a **subnet** in the virtual network you selected.
- 3. Enable **Integrate with private DNS zone** if you want to integrate your private endpoint with a private DNS zone.
-
- To connect privately with your private endpoint, you need a DNS record. We recommend that you integrate your private endpoint with a **private DNS zone**. You can also utilize your own DNS servers or create DNS records using the host files on your virtual machines. For more information, see [Azure Private Endpoint DNS Configuration](../private-link/private-endpoint-dns.md). In this example, the **Integrate with private DNS zone** option is selected and a private DNS zone will be created for you.
- 3. Select **Next: Tags >** button at the bottom of the page.
-
- ![Create Private Endpoint - Configuration page](./media/private-link-service/create-private-endpoint-configuration-page.png)
+ 3. Enter a **name** for the **private endpoint**.
+ 1. Enter a **name** for the **network interface**.
+ 1. Select a **region** for the private endpoint. Your private endpoint must be in the same region as your virtual network, but can be in a different region from the Azure Relay namespace that you're connecting to.
+ 1. Select **Next: Resource >** button at the bottom of the page.
+
+ :::image type="content" source="./media/private-link-service/create-private-endpoint-basics-page.png" alt-text="Screenshot showing the Basics page of the Create a private endpoint wizard.":::
+8. Review settings on the **Resource** page, and select **Next: Virtual Network**.
+
+ :::image type="content" source="./media/private-link-service/create-private-endpoint-resource-page.png" alt-text="Screenshot showing the Resource page of the Create a private endpoint wizard.":::
+9. On the **Virtual Network** page, select the **virtual network** and the **subnet** where you want to deploy the private endpoint. Only virtual networks in the currently selected subscription and location are listed in the drop-down list.
+
+ :::image type="content" source="./media/private-link-service/create-private-endpoint-virtual-network-page.png" alt-text="Screenshot showing the Virtual Network page of the Create a private endpoint wizard.":::
+
+    You can configure whether you want to allocate the IP address for the private endpoint **dynamically** or **statically**.
+
+ You can also associate a new or existing **application security group** to the private endpoint.
+3. Select **Next: DNS** to navigate to the **DNS** page of the wizard. On the **DNS** page, the **Integrate with private DNS zone** setting is enabled by default (recommended). You have the option to disable it.
+
+ :::image type="content" source="./media/private-link-service/create-private-endpoint-dns-page.png" alt-text="Screenshot showing the DNS page of the Create a private endpoint wizard.":::
+
+ To connect privately with your private endpoint, you need a DNS record. We recommend that you integrate your private endpoint with a **private DNS zone**. You can also utilize your own DNS servers or create DNS records using the host files on your virtual machines. For more information, see [Azure Private Endpoint DNS Configuration](../private-link/private-endpoint-dns.md).
+3. Select **Next: Tags >** button at the bottom of the page.
10. On the **Tags** page, create any tags (names and values) that you want to associate with the private endpoint and the private DNS zone (if you had enabled the option). Then, select **Review + create** button at the bottom of the page. 11. On the **Review + create**, review all the settings, and select **Create** to create the private endpoint.
-
- ![Create Private Endpoint - Review and Create page](./media/private-link-service/create-private-endpoint-review-create-page.png)
12. On the **Private endpoint** page, you can see the status of the private endpoint connection. If you're the owner of the Relay namespace or have the manage access over it and had selected **Connect to an Azure resource in my directory** option for the **Connection method**, the endpoint connection should be **auto-approved**. If it's in the **pending** state, see the [Manage private endpoints using Azure portal](#manage-private-endpoints-using-azure-portal) section.
- ![Private endpoint page](./media/private-link-service/private-endpoint-page.png)
+ :::image type="content" source="./media/private-link-service/private-endpoint-page.png" alt-text="Screenshot showing the Private endpoint page in the Azure portal.":::
13. Navigate back to the **Networking** page of the **namespace**, and switch to the **Private endpoint connections** tab. You should see the private endpoint that you created.
- ![Private endpoint created](./media/private-link-service/private-endpoint-created.png)
+
+ :::image type="content" source="./media/private-link-service/private-endpoint-created.png" alt-text="Screenshot showing the Private endpoint connections tab of the Networking page with the private endpoint you just created.":::
## Add a private endpoint using PowerShell The following example shows you how to use Azure PowerShell to create a private endpoint connection to an Azure Relay namespace.
There are four provisioning states:
### Approve a private endpoint connection
-1. If there are any connections that are pending, you'll see a connection listed with **Pending** in the provisioning state.
+1. If there are any connections that are pending, you see a connection listed with **Pending** in the provisioning state.
2. Select the **private endpoint** you wish to approve 3. Select the **Approve** button.
- ![Approve private endpoint](./media/private-link-service/private-endpoint-approve.png)
+ :::image type="content" source="./media/private-link-service/private-endpoint-approve.png" alt-text="Screenshot showing the Approve button on the command bar for the selected private endpoint.":::
4. On the **Approve connection** page, enter an optional **comment**, and select **Yes**. If you select **No**, nothing happens.
- ![Approve connection page](./media/private-link-service/approve-connection-page.png)
+ :::image type="content" source="./media/private-link-service/approve-connection-page.png" alt-text="Screenshot showing the Approve connection page asking for your confirmation.":::
5. You should see the status of the connection in the list changed to **Approved**. ### Reject a private endpoint connection
-1. If there are any private endpoint connections you want to reject, whether it's a pending request or existing connection that was approved earlier, select the endpoint connection and click the **Reject** button.
+1. If there are any private endpoint connections you want to reject, whether it's a pending request or existing connection that was approved earlier, select the endpoint connection and select the **Reject** button.
- ![Reject button](./media/private-link-service/private-endpoint-reject.png)
+ :::image type="content" source="./media/private-link-service/private-endpoint-reject.png" alt-text="Screenshot showing the Reject button on the command bar for the selected private endpoint.":::
2. On the **Reject connection** page, enter an optional comment, and select **Yes**. If you select **No**, nothing happens.
- ![Reject connection page](./media/private-link-service/reject-connection-page.png)
+ :::image type="content" source="./media/private-link-service/reject-connection-page.png" alt-text="Screenshot showing the Reject connection page asking for your confirmation.":::
3. You should see the status of the connection in the list changed **Rejected**.
There are four provisioning states:
1. To remove a private endpoint connection, select it in the list, and select **Remove** on the toolbar.
- ![Remove button](./media/private-link-service/remove-endpoint.png)
+ :::image type="content" source="./media/private-link-service/remove-endpoint.png" alt-text="Screenshot showing the Remove button on the command bar for the selected private endpoint.":::
2. On the **Delete connection** page, select **Yes** to confirm the deletion of the private endpoint. If you select **No**, nothing happens.
- ![Delete connection page](./media/private-link-service/delete-connection-page.png)
+ :::image type="content" source="./media/private-link-service/delete-connection-page.png" alt-text="Screenshot showing the Delete connection page asking you for the confirmation.":::
3. You should see the status changed to **Disconnected**. Then, you won't see the endpoint in the list. ## Validate that the private link connection works
azure-resource-manager Deploy What If https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deploy-what-if.md
Title: Bicep deployment what-if
description: Determine what changes will happen to your resources before deploying a Bicep file. Previously updated : 07/11/2022 Last updated : 02/15/2023 + # Bicep deployment what-if operation Before deploying a Bicep file, you can preview the changes that will happen. Azure Resource Manager provides the what-if operation to let you see how resources will change if you deploy the Bicep file. The what-if operation doesn't make any changes to existing resources. Instead, it predicts the changes if the specified Bicep file is deployed.
If you would rather learn about the what-if operation through step-by-step guida
[!INCLUDE [permissions](../../../includes/template-deploy-permissions.md)]
+## What-if limits
+
+What-if expands nested templates until these limits are reached:
+
+- 500 nested templates.
+- 800 resource groups in a cross resource-group deployment.
+- 5 minutes taken for expanding the nested templates.
+
+When one of the limits is reached, the remaining resources' [change type](#change-types) is set to **Ignore**.
+ ## Install Azure PowerShell module To use what-if in PowerShell, you must have version **4.2 or later of the Az module**.
Resource changes: 1 to modify.
To preview changes before deploying a Bicep file, use [New-AzResourceGroupDeployment](/powershell/module/az.resources/new-azresourcegroupdeployment) or [New-AzSubscriptionDeployment](/powershell/module/az.resources/new-azdeployment). Add the `-Whatif` switch parameter to the deployment command.
-* `New-AzResourceGroupDeployment -Whatif` for resource group deployments
-* `New-AzSubscriptionDeployment -Whatif` and `New-AzDeployment -Whatif` for subscription level deployments
+- `New-AzResourceGroupDeployment -Whatif` for resource group deployments
+- `New-AzSubscriptionDeployment -Whatif` and `New-AzDeployment -Whatif` for subscription level deployments
You can use the `-Confirm` switch parameter to preview the changes and get prompted to continue with the deployment.
-* `New-AzResourceGroupDeployment -Confirm` for resource group deployments
-* `New-AzSubscriptionDeployment -Confirm` and `New-AzDeployment -Confirm` for subscription level deployments
+- `New-AzResourceGroupDeployment -Confirm` for resource group deployments
+- `New-AzSubscriptionDeployment -Confirm` and `New-AzDeployment -Confirm` for subscription level deployments
The preceding commands return a text summary that you can manually inspect. To get an object that you can programmatically inspect for changes, use [Get-AzResourceGroupDeploymentWhatIfResult](/powershell/module/az.resources/get-azresourcegroupdeploymentwhatifresult) or [Get-AzSubscriptionDeploymentWhatIfResult](/powershell/module/az.resources/get-azdeploymentwhatifresult).
-* `$results = Get-AzResourceGroupDeploymentWhatIfResult` for resource group deployments
-* `$results = Get-AzSubscriptionDeploymentWhatIfResult` or `$results = Get-AzDeploymentWhatIfResult` for subscription level deployments
+- `$results = Get-AzResourceGroupDeploymentWhatIfResult` for resource group deployments
+- `$results = Get-AzSubscriptionDeploymentWhatIfResult` or `$results = Get-AzDeploymentWhatIfResult` for subscription level deployments
### Azure CLI To preview changes before deploying a Bicep file, use:
-* [az deployment group what-if](/cli/azure/deployment/group#az-deployment-group-what-if) for resource group deployments
-* [az deployment sub what-if](/cli/azure/deployment/sub#az-deployment-sub-what-if) for subscription level deployments
-* [az deployment mg what-if](/cli/azure/deployment/mg#az-deployment-mg-what-if) for management group deployments
-* [az deployment tenant what-if](/cli/azure/deployment/tenant#az-deployment-tenant-what-if) for tenant deployments
+- [az deployment group what-if](/cli/azure/deployment/group#az-deployment-group-what-if) for resource group deployments
+- [az deployment sub what-if](/cli/azure/deployment/sub#az-deployment-sub-what-if) for subscription level deployments
+- [az deployment mg what-if](/cli/azure/deployment/mg#az-deployment-mg-what-if) for management group deployments
+- [az deployment tenant what-if](/cli/azure/deployment/tenant#az-deployment-tenant-what-if) for tenant deployments
You can use the `--confirm-with-what-if` switch (or its short form `-c`) to preview the changes and get prompted to continue with the deployment. Add this switch to:
-* [az deployment group create](/cli/azure/deployment/group#az-deployment-group-create)
-* [az deployment sub create](/cli/azure/deployment/sub#az-deployment-sub-create).
-* [az deployment mg create](/cli/azure/deployment/mg#az-deployment-mg-create)
-* [az deployment tenant create](/cli/azure/deployment/tenant#az-deployment-tenant-create)
+- [az deployment group create](/cli/azure/deployment/group#az-deployment-group-create)
+- [az deployment sub create](/cli/azure/deployment/sub#az-deployment-sub-create)
+- [az deployment mg create](/cli/azure/deployment/mg#az-deployment-mg-create)
+- [az deployment tenant create](/cli/azure/deployment/tenant#az-deployment-tenant-create)
For example, use `az deployment group create --confirm-with-what-if` or `-c` for resource group deployments.
If you want to return the results without colors, open your [Azure CLI configura
For REST API, use:
-* [Deployments - What If](/rest/api/resources/deployments/whatif) for resource group deployments
-* [Deployments - What If At Subscription Scope](/rest/api/resources/deployments/whatifatsubscriptionscope) for subscription deployments
-* [Deployments - What If At Management Group Scope](/rest/api/resources/deployments/whatifatmanagementgroupscope) for management group deployments
-* [Deployments - What If At Tenant Scope](/rest/api/resources/deployments/whatifattenantscope) for tenant deployments.
+- [Deployments - What If](/rest/api/resources/deployments/whatif) for resource group deployments
+- [Deployments - What If At Subscription Scope](/rest/api/resources/deployments/whatifatsubscriptionscope) for subscription deployments
+- [Deployments - What If At Management Group Scope](/rest/api/resources/deployments/whatifatmanagementgroupscope) for management group deployments
+- [Deployments - What If At Tenant Scope](/rest/api/resources/deployments/whatifattenantscope) for tenant deployments.
## Change types The what-if operation lists six different types of changes:
-* **Create**: The resource doesn't currently exist but is defined in the Bicep file. The resource will be created.
-* **Delete**: This change type only applies when using [complete mode](../templates/deployment-modes.md) for JSON template deployment. The resource exists, but isn't defined in the Bicep file. With complete mode, the resource will be deleted. Only resources that [support complete mode deletion](../templates/deployment-complete-mode-deletion.md) are included in this change type.
-* **Ignore**: The resource exists, but isn't defined in the Bicep file. The resource won't be deployed or modified.
-* **NoChange**: The resource exists, and is defined in the Bicep file. The resource will be redeployed, but the properties of the resource won't change. This change type is returned when [ResultFormat](#result-format) is set to `FullResourcePayloads`, which is the default value.
-* **Modify**: The resource exists, and is defined in the Bicep file. The resource will be redeployed, and the properties of the resource will change. This change type is returned when [ResultFormat](#result-format) is set to `FullResourcePayloads`, which is the default value.
-* **Deploy**: The resource exists, and is defined in the Bicep file. The resource will be redeployed. The properties of the resource may or may not change. The operation returns this change type when it doesn't have enough information to determine if any properties will change. You only see this condition when [ResultFormat](#result-format) is set to `ResourceIdOnly`.
+- **Create**: The resource doesn't currently exist but is defined in the Bicep file. The resource will be created.
+- **Delete**: This change type only applies when using [complete mode](../templates/deployment-modes.md) for JSON template deployment. The resource exists, but isn't defined in the Bicep file. With complete mode, the resource will be deleted. Only resources that [support complete mode deletion](../templates/deployment-complete-mode-deletion.md) are included in this change type.
+- **Ignore**: The resource exists, but isn't defined in the Bicep file. The resource won't be deployed or modified. When you reach the limits for expanding nested templates, you will encounter this change type. See [What-if limits](#what-if-limits).
+- **NoChange**: The resource exists, and is defined in the Bicep file. The resource will be redeployed, but the properties of the resource won't change. This change type is returned when [ResultFormat](#result-format) is set to `FullResourcePayloads`, which is the default value.
+- **Modify**: The resource exists, and is defined in the Bicep file. The resource will be redeployed, and the properties of the resource will change. This change type is returned when [ResultFormat](#result-format) is set to `FullResourcePayloads`, which is the default value.
+- **Deploy**: The resource exists, and is defined in the Bicep file. The resource will be redeployed. The properties of the resource may or may not change. The operation returns this change type when it doesn't have enough information to determine if any properties will change. You only see this condition when [ResultFormat](#result-format) is set to `ResourceIdOnly`.
## Result format You control the level of detail that is returned about the predicted changes. You have two options:
-* **FullResourcePayloads** - returns a list of resources that will change and details about the properties that will change
-* **ResourceIdOnly** - returns a list of resources that will change
+- **FullResourcePayloads** - returns a list of resources that will change and details about the properties that will change
+- **ResourceIdOnly** - returns a list of resources that will change
The default value is **FullResourcePayloads**.
For Azure CLI, use the `--result-format` parameter.
The following results show the two different output formats:
-* Full resource payloads
+- Full resource payloads
```powershell Resource and property changes are indicated with these symbols:
The following results show the two different output formats:
Resource changes: 1 to modify. ```
-* Resource ID only
+- Resource ID only
```powershell Resource and property changes are indicated with this symbol:
Remove-AzResourceGroup -Name ExampleGroup
You can use the what-if operation through the Azure SDKs.
-* For Python, use [what-if](/python/api/azure-mgmt-resource/azure.mgmt.resource.resources.v2019_10_01.operations.deploymentsoperations#what-if-resource-group-name--deployment-name--properties--location-none--custom-headers-none--raw-false--polling-true-operation-config-).
+- For Python, use [what-if](/python/api/azure-mgmt-resource/azure.mgmt.resource.resources.v2019_10_01.operations.deploymentsoperations#what-if-resource-group-name--deployment-name--properties--location-none--custom-headers-none--raw-false--polling-true-operation-config-).
-* For Java, use [DeploymentWhatIf Class](/java/api/com.azure.resourcemanager.resources.models.deploymentwhatif).
+- For Java, use [DeploymentWhatIf Class](/java/api/com.azure.resourcemanager.resources.models.deploymentwhatif).
-* For .NET, use [DeploymentWhatIf Class](/dotnet/api/microsoft.azure.management.resourcemanager.models.deploymentwhatif).
+- For .NET, use [DeploymentWhatIf Class](/dotnet/api/microsoft.azure.management.resourcemanager.models.deploymentwhatif).
## Next steps
-* To use the what-if operation in a pipeline, see [Test ARM templates with What-If in a pipeline](https://4bes.nl/2021/03/06/test-arm-templates-with-what-if/).
-* If you notice incorrect results from the what-if operation, please report the issues at [https://aka.ms/whatifissues](https://aka.ms/whatifissues).
-* For a Learn module that demonstrates using what-if, see [Preview changes and validate Azure resources by using what-if and the ARM template test toolkit](/training/modules/arm-template-test/).
+- To use the what-if operation in a pipeline, see [Test ARM templates with What-If in a pipeline](https://4bes.nl/2021/03/06/test-arm-templates-with-what-if/).
+- If you notice incorrect results from the what-if operation, please report the issues at [https://aka.ms/whatifissues](https://aka.ms/whatifissues).
+- For a Learn module that demonstrates using what-if, see [Preview changes and validate Azure resources by using what-if and the ARM template test toolkit](/training/modules/arm-template-test/).
azure-resource-manager Deploy What If https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deploy-what-if.md
Title: Template deployment what-if description: Determine what changes will happen to your resources before deploying an Azure Resource Manager template.- Previously updated : 07/11/2022- Last updated : 02/15/2023 ms.devlang: azurecli + # ARM template deployment what-if operation Before deploying an Azure Resource Manager template (ARM template), you can preview the changes that will happen. Azure Resource Manager provides the what-if operation to let you see how resources will change if you deploy the template. The what-if operation doesn't make any changes to existing resources. Instead, it predicts the changes if the specified template is deployed.
To learn more about what-if, and for hands-on guidance, see [Preview Azure deplo
[!INCLUDE [permissions](../../../includes/template-deploy-permissions.md)]
+## What-if limits
+
+What-if expands nested templates until these limits are reached:
+
+- 500 nested templates.
+- 800 resource groups in a cross resource-group deployment.
+- 5 minutes taken for expanding the nested templates.
+
+When one of the limits is reached, the remaining resources' [change type](#change-types) is set to **Ignore**.
+ ## Install Azure PowerShell module To use what-if in PowerShell, you must have version **4.2 or later of the Az module**.
Resource changes: 1 to modify.
To preview changes before deploying a template, use [New-AzResourceGroupDeployment](/powershell/module/az.resources/new-azresourcegroupdeployment) or [New-AzSubscriptionDeployment](/powershell/module/az.resources/new-azdeployment). Add the `-Whatif` switch parameter to the deployment command.
-* `New-AzResourceGroupDeployment -Whatif` for resource group deployments
-* `New-AzSubscriptionDeployment -Whatif` and `New-AzDeployment -Whatif` for subscription level deployments
+- `New-AzResourceGroupDeployment -Whatif` for resource group deployments
+
+- `New-AzSubscriptionDeployment -Whatif` and `New-AzDeployment -Whatif` for subscription level deployments
You can use the `-Confirm` switch parameter to preview the changes and get prompted to continue with the deployment.
-* `New-AzResourceGroupDeployment -Confirm` for resource group deployments
-* `New-AzSubscriptionDeployment -Confirm` and `New-AzDeployment -Confirm` for subscription level deployments
+- `New-AzResourceGroupDeployment -Confirm` for resource group deployments
+- `New-AzSubscriptionDeployment -Confirm` and `New-AzDeployment -Confirm` for subscription level deployments
The preceding commands return a text summary that you can manually inspect. To get an object that you can programmatically inspect for changes, use [Get-AzResourceGroupDeploymentWhatIfResult](/powershell/module/az.resources/get-azresourcegroupdeploymentwhatifresult) or [Get-AzSubscriptionDeploymentWhatIfResult](/powershell/module/az.resources/get-azdeploymentwhatifresult).
-* `$results = Get-AzResourceGroupDeploymentWhatIfResult` for resource group deployments
-* `$results = Get-AzSubscriptionDeploymentWhatIfResult` or `$results = Get-AzDeploymentWhatIfResult` for subscription level deployments
+- `$results = Get-AzResourceGroupDeploymentWhatIfResult` for resource group deployments
+- `$results = Get-AzSubscriptionDeploymentWhatIfResult` or `$results = Get-AzDeploymentWhatIfResult` for subscription level deployments
### Azure CLI To preview changes before deploying a template, use:
-* [az deployment group what-if](/cli/azure/deployment/group#az-deployment-group-what-if) for resource group deployments
-* [az deployment sub what-if](/cli/azure/deployment/sub#az-deployment-sub-what-if) for subscription level deployments
-* [az deployment mg what-if](/cli/azure/deployment/mg#az-deployment-mg-what-if) for management group deployments
-* [az deployment tenant what-if](/cli/azure/deployment/tenant#az-deployment-tenant-what-if) for tenant deployments
+- [az deployment group what-if](/cli/azure/deployment/group#az-deployment-group-what-if) for resource group deployments
+- [az deployment sub what-if](/cli/azure/deployment/sub#az-deployment-sub-what-if) for subscription level deployments
+- [az deployment mg what-if](/cli/azure/deployment/mg#az-deployment-mg-what-if) for management group deployments
+- [az deployment tenant what-if](/cli/azure/deployment/tenant#az-deployment-tenant-what-if) for tenant deployments
You can use the `--confirm-with-what-if` switch (or its short form `-c`) to preview the changes and get prompted to continue with the deployment. Add this switch to:
-* [az deployment group create](/cli/azure/deployment/group#az-deployment-group-create)
-* [az deployment sub create](/cli/azure/deployment/sub#az-deployment-sub-create).
-* [az deployment mg create](/cli/azure/deployment/mg#az-deployment-mg-create)
-* [az deployment tenant create](/cli/azure/deployment/tenant#az-deployment-tenant-create)
+- [az deployment group create](/cli/azure/deployment/group#az-deployment-group-create)
+- [az deployment sub create](/cli/azure/deployment/sub#az-deployment-sub-create)
+- [az deployment mg create](/cli/azure/deployment/mg#az-deployment-mg-create)
+- [az deployment tenant create](/cli/azure/deployment/tenant#az-deployment-tenant-create)
For example, use `az deployment group create --confirm-with-what-if` or `-c` for resource group deployments.
If you want to return the results without colors, open your [Azure CLI configura
For REST API, use:
-* [Deployments - What If](/rest/api/resources/deployments/whatif) for resource group deployments
-* [Deployments - What If At Subscription Scope](/rest/api/resources/deployments/whatifatsubscriptionscope) for subscription deployments
-* [Deployments - What If At Management Group Scope](/rest/api/resources/deployments/whatifatmanagementgroupscope) for management group deployments
-* [Deployments - What If At Tenant Scope](/rest/api/resources/deployments/whatifattenantscope) for tenant deployments.
+- [Deployments - What If](/rest/api/resources/deployments/whatif) for resource group deployments
+- [Deployments - What If At Subscription Scope](/rest/api/resources/deployments/whatifatsubscriptionscope) for subscription deployments
+- [Deployments - What If At Management Group Scope](/rest/api/resources/deployments/whatifatmanagementgroupscope) for management group deployments
+- [Deployments - What If At Tenant Scope](/rest/api/resources/deployments/whatifattenantscope) for tenant deployments.
## Change types
The what-if operation lists six different types of changes:
- **Delete**: This change type only applies when using [complete mode](deployment-modes.md) for deployment. The resource exists, but isn't defined in the template. With complete mode, the resource will be deleted. Only resources that [support complete mode deletion](./deployment-complete-mode-deletion.md) are included in this change type. -- **Ignore**: The resource exists, but isn't defined in the template. The resource won't be deployed or modified.
+- **Ignore**: The resource exists, but isn't defined in the template. The resource won't be deployed or modified. When you reach the limits for expanding nested templates, you will encounter this change type. See [What-if limits](#what-if-limits).
- **NoChange**: The resource exists, and is defined in the template. The resource will be redeployed, but the properties of the resource won't change. This change type is returned when [ResultFormat](#result-format) is set to `FullResourcePayloads`, which is the default value.
The what-if operation lists six different types of changes:
You control the level of detail that is returned about the predicted changes. You have two options:
-* **FullResourcePayloads** - returns a list of resources that will change and details about the properties that will change
-* **ResourceIdOnly** - returns a list of resources that will change
+- **FullResourcePayloads** - returns a list of resources that will change and details about the properties that will change
+- **ResourceIdOnly** - returns a list of resources that will change
The default value is **FullResourcePayloads**.
You see the expected changes and can confirm that you want the deployment to run
You can use the what-if operation through the Azure SDKs.
-* For Python, use [what-if](/python/api/azure-mgmt-resource/azure.mgmt.resource.resources.v2019_10_01.operations.deploymentsoperations).
-* For Java, use [DeploymentWhatIf Class](/java/api/com.azure.resourcemanager.resources.models.deploymentwhatif).
+- For Python, use [what-if](/python/api/azure-mgmt-resource/azure.mgmt.resource.resources.v2019_10_01.operations.deploymentsoperations).
+- For Java, use [DeploymentWhatIf Class](/java/api/com.azure.resourcemanager.resources.models.deploymentwhatif).
-* For .NET, use [DeploymentWhatIf Class](/dotnet/api/microsoft.azure.management.resourcemanager.models.deploymentwhatif).
+- For .NET, use [DeploymentWhatIf Class](/dotnet/api/microsoft.azure.management.resourcemanager.models.deploymentwhatif).
## Next steps
azure-signalr Signalr Concept Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-concept-azure-functions.md
Title: Build Real-time app - Azure Functions & Azure SignalR Service
-description: Learn how to develop real-time serverless web application with Azure SignalR Service by following example.
+ Title: Real-time apps with Azure SignalR Service and Azure Functions
+description: Learn about how Azure SignalR Service and Azure Functions together allow you to create real-time serverless web applications.
- Previously updated : 11/13/2019 Last updated : 02/14/2023
-# Build real-time Apps with Azure Functions and Azure SignalR Service
-Because Azure SignalR Service and Azure Functions are both fully managed, highly scalable services that allow you to focus on building applications instead of managing infrastructure, it's common to use the two services together to provide real-time communications in a [serverless](https://azure.microsoft.com/solutions/serverless/) environment.
+# Real-time apps with Azure SignalR Service and Azure Functions
++
+Azure SignalR Service combined with Azure Functions allows you to run real-time messaging web apps in a serverless environment. This article provides an overview of how the services work together.
+
+Azure SignalR Service and Azure Functions are both fully managed, highly scalable services that allow you to focus on building applications instead of managing infrastructure. It's common to use the two services together to provide real-time communications in a [serverless](https://azure.microsoft.com/solutions/serverless/) environment.
+
-> [!NOTE]
-> Learn to use SignalR and Azure Functions together in the interactive tutorial [Enable automatic updates in a web application using Azure Functions and SignalR Service](/training/modules/automatic-update-of-a-webapp-using-azure-functions-and-signalr).
## Integrate real-time communications with Azure services
-Azure Functions allow you to write code in [several languages](../azure-functions/supported-languages.md), including JavaScript, Python, C#, and Java, that triggers whenever events occur in the cloud. Examples of these events include:
+The Azure Functions service allows you to write code in [several languages](../azure-functions/supported-languages.md), including JavaScript, Python, C#, and Java, that runs whenever events occur in the cloud. Examples of these events include:
* HTTP and webhook requests * Periodic timers
Azure Functions allow you to write code in [several languages](../azure-function
- Event Hubs - Service Bus - Azure Cosmos DB change feed
- - Storage - blobs and queues
+ - Storage blobs and queues
 - Logic Apps connectors such as Salesforce and SQL Server By using Azure Functions to integrate these events with Azure SignalR Service, you can notify thousands of clients whenever events occur. Some common scenarios for real-time serverless messaging that you can implement with Azure Functions and SignalR Service include:
-* Visualize IoT device telemetry on a real-time dashboard or map
-* Update data in an application when documents update in Azure Cosmos DB
-* Send in-app notifications when new orders are created in Salesforce
+* Visualize IoT device telemetry on a real-time dashboard or map.
+* Update data in an application when documents update in Azure Cosmos DB.
+* Send in-app notifications when new orders are created in Salesforce.
## SignalR Service bindings for Azure Functions The SignalR Service bindings for Azure Functions allow an Azure Function app to publish messages to clients connected to SignalR Service. Clients can connect to the service using a SignalR client SDK that is available in .NET, JavaScript, and Java, with more languages coming soon.
### An example scenario
An example of how to use the SignalR Service bindings is using Azure Functions t
![Azure Cosmos DB, Azure Functions, SignalR Service](media/signalr-concept-azure-functions/signalr-cosmosdb-functions.png)
-1. A change is made in an Azure Cosmos DB collection
-2. The change event is propagated to the Azure Cosmos DB change feed
-3. An Azure Functions is triggered by the change event using the Azure Cosmos DB trigger
-4. The SignalR Service output binding publishes a message to SignalR Service
-5. SignalR Service publishes the message to all connected clients
+1. A change is made in an Azure Cosmos DB collection.
+2. The change event is propagated to the Azure Cosmos DB change feed.
+3. An Azure Function is triggered by the change event using the Azure Cosmos DB trigger.
+4. The SignalR Service output binding publishes a message to SignalR Service.
+5. The SignalR Service publishes the message to all connected clients.
### Authentication and users
-SignalR Service allows you to broadcast messages to all clients or only to a subset of clients, such as those belonging to a single user. The SignalR Service bindings for Azure Functions can be combined with App Service Authentication to authenticate users with providers such as Azure Active Directory, Facebook, and Twitter. You can then send messages directly to these authenticated users.
+SignalR Service allows you to broadcast messages to all or a subset of clients, such as those belonging to a single user. You can combine the SignalR Service bindings for Azure Functions with App Service authentication to authenticate users with providers such as Azure Active Directory, Facebook, and Twitter. You can then send messages directly to these authenticated users.
## Next steps
-In this article, you got an overview of how to use Azure Functions with SignalR Service to enable a wide array of serverless real-time messaging scenarios.
- For full details on how to use Azure Functions and SignalR Service together visit the following resources: * [Azure Functions development and configuration with SignalR Service](signalr-concept-serverless-development-config.md) * [Enable automatic updates in a web application using Azure Functions and SignalR Service](/training/modules/automatic-update-of-a-webapp-using-azure-functions-and-signalr)
-Follow one of these quickstarts to learn more.
+To try out the SignalR Service bindings for Azure Functions, see:
* [Azure SignalR Service Serverless Quickstart - C#](signalr-quickstart-azure-functions-csharp.md) * [Azure SignalR Service Serverless Quickstart - JavaScript](signalr-quickstart-azure-functions-javascript.md)
+* [Enable automatic updates in a web application using Azure Functions and SignalR Service](/training/modules/automatic-update-of-a-webapp-using-azure-functions-and-signalr).
azure-sql-edge Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/release-notes.md
This article describes what's new and what has changed with every new build of Azure SQL Edge.
+## Azure SQL Edge 1.0.7
+
+SQL engine build 15.0.2000.1574
+
+### What's new?
+
+- Security bug fixes
+ ## Azure SQL Edge 1.0.6 SQL engine build 15.0.2000.1565
azure-vmware Enable Sql Azure Hybrid Benefit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/enable-sql-azure-hybrid-benefit.md
Title: Enable SQL Azure hybrid benefit for Azure VMware Solution (Preview)
+ Title: Enable SQL Azure hybrid benefit for Azure VMware Solution
description: This article shows you how to apply SQL Azure hybrid benefits to your Azure VMware Solution private cloud by configuring a placement policy. Previously updated : 06/14/2022 Last updated : 02/14/2023
-# Enable SQL Azure hybrid benefit for Azure VMware Solution (Preview)
+# Enable SQL Azure hybrid benefit for Azure VMware Solution
In this article, you'll learn how to apply SQL Azure hybrid benefits to an Azure VMware Solution private cloud by configuring a placement policy. The placement policy defines the hosts that are running SQL as well as the virtual machines on those hosts. >[!IMPORTANT]
By checking the Azure hybrid benefit checkbox in the configuration setting, you
## Next steps [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-benefit/)
-[Attach Azure NetApp Files datastores to Azure VMware Solution hosts (Preview)](attach-azure-netapp-files-to-azure-vmware-solution-hosts.md)
+[Attach Azure NetApp Files datastores to Azure VMware Solution hosts](attach-azure-netapp-files-to-azure-vmware-solution-hosts.md)
cloud-services Cloud Services Guestos Msrc Releases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-msrc-releases.md
na Previously updated : 1/31/2023 Last updated : 2/14/2023
The following tables show the Microsoft Security Response Center (MSRC) updates applied to the Azure Guest OS. Search this article to determine if a particular update applies to the Guest OS you are using. Updates always carry forward for the particular [family][family-explain] they were introduced in.
+## February 2023 Guest OS
+
+>[!NOTE]
+>The February Guest OS is currently being rolled out to Cloud Service VMs that are configured for automatic updates. When the rollout is complete, this version will be made available for manual updates through the Azure portal and configuration files. The following patches are included in the February Guest OS. This list is subject to change.
+
+| Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced |
+| | | | | |
+| Rel 23-02 | [5022838] | Latest Cumulative Update(LCU) | 5.78 | Feb 14, 2023 |
+| Rel 23-02 | [5022835] | IE Cumulative Updates | 2.134, 3.121, 4.114 | Feb 14, 2023 |
+| Rel 23-02 | [5022842] | Latest Cumulative Update(LCU) | 7.22 | Feb 14, 2023 |
+| Rel 23-02 | [5022840] | Latest Cumulative Update(LCU) | 6.54 | Feb 14, 2023 |
+| Rel 23-02 | [5022523] | .NET Framework 3.5 Security and Quality Rollup  | 2.134 | Feb 14, 2023 |
+| Rel 23-02 | [5022515] | .NET Framework 4.6.2 Security and Quality Rollup  | 2.134 | Feb 14, 2023 |
+| Rel 23-02 | [5022525] | .NET Framework 3.5 Security and Quality Rollup  | 4.114 | Feb 14, 2023 |
+| Rel 23-02 | [5022513] | .NET Framework 4.6.2 Security and Quality Rollup  | 4.114 | Feb 14, 2023 |
+| Rel 23-02 | [5022574] | .NET Framework 3.5 Security and Quality Rollup  | 3.121 | Feb 14, 2023 |
+| Rel 23-02 | [5022512] | .NET Framework 4.6.2 Security and Quality Rollup  | 3.121 | Feb 14, 2023 |
+| Rel 23-02 | [5022511] | .NET Framework 4.7.2 Cumulative Update  | 6.54 | Feb 14, 2023 |
+| Rel 23-02 | [5022507] | .NET Framework 4.8 Security and Quality Rollup  | 7.22 | Feb 14, 2023 |
+| Rel 23-02 | [5022872] | Monthly Rollup  | 2.134 | Feb 14, 2023 |
+| Rel 23-02 | [5022903] | Monthly Rollup  | 3.121 | Feb 14, 2023 |
+| Rel 23-02 | [5022899] | Monthly Rollup  | 4.114 | Feb 14, 2023 |
+| Rel 23-02 | [5022923] | Servicing Stack Update  | 3.121 | Feb 14, 2023 |
+| Rel 23-02 | [5018922] | Servicing Stack update LKG  | 4.114 | Oct 11, 2022 |
+| Rel 23-02 | [4578013] | OOB Standalone Security Update  | 4.114 | Aug 19, 2020 |
+| Rel 23-02 | [5017396] | Servicing Stack Update LKG  | 5.78 | Sep 13, 2022 |
+| Rel 23-02 | [5017397] | Servicing Stack Update LKG  | 2.134 | Sep 13, 2022 |
+| Rel 23-02 | [4494175] | Microcode  | 5.78 | Sep 1, 2020 |
+| Rel 23-02 | [4494174] | Microcode  | 6.54 | Sep 1, 2020 |
+| Rel 23-02 | 5022947 | Servicing Stack Update  | 7.22 | |
+
+[5022838]: https://support.microsoft.com/kb/5022838
+[5022835]: https://support.microsoft.com/kb/5022835
+[5022842]: https://support.microsoft.com/kb/5022842
+[5022840]: https://support.microsoft.com/kb/5022840
+[5022523]: https://support.microsoft.com/kb/5022523
+[5022515]: https://support.microsoft.com/kb/5022515
+[5022525]: https://support.microsoft.com/kb/5022525
+[5022513]: https://support.microsoft.com/kb/5022513
+[5022574]: https://support.microsoft.com/kb/5022574
+[5022512]: https://support.microsoft.com/kb/5022512
+[5022511]: https://support.microsoft.com/kb/5022511
+[5022507]: https://support.microsoft.com/kb/5022507
+[5022872]: https://support.microsoft.com/kb/5022872
+[5022903]: https://support.microsoft.com/kb/5022903
+[5022899]: https://support.microsoft.com/kb/5022899
+[5022923]: https://support.microsoft.com/kb/5022923
+[5018922]: https://support.microsoft.com/kb/5018922
+[4578013]: https://support.microsoft.com/kb/4578013
+[5017396]: https://support.microsoft.com/kb/5017396
+[5017397]: https://support.microsoft.com/kb/5017397
+[4494175]: https://support.microsoft.com/kb/4494175
+[4494174]: https://support.microsoft.com/kb/4494174
+++ ## January 2023 Guest OS
cognitive-services Create Translator Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/create-translator-resource.md
+
+ Title: Create a Translator resource
+
+description: This article shows you how to create an Azure Cognitive Services Translator resource and get a key and endpoint URL.
+++++++ Last updated : 02/14/2023++
+# Create a Translator resource
+
+In this article, you learn how to create a Translator resource in the Azure portal. [Azure Translator](translator-overview.md) is a cloud-based machine translation service that is part of the [Azure Cognitive Services](../what-are-cognitive-services.md) family of REST APIs. Azure resources are instances of services that you create. All API requests to Azure services require an **endpoint** URL and a read-only **key** for authenticating access.
+
+## Prerequisites
+
+To get started, you need an active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free 12-month subscription**](https://azure.microsoft.com/free/).
+
+## Create your resource
+
+The Translator service can be accessed through two different resource types:
+
+* [**Single-service**](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) resource types enable access to a single service API key and endpoint.
+
+* [**Multi-service**](https://portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne) resource types enable access to multiple Cognitive Services using a single API key and endpoint.
+
+## Complete your project and instance details
+
+1. **Subscription**. Select one of your available Azure subscriptions.
+
+1. **Resource Group**. You can create a new resource group or add your resource to a pre-existing resource group that shares the same lifecycle, permissions, and policies.
+
+1. **Resource Region**. Choose **Global** unless your business or application requires a specific region. If you're planning on using the Document Translation feature with managed identity authentication, choose a non-global region.
+
+1. **Name**. Enter the name you have chosen for your resource. The name you choose must be unique within Azure.
+
+ > [!NOTE]
+ > If you are using a Translator feature that requires a custom domain endpoint, such as Document Translation, the value that you enter in the Name field will be the custom domain name parameter for the endpoint.
+
+1. **Pricing tier**. Select a [pricing tier](https://azure.microsoft.com/pricing/details/cognitive-services/translator) that meets your needs:
+
+ * Each subscription has a free tier.
+ * The free tier has the same features and functionality as the paid plans and doesn't expire.
+ * Only one free tier is available per subscription.
+ * Document Translation isn't supported in the free tier. Select Standard S1 to try that feature.
+
+1. If you've created a multi-service resource, you need to confirm more usage details via the check boxes.
+
+1. Select **Review + Create**.
+
+1. Review the service terms and select **Create** to deploy your resource.
+
+1. After your resource has successfully deployed, select **Go to resource**.
+
+### Authentication keys and endpoint URL
+
+All Cognitive Services API requests require an endpoint URL and a read-only key for authentication.
+
+* **Authentication keys**. Your key is a unique string that is passed on every request to the Translation service. You can pass your key through a query-string parameter or by specifying it in the HTTP request header.
+
+* **Endpoint URL**. Use the Global endpoint in your API request unless you need a specific Azure region or custom endpoint. *See* [Base URLs](reference/v3-0-reference.md#base-urls). The Global endpoint URL is `api.cognitive.microsofttranslator.com`.
+
+## Get your authentication keys and endpoint
+
+1. After your new resource deploys, select **Go to resource** or navigate directly to your resource page.
+1. In the left rail, under *Resource Management*, select **Keys and Endpoint**.
+1. Copy and paste your keys and endpoint URL in a convenient location, such as *Microsoft Notepad*.
++
+## How to delete a resource or resource group
+
+> [!Warning]
+> Deleting a resource group also deletes all resources contained in the group.
+
+To remove a Cognitive Services or Translator resource, you can **delete the resource** or **delete the resource group**.
+
+To delete the resource:
+
+1. Navigate to your Resource Group in the Azure portal.
+1. Select the resources to be deleted by selecting the adjacent check box.
+1. Select **Delete** from the top menu near the right edge.
+1. Type *yes* in the **Deleted Resources** dialog box.
+1. Select **Delete**.
+
+To delete the resource group:
+
+1. Navigate to your Resource Group in the Azure portal.
+1. Select **Delete resource group** from the top menu bar near the left edge.
+1. Confirm the deletion request by entering the resource group name and selecting **Delete**.
+
+## How to get started with Translator
+
+In our quickstart, you learn how to use the Translator service with REST APIs.
+
+> [!div class="nextstepaction"]
+> [Get Started with Translator](quickstart-translator.md)
+
+## More resources
+
+* [Microsoft Translator code samples](https://github.com/MicrosoftTranslator). Multi-language Translator code samples are available on GitHub.
+* [Microsoft Translator Support Forum](https://www.aka.ms/TranslatorForum)
+* [Get Started with Azure (3-minute video)](https://azure.microsoft.com/get-started/?b=16.24)
cognitive-services Get Started With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/quickstarts/get-started-with-rest-api.md
Previously updated : 12/17/2022 Last updated : 02/10/2023 recommendations: false ms.devlang: csharp, golang, java, javascript, python
zone_pivot_groups: programming-languages-set-translator
# Get started with Document Translation
- Document Translation is a cloud-based feature of the [Azure Translator](../../translator-overview.md) service that asynchronously translates whole documents in [supported languages](../../language-support.md) and various [file formats](../overview.md#supported-document-formats). In this quickstart, you'll learn to use Document Translation with a programming language of your choice to translate a source document into a target language while preserving structure and text formatting.
+ Document Translation is a cloud-based feature of the [Azure Translator](../../translator-overview.md) service that asynchronously translates whole documents in [supported languages](../../language-support.md) and various [file formats](../overview.md#supported-document-formats). In this quickstart, learn to use Document Translation with a programming language of your choice to translate a source document into a target language while preserving structure and text formatting.
## Prerequisites
zone_pivot_groups: programming-languages-set-translator
> * Document Translation is **only** supported in the S1 Standard Service Plan (Pay-as-you-go) or in the D3 Volume Discount Plan. *See* [Cognitive Services pricing: Translator](https://azure.microsoft.com/pricing/details/cognitive-services/translator/). >
-To get started, you'll need:
+To get started, you need:
* An active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/).
-* An [**Azure blob storage account**](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM). You'll create containers to store and organize your blob data within your storage account.
+* An [**Azure blob storage account**](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM). You also need to create containers to store and organize your blob data within your storage account.
* A [**single-service Translator resource**](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) (**not** a multi-service Cognitive Services resource):
The custom domain endpoint is a URL formatted with your resource name, hostname,
> * **All API requests to the Document Translation service require a custom domain endpoint**. > * Don't use the Text Translation endpoint found on your Azure portal resource *Keys and Endpoint* page nor the global translator endpoint (`api.cognitive.microsofttranslator.com`) to make HTTP requests to Document Translation.
+ > [!div class="nextstepaction"]
+ > [I ran into an issue with the prerequisites.](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?Pillar=Language&Product=Document-translation&Page=quickstart&Section=Prerequisites)
+ ### Retrieve your key and endpoint Requests to the Translator service require a read-only key and custom endpoint to authenticate access.
Requests to the Translator service require a read-only key and custom endpoint t
1. Copy and paste your **`key`** and **`document translation endpoint`** in a convenient location, such as *Microsoft Notepad*. Only one key is necessary to make an API call.
-1. You'll paste it into the code sample to authenticate your request to the Document Translation service.
+1. Paste your **`key`** and **`document translation endpoint`** into the code samples to authenticate your request to the Document Translation service.
:::image type="content" source="../media/document-translation-key-endpoint.png" alt-text="Screenshot showing the get your key field in Azure portal.":::
+ > [!div class="nextstepaction"]
+ > [I ran into an issue retrieving my key and endpoint.](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?Pillar=Language&Product=Document-translation&Page=quickstart&Section=Retrieve-your-keys-and-endpoint)
+ ## Create Azure blob storage containers
-You'll need to [**create containers**](../../../../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container) in your [**Azure blob storage account**](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM) for source and target files.
+You need to [**create containers**](../../../../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container) in your [**Azure blob storage account**](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM) for source and target files.
* **Source container**. This container is where you upload your files for translation (required).
-* **Target container**. This container is where your translated files will be stored (required).
+* **Target container**. This container is where your translated files are stored (required).
### **Required authentication**
The `sourceUrl`, `targetUrl`, and optional `glossaryUrl` must include a Share
> * If you're translating a **single** file (blob) in an operation, **delegate SAS access at the blob level**. > * As an alternative to SAS tokens, you can use a [**system-assigned managed identity**](../how-to-guides/create-use-managed-identities.md) for authentication.
+ > [!div class="nextstepaction"]
+ > [I ran into an issue creating blob storage containers with authentication.](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?Pillar=Language&Product=Document-translation&Page=quickstart&Section=Create-blob-storage-containers)
+ ### Sample document
-For this project, you'll need a **source document** uploaded to your **source container**. You can download our [document translation sample document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/Translator/document-translation-sample.docx) for this quickstart.
+For this project, you need a **source document** uploaded to your **source container**. You can download our [document translation sample document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/Translator/document-translation-sample.docx) for this quickstart.
## HTTP request
-A batch Document Translation request is submitted to your Translator service endpoint via a POST request. If successful, the POST method returns a `202 Accepted` response code and the batch request is created by the service. The translated documents will be listed in your target container.
+A batch Document Translation request is submitted to your Translator service endpoint via a POST request. If successful, the POST method returns a `202 Accepted` response code and the batch request is created by the service. The translated documents are listed in your target container.
### Headers
That's it, congratulations! In this quickstart, you used Document Translation to
Learn more about Document Translation: > [!div class="nextstepaction"]
->[Document Translation REST API guide](../reference/rest-api-guide.md) [Language support](../../language-support.md)
+>[Document Translation REST API guide](../reference/rest-api-guide.md) </br></br>[Language support](../../language-support.md)
cognitive-services Translator Text Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/translator-text-apis.md
Previously updated : 12/17/2022 Last updated : 02/14/2023 ms.devlang: csharp, golang, java, javascript, python
keywords: translator, translator service, translate text, transliterate text, la
# Use Azure Cognitive Services Translator APIs
-In this how-to guide, you'll learn to use the [Translator service REST APIs](reference/rest-api-guide.md). You'll start with basic examples and move onto some core configuration options that are commonly used during development, including:
+In this how-to guide, you learn to use the [Translator service REST APIs](reference/rest-api-guide.md). You start with basic examples and move onto some core configuration options that are commonly used during development, including:
* [Translation](#translate-text) * [Transliteration](#transliterate-text)
In this how-to guide, you'll learn to use the [Translator service REST APIs](ref
* You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.
- > [!TIP]
- > Create a Cognitive Services resource if you plan to access multiple cognitive services under a single endpoint/key. For Translator access only, create a Form Translator resource. Please note that you'll need a single-service resource if you intend to use [Azure Active Directory authentication](../../active-directory/authentication/overview-authentication.md).
-
-* You'll need the key and endpoint from the resource to connect your application to the Translator service. Later, you'll paste your key and endpoint into the code samples. You can find these values on the Azure portal **Keys and Endpoint** page:
+* You need the key and endpoint from the resource to connect your application to the Translator service. Later, you paste your key and endpoint into the code samples. You can find these values on the Azure portal **Keys and Endpoint** page:
:::image type="content" source="media/keys-and-endpoint-portal.png" alt-text="Screenshot: Azure portal keys and endpoint page.":::
In this how-to guide, you'll learn to use the [Translator service REST APIs](ref
## Headers
-To call the Translator service via the [REST API](reference/rest-api-guide.md), you'll need to make sure the following headers are included with each request. Don't worry, we'll include the headers in the sample code in the following sections.
+To call the Translator service via the [REST API](reference/rest-api-guide.md), you need to make sure the following headers are included with each request. Don't worry, we include the headers in the sample code in the following sections.
|Header|Value| Condition | | |: |:|
To call the Translator service via the [REST API](reference/rest-api-guide.md),
1. Open the **Program.cs** file.
-1. Delete the pre-existing code, including the line `Console.Writeline("Hello World!")`. You'll copy and paste the code samples into your application's Program.cs file. For each code sample, make sure you update the key and endpoint variables with values from your Azure portal Translator instance.
+1. Delete the pre-existing code, including the line `Console.WriteLine("Hello World!")`. Copy and paste the code samples into your application's Program.cs file. For each code sample, make sure you update the key and endpoint variables with values from your Azure portal Translator instance.
1. Once you've added a desired code sample to your application, choose the green **start button** next to your project name to build and run your program, or press **F5**.
You can use any text editor to write Go applications. We recommend using the lat
1. Create a new GO file named **text-translator.go** from the **translator-text-app** directory.
-1. You'll copy and paste the code samples into your **text-translator.go** file. Make sure you update the key variable with the value from your Azure portal Translator instance.
+1. Copy and paste the code samples into your **text-translator.go** file. Make sure you update the key variable with the value from your Azure portal Translator instance.
1. Once you've added a code sample to your application, your Go program can be executed in a command or terminal prompt. Make sure your prompt's path is set to the **translator-text-app** folder and use the following command:
You can use any text editor to write Go applications. We recommend using the lat
> * Visual Studio Code offers a **Coding Pack for Java** for Windows and macOS.The coding pack is a bundle of VS Code, the Java Development Kit (JDK), and a collection of suggested extensions by Microsoft. The Coding Pack can also be used to fix an existing development environment. > * If you are using VS Code and the Coding Pack For Java, install the [**Gradle for Java**](https://marketplace.visualstudio.com/items?itemName=vscjava.vscode-gradle) extension.
-* If you aren't using VS Code, make sure you have the following installed in your development environment:
+* If you aren't using Visual Studio Code, make sure you have the following installed in your development environment:
* A [**Java Development Kit** (OpenJDK)](/java/openjdk/download#openjdk-17) version 8 or later.
You can use any text editor to write Go applications. We recommend using the lat
mkdir translator-text-app; cd translator-text-app ```
-1. Run the `gradle init` command from the translator-text-app directory. This command will create essential build files for Gradle, including *build.gradle.kts*, which is used at runtime to create and configure your application.
+1. Run the `gradle init` command from the translator-text-app directory. This command creates essential build files for Gradle, including *build.gradle.kts*, which is used at runtime to create and configure your application.
```console gradle init --type basic
You can use any text editor to write Go applications. We recommend using the lat
mkdir -p src/main/java ```
- You'll create the following directory structure:
+ You create the following directory structure:
:::image type="content" source="media/quickstarts/java-directories-2.png" alt-text="Screenshot: Java directory structure.":::
You can use any text editor to write Go applications. We recommend using the lat
> > * You can also create a new file in your IDE named `TranslatorText.java` and save it to the `java` directory.
-1. You'll copy and paste the code samples `TranslatorText.java` file. **Make sure you update the key with one of the key values from your Azure portal Translator instance**.
+1. Copy and paste the code samples into the `TranslatorText.java` file. **Make sure you update the key with one of the key values from your Azure portal Translator instance**.
1. Once you've added a code sample to your application, navigate back to your main project directory, **translator-text-app**, open a console window, and enter the following commands:
You can use any text editor to write Go applications. We recommend using the lat
> > * You can also create a new file named `index.js` in your IDE and save it to the `translator-text-app` directory.
-1. You'll copy and paste the code samples into your `index.js` file. **Make sure you update the key variable with the value from your Azure portal Translator instance**.
+1. Copy and paste the code samples into your `index.js` file. **Make sure you update the key variable with the value from your Azure portal Translator instance**.
1. Once you've added the code sample to your application, run your program:
You can use any text editor to write Go applications. We recommend using the lat
## Translate text
-The core operation of the Translator service is to translate text. In this section, you'll build a request that takes a single source (`from`) and provides two outputs (`to`). Then we'll review some parameters that can be used to adjust both the request and the response.
+The core operation of the Translator service is to translate text. In this section, you build a request that takes a single source (`from`) and provides two outputs (`to`). Then we review some parameters that can be used to adjust both the request and the response.
### [C#](#tab/csharp)
After a successful call, you should see the following response:
] ```
-You can check the consumption (the number of characters for which you'll be charged) for each request in the [**response headers: x-metered-usage**](reference/v3-0-translate.md#response-headers) field.
+You can check the consumption (the number of billed characters) for each request in the [**response headers: x-metered-usage**](reference/v3-0-translate.md#response-headers) field.
## Detect language
-If you need translation, but don't know the language of the text, you can use the language detection operation. There's more than one way to identify the source text language. In this section, you'll learn how to use language detection using the `translate` endpoint, and the `detect` endpoint.
+If you need translation, but don't know the language of the text, you can use the language detection operation. There's more than one way to identify the source text language. In this section, you learn how to use language detection using the `translate` endpoint, and the `detect` endpoint.
### Detect source language during translation
-If you don't include the `from` parameter in your translation request, the Translator service will attempt to detect the source text's language. In the response, you'll get the detected language (`language`) and a confidence score (`score`). The closer the `score` is to `1.0`, means that there's increased confidence that the detection is correct.
+If you don't include the `from` parameter in your translation request, the Translator service attempts to detect the source text's language. In the response, you get the detected language (`language`) and a confidence score (`score`). The closer the `score` is to `1.0`, the higher the confidence that the detection is correct.
### [C#](#tab/csharp)
After a successful call, you should see the following response:
### Detect source language without translation
-It's possible to use the Translator service to detect the language of source text without performing a translation. To do so, you'll use the [`/detect`](./reference/v3-0-detect.md) endpoint.
+It's possible to use the Translator service to detect the language of source text without performing a translation. To do so, you use the [`/detect`](./reference/v3-0-detect.md) endpoint.
### [C#](#tab/csharp)
print(json.dumps(response, sort_keys=True, ensure_ascii=False, indent=4, separat
-The `/detect` endpoint response will include alternate detections, and will let you know if translation and transliteration are supported for all of the detected languages. After a successful call, you should see the following response:
+The `/detect` endpoint response includes alternate detections, and indicates whether translation and transliteration are supported for each of the detected languages. After a successful call, you should see the following response:
```json [
The `/detect` endpoint response will include alternate detections, and will let
## Transliterate text
-Transliteration is the process of converting a word or phrase from the script (alphabet) of one language to another based on phonetic similarity. For example, you could use transliteration to convert "สวัสดี" (`thai`) to "sawatdi" (`latn`). There's more than one way to perform transliteration. In this section, you'll learn how to use language detection using the `translate` endpoint, and the `transliterate` endpoint.
+Transliteration is the process of converting a word or phrase from the script (alphabet) of one language to another based on phonetic similarity. For example, you could use transliteration to convert "สวัสดี" (`thai`) to "sawatdi" (`latn`). There's more than one way to perform transliteration. In this section, you learn how to use language detection using the `translate` endpoint, and the `transliterate` endpoint.
### Transliterate during translation
-If you're translating into a language that uses a different alphabet (or phonemes) than your source, you might need a transliteration. In this example, we translate "Hello" from English to Thai. In addition to getting the translation in Thai, you'll get a transliteration of the translated phrase using the Latin alphabet.
+If you're translating into a language that uses a different alphabet (or phonemes) than your source, you might need a transliteration. In this example, we translate "Hello" from English to Thai. In addition to getting the translation in Thai, you get a transliteration of the translated phrase using the Latin alphabet.
To get a transliteration from the `translate` endpoint, use the `toScript` parameter.
print(json.dumps(response, sort_keys=True, ensure_ascii=False, indent=4, separat
-After a successful call, you should see the following response. In addition to the detected source language and translation, you'll get character counts for each detected sentence for both the source (`srcSentLen`) and translation (`transSentLen`).
+After a successful call, you should see the following response. In addition to the detected source language and translation, you get character counts for each detected sentence for both the source (`srcSentLen`) and translation (`transSentLen`).
```json [
After a successful call, you should see the following response. Unlike the call
## Dictionary lookup (alternate translations)
-With the endpoint, you can get alternate translations for a word or phrase. For example, when translating the word "sunshine" from `en` to `es`, this endpoint returns "luz solar", "rayos solares", and "soleamiento", "sol", and "insolación".
+With the `dictionary/lookup` endpoint, you can get alternate translations for a word or phrase. For example, when translating the word "sunshine" from `en` to `es`, this endpoint returns "`luz solar`", "`rayos solares`", "`soleamiento`", "`sol`", and "`insolación`".
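As a quick illustration (not part of the tabbed samples), a dictionary lookup call in Python could look like this sketch; the key and region values are placeholders.

```python
# Sketch: look up alternate translations for "sunshine" from en to es.
import uuid

import requests

endpoint = "https://api.cognitive.microsofttranslator.com/dictionary/lookup"
params = {"api-version": "3.0", "from": "en", "to": "es"}
headers = {
    "Ocp-Apim-Subscription-Key": "<your-translator-key>",      # placeholder
    "Ocp-Apim-Subscription-Region": "<your-resource-region>",  # placeholder
    "Content-Type": "application/json",
    "X-ClientTraceId": str(uuid.uuid4()),
}
body = [{"text": "sunshine"}]

response = requests.post(endpoint, params=params, headers=headers, json=body).json()
# Each entry in `translations` includes a normalized target term such as "luz solar".
print([t["normalizedTarget"] for t in response[0]["translations"]])
```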
### [C#](#tab/csharp)
After a successful call, you should see the following response. Let's examine th
## Dictionary examples (translations in context)
-After you've performed a dictionary lookup, pass the source and translation text to the `dictionary/examples` endpoint, to get a list of examples that show both terms in the context of a sentence or phrase. Building on the previous example, you'll use the `normalizedText` and `normalizedTarget` from the dictionary lookup response as `text` and `translation` respectively. The source language (`from`) and output target (`to`) parameters are required.
+After you've performed a dictionary lookup, pass the source and translation text to the `dictionary/examples` endpoint, to get a list of examples that show both terms in the context of a sentence or phrase. Building on the previous example, you use the `normalizedText` and `normalizedTarget` from the dictionary lookup response as `text` and `translation` respectively. The source language (`from`) and output target (`to`) parameters are required.
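A rough Python sketch of the follow-up call might look like the following; the source and translation terms come from the lookup response, and the key and region values are placeholders.

```python
# Sketch: get contextual examples for a term pair returned by dictionary/lookup.
import json
import uuid

import requests

endpoint = "https://api.cognitive.microsofttranslator.com/dictionary/examples"
params = {"api-version": "3.0", "from": "en", "to": "es"}
headers = {
    "Ocp-Apim-Subscription-Key": "<your-translator-key>",      # placeholder
    "Ocp-Apim-Subscription-Region": "<your-resource-region>",  # placeholder
    "Content-Type": "application/json",
    "X-ClientTraceId": str(uuid.uuid4()),
}
# The text/translation pair comes from the dictionary lookup response.
body = [{"text": "sunshine", "translation": "luz solar"}]

response = requests.post(endpoint, params=params, headers=headers, json=body).json()
print(json.dumps(response[0]["examples"][0], ensure_ascii=False, indent=4))
```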
### [C#](#tab/csharp)
container-apps Custom Domains Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/custom-domains-certificates.md
Azure Container Apps allows you to bind one or more custom domains to a containe
- Ingress must be enabled for the container app > [!NOTE]
-> To configure a custom DNS suffix for all container apps in an environment, see [Custom environment DNS suffix in Azure Container Apps](environment-custom-dns-suffix.md).
+> To configure a custom DNS suffix for all container apps in an environment, see [Custom environment DNS suffix in Azure Container Apps](environment-custom-dns-suffix.md). If you configure a custom environment DNS suffix, you cannot add a custom domain that contains this suffix to your Container App.
## Add a custom domain and certificate
container-instances Container Instances Github Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-github-action.md
Update the Azure service principal credentials to allow push and pull access to
Get the resource ID of your container registry. Substitute the name of your registry in the following [az acr show][az-acr-show] command: ```azurecli
-$registryId=$(az acr show \
+registryId=$(az acr show \
--name <registry-name> \ --resource-group <resource-group-name> \ --query id --output tsv)
container-instances Container Instances Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-overview.md
Azure Container Instances enables [deployment of container instances into an Azu
## Considerations
-There are default limits that require quota increases. Not all quota increases may be approved: [Service quotas and region availability - Azure Container Instances | Microsoft Learn](./container-instances-quotas.md)
-
-Different regions have different default limits, so you should consider the limits in your region: [Resource availability by region - Azure Container Instances | Microsoft Learn](./container-instances-region-availability.md)
+There are default limits that require quota increases. Not all quota increases may be approved: [Resource availability & quota limits for ACI - Azure Container Instances | Microsoft Learn](./container-instances-resource-and-quota-limits.md)
If your container group stops working, we suggest trying to restart your container, checking your application code, or your local network configuration before opening a [support request][azure-support].
container-instances Container Instances Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-vnet.md
This article shows how to use the [az container create][az-container-create] com
For networking scenarios and limitations, see [Virtual network scenarios and resources for Azure Container Instances](container-instances-virtual-network-concepts.md). > [!IMPORTANT]
-> Container group deployment to a virtual network is generally available for Linux and Windows containers, in most regions where Azure Container Instances is available. For details, see [Regions and resource availability][container-regions].
+> Container group deployment to a virtual network is generally available for Linux and Windows containers, in most regions where Azure Container Instances is available. For details, see [products available by region][available-regions].
[!INCLUDE [network profile callout](./includes/network-profile/network-profile-callout.md)]
To deploy a new virtual network, subnet, network profile, and container group us
[az-container-show]: /cli/azure/container#az_container_show [az-network-vnet-create]: /cli/azure/network/vnet#az_network_vnet_create [az-network-profile-list]: /cli/azure/network/profile#az_network_profile_list
-[container-regions]: container-instances-region-availability.md
+[available-regions]: https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=container-instances
cosmos-db Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/support.md
Azure Cosmos DB supports the following database commands on API for Cassandra ac
||| | `ALLOW FILTERING` | Yes | | `ALTER KEYSPACE` | N/A (PaaS service, replication managed internally)|
-| `ALTER MATERIALIZED VIEW` | No |
+| `ALTER MATERIALIZED VIEW` | Yes |
| `ALTER ROLE` | No | | `ALTER TABLE` | Yes | | `ALTER TYPE` | No |
Azure Cosmos DB supports the following database commands on API for Cassandra ac
| `CREATE INDEX` | Yes (including [named indexes](secondary-indexing.md), and cluster key index is currently in [private preview](https://devblogs.microsoft.com/cosmosdb/now-in-private-preview-cluster-key-index-support-for-azure-cosmos-db-cassandra-api/) but full FROZEN collection is not supported) | | `CREATE FUNCTION` | No | | `CREATE KEYSPACE` (replication settings ignored) | Yes |
-| `CREATE MATERIALIZED VIEW` | No |
+| `CREATE MATERIALIZED VIEW` | Yes |
| `CREATE TABLE` | Yes | | `CREATE TRIGGER` | No | | `CREATE TYPE` | Yes |
Azure Cosmos DB supports the following database commands on API for Cassandra ac
| `DROP FUNCTION` | No | | `DROP INDEX` | Yes | | `DROP KEYSPACE` | Yes |
-| `DROP MATERIALIZED VIEW` | No |
+| `DROP MATERIALIZED VIEW` | Yes |
| `DROP ROLE` | No | | `DROP TABLE` | Yes | | `DROP TRIGGER` | No |
cosmos-db Distribute Data Globally https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/distribute-data-globally.md
Azure Cosmos DB is a globally distributed database system that allows you to rea
You can configure your databases to be globally distributed and available in [any of the Azure regions](https://azure.microsoft.com/global-infrastructure/services/?products=cosmos-db&regions=all). To lower the latency, place the data close to where your users are. Choosing the required regions depends on the global reach of your application and where your users are located. Azure Cosmos DB transparently replicates the data to all the regions associated with your Azure Cosmos DB account. It provides a single system image of your globally distributed Azure Cosmos DB database and containers that your application can read and write to locally.
+> [!NOTE]
+> Serverless accounts for Azure Cosmos DB can only run in a single Azure region. For more information, see [using serverless resources](serverless.md).
+ With Azure Cosmos DB, you can add or remove the regions associated with your account at any time. Your application doesn't need to be paused or redeployed to add or remove a region. Azure Cosmos DB is available in all five distinct Azure cloud environments available to customers: * **Azure public** cloud, which is available globally.
cosmos-db Quickstart Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gremlin/quickstart-python.md
ms.devlang: python Previously updated : 03/29/2021 Last updated : 02/14/2023
In this quickstart, you create and manage an Azure Cosmos DB for Gremlin (graph)
## Prerequisites - An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). Or [try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/) without an Azure subscription.-- [Python 3.6+](https://www.python.org/downloads/) including [pip](https://pip.pypa.io/en/stable/installing/) package installer.
+- [Python 3.7+](https://github.com/Azure/azure-sdk-for-python/wiki/Azure-SDKs-Python-version-support-policy) including [pip](https://pip.pypa.io/en/stable/installing/) package installer.
- [Python Driver for Gremlin](https://github.com/apache/tinkerpop/tree/master/gremlin-python). You can also install the Python driver for Gremlin by using the `pip` command line:
Before you can create a graph database, you need to create a Gremlin (Graph) dat
## Clone the sample application
-Now let's switch to working with code. Let's clone a Gremlin API app from GitHub, set the connection string, and run it. You'll see how easy it is to work with data programmatically.
+Now let's switch to working with code. Let's clone a Gremlin API app from GitHub, set the connection string, and run it. You'll see how easy it is to work with data programmatically.
-1. Open a command prompt, create a new folder named git-samples, then close the command prompt.
+1. Run the following command to clone the sample repository to your local machine. This command creates a copy of the sample app on your computer. Start in the root of the folder where you typically store your GitHub repositories.
```bash
- mkdir "./git-samples"
- ```
-
-2. Open a git terminal window, such as git bash, and use the `cd` command to change to a folder to install the sample app.
-
- ```bash
- cd "./git-samples"
+ git clone https://github.com/Azure-Samples/azure-cosmos-db-graph-python-getting-started.git
```
-3. Run the following command to clone the sample repository. This command creates a copy of the sample app on your computer.
+1. Change to the directory where the sample app is located.
```bash
- git clone https://github.com/Azure-Samples/azure-cosmos-db-graph-python-getting-started.git
+ cd azure-cosmos-db-graph-python-getting-started
``` ## Review the code
-This step is optional. If you're interested in learning how the database resources are created in the code, you can review the following snippets. The snippets are all taken from the *connect.py* file in the *C:\git-samples\azure-cosmos-db-graph-python-getting-started\\* folder. Otherwise, you can skip ahead to [Update your connection string](#update-your-connection-information).
+This step is optional. If you're interested in learning how the database resources are created in the code, you can review the following snippets. The snippets are all taken from the *connect.py* file in the [azure-cosmos-db-graph-python-getting-started](https://github.com/Azure-Samples/azure-cosmos-db-graph-python-getting-started) repo. Otherwise, you can skip ahead to [Update your connection string](#update-your-connection-information).
-* The Gremlin `client` is initialized in line 155 in *connect.py*. Make sure to replace `<YOUR_DATABASE>` and `<YOUR_CONTAINER_OR_GRAPH>` with the values of your account's database name and graph name:
+* The Gremlin `client` is initialized in *connect.py* with `client.Client()`. Make sure to replace `<YOUR_DATABASE>` and `<YOUR_CONTAINER_OR_GRAPH>` with the values of your account's database name and graph name:
```python ...
This step is optional. If you're interested in learning how the database resourc
... ```
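If you want to experiment outside the sample, the following is a minimal sketch of the same pattern with the `gremlin_python` driver; the endpoint, database, graph, and password values are placeholders, and the sample's *connect.py* remains the authoritative version.

```python
# Sketch: initialize a Gremlin client for Azure Cosmos DB and submit a query.
# All <...> values are placeholders.
from gremlin_python.driver import client, serializer

gremlin_client = client.Client(
    "wss://<YOUR_ENDPOINT>.gremlin.cosmosdb.azure.com:443/",
    "g",
    username="/dbs/<YOUR_DATABASE>/colls/<YOUR_CONTAINER_OR_GRAPH>",
    password="<YOUR_PASSWORD>",
    message_serializer=serializer.GraphSONSerializersV2d0(),
)

# submitAsync returns a future; resolving it yields a result set you can read from.
callback = gremlin_client.submitAsync("g.V().count()")
print(callback.result().all().result())
gremlin_client.close()
```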
-* A series of Gremlin steps are declared at the beginning of the *connect.py* file. They are then executed using the `client.submitAsync()` method:
+* A series of Gremlin steps (queries) are declared at the beginning of the *connect.py* file. They're then executed using the `client.submitAsync()` method. For example, to run the cleanup graph step, you'd use the following code:
```python client.submitAsync(_gremlin_cleanup_graph)
Now go back to the Azure portal to get your connection information and copy it i
1. In your Azure Cosmos DB account in the [Azure portal](https://portal.azure.com/), select **Keys**.
- Copy the first portion of the URI value.
+ Copy the first portion of the **URI** value.
:::image type="content" source="./media/quickstart-python/keys.png" alt-text="View and copy an access key in the Azure portal, Keys page":::
-2. Open the *connect.py* file and in line 155 paste the URI value over `<YOUR_ENDPOINT>` in here:
+2. Open the *connect.py* file, find the `client.Client()` definition, and paste the URI value over `<YOUR_ENDPOINT>`:
```python client = client.Client('wss://<YOUR_ENDPOINT>.gremlin.cosmosdb.azure.com:443/','g',
Now go back to the Azure portal to get your connection information and copy it i
password="<YOUR_PASSWORD>") ```
-4. On the **Keys** page, use the copy button to copy the PRIMARY KEY and paste it over `<YOUR_PASSWORD>` in the `password=<YOUR_PASSWORD>` parameter.
+4. On the **Keys** page, use the copy button to copy the **PRIMARY KEY** and paste it over `<YOUR_PASSWORD>` in the `password=<YOUR_PASSWORD>` parameter.
- The entire `client` object definition should now look like this code:
+ The `client` object definition should now look similar to the following:
```python client = client.Client('wss://test.gremlin.cosmosdb.azure.com:443/','g', username="/dbs/sample-database/colls/sample-graph",
Now go back to the Azure portal to get your connection information and copy it i
## Run the console app
-1. In the git terminal window, `cd` to the azure-cosmos-db-graph-python-getting-started folder.
+1. Open a terminal window in the root of the folder where you cloned the sample app. If you're using Visual Studio Code, you can open a terminal window by selecting **Terminal** > **New Terminal**. Typically, you create a virtual environment to run the code. For more information, see [Python virtual environments](https://docs.python.org/3/tutorial/venv.html).
- ```git
- cd "./git-samples\azure-cosmos-db-graph-python-getting-started"
+ ```bash
+ cd azure-cosmos-db-graph-python-getting-started
```
-2. In the git terminal window, use the following command to install the required Python packages.
+1. Install the required Python packages.
``` pip install -r requirements.txt ```
-3. In the git terminal window, use the following command to start the Python application.
+1. Start the Python application.
``` python connect.py
Now go back to the Azure portal to get your connection information and copy it i
<a id="add-sample-data"></a> ## Review and add sample data
-After the vertices and edges are inserted, you can now go back to Data Explorer and see the vertices added to the graph, and add additional data points.
+After the vertices and edges are inserted, you can now go back to Data Explorer and see the vertices added to the graph, and add more data points.
-1. In your Azure Cosmos DB account in the Azure portal, select **Data Explorer**, expand **sample-graph**, select **Graph**, and then select **Apply Filter**.
+1. In your Azure Cosmos DB account in the Azure portal, select **Data Explorer**, expand **sample-database**, expand **sample-graph**, select **Graph**, and then select **Execute Gremlin Query**.
- :::image type="content" source="./media/quickstart-python/azure-cosmosdb-data-explorer-expanded.png" alt-text="Screenshot shows Graph selected from the A P I with the option to Apply Filter.":::
+ :::image type="content" source="./media/quickstart-python/azure-cosmosdb-data-explorer-expanded.png" alt-text="Screenshot shows Graph selected from the A P I with the option to Execute Gremlin Query.":::
2. In the **Results** list, notice three new users are added to the graph. You can move the vertices around by dragging and dropping, zoom in and out by scrolling the wheel of your mouse, and expand the size of the graph with the double-arrow.
After the vertices and edges are inserted, you can now go back to Data Explorer
4. Enter a label of *person*.
-5. Select **Add property** to add each of the following properties. Notice that you can create unique properties for each person in your graph. Only the id key is required.
+5. Select **Add property** to add each of the following properties. Notice that you can create unique properties for each person in your graph. Only the ID key is required.
key|value|Notes -|-|- pk|/pk|
- id|ashley|The unique identifier for the vertex. If you don't specify an id, one is generated for you.
+ id|ashley|The unique identifier for the vertex. If you don't specify an ID, one is generated for you.
gender|female| tech | java |
After the vertices and edges are inserted, you can now go back to Data Explorer
6. Select **OK**. You may need to expand your screen to see **OK** on the bottom of the screen.
-7. Select **New Vertex** again and add an additional new user.
+7. Select **New Vertex** again and add another new user.
8. Enter a label of *person*.
After the vertices and edges are inserted, you can now go back to Data Explorer
key|value|Notes -|-|- pk|/pk|
- id|rakesh|The unique identifier for the vertex. If you don't specify an id, one is generated for you.
+ id|rakesh|The unique identifier for the vertex. If you don't specify an ID, one is generated for you.
gender|male| school|MIT| 10. Select **OK**.
-11. Select the **Apply Filter** button with the default `g.V()` filter to display all the values in the graph. All of the users now show in the **Results** list.
+11. Select the **Execute Gremlin Query** button with the default `g.V()` filter to display all the values in the graph. All of the users now show in the **Results** list.
- As you add more data, you can use filters to limit your results. By default, Data Explorer uses `g.V()` to retrieve all vertices in a graph. You can change it to a different [graph query](tutorial-query.md), such as `g.V().count()`, to return a count of all the vertices in the graph in JSON format. If you changed the filter, change the filter back to `g.V()` and select **Apply Filter** to display all the results again.
+ As you add more data, you can use filters to limit your results. By default, Data Explorer uses `g.V()` to retrieve all vertices in a graph. You can change it to a different [graph query](tutorial-query.md), such as `g.V().count()`, to return a count of all the vertices in the graph in JSON format. If you changed the filter, change the filter back to `g.V()` and select **Execute Gremlin Query** to display all the results again.
-12. Now we can connect rakesh and ashley. Ensure **ashley** is selected in the **Results** list, then select the edit button next to **Targets** on lower right side. You may need to widen your window to see the **Properties** area.
+12. Now we can connect **rakesh** and **ashley**. Ensure **ashley** is selected in the **Results** list, then select the edit button next to **Targets** on lower right side. You may need to widen your window to see the **Properties** area.
:::image type="content" source="./media/quickstart-python/azure-cosmosdb-data-explorer-edit-target.png" alt-text="Change the target of a vertex in a graph":::
cosmos-db Benchmarking Framework https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/benchmarking-framework.md
+
+ Title: Measure performance with a benchmarking framework
+
+description: Use YCSB to benchmark Azure Cosmos DB for NoSQL with recipes to measure read, write, scan, and update performance.
++++++ Last updated : 01/31/2023+++
+# Measure Azure Cosmos DB for NoSQL performance with a benchmarking framework
+
+There are more choices now than ever for the type of database to use with your data workload. One of the key factors in picking a database is the performance of the database or service, but benchmarking performance can be cumbersome and error-prone. The [benchmarking framework for Azure Databases](https://github.com/Azure/azure-db-benchmarking) simplifies the process of measuring performance with popular open-source benchmarking tools and low-friction recipes that implement common best practices. In Azure Cosmos DB for NoSQL, the framework implements [best practices for the Java SDK](performance-tips-java-sdk-v4.md) and uses the open-source [YCSB](https://ycsb.site) tool. In this guide, you use this benchmarking framework to implement a read workload to familiarize yourself with the framework.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
+- Azure Cosmos DB for NoSQL account. [Create an API for NoSQL account](how-to-create-account.md).
+ - Make sure you note the endpoint URI and primary key for the account. [API for NoSQL primary keys](../database-security.md?tabs=sql-api#primary-keys).
+- Azure Storage account. [Create an Azure Storage account](../../storage/common/storage-account-create.md).
+ - Make sure you note the connection string for the storage account. [View Azure Storage connection string](../../storage/common/storage-account-keys-manage.md?tabs=azure-portal#view-account-access-keys).
+- Second empty resource group. [Create a resource group](../../azure-resource-manager/management/manage-resource-groups-portal.md#create-resource-groups).
+- [Azure Command-Line Interface (CLI)](/cli/azure/).
+
+## Create Azure Cosmos DB account resources
+
+First, you create a database and container in the existing API for NoSQL account.
+
+### [Azure portal](#tab/azure-portal)
+
+1. Navigate to your existing API for NoSQL account in the [Azure portal](https://portal.azure.com/).
+
+1. In the resource menu, select **Data Explorer**.
+
+ :::image type="content" source="media/benchmarking-framework/resource-menu-data-explorer.png" lightbox="media/benchmarking-framework/resource-menu-data-explorer.png" alt-text="Screenshot of the Data Explorer option highlighted in the resource menu.":::
+
+1. On the **Data Explorer** page, select the **New Container** option in the command bar.
+
+ :::image type="content" source="media/benchmarking-framework/page-data-explorer-new-container.png" alt-text="Screenshot of the New Container option in the Data Explorer command bar.":::
+
+1. In the **New Container** dialog, create a new container with the following settings:
+
+ | Setting | Value |
+ | | |
+ | **Database id** | `ycsb` |
+ | **Database throughput type** | **Manual** |
+ | **Database throughput amount** | `400` |
+ | **Container id** | `usertable` |
+ | **Partition key** | `/id` |
+
+ :::image type="content" source="media/benchmarking-framework/dialog-new-container.png" alt-text="Screenshot of the New Container dialog on the Data Explorer page.":::
+
+### [Azure CLI](#tab/azure-cli)
+
+1. If you haven't already, sign in to the Azure CLI using the [`az login`](/cli/azure/reference-index#az-login) command.
+
+1. Create shell variables for the following values:
+
+ - Name of your existing Azure Cosmos DB for NoSQL account named `cosmosAccountName`.
+ - Name of your first resource group with resources named `sourceResourceGroupName`.
+ - Name of your second empty resource group named `targetResourceGroupName`.
+ - Existing Azure Cosmos DB for NoSQL account endpoint URI named `cosmosEndpoint`
+ - Existing Azure Cosmos DB for NoSQL account primary key named `cosmosPrimaryKey`
+
+ ```azurecli-interactive
+ # Variable for Azure Cosmos DB for NoSQL account name
+ cosmosAccountName="<cosmos-db-nosql-account-name>"
+
+ # Variable for resource group with Azure Cosmos DB and Azure Storage accounts
+ sourceResourceGroupName="<first-resource-group-name>"
+
+ # Variable for empty resource group
+ targetResourceGroupName="<second-resource-group-name>"
+
+ # Variable for API for NoSQL endpoint URI
+ cosmosEndpoint="<cosmos-db-nosql-endpoint-uri>"
+
+ # Variable for API for NoSQL primary key
+ cosmosPrimaryKey="<cosmos-db-nosql-primary-key>"
+
+ # Variable for Azure Storage account name
+ storageAccountName="<storage-account-name>"
+
+ # Variable for storage account connection string
+ storageConnectionString="<storage-connection-string>"
+ ```
+
+1. Using the [`az cosmosdb sql database create`](/cli/azure/cosmosdb/sql/database#az-cosmosdb-sql-database-create) command, create a new database with the following settings:
+
+ | Setting | Value |
+ | | |
+ | **Database id** | `ycsb` |
+ | **Database throughput type** | **Manual** |
+ | **Database throughput amount** | `400` |
+
+ ```azurecli-interactive
+ az cosmosdb sql database create \
+ --resource-group $sourceResourceGroupName \
+ --account-name $cosmosAccountName \
+ --name "ycsb" \
+ --throughput 400
+ ```
+
+1. Using the [`az cosmosdb sql container create`](/cli/azure/cosmosdb/sql/container#az-cosmosdb-sql-container-create) command, create a new container with the following settings:
+
+ | Setting | Value |
+ | | |
+ | **Database id** | `ycsb` |
+ | **Container id** | `usertable` |
+ | **Partition key** | `/id` |
+
+ ```azurecli-interactive
+ az cosmosdb sql container create \
+ --resource-group $sourceResourceGroupName \
+ --account-name $cosmosAccountName \
+ --database-name "ycsb" \
+ --name "usertable" \
+ --partition-key-path "/id"
+ ```
+++
+## Deploy benchmarking framework to Azure
+
+Now, you use an [Azure Resource Manager template](../../azure-resource-manager/templates/overview.md) to deploy the benchmarking framework to Azure with the default read recipe.
+
+### [Azure portal](#tab/azure-portal)
+
+1. Deploy the benchmarking framework using an Azure Resource Manager template available at this link.
+
+ :::image type="content" source="https://aka.ms/deploytoazurebutton" alt-text="Deploy to Azure button." link="https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-db-benchmarking%2Fmain%2Fcosmos%2Fsql%2Ftools%2Fjava%2Fycsb%2Frecipes%2Fread%2Ftry-it-read%2Fazuredeploy.json":::
+
+1. On the **Custom Deployment** page, enter values for the template parameters, including the results storage connection string, the Azure Cosmos DB endpoint URI, and the Azure Cosmos DB key.
+
+ :::image type="content" source="media/benchmarking-framework/page-custom-deployment.png" lightbox="media/benchmarking-framework/page-custom-deployment.png" alt-text="Screenshot of the Custom Deployment page with parameters values filled out.":::
+
+1. Select **Review + create** and then **Create** to deploy the template.
+
+1. Wait for the deployment to complete.
+
+ > [!TIP]
+ > The deployment can take 5-10 minutes to complete.
+
+### [Azure CLI](#tab/azure-cli)
+
+1. Use [`az deployment group create`](/cli/azure/deployment/group#az-deployment-group-create) to deploy the benchmarking framework using an Azure Resource Manager template.
+
+ ```azurecli-interactive
+ # Variable for raw template JSON on GitHub
+ templateUri="https://raw.githubusercontent.com/Azure/azure-db-benchmarking/main/cosmos/sql/tools/java/ycsb/recipes/read/try-it-read/azuredeploy.json"
+
+ az deployment group create \
+ --resource-group $targetResourceGroupName \
+ --name "benchmarking-framework" \
+ --template-uri $templateUri \
+ --parameters \
+ adminPassword='P@ssw.rd' \
+ resultsStorageConnectionString=$storageConnectionString \
+ cosmosURI=$cosmosEndpoint \
+ cosmosKey=$cosmosPrimaryKey
+ ```
+
+1. Wait for the deployment to complete.
+
+ > [!TIP]
+ > The deployment can take 5-10 minutes to complete.
+++
+## View results of the benchmark
+
+Now, you can use the existing Azure Storage account to check the status of the benchmark job and view the aggregated results. The status is stored using a storage table and the results are aggregated into a storage blob using the CSV format.
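If you prefer to script these checks, the following Python sketch (using the `azure-data-tables` and `azure-storage-blob` packages) shows one way to read the status table and download the aggregated CSV. It's an illustration only, not part of the framework, and the connection string is a placeholder.

```python
# Sketch: check benchmark job status and download aggregated results from the storage account.
from azure.data.tables import TableServiceClient
from azure.storage.blob import BlobServiceClient

conn_str = "<storage-connection-string>"  # placeholder

# 1. Read the job status entity from the ycsbbenchmarkingmetadata table.
tables = TableServiceClient.from_connection_string(conn_str)
table_client = tables.get_table_client("ycsbbenchmarkingmetadata")
for entity in table_client.query_entities("PartitionKey eq 'ycsb_sql'"):
    print(entity.get("JobStatus"), entity.get("JobStartTime"), entity.get("JobFinishTime"))

# 2. Download aggregation.csv from the most recently modified ycsbbenchmarking-* container.
blobs = BlobServiceClient.from_connection_string(conn_str)
containers = list(blobs.list_containers(name_starts_with="ycsbbenchmarking-"))
latest = max(containers, key=lambda c: c.last_modified)
csv_bytes = blobs.get_container_client(latest.name).download_blob("aggregation.csv").readall()
print(csv_bytes.decode("utf-8"))
```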
+
+### [Azure portal](#tab/azure-portal)
+
+1. Navigate to your existing Azure Storage account in the [Azure portal](https://portal.azure.com/).
+
+1. Navigate to a storage table named **ycsbbenchmarkingmetadata** and locate the entity with a partition key of `ycsb_sql`.
+
+ :::image type="content" source="media/benchmarking-framework/storage-table-data.png" alt-text="Screenshot of the ycsbbenchmarkingMetadata table in a storage account.":::
+
+1. Observe the `JobStatus` field of the table entity. Initially, the status of the job is `Started` and it includes a timestamp in the `JobStartTime` property but not the `JobFinishTime` property.
+
+1. Wait until the job has a status of `Finished` and includes a timestamp in the `JobFinishTime` property.
+
+ > [!TIP]
+ > It can take approximately 20-30 minutes for the job to finish.
+
+1. Navigate to the storage container in the same account with a prefix of **ycsbbenchmarking-***. Observe the output and diagnostic blobs for the tool.
+
+ :::image type="content" source="media/benchmarking-framework/storage-blob-output.png" alt-text="Screenshot of the container and output blobs from the benchmarking tool.":::
+
+1. Open the **aggregation.csv** blob and observe the content. You should now have a CSV dataset with aggregated results from all the benchmark clients.
+
+ :::image type="content" source="media/benchmarking-framework/storage-blob-aggregation-results.png" alt-text="Screenshot of the content of the aggregation results blob.":::
+
+ ```output
+ Operation,Count,Throughput,Min(microsecond),Max(microsecond),Avg(microsecond),P9S(microsecond),P99(microsecond)
+ READ,180000,299,706,448255,1079,1159,2867
+ ```
+
+### [Azure CLI](#tab/azure-cli)
+
+1. Query the job record in a storage table named `ycsbbenchmarkingmetadata` using [`az storage entity query`](/cli/azure/storage/entity#az-storage-entity-query).
+
+ ```azurecli-interactive
+ az storage entity query \
+ --account-name $storageAccountName \
+ --connection-string $storageConnectionString \
+ --table-name ycsbbenchmarkingmetadata
+ ```
+
+1. Observe the results of this query. The results should return a single job with `JobStartTime`, `JobStatus`, and `JobFinishTime` properties. Initially, the status of the job is `Started` and it includes a timestamp in the `JobStartTime` property but not the `JobFinishTime` property.
+
+ ```output
+ {
+ "items": [
+ {
+ "JobFinishTime": "",
+ "JobStartTime": "2023-02-02T13:59:42Z",
+ "JobStatus": "Started",
+ "NoOfClientsCompleted": "0",
+ "NoOfClientsStarted": {
+ "edm_type": "Edm.Int64",
+ "value": 1
+ },
+ "PartitionKey": "ycsb_sql",
+ ...
+ }
+ ],
+ ...
+ }
+ ```
+
+1. If necessary, run `az storage entity query` multiple times until the job has a status of `Finished` and includes a timestamp in the `JobFinishTime` property.
+
+ ```output
+ {
+ "items": [
+ {
+ "JobFinishTime": "2023-02-02T14:21:12Z",
+ "JobStartTime": "2023-02-02T13:59:42Z",
+ "JobStatus": "Finished",
+ ...
+ }
+ ],
+ ...
+ }
+ ```
+
+ > [!TIP]
+ > It can take approximately 20-30 minutes for the job to finish.
+
+1. Find the name of the most recently modified storage container with a prefix of `ycsbbenchmarking-*` using [`az storage container list`](/cli/azure/storage/container#az-storage-container-list) and a [JMESPath query](/cli/azure/query-azure-cli).
+
+ ```azurecli-interactive
+ az storage container list \
+ --account-name $storageAccountName \
+ --connection-string $storageConnectionString \
+ --query "sort_by([?starts_with(name, 'ycsbbenchmarking-')], &properties.lastModified)[-1].name" \
+ --output tsv
+ ```
+
+1. Store the container name in a variable named `storageContainerName`.
+
+ ```azurecli-interactive
+ storageContainerName=$( \
+ az storage container list \
+ --account-name $storageAccountName \
+ --connection-string $storageConnectionString \
+ --query "sort_by([?starts_with(name, 'ycsbbenchmarking-')], &properties.lastModified)[-1].name" \
+ --output tsv \
+ )
+ ```
+
+1. Use [`az storage blob query`](/cli/azure/storage/blob#az-storage-blob-query) to query the job results in a storage blob stored in the previously located container.
+
+ ```azurecli-interactive
+ az storage blob query \
+ --account-name $storageAccountName \
+ --connection-string $storageConnectionString \
+ --container-name $storageContainerName \
+ --name aggregation.csv \
+ --query-expression "SELECT * FROM BlobStorage"
+ ```
+
+1. Observe the results of this query. You should now have a CSV dataset with aggregated results from all the benchmark clients.
+
+ ```output
+ Operation,Count,Throughput,Min(microsecond),Max(microsecond),Avg(microsecond),P9S(microsecond),P99(microsecond)
+ READ,180000,299,706,448255,1079,1159,2867
+ ```
+++
+## Recipes
+
+The [benchmarking framework for Azure Databases](https://github.com/Azure/azure-db-benchmarking) includes recipes to encapsulate the workload definitions that are passed to the underlying benchmarking tool for a "1-Click" experience. The workload definitions were designed based on the best practices published by the Azure Cosmos DB team and the benchmarking tool's team. The recipes have been tested and validated for consistent results.
+
+You can expect to see the following latencies for all the read and write recipes in the [GitHub repository](https://github.com/Azure/azure-db-benchmarking/tree/main/cosmos/sql/tools/java/ycsb/recipes).
+
+- **Read latency**
+
+ :::image type="content" source="media\benchmarking-framework\typical-read-latency.png" lightbox="media\benchmarking-framework\typical-read-latency.png" alt-text="Diagram of the typical read latency averaging around 1 millisecond to 2 milliseconds.":::
+
+- **Write latency**
+
+ :::image type="content" source="media\benchmarking-framework\typical-write-latency.png" lightbox="media\benchmarking-framework\typical-write-latency.png" alt-text="Diagram of the typical write latency averaging around 4 milliseconds.":::
+
+## Common issues
+
+This section includes the common errors that may occur when running the benchmarking tool. The error logs for the tool are typically available in a container within the Azure Storage account.
++
+- If the logs aren't available in the storage account, this issue is typically caused by an incorrect or missing storage connection string. In this case, this error is listed in the **agent.out** file within the **/home/benchmarking** folder of the client virtual machine.
+
+ ```output
+ Error while accessing storage account, exiting from this machine in agent.out on the VM
+ ```
+
+- This error is listed in the **agent.out** file both in the client VM and the storage account if the Azure Cosmos DB endpoint URI is incorrect or unreachable.
+
+ ```output
+ Caused by: java.net.UnknownHostException: rtcosmosdbsss.documents.azure.com: Name or service not known
+ ```
+
+- This error is listed in the **agent.out** file both in the client VM and the storage account if the Azure Cosmos DB key is incorrect.
+
+ ```output
+ The input authorization token can't serve the request. The wrong key is being used….
+ ```
+
+## Next steps
+
+- Learn more about the benchmarking tool with the [Getting Started guide](https://github.com/Azure/azure-db-benchmarking/tree/main/cosmos/sql/tools/java/ycsb/recipes).
cosmos-db How To Write Stored Procedures Triggers Udfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-write-stored-procedures-triggers-udfs.md
To call a stored procedure, trigger, and user-defined function, you need to regi
> [!NOTE] > For partitioned containers, when executing a stored procedure, a partition key value must be provided in the request options. Stored procedures are always scoped to a partition key. Items that have a different partition key value will not be visible to the stored procedure. This also applies to triggers.
-> [!Tip]
+
+> [!NOTE]
+> Server-side JavaScript features including stored procedures, triggers, and user-defined functions do not support importing modules.
+
+> [!TIP]
> Azure Cosmos DB supports deploying containers with stored procedures, triggers and user-defined functions. For more information see [Create an Azure Cosmos DB container with server-side functionality.](./manage-with-templates.md#create-sproc) ## <a id="stored-procedures"></a>How to write stored procedures
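To illustrate the partition key note above, here's a hedged sketch of executing a stored procedure from the Azure Cosmos DB Python SDK (`azure-cosmos`); the account values and the `spCreateToDoItem` stored procedure are hypothetical placeholders.

```python
# Sketch: execute a stored procedure scoped to a single partition key value (azure-cosmos SDK).
from azure.cosmos import CosmosClient

client = CosmosClient("https://<your-account>.documents.azure.com:443/", "<your-key>")  # placeholders
container = client.get_database_client("<database>").get_container_client("<container>")

# For partitioned containers, partition_key is required; the stored procedure only
# sees items that share this partition key value.
result = container.scripts.execute_stored_procedure(
    sproc="spCreateToDoItem",  # hypothetical stored procedure id
    partition_key="personal",  # hypothetical partition key value
    params=[{"id": "task1", "category": "personal", "description": "buy groceries"}],
)
print(result)
```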
cosmos-db Performance Testing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/performance-testing.md
- Title: Performance and scale testing with Azure Cosmos DB
-description: Learn how to do scale and performance testing with Azure Cosmos DB. You can then evaluate the functionality of Azure Cosmos DB for high-performance application scenarios.
------ Previously updated : 08/26/2021---
-# Performance and scale testing with Azure Cosmos DB
-
-Performance and scale testing is a key step in application development. For many applications, the database tier has a significant impact on overall performance and scalability. Therefore, it's a critical component of performance testing. [Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/) is purpose-built for elastic scale and predictable performance. These capabilities make it a great fit for applications that need a high-performance database tier.
-
-This article is a reference for developers implementing performance test suites for their Azure Cosmos DB workloads. It also can be used to evaluate Azure Cosmos DB for high-performance application scenarios. It focuses primarily on isolated performance testing of the database, but also includes best practices for production applications.
-
-After reading this article, you'll be able to answer the following questions:
-
-* Where can I find a sample .NET client application for performance testing of Azure Cosmos DB?
-* How do I achieve high throughput levels with Azure Cosmos DB from my client application?
-
-To get started with code, download the project from [Azure Cosmos DB performance testing sample](https://github.com/Azure/azure-cosmos-dotnet-v3/tree/master/Microsoft.Azure.Cosmos.Samples/Tools/Benchmark).
-
-> [!NOTE]
-> The goal of this application is to demonstrate how to get the best performance from Azure Cosmos DB with a small number of client machines. The goal of the sample is not to achieve the peak throughput capacity of Azure Cosmos DB (which can scale without any limits).
-
-If you're looking for client-side configuration options to improve Azure Cosmos DB performance, see [Azure Cosmos DB performance tips](performance-tips.md).
-
-## Run the performance testing application
-The quickest way to get started is to compile and run the .NET sample, as described in the following steps. You can also review the source code and implement similar configurations on your own client applications.
-
-**Step 1:** Download the project from [Azure Cosmos DB performance testing sample](https://github.com/Azure/azure-cosmos-dotnet-v3/tree/master/Microsoft.Azure.Cosmos.Samples/Tools/Benchmark), or fork the GitHub repository.
-
-**Step 2:** Modify the settings for EndpointUrl, AuthorizationKey, CollectionThroughput, and DocumentTemplate (optional) in App.config.
-
-> [!NOTE]
-> Before you provision collections with high throughput, refer to the [Pricing page](https://azure.microsoft.com/pricing/details/cosmos-db/) to estimate the costs per collection. Azure Cosmos DB bills storage and throughput independently on an hourly basis. You can save costs by deleting or lowering the throughput of your Azure Cosmos DB containers after testing.
->
->
-
-**Step 3:** Compile and run the console app from the command line. You should see output like the following:
-
-```bash
-C:\Users\cosmosdb\Desktop\Benchmark>DocumentDBBenchmark.exe
-Summary:
-
-Endpoint: https://arramacquerymetrics.documents.azure.com:443/
-Collection : db.data at 100000 request units per second
-Document Template*: Player.json
-Degree of parallelism*: -1
-
-DocumentDBBenchmark starting...
-Found collection data with 100000 RU/s
-Starting Inserts with 100 tasks
-Inserted 4503 docs @ 4491 writes/s, 47070 RU/s (122B max monthly 1KB reads)
-Inserted 17910 docs @ 8862 writes/s, 92878 RU/s (241B max monthly 1KB reads)
-Inserted 32339 docs @ 10531 writes/s, 110366 RU/s (286B max monthly 1KB reads)
-Inserted 47848 docs @ 11675 writes/s, 122357 RU/s (317B max monthly 1KB reads)
-Inserted 58857 docs @ 11545 writes/s, 120992 RU/s (314B max monthly 1KB reads)
-Inserted 69547 docs @ 11378 writes/s, 119237 RU/s (309B max monthly 1KB reads)
-Inserted 80687 docs @ 11345 writes/s, 118896 RU/s (308B max monthly 1KB reads)
-Inserted 91455 docs @ 11272 writes/s, 118131 RU/s (306B max monthly 1KB reads)
-Inserted 102129 docs @ 11208 writes/s, 117461 RU/s (304B max monthly 1KB reads)
-Inserted 112444 docs @ 11120 writes/s, 116538 RU/s (302B max monthly 1KB reads)
-Inserted 122927 docs @ 11063 writes/s, 115936 RU/s (301B max monthly 1KB reads)
-Inserted 133157 docs @ 10993 writes/s, 115208 RU/s (299B max monthly 1KB reads)
-Inserted 144078 docs @ 10988 writes/s, 115159 RU/s (298B max monthly 1KB reads)
-Inserted 155415 docs @ 11013 writes/s, 115415 RU/s (299B max monthly 1KB reads)
-Inserted 166126 docs @ 10992 writes/s, 115198 RU/s (299B max monthly 1KB reads)
-Inserted 173051 docs @ 10739 writes/s, 112544 RU/s (292B max monthly 1KB reads)
-Inserted 180169 docs @ 10527 writes/s, 110324 RU/s (286B max monthly 1KB reads)
-Inserted 192469 docs @ 10616 writes/s, 111256 RU/s (288B max monthly 1KB reads)
-Inserted 199107 docs @ 10406 writes/s, 109054 RU/s (283B max monthly 1KB reads)
-Inserted 200000 docs @ 9930 writes/s, 104065 RU/s (270B max monthly 1KB reads)
-
-Summary:
-
-Inserted 200000 docs @ 9928 writes/s, 104063 RU/s (270B max monthly 1KB reads)
-
-DocumentDBBenchmark completed successfully.
-Press any key to exit...
-```
-
-**Step 4 (if necessary):** The throughput reported (RU/s) from the tool should be the same or higher than the provisioned throughput of the collection or a set of collections. If it's not, increasing the DegreeOfParallelism in small increments might help you reach the limit. If the throughput from your client app plateaus, start multiple instances of the app on additional client machines. If you need help with this step file a support ticket from the [Azure portal](https://portal.azure.com).
-
-After you have the app running, you can try different [indexing policies](../index-policy.md) and [consistency levels](../consistency-levels.md) to understand their impact on throughput and latency. You can also review the source code and implement similar configurations to your own test suites or production applications.
-
-## Next steps
-
-In this article, we looked at how you can perform performance and scale testing with Azure Cosmos DB by using a .NET console app. For more information, see the following articles:
-
-* [Azure Cosmos DB performance testing sample](https://github.com/Azure/azure-cosmos-dotnet-v3/tree/master/Microsoft.Azure.Cosmos.Samples/Tools/Benchmark)
-* [Client configuration options to improve Azure Cosmos DB performance](performance-tips.md)
-* [Server-side partitioning in Azure Cosmos DB](../partitioning-overview.md)
-* Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
- * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
- * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
-
cosmos-db Udfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/udfs.md
However, we recommending avoiding UDFs when:
If you must use the same UDF multiple times in a query, you should reference the UDF in a [subquery](subquery.md#evaluate-once-and-reference-many-times), allowing you to use a JOIN expression to evaluate the UDF once but reference it many times.
+> [!NOTE]
+> Server-side JavaScript features including user-defined functions do not support importing modules.
+ ## Examples The following example registers a UDF under an item container in the Azure Cosmos DB database. The example creates a UDF whose name is `REGEX_MATCH`. It accepts two JSON string values, `input` and `pattern`, and checks if the first matches the pattern specified in the second using JavaScript's `string.match()` function.
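As a rough client-side illustration (not the article's JavaScript listing), the following Python sketch registers a similar `REGEX_MATCH` UDF with the `azure-cosmos` SDK and calls it in a query; the account, database, and container values are placeholders.

```python
# Sketch: register a REGEX_MATCH-style UDF and reference it in a query (azure-cosmos SDK).
from azure.cosmos import CosmosClient

client = CosmosClient("https://<your-account>.documents.azure.com:443/", "<your-key>")  # placeholders
container = client.get_database_client("<database>").get_container_client("<container>")

# The UDF body is self-contained JavaScript; server-side code can't import modules.
udf_body = "function regexMatch(input, pattern) { return input.match(pattern) !== null; }"
container.scripts.create_user_defined_function({"id": "REGEX_MATCH", "body": udf_body})

# UDFs are referenced in queries with the udf. prefix.
results = container.query_items(
    query="SELECT c.id, udf.REGEX_MATCH(c.description, '.*grocer.*') AS isMatch FROM c",
    enable_cross_partition_query=True,
)
for item in results:
    print(item)
```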
cosmos-db Stored Procedures Triggers Udfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/stored-procedures-triggers-udfs.md
Writing stored procedures, triggers, and user-defined functions (UDFs) in JavaSc
> [!TIP] > Stored procedures are best suited for operations that are write-heavy and require a transaction across a partition key value. When deciding whether to use stored procedures, optimize around encapsulating the maximum amount of writes possible. Generally speaking, stored procedures are not the most efficient means for doing large numbers of read or query operations, so using stored procedures to batch large numbers of reads to return to the client will not yield the desired benefit. For best performance, these read-heavy operations should be done on the client-side, using the Azure Cosmos DB SDK.
+> [!NOTE]
+> Server-side JavaScript features including stored procedures, triggers, and user-defined functions do not support importing modules.
+ ## Transactions Transaction in a typical database can be defined as a sequence of operations performed as a single logical unit of work. Each transaction provides **ACID property guarantees**. ACID is a well-known acronym that stands for: **A**tomicity, **C**onsistency, **I**solation, and **D**urability.
Transaction in a typical database can be defined as a sequence of operations per
In Azure Cosmos DB, JavaScript runtime is hosted inside the database engine. Hence, requests made within the stored procedures and the triggers execute in the same scope as the database session. This feature enables Azure Cosmos DB to guarantee ACID properties for all operations that are part of a stored procedure or a trigger. For examples, see [how to implement transactions](how-to-write-stored-procedures-triggers-udfs.md#transactions) article.
+> [!TIP]
+> For transaction support in Azure Cosmos DB for NoSQL, you can also implement a transactional batch using your preferred client SDK. For more information, see [Transactional batch operations in Azure Cosmos DB for NoSQL](transactional-batch.md).
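For reference, a transactional batch from the Python SDK might look like the following sketch; it assumes a recent `azure-cosmos` version that exposes `execute_item_batch`, and the account and item values are placeholders. All operations in a batch must target the same partition key value.

```python
# Sketch: transactional batch within one logical partition (azure-cosmos SDK, recent versions).
from azure.cosmos import CosmosClient

client = CosmosClient("https://<your-account>.documents.azure.com:443/", "<your-key>")  # placeholders
container = client.get_database_client("<database>").get_container_client("<container>")

batch = [
    ("create", ({"id": "item1", "category": "personal", "description": "buy groceries"},)),
    ("upsert", ({"id": "item2", "category": "personal", "description": "do laundry"},)),
]

# The operations succeed or fail together, like a stored procedure transaction.
results = container.execute_item_batch(batch_operations=batch, partition_key="personal")
print(results)
```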
+ ### Scope of a transaction Stored procedures are associated with an Azure Cosmos DB container and stored procedure execution is scoped to a logical partition key. Stored procedures must include a logical partition key value during execution that defines the logical partition for the scope of the transaction. For more information, see [Azure Cosmos DB partitioning](../partitioning-overview.md) article.
cosmos-db Troubleshoot Dotnet Sdk Request Timeout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/troubleshoot-dotnet-sdk-request-timeout.md
description: Learn how to diagnose and fix .NET SDK request timeout exceptions.
Previously updated : 09/16/2022 Last updated : 02/15/2023
When evaluating the case for timeout errors:
The SDK has two distinct alternatives to control timeouts, each with a different scope.
-### RequestTimeout
+### Request level timeouts
-The `CosmosClientOptions.RequestTimeout` (or `ConnectionPolicy.RequestTimeout` for SDK v2) configuration allows you to set a timeout that affects each individual network request. An operation started by a user can span multiple network requests (for example, there could be throttling). This configuration would apply for each network request on the retry. This timeout isn't an end-to-end operation request timeout.
+The `CosmosClientOptions.RequestTimeout` (or `ConnectionPolicy.RequestTimeout` for SDK v2) configuration allows you to set a timeout that applies to each network request, from the time the request leaves the SDK until a response is received.
+
+The `CosmosClientOptions.OpenTcpConnectionTimeout` (or `ConnectionPolicy.OpenTcpConnectionTimeout` for SDK v2) configuration allows you to set a timeout for the time spent opening an initial connection. Once a connection is opened, subsequent requests will use the connection.
+
+An operation started by a user can span multiple network requests, for example, retries. These two configurations are per-request, not end-to-end for an operation.
### CancellationToken
-All the async operations in the SDK have an optional CancellationToken parameter. This [CancellationToken](/dotnet/standard/threading/how-to-listen-for-cancellation-requests-by-polling) parameter is used throughout the entire operation, across all network requests. In between network requests, the cancellation token might be checked and an operation canceled if the related token is expired. The cancellation token should be used to define an approximate expected timeout on the operation scope.
+All the async operations in the SDK have an optional CancellationToken parameter. This [CancellationToken](/dotnet/standard/threading/how-to-listen-for-cancellation-requests-by-polling) parameter is used throughout the entire operation, across all network requests and retries. In between network requests, the cancellation token might be checked and an operation canceled if the related token is expired. The cancellation token should be used to define an approximate expected timeout on the operation scope.
> [!NOTE] > The `CancellationToken` parameter is a mechanism where the library will check the cancellation when it [won't cause an invalid state](https://devblogs.microsoft.com/premier-developer/recommended-patterns-for-cancellationtoken/). The operation might not cancel exactly when the time defined in the cancellation is up. Instead, after the time is up, it cancels when it's safe to do so.
These exceptions are safe to retry on and can be treated as [timeouts](conceptua
#### Solution
-Verify the configured time in your `CancellationToken`, make sure that it's greater than your [RequestTimeout](#requesttimeout) and the [CosmosClientOptions.OpenTcpConnectionTimeout](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.opentcpconnectiontimeout) (if you're using [Direct mode](sdk-connection-modes.md)).
+Verify the configured time in your `CancellationToken`, make sure that it's greater than your [RequestTimeout](#request-level-timeouts) and the [CosmosClientOptions.OpenTcpConnectionTimeout](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.opentcpconnectiontimeout) (if you're using [Direct mode](sdk-connection-modes.md)).
If the available time in the `CancellationToken` is less than the configured timeouts, and the SDK is facing [transient connectivity issues](conceptual-resilient-sdk-applications.md#timeouts-and-connectivity-related-failures-http-408503), the SDK won't be able to retry and will throw `CosmosOperationCanceledException`. ### High CPU utilization
cost-management-billing Create Enterprise Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/create-enterprise-subscription.md
Previously updated : 05/25/2022 Last updated : 02/14/2023
Use the following information to create an EA subscription.
After the new subscription is created, the account owner can see it in on the **Subscriptions** page.
+## Can't view subscription
+
+If you created a subscription but can't find it in the Subscriptions list view, a view filter might be applied.
+
+To clear the filter and view all subscriptions:
+
+1. In the Azure portal, navigate to **Subscriptions**.
+2. At the top of the list, select the Subscriptions filter item.
+3. At the top of the subscriptions filter box, select **All**. At the bottom of the subscriptions filter box, clear **Show only subscriptions selected in the global subscriptions filter**.
+ :::image type="content" source="./media/create-enterprise-subscription/subscriptions-filter-item.png" alt-text="Screenshot showing the Subscriptions filter box with options." lightbox="./media/create-enterprise-subscription/subscriptions-filter-item.png" :::
+4. Select **Apply** to close the box and refresh the list of subscriptions.
## Create an Azure subscription programmatically
cost-management-billing Create Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/create-subscription.md
Previously updated : 05/25/2022 Last updated : 02/14/2023
Use the following procedure to create a subscription for yourself or for someone
After the new subscription is created, the owner of the subscription can see it in on the **Subscriptions** page.
+## Can't view subscription
+
+If you created a subscription but can't find it in the Subscriptions list view, a view filter might be applied.
+
+To clear the filter and view all subscriptions:
+
+1. In the Azure portal, navigate to **Subscriptions**.
+2. At the top of the list, select the Subscriptions filter item.
+3. At the top of the subscriptions filter box, select **All**. At the bottom of the subscriptions filter box, clear **Show only subscriptions selected in the global subscriptions filter**.
+ :::image type="content" source="./media/create-subscription/subscriptions-filter-item.png" alt-text="Screenshot showing the Subscriptions filter box with options." lightbox="./media/create-subscription/subscriptions-filter-item.png" :::
+4. Select **Apply** to close the box and refresh the list of subscriptions.
+ ## Create an Azure subscription programmatically You can also create subscriptions programmatically. For more information, see [Create Azure subscriptions programmatically](programmatically-create-subscription.md).
cost-management-billing Charge Back Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/charge-back-usage.md
Enterprise Agreement and Microsoft Customer Agreement billing readers can view amortized cost data for reservations. They can use the cost data to charge back the monetary value for a subscription, resource group, resource, or a tag to their partners. In amortized data, the effective price is the prorated hourly reservation cost. The cost is the total cost of reservation usage by the resource on that day.
-Users with an individual subscription can get the amortized cost data from their usage file. When a resource gets a reservation discount, the *AdditionalInfo* section in the usage file contains the reservation details. For more information, see [Download usage from the Azure portal](../understand/download-azure-daily-usage.md#download-usage-from-the-azure-portal-csv).
+Users with an individual subscription can get the amortized cost data from their usage file. When a resource gets a reservation discount, the *AdditionalInfo* section in the usage file contains the reservation details. For more information, see [View and download your Azure usage and charges](../understand/download-azure-daily-usage.md).
## See reservation usage data for show back and charge back
cost-management-billing Charge Back Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/charge-back-costs.md
Enterprise Agreement and Microsoft Customer Agreement billing readers can view amortized cost data for savings plans. They can use the cost data to charge back the monetary value for a subscription, resource group, resource, or a tag to their partners. In amortized data, the effective price is the prorated hourly savings plan cost. The cost is the total cost of savings plan usage by the resource on that day.
-Users with an individual subscription can get the amortized cost data from their usage file. When a resource gets a savings plan discount, the _AdditionalInfo_ section in the usage file contains the savings plan details. For more information, see [Download usage from the Azure portal](../understand/download-azure-daily-usage.md#download-usage-from-the-azure-portal-csv).
+Users with an individual subscription can get the amortized cost data from their usage file. When a resource gets a savings plan discount, the _AdditionalInfo_ section in the usage file contains the savings plan details. For more information, see [View and download your Azure usage and charges](../understand/download-azure-daily-usage.md).
## View savings plan usage data for show back and charge back
cost-management-billing Download Azure Daily Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/download-azure-daily-usage.md
Previously updated : 12/16/2022 Last updated : 02/15/2023 # View and download your Azure usage and charges
Based on the type of subscription that you use, options to download your usage a
If you want to get cost and usage data using the Azure CLI, see [Get usage data with the Azure CLI](../automate/get-usage-data-azure-cli.md).
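As a quick illustration, the following hedged sketch pulls usage records with the Azure CLI consumption commands; the dates are placeholders, and exact parameter support can vary by CLI version and subscription type.

```bash
# Sketch only: list usage records for a date range in the current subscription (dates are placeholders).
az consumption usage list \
    --start-date 2023-02-01 \
    --end-date 2023-02-15 \
    --output table
```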
-## Download usage from the Azure portal (.csv)
+## Download usage for MOSP billing accounts
1. Sign in to the [Azure portal](https://portal.azure.com). 1. Search for *Cost Management + Billing*.
To view and download usage data as an EA customer, you must be an Enterprise Admi
To view and download usage data for a billing profile, you must be a billing profile Owner, Contributor, Reader, or Invoice manager.
-### Download usage for billed charges
+Use the following information to download usage for billed charges. The same steps are used to download open and pending charges, which is the month-to-date usage for the current billing period. Open and pending charges haven't been billed yet.
-1. Search for **Cost Management + Billing**.
-2. Select a billing profile.
-3. Select **Invoices**.
-4. In the invoice grid, find the row of the invoice corresponding to the usage you want to download.
-5. Select the ellipsis (`...`) at the end of the row.
-6. In the download context menu, select **Azure usage and charges**.
-
-### Download usage for open charges
-
-You can also download month-to-date usage for the current billing period, meaning the charges haven't been billed yet.
-
-1. Search for **Cost Management + Billing**.
-2. Select a billing profile.
-3. In the **Overview** area, select **Download Azure usage and charges**.
-
-### Download usage for pending charges
-
-If you have a Microsoft Customer Agreement, you can download month-to-date usage for the current billing period. These usage charges that haven't been billed yet.
+### Download usage file
1. Sign in to the [Azure portal](https://portal.azure.com).
-2. Search for *Cost Management + Billing*.
-3. Select a billing profile. Depending on your access, you might need to select a billing account first.
-4. In the **Overview** area, find the download links beneath the recent charges.
-5. Select **Download usage and prices**.
+1. Search for *Cost Management + Billing*.
+1. Select a billing profile. Depending on your access, you might need to select a billing account first.
+1. In the left menu, select **Invoices**.
+1. In the invoice grid, find the row of the invoice corresponding to the usage file that you want to download.
+1. Select the ellipsis symbol (`...`) at the end of the row.
+1. In the context menu, select **Prepare Azure usage file**. A notification message appears stating that the usage file is being prepared.
+1. When the file is ready to download, select the **Click here to download** link in the notification. If you missed the notification, you can view it from the **Notifications** area in the top right of the Azure portal (the bell symbol).
## Get usage data with Azure CLI
cost-management-billing Pay Bill https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/pay-bill.md
tags: billing, past due, pay now, bill, invoice, pay
Previously updated : 02/08/2023 Last updated : 02/10/2023 # Pay your Microsoft Customer Agreement Azure or Microsoft Online Subscription Program Azure bill
-This article applies to customers with a Microsoft Customer Agreement (MCA) and to customers who signed up for Azure through the Azure website (for a Microsoft Online Services Program account also called pay-as-you-go account).
+This article applies to customers with a Microsoft Customer Agreement (MCA) and to customers who signed up for Azure through the Azure website, Azure.com (for a Microsoft Online Services Program account also called pay-as-you-go account).
[Check your access to a Microsoft Customer Agreement](#check-access-to-a-microsoft-customer-agreement).
If you have Azure credits, they automatically apply to your invoice each billing
> [!NOTE] > Regardless of the payment method selected to complete your payment, you must specify the invoice number in the payment details.
+Here's a table that summarizes payment methods for different agreement types:
+
+|Agreement type| Credit card | Wire transfer¹ | Check² |
+| -- | -- | -- | -- |
+| Microsoft Customer Agreement<br>purchased through a Microsoft representative | ✔ (with a $50,000.00 USD limit) | ✔ | ✔ |
+| Enterprise Agreement | ✘ | ✔ | ✔ |
+| Azure.com | ✔ | ✔ if approved to pay by invoice | ✘ |
+
+¹ If supported by your bank, an ACH credit transaction can be made automatically.
+
+² As noted previously, on April 1, 2023, Microsoft will stop accepting checks as a payment method for subscriptions that are paid by invoice.
+ ## Reserve Bank of India **The Reserve Bank of India has issued new directives.**
data-factory Concept Managed Airflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concept-managed-airflow.md
Managed Airflow in Azure Data Factory is a managed orchestration service for [
## When to use Managed Airflow?
-Azure Data Factory offers [Pipelines](concepts-pipelines-activities.md) to visually orchestrate data processes (UI-based authoring). While Managed Airflow, offers Airflow based python DAGs (python code-centric authoring) for defining the data orchestration process. If you have the Airflow background, or are currently using Apace Airflow, you may prefer to use the Managed Airflow instead of the pipelines. On the contrary, if you wouldn't like to write/ manage python-based DAGs for data process orchestration, you may prefer to use pipelines.
+Azure Data Factory offers [Pipelines](concepts-pipelines-activities.md) to visually orchestrate data processes (UI-based authoring), while Managed Airflow offers Airflow-based Python DAGs (Python code-centric authoring) for defining the data orchestration process. If you have an Airflow background or are currently using Apache Airflow, you may prefer to use Managed Airflow instead of pipelines. Conversely, if you'd rather not write or manage Python-based DAGs for data process orchestration, you may prefer to use pipelines.
With Managed Airflow, Azure Data Factory now offers multi-orchestration capabilities spanning across visual, code-centric, OSS orchestration requirements.
data-factory Connector Mongodb Atlas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-mongodb-atlas.md
Previously updated : 01/18/2023 Last updated : 01/28/2023 # Copy data from or to MongoDB Atlas using Azure Data Factory or Synapse Analytics
Specifically, this MongoDB Atlas connector supports **versions up to 4.2**.
## Prerequisites
-If you use Azure Integration Runtime for copy, make sure you add the effective region's [Azure Integration Runtime IPs](azure-integration-runtime-ip-addresses.md) to the MongoDB Atlas IP Access List.
## Getting started
defender-for-cloud Custom Dashboards Azure Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/custom-dashboards-azure-workbooks.md
Title: Workbooks gallery in Microsoft Defender for Cloud
description: Learn how to create rich, interactive reports of your Microsoft Defender for Cloud data with the integrated Azure Monitor Workbooks gallery -- Previously updated : 11/30/2022 Last updated : 02/02/2023 # Create rich, interactive reports of Defender for Cloud data
Within Microsoft Defender for Cloud, you can access the built-in workbooks to tr
:::image type="content" source="media/custom-dashboards-azure-workbooks/secure-score-over-time-snip.png" alt-text="Secure score over time workbook.":::
-## Availability
+For pricing, check out the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
-| Aspect | Details |
-||:|
-| Release state: | General availability (GA) |
-| Pricing: | Free |
-| Required roles and permissions: | To save workbooks, you must have at least [Workbook Contributor](../role-based-access-control/built-in-roles.md#workbook-contributor) permissions on the target resource group |
-| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: National (Azure Government, Azure China 21Vianet) |
+## Prerequisites
+
+**Required roles and permissions**: To save workbooks, you must have at least [Workbook Contributor](../role-based-access-control/built-in-roles.md#workbook-contributor) permissions on the target resource group
+
+**Cloud availability**: :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds :::image type="icon" source="./media/icons/yes-icon.png"::: National (Azure Government, Azure China 21Vianet)
## Workbooks gallery in Microsoft Defender for Cloud
With the integrated Azure Workbooks functionality, Microsoft Defender for Cloud
- ['Vulnerability Assessment Findings' workbook](#use-the-vulnerability-assessment-findings-workbook) - View the findings of vulnerability scans of your Azure resources - ['Compliance Over Time' workbook](#use-the-compliance-over-time-workbook) - View the status of a subscription's compliance with the regulatory or industry standards you've selected - ['Active Alerts' workbook](#use-the-active-alerts-workbook) - View active alerts by severity, type, tag, MITRE ATT&CK tactics, and location.-- Price Estimation workbook - View monthly consolidated price estimations for Microsoft Defender for Cloud plans based on the resource telemetry in your own environment. These numbers are estimates based on retail prices and do not provide actual billing data.
+- Price Estimation workbook - View monthly consolidated price estimations for Microsoft Defender for Cloud plans based on the resource telemetry in your own environment. These numbers are estimates based on retail prices and don't provide actual billing data.
- Governance workbook - The governance report in the governance rules settings lets you track progress of the rules effective in the organization.
+- ['DevOps Security (Preview)' workbook](#use-the-devops-security-preview-workbook) - View a customizable foundation that helps you visualize the state of your DevOps posture for the connectors you've configured.
-In addition to the built-in workbooks, you can also find other useful workbooks found under the ΓÇ£Community" category, which are provided as is with no SLA or support. Choose one of the supplied workbooks or create your own.
+In addition to the built-in workbooks, you can also find other useful workbooks under the "Community" category, which is provided as is with no SLA or support. Choose one of the supplied workbooks or create your own.
:::image type="content" source="media/custom-dashboards-azure-workbooks/workbooks-gallery-microsoft-defender-for-cloud.png" alt-text="Screenshot showing the gallery of built-in workbooks in Microsoft Defender for Cloud.":::
The secure score over time workbook has five graphs for the subscriptions report
|**Score trends for the last week and month**<br>Use this section to monitor the current score and general trends of the scores for your subscriptions.|:::image type="content" source="media/custom-dashboards-azure-workbooks/secure-score-over-time-table-1.png" alt-text="Trends for secure score on the built-in workbook.":::| |**Aggregated score for all selected subscriptions**<br>Hover your mouse over any point in the trend line to see the aggregated score at any date in the selected time range.|:::image type="content" source="media/custom-dashboards-azure-workbooks/secure-score-over-time-table-2.png" alt-text="Aggregated score for all selected subscriptions.":::| |**Recommendations with the most unhealthy resources**<br>This table helps you triage the recommendations that have had the most resources changed to unhealthy over the selected period.|:::image type="content" source="media/custom-dashboards-azure-workbooks/secure-score-over-time-table-3.png" alt-text="Recommendations with the most unhealthy resources.":::|
-|**Scores for specific security controls**<br>Defender for Cloud's security controls are logical groupings of recommendations. This chart shows you, at a glance, the weekly scores for all of your controls.|:::image type="content" source="media/custom-dashboards-azure-workbooks/secure-score-over-time-table-4.png" alt-text="Scores for your security controls over the selected time period.":::|
+|**Scores for specific security controls**<br>Defender for Cloud's security controls are logical groupings of recommendations. This chart shows you, at a glance, the weekly scores for all of your controls.|:::image type="content" source="media/custom-dashboards-azure-workbooks/secure-score-over-time-table-4.png" alt-text="Scores for your security controls over the selected time period.":::|
|**Resources changes**<br>Recommendations with the most resources that have changed state (healthy, unhealthy, or not applicable) during the selected period are listed here. Select any recommendation from the list to open a new table listing the specific resources.|:::image type="content" source="media/custom-dashboards-azure-workbooks/secure-score-over-time-table-5.png" alt-text="Recommendations with the most resources that have changed health state.":::| ### Use the 'System Updates' workbook
You can get more details on any of these alerts by selecting it.
:::image type="content" source="media/custom-dashboards-azure-workbooks/active-alerts-high.png" alt-text="Screenshot that shows all the active alerts with high severity from a specific resource.":::
-The MITRE ATT&CK tactics displays by the order of the kill-chain, and the number of alerts the subscription has at each stage.
+The MITRE ATT&CK tactics are displayed in kill-chain order, along with the number of alerts the subscription has at each stage.
:::image type="content" source="media/custom-dashboards-azure-workbooks/mitre-attack-tactics.png" alt-text="Screenshot showing the order of the kill-chain, and the number of alerts":::
-You can see all of the active alerts in a table with the ability to filter by columns. By selecting an alert, the alert view button appears.
+You can see all of the active alerts in a table with the ability to filter by columns. When you select an alert, the alert view button appears.
:::image type="content" source="media/custom-dashboards-azure-workbooks/active-alerts-table.png" alt-text="Screenshot showing the table of active alerts.":::
By selecting Map View, you can also see all alerts based on their location.
:::image type="content" source="media/custom-dashboards-azure-workbooks/alerts-map-view.png" alt-text="Screenshot of the alerts when viewed in a map.":::
-By selecting a location on the map you will be able to view all of the alerts for that location.
+Select a location on the map to view all of the alerts for that location.
:::image type="content" source="media/custom-dashboards-azure-workbooks/map-alert-details.png" alt-text="Screenshot showing the alerts in a specific location."::: You can see the details for that alert with the Open Alert View button.
+### Use the 'DevOps Security (Preview)' workbook
+
+This workbook provides customizable data analysis and gives you the ability to create visual reports. You can use this workbook to view insights into your DevOps security posture in coordination with Defender for DevOps. It lets you visualize the state of your DevOps posture for the connectors you've configured in Defender for Cloud, including code, dependencies, and hardening. You can then investigate credential exposure, including types of credentials and repository locations.
++
+> [!NOTE]
+> You must have a [GitHub connector](quickstart-onboard-github.md) or a [DevOps connector](quickstart-onboard-devops.md) connected to your environment in order to use this workbook.
+
+**To deploy the workbook**:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. Navigate to **Microsoft Defender for Cloud** > **Workbooks**.
+
+1. Select the **DevOps Security (Preview)** workbook.
+
+The workbook loads and shows the Overview tab, where you can see the number of exposed secrets, code security findings, and DevOps security findings. All of these findings are broken down by repository and severity.
+
+Select the Secrets tab to view the count by secret type.
++
+The Code tab displays your finding counts by tool and repository, and your code scanning findings by severity.
++
+The Open Source Security (OSS) Vulnerabilities tab displays your OSS vulnerabilities by severity and the count of findings by repository.
++
+The Infrastructure as Code tab displays your findings by tool and repository.
++
+The Posture tab displays your security posture by severity and repository.
++
+The Threats and Tactics tab displays the total count of threats and tactics, both overall and by repository.
++ ## Import workbooks from other workbook galleries
-If you've built workbooks in other Azure services and want to move them into your Microsoft Defender for Cloud workbooks gallery:
+To move workbooks that you've built in other Azure services into your Microsoft Defender for Cloud workbooks gallery:
1. Open the target workbook.
defender-for-cloud Episode Twenty Five https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-twenty-five.md
Last updated 01/24/2023
## Next steps > [!div class="nextstepaction"]
-> [New AWS Connector in Microsoft Defender for Cloud](episode-one.md)
+> [Governance capability improvements in Defender for Cloud](episode-twenty-six.md)
defender-for-cloud Episode Twenty Six https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-twenty-six.md
+
+ Title: Governance capability improvements in Defender for Cloud | Defender for Cloud in the field
+
+description: Learn about the need for governance and new at scale governance capability
+ Last updated : 02/15/2023++
+# Governance capability improvements in Defender for Cloud | Defender for Cloud in the field
+
+**Episode description**: In this episode of Defender for Cloud in the Field, Lior Arviv joins Yuri Diogenes to talk about the Governance capability improvements in Defender for Cloud. Lior gives a quick recap of the business need for governance and covers the new at scale governance capability. Lior demonstrates how to deploy governance at scale and how to monitor rules assignments and define priorities.
+<br>
+<br>
+<iframe src="https://aka.ms/docs/player?id=b1581d03-6575-4f13-b2ed-5b0c22d80c63" width="1080" height="530" allowFullScreen="true" frameBorder="0"></iframe>
+
+- [01:13](/shows/mdc-in-the-field/governance-improvements#time=01m13s) - Reviewing the need for cloud security governance
+- [04:10](/shows/mdc-in-the-field/governance-improvements#time=04m10s) - Governance at scale
+- [07:03](/shows/mdc-in-the-field/governance-improvements#time=07m03s) - Deployment options
+- [07:45](/shows/mdc-in-the-field/governance-improvements#time=07m45s) - Demonstration
+- [19:00](/shows/mdc-in-the-field/governance-improvements#time=19m00s) - Learn more about governance
++
+## Recommended resources
+ - Learn how to [drive your organization to remediate security recommendations with governance](governance-rules.md)
+ - Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/playlist?list=PL3ZTgFEc7LysiX4PfHhdJPR7S8mGO14YS)
+ - Join our [Tech Community](https://aka.ms/SecurityTechCommunity)
+ - For more about [Microsoft Security](https://msft.it/6002T9HQY)
+
+- Follow us on social media:
+
+ - [LinkedIn](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqbFk5TXZuQld2NlpBRV9BQlJqMktYSm95WWhCZ3xBQ3Jtc0tsQU13MkNPWGNFZzVuem5zc05wcnp0VGxybHprVTkwS2todWw0b0VCWUl4a2ZKYVktNGM1TVFHTXpmajVLcjRKX0cwVFNJaDlzTld4MnhyenBuUGRCVmdoYzRZTjFmYXRTVlhpZGc4MHhoa3N6ZDhFMA&q=https%3A%2F%2Fwww.linkedin.com%2Fshowcase%2Fmicrosoft-security%2F)
+ - [Twitter](https://twitter.com/msftsecurity)
+
+- Join our [Tech Community](https://aka.ms/SecurityTechCommunity)
+
+- Learn more about [Microsoft Security](https://msft.it/6002T9HQY)
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [New AWS Connector in Microsoft Defender for Cloud](episode-one.md)
defender-for-cloud Github Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/github-action.md
Title: Configure the Microsoft Security DevOps GitHub action description: Learn how to configure the Microsoft Security DevOps GitHub action. Previously updated : 01/24/2023 Last updated : 02/15/2023
Security DevOps uses the following Open Source tools:
```yml name: MSDO windows-latest
- on:
- push:
- branches: [ main ]
- pull_request:
- branches: [ main ]
- workflow_dispatch:
-
- jobs:
- sample:
-
- # MSDO runs on windows-latest and ubuntu-latest.
- # macos-latest supporting coming soon
- runs-on: windows-latest
-
- steps:
- - uses: actions/checkout@v2
-
- - uses: actions/setup-dotnet@v1
- with:
- dotnet-version: |
- 5.0.x
- 6.0.x
-
- # Run analyzers
- - name: Run Microsoft Security DevOps Analysis
- uses: microsoft/security-devops-action@preview
- id: msdo
-
- # Upload alerts to the Security tab
- - name: Upload alerts to Security tab
- uses: github/codeql-action/upload-sarif@v1
- with:
- sarif_file: ${{ steps.msdo.outputs.sarifFile }}
+ on:
+ push:
+ branches: [ main ]
+ pull_request:
+ branches: [ main ]
+ workflow_dispatch:
+
+ jobs:
+ sample:
+
+ # MSDO runs on windows-latest and ubuntu-latest.
+ # macos-latest support coming soon
+ runs-on: windows-latest
+
+ steps:
+ - uses: actions/checkout@v3
+
+ - uses: actions/setup-dotnet@v3
+ with:
+ dotnet-version: |
+ 5.0.x
+ 6.0.x
+
+ # Run analyzers
+ - name: Run Microsoft Security DevOps Analysis
+ uses: microsoft/security-devops-action@preview
+ id: msdo
+
+ # Upload alerts to the Security tab
+ - name: Upload alerts to Security tab
+ uses: github/codeql-action/upload-sarif@v1
+ with:
+ sarif_file: ${{ steps.msdo.outputs.sarifFile }}
```
-
+
For details on various input options, see [action.yml](https://github.com/microsoft/security-devops-action/blob/main/action.yml) 1. Select **Start commit**
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
Title: Important changes coming to Microsoft Defender for Cloud description: Upcoming changes to Microsoft Defender for Cloud that you might need to be aware of and for which you might need to plan Previously updated : 02/09/2023 Last updated : 02/12/2023 # Important upcoming changes to Microsoft Defender for Cloud
If you're looking for the latest release notes, you'll find them in the [What's
| [Three alerts in Defender for Azure Resource Manager plan will be deprecated](#three-alerts-in-defender-for-azure-resource-manager-plan-will-be-deprecated) | March 2023 | | [Alerts automatic export to Log Analytics workspace will be deprecated](#alerts-automatic-export-to-log-analytics-workspace-will-be-deprecated) | March 2023 | | [Deprecation and improvement of selected alerts for Windows and Linux Servers](#deprecation-and-improvement-of-selected-alerts-for-windows-and-linux-servers) | April 2023 |
+| [Multiple changes to identity recommendations](#multiple-changes-to-identity-recommendations) | August 2023 |
### The built-in policy \[Preview]: Private endpoint should be configured for Key Vault will be deprecated
The related [policy definition](https://ms.portal.azure.com/#view/Microsoft_Azur
**Estimated date for change: March 2023**
-As we continue to improve the quality of our alerts, the following three alerts from the Defender for ARM plan will be deprecated:
+As we continue to improve the quality of our alerts, the following three alerts from the Defender for Azure Resource Manager plan will be deprecated:
1. `Activity from a risky IP address (ARM.MCAS_ActivityFromAnonymousIPAddresses)` 1. `Activity from infrequent country (ARM.MCAS_ActivityFromInfrequentCountry)` 1. `Impossible travel activity (ARM.MCAS_ImpossibleTravelActivity)` You can learn more details about each of these alerts from the [alerts reference list](alerts-reference.md#alerts-resourcemanager).
-In the scenario where an activity from a suspicious IP address is detected, one of the following Defender for ARM plan alerts `Azure Resource Manager operation from suspicious IP address` or `Azure Resource Manager operation from suspicious proxy IP address` will be present.
+In the scenario where an activity from a suspicious IP address is detected, one of the following Defender for Azure Resource Manager plan alerts `Azure Resource Manager operation from suspicious IP address` or `Azure Resource Manager operation from suspicious proxy IP address` will be present.
### Alerts automatic export to Log Analytics workspace will be deprecated **Estimated date for change: March 2023**
-Currently, Defender for Cloud security alerts are automatically exported to a default Log Analytics workspace on the resource level. This causes an indeterministic behavior and therefore, this feature is set to be deprecated.
+Currently, Defender for Cloud security alerts are automatically exported to a default Log Analytics workspace on the resource level. This causes nondeterministic behavior, and therefore this feature is set to be deprecated.
You can export your security alerts to a dedicated Log Analytics workspace with the [Continuous Export](continuous-export.md#set-up-a-continuous-export) feature. If you have already configured continuous export of your alerts to a Log Analytics workspace, no further action is required.
You can learn more about [Microsoft Defender for Endpoint onboarding options](in
You can also view the [full list of alerts](alerts-reference.md#defender-for-servers-alerts-to-be-deprecated) that are set to be deprecated. Read the [Microsoft Defender for Cloud blog](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/defender-for-servers-security-alerts-improvements/ba-p/3714175).+
+### Multiple changes to identity recommendations
+
+**Estimated date for change: August 2023**
+
+We previously announced the [availability of identity recommendations V2 (preview)](release-notes.md#extra-recommendations-added-to-identity), which included enhanced capabilities.
+
+As part of these changes, the following recommendations will be released as General Availability (GA) and replace the V1 recommendations that are set to be deprecated.
+
+#### General Availability (GA) release of identity recommendations V2
+
+The following security recommendations will be released as GA and replace the V1 recommendations:
+
+|Recommendation | Assessment Key|
+|--|--|
+|Accounts with owner permissions on Azure resources should be MFA enabled | 6240402e-f77c-46fa-9060-a7ce53997754 |
+|Accounts with write permissions on Azure resources should be MFA enabled | c0cb17b2-0607-48a7-b0e0-903ed22de39b |
+| Accounts with read permissions on Azure resources should be MFA enabled | dabc9bc4-b8a8-45bd-9a5a-43000df8aa1c |
+| Guest accounts with owner permissions on Azure resources should be removed | 20606e75-05c4-48c0-9d97-add6daa2109a |
+| Guest accounts with write permissions on Azure resources should be removed | 0354476c-a12a-4fcc-a79d-f0ab7ffffdbb |
+| Guest accounts with read permissions on Azure resources should be removed | fde1c0c9-0fd2-4ecc-87b5-98956cbc1095 |
+| Blocked accounts with owner permissions on Azure resources should be removed | 050ac097-3dda-4d24-ab6d-82568e7a50cf |
+| Blocked accounts with read and write permissions on Azure resources should be removed | 1ff0b4c9-ed56-4de6-be9c-d7ab39645926 |
+
+#### Deprecation of identity recommendations V1
+
+The following security recommendations will be deprecated as part of this change:
+
+
+| Recommendation | Assessment Key |
+|--|--|
+| MFA should be enabled on accounts with owner permissions on subscriptions | 94290b00-4d0c-d7b4-7cea-064a9554e681 |
+| MFA should be enabled on accounts with write permissions on subscriptions | 57e98606-6b1e-6193-0e3d-fe621387c16b |
+| MFA should be enabled on accounts with read permissions on subscriptions | 151e82c5-5341-a74b-1eb0-bc38d2c84bb5 |
+| External accounts with owner permissions should be removed from subscriptions | c3b6ae71-f1f0-31b4-e6c1-d5951285d03d |
+| External accounts with write permissions should be removed from subscriptions | 04e7147b-0deb-9796-2e5c-0336343ceb3d |
+| External accounts with read permissions should be removed from subscriptions | a8c6a4ad-d51e-88fe-2979-d3ee3c864f8b |
+| Deprecated accounts with owner permissions should be removed from subscriptions | e52064aa-6853-e252-a11e-dffc675689c2 |
+| Deprecated accounts should be removed from subscriptions | 00c6d40b-e990-6acf-d4f3-471e747a27c4 |
+
+We recommend updating custom scripts, workflows, and governance rules to correspond with the V2 recommendations.
+
+We've improved the coverage of the V2 identity recommendations by scanning all Azure resources (rather than just subscriptions), which allows security administrators to view role assignments per account. These changes may result in changes to your Secure Score throughout the GA process.
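If your custom scripts or workflows query assessments by key, the following hedged sketch shows one way to look up a V2 assessment key with Azure Resource Graph (requires the `resource-graph` CLI extension; the key shown is the first one from the table above).

```bash
# Sketch only: list assessment results for the V2 recommendation
# "Accounts with owner permissions on Azure resources should be MFA enabled".
az graph query -q "securityresources
| where type == 'microsoft.security/assessments'
| where name == '6240402e-f77c-46fa-9060-a7ce53997754'
| project id, status = tostring(properties.status.code)"
```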
+ ## Next steps For all recent changes to Defender for Cloud, see [What's new in Microsoft Defender for Cloud?](release-notes.md).
defender-for-iot Neousys Nuvo 5006Lp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/neousys-nuvo-5006lp.md
This article describes the Neousys Nuvo-5006LP appliance for OT sensors.
|**Hardware profile** | L100 | |**Performance** | Max bandwidth: 30 Mbps<br>Max devices: 400 | |**Physical specifications** | Mounting: Mounting kit, Din Rail<br>Ports: 5x RJ45|
-|**Status** | Supported, Not available pre-configured|
+|**Status** | Not available pre-configured|
:::image type="content" source="../media/ot-system-requirements/cyberx.png" alt-text="Photo of a Neousys Nuvo-5006LP." border="false":::
defender-for-iot Ys Techsystems Ys Fit2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/ys-techsystems-ys-fit2.md
The following image shows a view of the YS-FIT2 back panel:
| Power Adapter |7V-20V (Optional 9V-36V) DC / 5W-15W Power AdapterVehicle DC cable for YS-FIT2 (Optional)| |UPS|Fit-uptime Miniature 12 V UPS for miniPCs (Optional)| |Mounting |VESA / wall or Din Rail mounting kit |
-| Temperature |0┬░C ~ 70┬░C |
+| Temperature |0°C ~ 60°C |
| Humidity |5% ~ 95%, non-condensing | | Vibration |IEC TR 60721-4-7:2001+A1:03, Class 7M1, test method IEC 60068-2-64 (up to 2 KHz, 3 axis)| |Shock|IEC TR 60721-4-7:2001+A1:03, Class 7M1, test method IEC 60068-2-27 (15 g , 6 directions)|
defender-for-iot How To Manage Individual Sensors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-individual-sensors.md
You can restore a sensor from a backup file using the sensor console or the CLI.
For more information, see [CLI command reference from OT network sensors](cli-ot-sensor.md).
-**To restore from the sensor console:**
+# [Restore from the sensor console](#tab/restore-from-sensor-console)
To restore a backup from the sensor console, the backup file must be accessible from the sensor.
To restore a backup from the sensor console, the backup file must be accessible
1. When the restore process is complete, select **Close**.
-**To restore the latest backup file by using the CLI:**
+# [Restore the latest backup file by using the CLI](#tab/restore-using-cli)
- Sign in to an administrative account and enter `cyberx-xsense-system-restore`. ++ ## Configure SMTP settings Define SMTP mail server settings for the sensor so that you can configure the sensor to send data to other servers.
digital-twins How To Use Azure Digital Twins Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-use-azure-digital-twins-explorer.md
You can also apply one of several layout algorithms to the model graph from the
### Filter and highlight model graph elements
-You can filter the types of connections that appear in the Model Graph. Turning off one of the connection types via the switches in this menu will prevent that connection type from displaying in the graph.
+You can filter the types of model references that appear in the Model Graph. Turning off one of the reference types via the switches in this menu will prevent that reference type from displaying in the graph.
-You can also filter the models and connections that appear in the graph by text, by selecting this **Filter** icon:
+You can also filter the models and references that appear in the graph by text, by selecting this **Filter** icon:
:::image type="content" source="media/how-to-use-azure-digital-twins-explorer/model-graph-panel-filter-text.png" alt-text="Screenshot of Azure Digital Twins Explorer Model Graph panel. The text filter icon is selected, showing the Filter tab where you can enter a search term." lightbox="media/how-to-use-azure-digital-twins-explorer/model-graph-panel-filter-text.png":::
-You can highlight the models and connections that appear in the graph by text, by selecting this **Highlight** icon:
+You can highlight the models and references that appear in the graph by text, by selecting this **Highlight** icon:
:::image type="content" source="media/how-to-use-azure-digital-twins-explorer/model-graph-panel-highlight-text.png" alt-text="Screenshot of Azure Digital Twins Explorer Model Graph panel. The text filter icon is selected, showing the Highlight tab where you can enter a search term." lightbox="media/how-to-use-azure-digital-twins-explorer/model-graph-panel-highlight-text.png":::
digital-twins Reference Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/reference-service-limits.md
description: Chart showing the limits of the Azure Digital Twins service. Previously updated : 02/25/2022 Last updated : 02/14/2023
dms Dms Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/dms-overview.md
Azure Database Migration Service is a fully managed service designed to enable s
With Azure Database Migration Service currently we offer two options: 1. [Azure SQL migration extension for Azure Data Studio](./migration-using-azure-data-studio.md)
-1. Database Migration Service - via Azure portal, PowerShell and Azure CLI.
+1. Database Migration Service (classic) - via Azure portal, PowerShell and Azure CLI.
**Azure SQL Migration extension for Azure Data Studio** is powered by the latest version of Database Migration Service and provides more features. Currently, it supports SQL Database modernization to Azure. For improved functionality and supportability, consider migrating to Azure SQL Database by using the Azure SQL migration extension for Azure Data Studio.
-**Database Migration Service** via Azure portal, PowerShell and Azure CLI is an older version of the Azure Database Migration Service. It offers database modernization to Azure and support scenarios like – SQL Server, PostgreSQL, MySQL, and MongoDB. 
+**Database Migration Service (classic)** via Azure portal, PowerShell and Azure CLI is an older version of the Azure Database Migration Service. It offers database modernization to Azure and supports scenarios such as SQL Server, PostgreSQL, MySQL, and MongoDB.
[!INCLUDE [database-migration-service-ads](../../includes/database-migration-service-ads.md)]
In 2021, a newer version of the Azure Database Migration Service was released as
The following table compares the functionality of the versions of the Database Migration Service:
-|Feature |DMS |Azure SQL extension for Azure Data Studio |Notes|
+|Feature |DMS (classic) |Azure SQL extension for Azure Data Studio |Notes|
||||| |Assessment | No | Yes | Assess compatibility of the source. | |SKU recommendation | No | Yes | SKU recommendations for the target based on the assessment of the source. |
dms Tutorial Sql Server Managed Instance Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-managed-instance-online.md
Title: "Tutorial: Migrate SQL Server online to SQL Managed Instance"
-description: Learn to perform an online migration from SQL Server to an Azure SQL Managed Instance by using Azure Database Migration Service
+description: Learn to perform an online migration from SQL Server to an Azure SQL Managed Instance by using Azure Database Migration Service (classic)
Last updated 02/08/2023
-# Tutorial: Migrate SQL Server to an Azure SQL Managed Instance online using DMS
+# Tutorial: Migrate SQL Server to an Azure SQL Managed Instance online using DMS (classic)
> [!NOTE] > This tutorial uses an older version of the Azure Database Migration Service. For improved functionality and supportability, consider migrating to Azure SQL Managed Instance by using the [Azure SQL migration extension for Azure Data Studio](tutorial-sql-server-managed-instance-online-ads.md).
dms Tutorial Sql Server To Azure Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-to-azure-sql.md
Title: "Tutorial: Migrate SQL Server offline to Azure SQL Database"
-description: Learn to migrate from SQL Server to Azure SQL Database offline by using Azure Database Migration Service.
+description: Learn to migrate from SQL Server to Azure SQL Database offline by using Azure Database Migration Service (classic).
Last updated 02/08/2023
-# Tutorial: Migrate SQL Server to Azure SQL Database using DMS
+# Tutorial: Migrate SQL Server to Azure SQL Database using DMS (classic)
> [!NOTE] > This tutorial uses an older version of the Azure Database Migration Service. For improved functionality and supportability, consider migrating to Azure SQL Database by using the [Azure SQL migration extension for Azure Data Studio](tutorial-sql-server-azure-sql-database-offline-ads.md).
dms Tutorial Sql Server To Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-to-managed-instance.md
Title: "Tutorial: Migrate SQL Server to SQL Managed Instance"
-description: Learn to migrate from SQL Server to an Azure SQL Managed Instance by using Azure Database Migration Service.
+description: Learn to migrate from SQL Server to an Azure SQL Managed Instance by using Azure Database Migration Service (classic).
Last updated 02/08/2023
-# Tutorial: Migrate SQL Server to an Azure SQL Managed Instance offline using DMS
+# Tutorial: Migrate SQL Server to an Azure SQL Managed Instance offline using DMS (classic)
> [!NOTE] > This tutorial uses an older version of the Azure Database Migration Service. For improved functionality and supportability, consider migrating to Azure SQL Managed Instance by using the [Azure SQL migration extension for Azure Data Studio](tutorial-sql-server-managed-instance-offline-ads.md).
dns Private Dns Autoregistration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-dns-autoregistration.md
To enable auto registration, select the checkbox for "Enable auto registration"
* Auto registration works only for virtual machines. For all other resources like internal load balancers, you can create DNS records manually in the private DNS zone linked to the virtual network. * DNS records are created automatically only for the primary virtual machine NIC. If your virtual machines have more than one NIC, you can manually create the DNS records for other network interfaces. * DNS records are created automatically only if the primary virtual machine NIC is using DHCP. If you're using static IPs, such as a configuration with [multiple IP addresses in Azure](../virtual-network/ip-services/virtual-network-multiple-ip-addresses-portal.md#os-config), auto registration doesn't create records for that virtual machine.
-* Auto registration for IPv6 (AAAA records) isn't supported.
* A specific virtual network can be linked to only one private DNS zone when automatic VM DNS registration is enabled. You can, however, link multiple virtual networks to a single DNS zone. ## Next steps
dns Private Dns Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-dns-scenarios.md
Title: Scenarios for Private Zones - Azure DNS
-description: In this article, learn about common scenarios for using Azure DNS Private Zones.
+ Title: Scenarios for Azure Private DNS zones
+description: In this article, learn about common scenarios for using Azure Private DNS zones.
Last updated 09/27/2022
-# Azure DNS private zones scenarios
+# Azure Private DNS zones scenarios
-Azure DNS Private Zones provide name resolution within a virtual network and between virtual networks. In this article, we'll look at some common scenarios that can benefit using this feature.
+Azure Private DNS zones provide name resolution within a virtual network and between virtual networks. In this article, we'll look at some common scenarios that can benefit using this feature.
## Scenario: Name resolution scoped to a single virtual network
Now when an internet client does a DNS query for `VNETA-VM1.contoso.com`, Azure
![Split-brain resolution](./media/private-dns-scenarios/split-brain-resolution.png) ## Next steps
-To learn more about private DNS zones, see [Using Azure DNS for private domains](private-dns-overview.md).
+To learn more about Private DNS zones, see [Using Azure DNS for private domains](private-dns-overview.md).
-Learn how to [create a private DNS zone](./private-dns-getstarted-powershell.md) in Azure DNS.
+Learn how to [create a Private DNS zone](./private-dns-getstarted-powershell.md) in Azure DNS.
Learn about DNS zones and records by visiting: [DNS zones and records overview](dns-zones-records.md).
event-grid Event Grid Powershell Webhook Secure Delivery Azure Ad User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/scripts/event-grid-powershell-webhook-secure-delivery-azure-ad-user.md
try {
} # Creates Azure Event Grid Azure AD Application if not exists-
- $eventGridAppId = "4962773b-9cdb-44cf-a8bf-237846a00ab7" # You don't need to modify this id
+ # You don't need to modify this id
+ # But Azure Event Grid Azure AD Application Id is different for different clouds
+
+ $eventGridAppId = "4962773b-9cdb-44cf-a8bf-237846a00ab7" # Azure Public Cloud
+ # $eventGridAppId = "54316b56-3481-47f9-8f30-0300f5542a7b" # Azure Government Cloud
$eventGridRoleName = "AzureEventGridSecureWebhookSubscriber" # You don't need to modify this role name $eventGridSP = Get-AzureADServicePrincipal -Filter ("appId eq '" + $eventGridAppId + "'") if ($eventGridSP -match "Microsoft.EventGrid")
event-grid Subscribe To Graph Api Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/subscribe-to-graph-api-events.md
Last updated 09/01/2022
# Subscribe to events published by Microsoft Graph API This article describes steps to subscribe to events published by Microsoft Graph API. The following table lists the resources for which events are available through Graph API. For every resource, events for create, update and delete state changes are supported.
+> [!IMPORTANT]
+> Microsoft Graph API's ability to send events to Azure Event Grid is currently in **private preview**.
+ |Microsoft event source |Resource(s) | Available event types | |: | : | :-| |Azure Active Directory| [User](/graph/api/resources/user), [Group](/graph/api/resources/group) | [Azure AD event types](azure-active-directory-events.md) |
Besides the ability to subscribe to Microsoft Graph API events via Event Grid, y
## Enable Graph API events to flow to your partner topic > [!IMPORTANT]
-> Microsoft Graph API's (MGA) ability to send events to Event Grid (a generally available service) is in private preview. In the following steps, you will follow instructions from [Node.js](https://github.com/microsoftgraph/nodejs-webhooks-sample), [Java](https://github.com/microsoftgraph/java-spring-webhooks-sample), and[.NET Core](https://github.com/microsoftgraph/aspnetcore-webhooks-sample) Webhook samples to enable flow of events from Microsoft Graph API. At some point in the sample, you will have an application registered with Azure AD. Email your application ID to <a href="mailto:ask-graph-and-grid@microsoft.com?subject=Please allow my application ID">mailto:ask-graph-and-grid@service.microsoft.com?subject=Please allow my Azure AD application with ID to send events through Graph API</a> so that the Microsoft Graph API team can add your application ID to allow list to use this new capability.
+> In the following steps, you will follow instructions from [Node.js](https://github.com/microsoftgraph/nodejs-webhooks-sample), [Java](https://github.com/microsoftgraph/java-spring-webhooks-sample), and[.NET Core](https://github.com/microsoftgraph/aspnetcore-webhooks-sample) Webhook samples to enable flow of events from Microsoft Graph API. At some point in the sample, you will have an application registered with Azure AD. Email your application ID to <a href="mailto:ask-graph-and-grid@microsoft.com?subject=Please allow my application ID">mailto:ask-graph-and-grid@service.microsoft.com?subject=Please allow my Azure AD application with ID to send events through Graph API</a> so that the Microsoft Graph API team can add your application ID to allow list to use this new capability.
You request Microsoft Graph API to send events by creating a Graph API subscription. When you create a Graph API subscription, the http request should look like the following sample:
firewall Long Running Sessions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/long-running-sessions.md
The Azure Firewall engineering team updates the firewall on an as-needed basis (
### Idle timeout
-An idle timer is in place to recycle idle sessions. The default value is four minutes. Applications that maintain keepalives don't idle out. If the application needs more than 4 minutes (typical of IOT devices), you can contact support to extent the time for inbound connections to 30 minutes in the backend. Idle timeout for outbound or east-west traffic cannot be changed.
+An idle timer is in place to recycle idle sessions. The default value is four minutes for east-west connections and can't be changed. Applications that maintain keepalives don't idle out.
+
+For north-south connections that need more than 4 minutes (typical of IoT devices), you can contact support to extend the time for inbound connections to 30 minutes in the backend.
### Auto-recovery
frontdoor End To End Tls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/end-to-end-tls.md
For your own custom TLS/SSL certificate:
## Supported cipher suites
-For TLS1.2 the following cipher suites are supported:
+For TLS 1.2, the following cipher suites are supported:
* TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 * TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
For TLS1.2 the following cipher suites are supported:
> [!NOTE] > For Windows 10 and later versions, we recommend enabling one or both of the ECDHE_GCM cipher suites for better security. Windows 8.1, 8, and 7 aren't compatible with these ECDHE_GCM cipher suites. The ECDHE_CBC and DHE cipher suites have been provided for compatibility with those operating systems.
-Using custom domains with TLS1.0/1.1 enabled the following cipher suites are supported:
+When using custom domains with TLS 1.0 and 1.1 enabled, the following cipher suites are supported:
* TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 * TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
Using custom domains with TLS1.0/1.1 enabled the following cipher suites are sup
* TLS_DHE_RSA_WITH_AES_128_GCM_SHA256 * TLS_DHE_RSA_WITH_AES_256_GCM_SHA384
-Azure Front Door doesnΓÇÖt support configuring specific cipher suites.
+Azure Front Door doesn't support disabling or configuring specific cipher suites for your profile.
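If you want to confirm which TLS version and cipher suite a client actually negotiates with your endpoint, a quick hedged check such as the following can help (the host name is a placeholder).

```bash
# Sketch only: inspect the negotiated TLS protocol and cipher suite for a Front Door endpoint.
# contoso.azurefd.net is a placeholder host name.
openssl s_client -connect contoso.azurefd.net:443 -tls1_2 </dev/null 2>/dev/null | grep -E "Protocol|Cipher"
```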
## Next steps
frontdoor Front Door Caching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-caching.md
Previously updated : 01/16/2023 Last updated : 02/16/2023 zone_pivot_groups: front-door-tiers
Only requests that use the `GET` request method are cacheable. All other request
## Delivery of large files
-Azure Front Door delivers large files without a cap on file size. If caching is enabled, Front Door uses a technique called *object chunking*. When a large file is requested, Front Door retrieves smaller pieces of the file from the origin. After receiving a full or byte-range file request, the Front Door environment requests the file from the origin in chunks of 8 MB.
+Azure Front Door delivers large files without a cap on file size. If caching is enabled, Front Door uses a technique called *object chunking*. When a large file is requested, Front Door retrieves smaller pieces of the file from the origin. After receiving a full file request or byte-range file request, the Azure Front Door environment requests the file from the origin in chunks of 8 MB.
-After the chunk arrives at the Front Door environment, it's cached and immediately served to the user. Front Door then pre-fetches the next chunk in parallel. This pre-fetch ensures that the content stays one chunk ahead of the user, which reduces latency. This process continues until the entire file gets downloaded (if requested) or the client closes the connection. For more information on the byte-range request, read [RFC 7233](https://www.rfc-editor.org/info/rfc7233).
+After the chunk arrives at the Azure Front Door environment, it's cached and immediately served to the user. Front Door then pre-fetches the next chunk in parallel. This pre-fetch ensures that the content stays one chunk ahead of the user, which reduces latency. This process continues until the entire file gets downloaded (if requested) or the client closes the connection. For more information on the byte-range request, read [RFC 7233](https://www.rfc-editor.org/info/rfc7233).
-Front Door caches any chunks as they're received so the entire file doesn't need to be cached on the Front Door cache. Ensuing requests for the file or byte ranges are served from the cache. If the chunks aren't all cached, pre-fetching is used to request chunks from the origin. This optimization relies on the origin's ability to support byte-range requests. If the origin doesn't support byte-range requests, this optimization isn't effective.
+Front Door caches any chunks as they're received so the entire file doesn't need to be cached on the Front Door cache. Subsequent requests for the file or byte ranges are served from the cache. If the chunks aren't all cached, pre-fetching is used to request chunks from the origin.
+
+This optimization relies on the origin's ability to support byte-range requests. If the origin doesn't support byte-range requests, or if it doesn't handle range requests correctly, then this optimization isn't effective.
+
+When your origin responds to a request with a `Range` header, it must respond in one of the following ways:
+
+- **Return a ranged response.** The response must use HTTP status code 206. Also, the `Content-Range` response header must be present, and must match the actual length of the content that your origin returns. If your origin doesn't send the correct response headers with valid values, Azure Front Door doesn't cache the response, and you might see inconsistent behavior.
+
+ > [!TIP]
+ > If your origin compresses the response, ensure that the `Content-Range` header value matches the actual length of the compressed response.
+
+- **Return a non-ranged response.** If your origin can't handle range requests, it can ignore the `Range` header and return a non-ranged response. Ensure that the origin returns a response status code other than 206. For example, the origin might return a 200 OK response.
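One simple way to verify how your origin handles range requests is to send a small byte-range request directly to it, as in the following hedged sketch (the origin host and object path are placeholders).

```bash
# Sketch only: request the first 1 KB of an object directly from the origin and inspect the response headers.
# origin.contoso.com/downloads/large-file.bin is a placeholder origin and path.
curl --silent --output /dev/null --dump-header - \
     --header "Range: bytes=0-1023" \
     https://origin.contoso.com/downloads/large-file.bin
# A range-capable origin returns "HTTP/1.1 206 Partial Content" and a matching Content-Range header,
# for example "Content-Range: bytes 0-1023/52428800"; an origin that ignores Range returns 200 OK instead.
```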
## File compression
These formats are supported in the lists of paths to purge:
> **Purging wildcard domains**: Specifying cached paths for purging as discussed in this section doesn't apply to any wildcard domains that are associated with the Front Door. Currently, we don't support directly purging wildcard domains. You can purge paths from specific subdomains by specifying that specific subdomain and the purge path. For example, if my Front Door has `*.contoso.com`, I can purge assets of my subdomain `foo.contoso.com` by typing `foo.contoso.com/path/*`. Currently, specifying host names in the purge content path is limited to subdomains of wildcard domains, if applicable. >
-Cache purges on the Front Door are case-insensitive. Additionally, they're query string agnostic, meaning purging a URL will purge all query-string variations of it.
+Cache purges on the Front Door are case-insensitive. Additionally, they're query string agnostic, which means that purging a URL purges all query string variations of it.
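For example, a purge against an Azure Front Door Standard/Premium profile can be issued from the Azure CLI; the following is a hedged sketch with placeholder resource names.

```bash
# Sketch only: purge all cached assets under /scripts/ for an endpoint (resource names are placeholders).
az afd endpoint purge \
    --resource-group myResourceGroup \
    --profile-name myFrontDoorProfile \
    --endpoint-name myEndpoint \
    --content-paths "/scripts/*"
```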
::: zone-end
If the origin response is cacheable, then the `Set-Cookie` header is removed bef
In addition, Front Door attaches the `X-Cache` header to all responses. The `X-Cache` response header includes one of the following values: -- `TCP_HIT` or `TCP_REMOTE_HIT`: The first 8MB chunk of the response is a cache hit, and the content is served from the Front Door cache.-- `TCP_MISS`: The first 8MB chunk of the response is a cache miss, and the content is fetched from the origin.
+- `TCP_HIT` or `TCP_REMOTE_HIT`: The first 8 MB chunk of the response is a cache hit, and the content is served from the Front Door cache.
+- `TCP_MISS`: The first 8 MB chunk of the response is a cache miss, and the content is fetched from the origin.
- `PRIVATE_NOSTORE`: Request can't be cached because the *Cache-Control* response header is set to either *private* or *no-store*. - `CONFIG_NOCACHE`: Request is configured to not cache in the Front Door profile.
Cache behavior and duration can be configured in Rules Engine. Rules Engine cach
* **When caching is disabled**, Azure Front Door doesn't cache the response contents, irrespective of the origin response directives.
-* **When caching is enabled**, the cache behavior is different based on the cache behavior value applied by the Rules Engine:
+* **When caching is enabled**, the cache behavior differs based on the cache behavior value applied by the Rules Engine:
* **Honor origin**: Azure Front Door will always honor the origin response header directive. If the origin directive is missing, Azure Front Door will cache contents anywhere from one to three days. * **Override always**: Azure Front Door will always override with the cache duration, meaning that it caches the contents for the specified cache duration, ignoring the values from origin response directives. This behavior is applied only if the response is cacheable.
hdinsight Apache Esp Kafka Ssl Encryption Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-esp-kafka-ssl-encryption-authentication.md
+
+ Title: Apache Kafka TLS encryption & authentication for ESP Kafka Clusters - Azure HDInsight
+description: Set up TLS encryption for communication between Kafka clients and Kafka brokers, and set up SSL authentication of clients for ESP Kafka clusters
+++ Last updated : 02/14/2023++
+# Set up TLS encryption and authentication for ESP Apache Kafka cluster in Azure HDInsight
+
+This article shows you how to set up Transport Layer Security (TLS) encryption, previously known as Secure Sockets Layer (SSL) encryption, between Apache Kafka clients and Apache Kafka brokers. It also shows you how to set up authentication of clients (sometimes referred to as two-way TLS).
+
+> [!Important]
+> There are two clients which you can use for Kafka applications: a Java client and a console client. Only the Java client `ProducerConsumer.java` can use TLS for both producing and consuming. The console producer client `console-producer.sh` does not work with TLS.
+
+## Apache Kafka broker setup
+
+The Kafka TLS broker setup uses four HDInsight cluster VMs in the following way:
+
+* headnode 0 - Certificate Authority (CA)
+* worker node 0, 1, and 2 - brokers
+
+> [!Note]
+> This guide uses self-signed certificates, but the most secure solution is to use certificates issued by trusted CAs.
+
+The summary of the broker setup process is as follows:
+
+1. The following steps are repeated on each of the three worker nodes:
+ 1. Generate a certificate.
+ 1. Create a cert signing request.
+ 1. Send the cert signing request to the Certificate Authority (CA).
+ 1. Sign in to the CA and sign the request.
+ 1. SCP the signed certificate back to the worker node.
+ 1. SCP the public certificate of the CA to the worker node.
+
+1. Once you have all of the certificates, put the certs into the cert store.
+1. Go to Ambari and change the configurations.
+
+Use the following detailed instructions to complete the broker setup:
+
+> [!Important]
+> In the following code snippets, wnX is an abbreviation for one of the three worker nodes and should be substituted with `wn0`, `wn1`, or `wn2` as appropriate. `WorkerNode0_Name` and `HeadNode0_Name` should be substituted with the names of the respective machines.
+
+1. Perform initial setup on head node 0, which for HDInsight fills the role of the Certificate Authority (CA).
+
+ ```bash
+ # Create a new directory 'ssl' and change into it
+ mkdir ssl
+ cd ssl
+ ```
+
+1. Perform the same initial setup on each of the brokers (worker nodes 0, 1 and 2).
+
+ ```bash
+ # Create a new directory 'ssl' and change into it
+ mkdir ssl
+ cd ssl
+ ```
+
+1. On each of the worker nodes, execute the following steps using the code snippet.
+ 1. Create a keystore and populate it with a new private certificate.
+ 1. Create a certificate signing request.
+ 1. SCP the certificate signing request to the CA (headnode0)
+
+ ```bash
+ keytool -genkey -keystore kafka.server.keystore.jks -validity 365 -storepass "MyServerPassword123" -keypass "MyServerPassword123" -dname "CN=FQDN_WORKER_NODE" -storetype pkcs12
+ keytool -keystore kafka.server.keystore.jks -certreq -file cert-file -storepass "MyServerPassword123" -keypass "MyServerPassword123"
+ scp cert-file sshuser@HeadNode0_Name:~/ssl/wnX-cert-sign-request
+ ```
+ > [!Note]
+ > FQDN_WORKER_NODE is the fully qualified domain name of the worker node machine. You can get these details from the /etc/hosts file on the head node.
+
+ For example,
+ ```
+ wn0-espkaf.securehadooprc.onmicrosoft.com
+ wn0-kafka2.zbxwnwsmpcsuvbjqbmespcm1zg.bx.internal.cloudapp.net
+ ```
+ :::image type="content" source="./media/apache-esp-kafka-ssl-encryption-authentication/etc-hosts.png" alt-text="Screenshot showing host file output." border="true":::
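+
+ As an optional check (not part of the original steps), you can also print the worker node's FQDN directly on that node and use it as the CN value in the keytool command:
+
+ ```bash
+ # Run on the worker node: prints the node's fully qualified domain name
+ hostname -f
+ ```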
+
+1. On the CA machine, run the following command to create ca-cert and ca-key files:
+
+ ```bash
+ openssl req -new -newkey rsa:4096 -days 365 -x509 -subj "/CN=Kafka-Security-CA" -keyout ca-key -out ca-cert -nodes
+ ```
+
+1. Change to the CA machine and sign all of the received cert signing requests:
+
+ ```bash
+ openssl x509 -req -CA ca-cert -CAkey ca-key -in wn0-cert-sign-request -out wn0-cert-signed -days 365 -CAcreateserial -passin pass:"MyServerPassword123"
+ openssl x509 -req -CA ca-cert -CAkey ca-key -in wn1-cert-sign-request -out wn1-cert-signed -days 365 -CAcreateserial -passin pass:"MyServerPassword123"
+ openssl x509 -req -CA ca-cert -CAkey ca-key -in wn2-cert-sign-request -out wn2-cert-signed -days 365 -CAcreateserial -passin pass:"MyServerPassword123"
+ ```
+
+1. Send the signed certificates back to the worker nodes from the CA (headnode0).
+
+ ```bash
+ scp wn0-cert-signed sshuser@WorkerNode0_Name:~/ssl/cert-signed
+ scp wn1-cert-signed sshuser@WorkerNode1_Name:~/ssl/cert-signed
+ scp wn2-cert-signed sshuser@WorkerNode2_Name:~/ssl/cert-signed
+ ```
+
+1. Send the public certificate of the CA to each worker node.
+
+ ```bash
+ scp ca-cert sshuser@WorkerNode0_Name:~/ssl/ca-cert
+ scp ca-cert sshuser@WorkerNode1_Name:~/ssl/ca-cert
+ scp ca-cert sshuser@WorkerNode2_Name:~/ssl/ca-cert
+ ```
+
+1. On each worker node, add the CA's public certificate to the truststore and keystore. Then add the worker node's own signed certificate to the keystore.
+
+ ```bash
+ keytool -keystore kafka.server.truststore.jks -alias CARoot -import -file ca-cert -storepass "MyServerPassword123" -keypass "MyServerPassword123" -noprompt
+ keytool -keystore kafka.server.keystore.jks -alias CARoot -import -file ca-cert -storepass "MyServerPassword123" -keypass "MyServerPassword123" -noprompt
+ keytool -keystore kafka.server.keystore.jks -import -file cert-signed -storepass "MyServerPassword123" -keypass "MyServerPassword123" -noprompt
+
+ ```
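+
+ Optionally (a hedged check, not part of the original steps), list both stores to confirm that the CARoot entry and the signed broker certificate were imported:
+
+ ```bash
+ # List the entries in the keystore and truststore created above
+ keytool -list -keystore kafka.server.keystore.jks -storepass "MyServerPassword123"
+ keytool -list -keystore kafka.server.truststore.jks -storepass "MyServerPassword123"
+ ```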
+
+## Update Kafka configuration to use TLS and restart brokers
+
+You have now set up each Kafka broker with a keystore and truststore, and imported the correct certificates. Next, modify related Kafka configuration properties using Ambari and then restart the Kafka brokers.
+
+To complete the configuration modification, do the following steps:
+
+1. Sign in to the Azure portal and select your Azure HDInsight Apache Kafka cluster.
+1. Go to the Ambari UI by selecting **Ambari home** under **Cluster dashboards**.
+1. Under **Kafka Broker**, set the **listeners** property to `PLAINTEXT://localhost:9092,SASL_SSL://localhost:9093`.
+1. Under **Advanced kafka-broker**, set the **security.inter.broker.protocol** property to `SASL_SSL`.
+
+ :::image type="content" source="./media/apache-esp-kafka-ssl-encryption-authentication/properties-file-with-sasl.png" alt-text="Screenshot showing how to edit Kafka sasl configuration properties in Ambari." border="true":::
+
+1. Under **Custom kafka-broker** set the **ssl.client.auth** property to `required`.
+
+
+ > [!Note]
+ > This step is only required if you're setting up authentication and encryption.
+
+ :::image type="content" source="./media/apache-esp-kafka-ssl-encryption-authentication/editing-configuration-ambari2.png" alt-text="Screenshot showing how to edit Kafka ssl configuration properties in Ambari." border="true":::
+
+1. Here's the screenshot that shows the Ambari configuration UI with these changes.
+
+ > [!Note]
+ > 1. ssl.keystore.location and ssl.truststore.location are the complete paths of your keystore and truststore locations on the Certificate Authority (hn0).
+ > 1. ssl.keystore.password and ssl.truststore.password are the passwords set for the keystore and truststore. In this case, as an example, `MyServerPassword123`.
+ > 1. ssl.key.password is the key password set for the keystore and truststore. In this case, as an example, `MyServerPassword123`.
+
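+ The following screenshots show the **kafka-env template** field where these values are set. As a minimal sketch (the authoritative content is in the screenshots; the file path is an assumption, and the keystore paths and passwords reuse the examples from earlier in this article), the template effectively appends lines like these to the broker configuration:
+
+ ```bash
+ # Appended to the Kafka broker configuration through the kafka-env template (illustrative only)
+ echo "ssl.keystore.location=/home/sshuser/ssl/kafka.server.keystore.jks" >> /usr/hdp/current/kafka-broker/conf/server.properties
+ echo "ssl.keystore.password=MyServerPassword123" >> /usr/hdp/current/kafka-broker/conf/server.properties
+ echo "ssl.key.password=MyServerPassword123" >> /usr/hdp/current/kafka-broker/conf/server.properties
+ echo "ssl.truststore.location=/home/sshuser/ssl/kafka.server.truststore.jks" >> /usr/hdp/current/kafka-broker/conf/server.properties
+ echo "ssl.truststore.password=MyServerPassword123" >> /usr/hdp/current/kafka-broker/conf/server.properties
+ ```
+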
+ For HDI version 4.0 or 5.0
+
+ a. If you're setting up authentication and encryption, the screenshot looks like the following example:
+
+ :::image type="content" source="./media/apache-esp-kafka-ssl-encryption-authentication/properties-file-authentication-as-required.png" alt-text="Screenshot showing how to edit Kafka-env template property in Ambari authentication as required." border="true":::
+
+ b. If you're setting up encryption only, the screenshot looks like the following example:
+
+ :::image type="content" source="./media/apache-esp-kafka-ssl-encryption-authentication/properties-file-authentication-as-none.png" alt-text="Screenshot showing how to edit Kafka-env template property in Ambari authentication as none." border="true":::
+
+1. Restart all Kafka brokers.
+
+## Client setup (without authentication)
+
+If you don't need authentication, here's a summary of the steps to set up only TLS encryption:
+
+1. Sign in to the CA (active head node).
+1. Copy the CA certificate to the client machine from the CA machine (hn0).
+1. Sign in to the client machine (hn1) and navigate to the `~/ssl` folder.
+1. Import the CA cert to the truststore.
+1. Import the CA cert to the keystore.
+
+These steps are detailed in the following code snippets.
+
+1. Sign in to the CA node.
+
+ ```bash
+ ssh sshuser@HeadNode0_Name
+ cd ssl
+ ```
+
+1. Copy the ca-cert to the client machine
+
+ ```bash
+ scp ca-cert sshuser@HeadNode1_Name:~/ssl/ca-cert
+ ```
+
+1. Sign in to the client machine (standby head node).
+
+ ```bash
+ ssh sshuser@HeadNode1_Name
+ cd ssl
+ ```
+
+1. Import the CA certificate to the truststore.
+
+ ```bash
+ keytool -keystore kafka.client.truststore.jks -alias CARoot -import -file ca-cert -storepass "MyClientPassword123" -keypass "MyClientPassword123" -noprompt
+ ```
+
+1. Import the CA certificate to the keystore.
+
+ ```bash
+ keytool -keystore kafka.client.keystore.jks -alias CARoot -import -file ca-cert -storepass "MyClientPassword123" -keypass "MyClientPassword123" -noprompt
+ ```
+
+1. Create the file `client-ssl-auth.properties` on the client machine (hn1). It should have the following lines:
+
+ ```config
+ security.protocol=SASL_SSL
+ sasl.mechanism=GSSAPI
+ sasl.kerberos.service.name=kafka
+ ssl.truststore.location=/home/sshuser/ssl/kafka.client.truststore.jks
+ ssl.truststore.password=MyClientPassword123
+ ```
+
+1. Start the admin client with producer and consumer options to verify that both producers and consumers are working on port 9093. Refer to the [Verification](apache-kafka-ssl-encryption-authentication.md#verification) section for the steps needed to verify the setup using the console producer/consumer.
+
+## Client setup (with authentication)
+
+> [!Note]
+> The following steps are required only if you are setting up both TLS encryption **and** authentication. If you are only setting up encryption, then see [Client setup without authentication](apache-kafka-ssl-encryption-authentication.md#client-setup-without-authentication).
+
+The following four steps summarize the tasks needed to complete the client setup:
+
+1. Sign in to the client machine (standby head node).
+1. Create a Java keystore and get a signed certificate for the client. Then copy the certificate to the VM where the CA is running.
+1. Switch to the CA machine (active head node) to sign the client certificate.
+1. Go to the client machine (standby head node) and navigate to the `~/ssl` folder. Copy the signed certificate to the client machine.
+
+The details of each step follow.
+
+1. Sign in to the client machine (standby head node).
+
+ ```bash
+ ssh sshuser@HeadNode1_Name
+ ```
+
+1. Remove any existing ssl directory.
+
+ ```bash
+ rm -R ~/ssl
+ mkdir ssl
+ cd ssl
+ ```
+
+1. Create a Java keystore and create a certificate signing request.
+
+ ```bash
+ keytool -genkey -keystore kafka.client.keystore.jks -validity 365 -storepass "MyClientPassword123" -keypass "MyClientPassword123" -dname "CN=HEADNODE1_FQDN" -storetype pkcs12
+
+ keytool -keystore kafka.client.keystore.jks -certreq -file client-cert-sign-request -storepass "MyClientPassword123" -keypass "MyClientPassword123"
+ ```
+
+1. Copy the certificate signing request to the CA
+
+ ```bash
+ scp client-cert-sign-request sshuser@HeadNode0_Name:~/ssl/client-cert-sign-request
+ ```
+
+1. Switch to the CA machine (active head node) and sign the client certificate.
+
+ ```bash
+ ssh sshuser@HeadNode0_Name
+ cd ssl
+ openssl x509 -req -CA ca-cert -CAkey ca-key -in ~/ssl/client-cert-sign-request -out ~/ssl/client-cert-signed -days 365 -CAcreateserial -passin pass:MyClientPassword123
+ ```
+
+1. Copy the signed client certificate from the CA (active head node) to the client machine.
+
+ ```bash
+ scp client-cert-signed sshuser@HeadNode1_Name:~/ssl/client-signed-cert
+ ```
+
+1. Copy the ca-cert to the client machine
+
+ ```bash
+ scp ca-cert sshuser@HeadNode1_Name:~/ssl/ca-cert
+ ```
+
+1. Sign in to the client machine (standby head node) and navigate to the ssl directory.
+
+ ```bash
+ ssh sshuser@HeadNode1_Name
+ cd ssl
+ ```
+
+1. Create the client store with the signed certificate, and import the CA certificate into the keystore and truststore on the client machine (hn1):
+
+ ```bash
+ keytool -keystore kafka.client.truststore.jks -alias CARoot -import -file ca-cert -storepass "MyClientPassword123" -keypass "MyClientPassword123" -noprompt
+
+ keytool -keystore kafka.client.keystore.jks -alias CARoot -import -file ca-cert -storepass "MyClientPassword123" -keypass "MyClientPassword123" -noprompt
+
+ keytool -keystore kafka.client.keystore.jks -import -file client-signed-cert -storepass "MyClientPassword123" -keypass "MyClientPassword123" -noprompt
+ ```
+
+1. Create a file `client-ssl-auth.properties` on the client machine (hn1). It should have the following lines:
+
+ ```config
+ security.protocol=SASL_SSL
+ sasl.mechanism=GSSAPI
+ sasl.kerberos.service.name=kafka
+
+ ssl.truststore.location=/home/sshuser/ssl/kafka.client.truststore.jks
+ ssl.truststore.password=MyClientPassword123
+ ssl.keystore.location=/home/sshuser/ssl/kafka.client.keystore.jks
+ ssl.keystore.password=MyClientPassword123
+ ssl.key.password=MyClientPassword123
+
+ ```
+
+## Verification
+
+Run these steps on the client machine.
+
+> [!Note]
+> If HDInsight 4.0 and Kafka 2.1 are installed, you can use the console producer/consumer to verify your setup. If not, run the Kafka producer on port 9092 and send messages to the topic, and then use the Kafka consumer on port 9093, which uses TLS.
+
+### Kafka 2.1 or above
+
+1. Create a topic if it doesn't exist already.
+
+ ```bash
+ /usr/hdp/current/kafka-broker/bin/kafka-topics.sh --zookeeper <ZOOKEEPER_NODE>:2181 --create --topic topic1 --partitions 2 --replication-factor 2
+ ```
+
+1. Start the console producer and provide the path to `client-ssl-auth.properties` as a configuration file for the producer.
+
+ ```bash
+ /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh --broker-list <FQDN_WORKER_NODE>:9093 --topic topic1 --producer.config ~/ssl/client-ssl-auth.properties
+ ```
+
+1. Open another SSH connection to the client machine, start the console consumer, and provide the path to `client-ssl-auth.properties` as a configuration file for the consumer.
+
+ ```bash
+ /usr/hdp/current/kafka-broker/bin/kafka-console-consumer.sh --bootstrap-server <FQDN_WORKER_NODE>:9093 --topic topic1 --consumer.config ~/ssl/client-ssl-auth.properties --from-beginning
+ ```
+
+## Next steps
+
+* [What is Apache Kafka on HDInsight?](apache-kafka-introduction.md)
hdinsight Apache Kafka Ssl Encryption Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-kafka-ssl-encryption-authentication.md
description: Set up TLS encryption for communication between Kafka clients and K
Previously updated : 02/03/2023 Last updated : 02/16/2023
-# Set up TLS encryption and authentication for Apache Kafka in Azure HDInsight
+# Set up TLS encryption and authentication for non-ESP Apache Kafka cluster in Azure HDInsight
This article shows you how to set up Transport Layer Security (TLS) encryption, previously known as Secure Sockets Layer (SSL) encryption, between Apache Kafka clients and Apache Kafka brokers. It also shows you how to set up authentication of clients (sometimes referred to as two-way TLS).
This article shows you how to set up Transport Layer Security (TLS) encryption,
## Apache Kafka broker setup
-The Kafka TLS broker setup will use four HDInsight cluster VMs in the following way:
+The Kafka TLS broker setup uses four HDInsight cluster VMs in the following way:
* headnode 0 - Certificate Authority (CA) * worker node 0, 1, and 2 - brokers > [!Note]
-> This guide will use self-signed certificates, but the most secure solution is to use certificates issued by trusted CAs.
+> This guide uses self-signed certificates, but the most secure solution is to use certificates issued by trusted CAs.
The summary of the broker setup process is as follows:
Use the following detailed instructions to complete the broker setup:
> [!Important] > In the following code snippets wnX is an abbreviation for one of the three worker nodes and should be substituted with `wn0`, `wn1` or `wn2` as appropriate. `WorkerNode0_Name` and `HeadNode0_Name` should be substituted with the names of the respective machines.
-1. Perform initial setup on head node 0, which for HDInsight will fill the role of the Certificate Authority (CA).
+1. Perform initial setup on head node 0, which for HDInsight fills the role of the Certificate Authority (CA).
```bash # Create a new directory 'ssl' and change into it
Use the following detailed instructions to complete the broker setup:
wn0-espkaf.securehadooprc.onmicrosoft.com wn0-kafka2.zbxwnwsmpcsuvbjqbmespcm1zg.bx.internal.cloudapp.net ```
- :::image type="content" source="./media/apache-kafka-ssl-encryption-authentication/etc-hosts.png" alt-text="Screenshot showing etc hosts output." border="true":::
+ :::image type="content" source="./media/apache-kafka-ssl-encryption-authentication/etc-hosts.png" alt-text="Screenshot showing hosts file output." border="true":::
1. On the CA machine, run the following command to create ca-cert and ca-key files:
To complete the configuration modification, do the following steps:
1. Under **Kafka Broker** set the **listeners** property to `PLAINTEXT://localhost:9092,SSL://localhost:9093` 1. Under **Advanced kafka-broker** set the **security.inter.broker.protocol** property to `SSL`
- :::image type="content" source="./media/apache-kafka-ssl-encryption-authentication/editing-configuration-ambari.png" alt-text="Editing Kafka ssl configuration properties in Ambari" border="true":::
+ :::image type="content" source="./media/apache-kafka-ssl-encryption-authentication/editing-configuration-ambari.png" alt-text="Editing Kafka ssl configuration properties in Ambari." border="true":::
1. Under **Custom kafka-broker** set the **ssl.client.auth** property to `required`. > [!Note]
- > Note: This step is only required if you are setting up authentication and encryption.
+ > This step is only required if you're setting up authentication and encryption.
- :::image type="content" source="./media/apache-kafka-ssl-encryption-authentication/editing-configuration-ambari2.png" alt-text="Editing kafka ssl configuration properties in Ambari" border="true":::
+ :::image type="content" source="./media/apache-kafka-ssl-encryption-authentication/editing-configuration-ambari2.png" alt-text="Editing kafka ssl configuration properties in Ambari." border="true":::
1. Here's the screenshot that shows Ambari configuration UI with these changes.
+
+ > [!Note]
+ > 1. ssl.keystore.location and ssl.truststore.location are the complete paths of your keystore and truststore locations on the Certificate Authority (hn0).
+ > 1. ssl.keystore.password and ssl.truststore.password are the passwords set for the keystore and truststore. In this case, as an example, `MyServerPassword123`.
+ > 1. ssl.key.password is the key password set for the keystore and truststore. In this case, as an example, `MyServerPassword123`.
+ For HDI version 4.0 or 5.0
+
+ 1. If you're setting up authentication and encryption, the screenshot looks like the following example:
- :::image type="content" source="./media/apache-kafka-ssl-encryption-authentication/editing-configuration-kafka-env-four.png" alt-text="Editing kafka-env template property in Ambari four" border="true":::
+ :::image type="content" source="./media/apache-kafka-ssl-encryption-authentication/editing-configuration-kafka-env-four.png" alt-text="Editing kafka-env template property in Ambari four." border="true":::
+
+ 1. If you're setting up encryption only, the screenshot looks like the following example:
+
+ :::image type="content" source="./media/apache-kafka-ssl-encryption-authentication/editing-configuration-kafka-env-four-encryption-only.png" alt-text="Screenshot showing how to edit kafka-env template property field in Ambari for encryption only." border="true":::
+
1. Restart all Kafka brokers.
To complete the configuration modification, do the following steps:
If you don't need authentication, here's a summary of the steps to set up only TLS encryption: 1. Sign in to the CA (active head node).
-1. Copy the CA cert to client machine from the CA machine (wn0).
+1. Copy the CA certificate to the client machine from the CA machine (hn0).
1. Sign in to the client machine (hn1) and navigate to the `~/ssl` folder.
-1. Import the CA cert to the truststore.
-1. Import the CA cert to the keystore.
+1. Import the CA certificate to the truststore.
+1. Import the CA certificate to the keystore.
These steps are detailed in the following code snippets.
These steps are detailed in the following code snippets.
keytool -keystore kafka.client.truststore.jks -alias CARoot -import -file ca-cert -storepass "MyClientPassword123" -keypass "MyClientPassword123" -noprompt ```
-1. Import the CA cert to keystore.
+1. Import the CA certificate to the keystore.
```bash keytool -keystore kafka.client.keystore.jks -alias CARoot -import -file ca-cert -storepass "MyClientPassword123" -keypass "MyClientPassword123" -noprompt
hdinsight Connect Kafka Cluster With Vm In Different Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/connect-kafka-cluster-with-vm-in-different-vnet.md
+
+ Title: Connect Apache Kafka cluster with VM in different VNet on Azure HDInsight - Azure HDInsight
+description: Learn how to connect Apache Kafka cluster with VM in different VNet on Azure HDInsight
+++ Last updated : 02/16/2023++
+# How to connect Kafka cluster with VM in different VNet
+
+This document lists the steps to set up connectivity between a VM and an HDInsight Kafka cluster that reside in two different VNets.
+
+1. Create two different VNets to host the HDInsight Kafka cluster and the VM, respectively. For more information, see [Create a virtual network using the Azure portal](https://learn.microsoft.com/azure/virtual-network/quick-create-portal).
+
+ > [!Note]
+ > These two VNets must be peered, and the IP address ranges of their subnets must not overlap with each other. For more information, see [Connect virtual networks with virtual network peering using the Azure portal](https://learn.microsoft.com/azure/virtual-network/tutorial-connect-virtual-networks-portal).
+
+1. Make sure that the peering status shows as connected.
+
+ :::image type="content" source="./media/connect-kafka-cluster-with-different-vnet/kakfa-event-peering-window.png" alt-text="Screenshot showing Kafka event peering." border="true":::
+
+1. After the preceding steps are completed, create an HDInsight Kafka cluster in one of the VNets. For more information, see [Create an Apache Kafka cluster](./apache-kafka-get-started.md#create-an-apache-kafka-cluster).
+
+1. Create a Virtual Machine in the second VNet. While creating the VM, specify the second VNet name where this virtual machine must be deployed. For more information, see [Create a Linux virtual machine in the Azure portal](https://learn.microsoft.com/azure/virtual-machines/linux/quick-create-portal)
+
+1. After this step, copy the entries of the /etc/hosts file from the Kafka head node to the VM.
+
+ :::image type="content" source="./media/connect-kafka-cluster-with-different-vnet/etc-host-output.png" alt-text="Screenshot showing host file output." border="true":::
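+
+ As a hedged sketch (not part of the original steps; the IP address and FQDN below are the illustrative values used in this article, so substitute the entries from your own head node's /etc/hosts file):
+
+ ```bash
+ # On the Linux VM: append one copied host entry to the local hosts file
+ echo "10.0.0.16  hn0-vnetka.glarbkztnoqubmzjylls33qnse.bx.internal.cloudapp.net" | sudo tee -a /etc/hosts
+ ```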
+
+1. Remove the `headnodehost` string entries from the file. For example, the preceding image has a `headnodehost` entry for the IP address 10.0.0.16. After removal, the file looks like this:
+
+ :::image type="content" source="./media/connect-kafka-cluster-with-different-vnet/modified-etc-hosts-output.png" alt-text="Screenshot showing modified host file output." border="true":::
+
+1. After these entries are made, try to reach the Kafka Ambari dashboard by using the curl command with the hn0 or hn1 FQDN:
+
+ From Linux VM
+
+ ```
+ curl hn0-vnetka.glarbkztnoqubmzjylls33qnse.bx.internal.cloudapp.net:8080
+ ```
+
+ Output:
+
+ ```
+ <!--
+ * Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to you under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
+
+ -->
+
+ <!DOCTYPE html>
+ <html lang="en">
+
+ <head>
+
+ <meta charset="utf-8">
+ <meta http-equiv="X-UA-Compatible" content="IE=edge">
+ <meta name="viewport" content="width=device-width, initial-scale=1.0">
+ <link rel="stylesheet" href="stylesheets/vendor.css">
+ <link rel="stylesheet" href="stylesheets/app.css">
+ <script src="javascripts/vendor.js"></script>
+ <script src="javascripts/app.js"></script>
+
+ <script>
+ $(document).ready(function() {
+ require('initialize');
+ // make favicon work in firefox
+ $('link[type*=icon]').detach().appendTo('head');
+ $('#loading').remove();
+ });
+
+ </script>
+
+ <title>Ambari</title>
+ <link rel="shortcut icon" href="/img/logo.png" type="image/x-icon">
+ </head>
+ <body>
+ <div id="loading">...Loading...</div>
+ <div id="wrapper">
+ <!-- ApplicationView -->
+ </div>
+ <footer>
+
+ <div class="container footer-links">
+
+ <a data-qa="license-link" href="http://www.apache.org/licenses/LICENSE-2.0" target="_blank">Licensed under the Apache License, Version 2.0</a>. <br>
+
+ <a data-qa="third-party-link" href="/licenses/NOTICE.txt" target="_blank">See third-party tools/resources that Ambari uses and their respective authors</a>
+
+ </div>
+
+ </footer>
+
+ </body>
+
+ </html>
+ ```
+ From Windows VM
+
+ :::image type="content" source="./media/connect-kafka-cluster-with-different-vnet/windows-vm.png" alt-text="Screenshot showing windows VM output." border="true":::
+
+ > [!Note]
+ > 1. In a Windows VM, static hostnames are added in the hosts file present in the path `C:\Windows\System32\drivers\etc\`.
+ > 1. This document assumes that the Ambari server is active on hn0. If the Ambari server is active on hn1, use the FQDN of hn1 to access the Ambari UI.
+
+1. You can also send messages to a Kafka topic and read the topics from the VM. For that, you can use this sample Java application: https://github.com/Azure-Samples/hdinsight-kafka-java-get-started
+
+ Make sure to create the topic inside the Kafka cluster by using the following command:
+
+ ```
+ java -jar kafka-producer-consumer.jar create <topic_name> $KAFKABROKERS
+ ```
+
+1. After creating the topic, use the following commands to produce and consume. Replace $KAFKABROKERS with the broker worker node FQDNs and port, as mentioned in the documentation.
+
+ ```
+ java -jar kafka-producer-consumer.jar producer test $KAFKABROKERS
+ java -jar kafka-producer-consumer.jar consumer test $KAFKABROKERS
+ ```
+
+1. After this step, you get output like the following examples:
+
+ **Producer output:**
+
+ :::image type="content" source="./media/connect-kafka-cluster-with-different-vnet/kafka-producer-output.png" alt-text="Screenshot showing Kafka producer output VM." border="true":::
+
+ **Consumer output:**
+
+ :::image type="content" source="./media/connect-kafka-cluster-with-different-vnet/kafka-consumer-output.png" alt-text="Screenshot showing Kafka consumer output." border="true":::
+
healthcare-apis Understand Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/understand-service.md
Title: Understand the MedTech service device message data transformation - Azure Health Data Services
-description: This article provides an understanding of the MedTech service device messaging data transformation to FHIR Observation resources. The MedTech service ingests, normalizes, groups, transforms, and persists device message data in the FHIR service.
+description: This article provides an overview of the MedTech service device messaging data transformation to FHIR Observation resources. The MedTech service ingests, normalizes, groups, transforms, and persists device message data in the FHIR service.
Previously updated : 02/09/2023 Last updated : 02/14/2023
> [!NOTE] > [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
-This article provides an overview of the device message data processing stages within the [MedTech service](overview.md). The MedTech service transforms device message data into FHIR [Observation](https://www.hl7.org/fhir/observation.html) resources for persistence on the [FHIR service](../fhir/overview.md).
+This article provides an overview of the device message data processing stages within the [MedTech service](overview.md). The MedTech service transforms device message data into FHIR [Observation](https://www.hl7.org/fhir/observation.html) resources for persistence in the [FHIR service](../fhir/overview.md).
The MedTech service device message data processing follows these steps, in this order:
At this point, the [Device](https://www.hl7.org/fhir/device.html) resource, alon
> [!NOTE] > All identity lookups are cached once resolved to decrease load on the FHIR service. If you plan on reusing devices with multiple patients, it's advised that you create a virtual device resource that is specific to the patient and send the virtual device identifier in the device message payload. The virtual device can be linked to the actual device resource as a parent.
-If no Device resource for a given device identifier exists in the FHIR service, the outcome depends upon the value of [Resolution Type](deploy-new-config.md#configure-the-destination-tab) set at the time of the MedTech service deployment. When set to `Lookup`, the specific message is ignored, and the pipeline continues to process other incoming device messages. If set to `Create`, the MedTech service creates minimal Device and Patient resources on the FHIR service.
+If no Device resource for a given device identifier exists in the FHIR service, the outcome depends upon the value of [Resolution Type](deploy-new-config.md#configure-the-destination-tab) set at the time of the MedTech service deployment. When set to `Lookup`, the specific message is ignored, and the pipeline continues to process other incoming device messages. If set to `Create`, the MedTech service creates minimal Device and Patient resources in the FHIR service.
> [!NOTE] > The `Resolution Type` can also be adjusted after deployment of the MedTech service if a different type is later desired.
To learn how to configure the MedTech service device and FHIR destination mappin
> [!div class="nextstepaction"] > [How to configure FHIR destination mappings](how-to-configure-fhir-mappings.md)
-FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
+FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
iot-edge How To Collect And Transport Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-collect-and-transport-metrics.md
description: Use Azure Monitor to remotely monitor IoT Edge's built-in metrics
Previously updated : 03/18/2022 Last updated : 2/14/2023
You can remotely monitor your IoT Edge fleet using Azure Monitor and built-in metrics integration. To enable this capability on your device, add the metrics-collector module to your deployment and configure it to collect and transport module metrics to Azure Monitor.
-To configure monitoring on your IoT Edge device follow the [Tutorial: Monitor IoT Edge devices](tutorial-monitor-with-workbooks.md). You'll learn how to add the metrics-collector module to your device. Otherwise, the information in this article (Collect and transport metrics) gives you an overview of the monitoring architecture and explains options you have for when it's time to configure metrics on your device.
+To configure monitoring on your IoT Edge device, follow the [Tutorial: Monitor IoT Edge devices](tutorial-monitor-with-workbooks.md). You learn how to add the metrics-collector module to your device. This article gives you an overview of the monitoring architecture and explains your options on configuring metrics on your device.
> [!VIDEO https://aka.ms/docs/player?id=94a7d988-4a35-4590-9dd8-a511cdd68bee]
It also available in the [IoT Edge Module Marketplace](https://aka.ms/edgemon-mo
## Metrics collector configuration
-All configuration for the metrics-collector is done using environment variables. Minimally, the variables noted in the table below marked as **Required** need to be specified.
+All configuration for the metrics-collector is done using environment variables. Minimally, the variables noted in this table marked as **Required** need to be specified.
# [IoT Hub](#tab/iothub)
All configuration for the metrics-collector is done using environment variables.
| `LogAnalyticsSharedKey` | [Log Analytics workspace key](../azure-monitor/agents/agent-windows.md#workspace-id-and-key). <br><br>**Required** only if *UploadTarget* is *AzureMonitor* <br><br> Default value: *none* |
| `ScrapeFrequencyInSecs` | Recurring time interval in seconds at which to collect and transport metrics.<br><br> Example: *600* <br><br> **Not required** <br><br> Default value: *300* |
| `MetricsEndpointsCSV` | Comma-separated list of endpoints to collect Prometheus metrics from. All module endpoints to collect metrics from must appear in this list.<br><br> Example: *http://edgeAgent:9600/metrics, http://edgeHub:9600/metrics, http://MetricsSpewer:9417/metrics* <br><br> **Not required** <br><br> Default value: *http://edgeHub:9600/metrics, http://edgeAgent:9600/metrics* |
-| `AllowedMetrics` | List of metrics to collect, all other metrics will be ignored. Set to an empty string to disable. For more information, see [allow and disallow lists](#allow-and-disallow-lists). <br><br>Example: *metricToScrape{quantile=0.99}[endpoint=http://MetricsSpewer:9417/metrics]*<br><br> **Not required** <br><br> Default value: *empty* |
-| `BlockedMetrics` | List of metrics to ignore. Overrides *AllowedMetrics*, so a metric won't be reported if it's included in both lists. For more information, see [allow and disallow lists](#allow-and-disallow-lists). <br><br> Example: *metricToIgnore{quantile=0.5}[endpoint=http://VeryNoisyModule:9001/metrics], docker_container_disk_write_bytes*<br><br> **Not required** <br><br>Default value: *empty* |
+| `AllowedMetrics` | List of metrics to collect, all other metrics are ignored. Set to an empty string to disable. For more information, see [allow and disallow lists](#allow-and-disallow-lists). <br><br>Example: *metricToScrape{quantile=0.99}[endpoint=http://MetricsSpewer:9417/metrics]*<br><br> **Not required** <br><br> Default value: *empty* |
+| `BlockedMetrics` | List of metrics to ignore. Overrides *AllowedMetrics*, so a metric isn't reported if it's included in both lists. For more information, see [allow and disallow lists](#allow-and-disallow-lists). <br><br> Example: *metricToIgnore{quantile=0.5}[endpoint=http://VeryNoisyModule:9001/metrics], docker_container_disk_write_bytes*<br><br> **Not required** <br><br>Default value: *empty* |
| `CompressForUpload` | Controls if compression should be used when uploading metrics. Applies to all upload targets.<br><br> Example: *true* <br><br> **Not required** <br><br> Default value: *true* |
| `AzureDomain` | Specifies the top-level Azure domain to use when ingesting metrics directly to Log Analytics. <br><br> Example: *azure.us* <br><br> **Not required** <br><br> Default value: *azure.com* |
All configuration for the metrics-collector is done using environment variables.
| `LogAnalyticsSharedKey` | [Log Analytics workspace key](../azure-monitor/agents/agent-windows.md#workspace-id-and-key). <br><br>**Required** only if *UploadTarget* is *AzureMonitor* <br><br> Default value: *none* |
| `ScrapeFrequencyInSecs` | Recurring time interval in seconds at which to collect and transport metrics.<br><br> Example: *600* <br><br> **Not required** <br><br> Default value: *300* |
| `MetricsEndpointsCSV` | Comma-separated list of endpoints to collect Prometheus metrics from. All module endpoints to collect metrics from must appear in this list.<br><br> Example: *http://edgeAgent:9600/metrics, http://edgeHub:9600/metrics, http://MetricsSpewer:9417/metrics* <br><br> **Not required** <br><br> Default value: *http://edgeHub:9600/metrics, http://edgeAgent:9600/metrics* |
-| `AllowedMetrics` | List of metrics to collect, all other metrics will be ignored. Set to an empty string to disable. For more information, see [allow and disallow lists](#allow-and-disallow-lists). <br><br>Example: *metricToScrape{quantile=0.99}[endpoint=http://MetricsSpewer:9417/metrics]*<br><br> **Not required** <br><br> Default value: *empty* |
-| `BlockedMetrics` | List of metrics to ignore. Overrides *AllowedMetrics*, so a metric won't be reported if it's included in both lists. For more information, see [allow and disallow lists](#allow-and-disallow-lists). <br><br> Example: *metricToIgnore{quantile=0.5}[endpoint=http://VeryNoisyModule:9001/metrics], docker_container_disk_write_bytes*<br><br> **Not required** <br><br>Default value: *empty* |
+| `AllowedMetrics` | List of metrics to collect, all other metrics are ignored. Set to an empty string to disable. For more information, see [allow and disallow lists](#allow-and-disallow-lists). <br><br>Example: *metricToScrape{quantile=0.99}[endpoint=http://MetricsSpewer:9417/metrics]*<br><br> **Not required** <br><br> Default value: *empty* |
+| `BlockedMetrics` | List of metrics to ignore. Overrides *AllowedMetrics*, so a metric isn't reported if it's included in both lists. For more information, see [allow and disallow lists](#allow-and-disallow-lists). <br><br> Example: *metricToIgnore{quantile=0.5}[endpoint=http://VeryNoisyModule:9001/metrics], docker_container_disk_write_bytes*<br><br> **Not required** <br><br>Default value: *empty* |
| `CompressForUpload` | Controls if compression should be used when uploading metrics. Applies to all upload targets.<br><br> Example: *true* <br><br> **Not required** <br><br> Default value: *true* |
| `AzureDomain` | Specifies the top-level Azure domain to use when ingesting metrics directly to Log Analytics. <br><br> Example: *azure.us* <br><br> **Not required** <br><br> Default value: *azure.com* |
The resource ID takes the following format:
You can find the resource ID in the **Properties** page of the IoT hub in the Azure portal. Or, you retrieve the ID with the [az resource show](/cli/azure/resource#az-resource-show) command:
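
For example (a hedged sketch with placeholder names for the resource group and IoT hub):

```bash
# Retrieve the full resource ID of an IoT hub
az resource show \
  --resource-group MyResourceGroup \
  --name MyIotHub \
  --resource-type "Microsoft.Devices/IotHubs" \
  --query id \
  --output tsv
```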
If you set **UploadTarget** to **IoTMessage**, then your module metrics are publ
### Allow and disallow lists
-The `AllowedMetrics` and `BlockedMetrics` configuration options take space- or comma-separated lists of metric selectors. A metric will match the list and be included or excluded if it matches one or more metrics in either list.
+The `AllowedMetrics` and `BlockedMetrics` configuration options take space- or comma-separated lists of metric selectors. A metric matches the list and is included or excluded if it matches one or more metrics in either list.
Metric selectors use a format similar to a subset of the [PromQL](https://prometheus.io/docs/prometheus/latest/querying/basics/) query language.
Metric name (`metricToSelect`).
Label-based selectors (`{quantile=0.5,otherLabel=~Re[ge]*|x}`). * Multiple metric values can be included in the curly brackets. The values should be comma-separated.
-* A metric will be matched if at least all labels in the selector are present and also match.
+* A metric is matched if at least all labels in the selector are present and also match.
* Like PromQL, the following matching operators are allowed. * `=` Match labels exactly equal to the provided string (case sensitive). * `!=` Match labels not exactly equal to the provided string. * `=~` Match labels to a provided regex. ex: `label=~CPU|Mem|[0-9]*` * `!~` Match labels that don't fit a provided regex.
- * Regex is fully anchored (A ^ and $ are automatically added to the start and end of each regex)
+ * Regex is fully anchored (A `^` and `$` are automatically added to the start and end of each regex)
* This component is optional in a metrics selector. Endpoint selector (`[http://VeryNoisyModule:9001/metrics]`).
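
Putting these pieces together (a hedged sketch that reuses the illustrative metric and endpoint names from the examples in the configuration table), the allow and disallow lists might be set as:

```bash
# Collect only the 0.99 quantile of metricToScrape from one endpoint; ignore a noisy Docker metric
AllowedMetrics="metricToScrape{quantile=0.99}[endpoint=http://MetricsSpewer:9417/metrics]"
BlockedMetrics="docker_container_disk_write_bytes"
```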
Set `NO_PROXY` value to a comma-separated list of hostnames that should be exclu
# [IoT Hub](#tab/iothub)
-Sometimes it's necessary to ingest metrics though IoT Hub instead of sending them directly to Log Analytics. For example, when monitoring [IoT Edge devices in a nested configuration](tutorial-nested-iot-edge.md) where child devices have access only to the IoT Edge hub of their parent device. Another example is when deploying an IoT Edge device with outbound network access only to IoT Hub.
+Sometimes it's necessary to ingest metrics through IoT Hub instead of sending them directly to Log Analytics. For example, when monitoring [IoT Edge devices in a nested configuration](tutorial-nested-iot-edge.md) where child devices have access only to the IoT Edge hub of their parent device. Another example is when deploying an IoT Edge device with *outbound network access only* to IoT Hub.
To enable monitoring in this scenario, the metrics-collector module can be configured to send metrics as device-to-cloud (D2C) messages via the edgeHub module. The capability can be turned on by setting the `UploadTarget` environment variable to `IoTMessage` in the collector [configuration](#metrics-collector-configuration). >[!TIP] >Remember to add an edgeHub route to deliver metrics messages from the collector module to IoT Hub. It looks like `FROM /messages/modules/replace-with-collector-module-name/* INTO $upstream`.
-This option does require [extra setup](how-to-collect-and-transport-metrics.md#sample-cloud-workflow) to deliver metrics messages arriving at IoT Hub to the Log Analytics workspace. Without this set up, the other portions of the integration such as [curated visualizations](how-to-explore-curated-visualizations.md) and [alerts](how-to-create-alerts.md) won't work.
+This option does require extra setup, a cloud workflow, to deliver metrics messages arriving at IoT Hub to the Log Analytics workspace. Without this setup, the other portions of the integration, such as [curated visualizations](how-to-explore-curated-visualizations.md) and [alerts](how-to-create-alerts.md), don't work.
>[!NOTE] >Be aware of additional costs with this option. Metrics messages will count against your IoT Hub message quota. You will also be charged for Log Analytics ingestion and cloud workflow resources.
A cloud workflow that delivers metrics messages from IoT Hub to Log Analytics is
# [IoT Central](#tab/iotcentral)
-Sometimes it's necessary to ingest metrics though IoT Central instead of sending them directly to Log Analytics. For example, when monitoring [IoT Edge devices in a nested configuration](tutorial-nested-iot-edge.md) where child devices have access only to the IoT Edge hub of their parent device. Another example is when deploying an IoT Edge device with outbound network access only to IoT Central.
+Sometimes it's necessary to ingest metrics through IoT Central instead of sending them directly to Log Analytics. For example, when monitoring [IoT Edge devices in a nested configuration](tutorial-nested-iot-edge.md) where child devices have access only to the IoT Edge hub of their parent device. Another example is when deploying an IoT Edge device with *outbound network access only* to IoT Central.
To enable monitoring in this scenario, the metrics-collector module can be configured to send metrics as device-to-cloud (D2C) messages via the edgeHub module. The capability can be turned on by setting the `UploadTarget` environment variable to `IoTMessage` in the collector [configuration](#metrics-collector-configuration).
To view the metrics from your IoT Edge device in your IoT Central application:
* Add the **IoT Edge Metrics standard interface** as an inherited interface to your [device template](../iot-central/core/concepts-device-templates.md):
- :::image type="content" source="media/how-to-collect-and-transport-metrics/add-metrics-interface.png" alt-text="Add the IoT Edge Metrics standard interface.":::
+ :::image type="content" source="media/how-to-collect-and-transport-metrics/add-metrics-interface.png" alt-text="Screenshot that shows how to add the IoT Edge Metrics standard interface." lightbox="media/how-to-collect-and-transport-metrics/add-metrics-interface.png":::
* Use the telemetry values defined in the interface to build any [dashboards](../iot-central/core/howto-manage-dashboards.md) you need to monitor your IoT Edge devices:
- :::image type="content" source="media/how-to-collect-and-transport-metrics/iot-edge-metrics-telemetry.png" alt-text="IoT Edge metrics available as telemetry.":::
+ :::image type="content" source="media/how-to-collect-and-transport-metrics/iot-edge-metrics-telemetry.png" alt-text="Screenshot that shows the IoT Edge metrics available as telemetry." lightbox="media/how-to-collect-and-transport-metrics/iot-edge-metrics-telemetry.png":::
>[!NOTE] >Be aware of additional costs with this option. Metrics messages will count against your IoT Central message quota.
iot-edge How To Provision Devices At Scale Linux On Windows Symmetric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-provision-devices-at-scale-linux-on-windows-symmetric.md
[!INCLUDE [iot-edge-version-1.4](includes/iot-edge-version-1.4.md)]
-This article provides end-to-end instructions for autoprovisioning one or more [IoT Edge for Linux on Windows](iot-edge-for-linux-on-windows.md) devices using symmetric keys. You can automatically provision Azure IoT Edge devices with the [Azure IoT Hub device provisioning service](../iot-dps/index.yml) (DPS). If you're unfamiliar with the process of autoprovisioning, review the [provisioning overview](../iot-dps/about-iot-dps.md#provisioning-process) before continuing.
+This article shows how to autoprovision one or more [IoT Edge for Linux on Windows](iot-edge-for-linux-on-windows.md) devices using symmetric keys. You can automatically provision Azure IoT Edge devices with the [Azure IoT Hub device provisioning service](../iot-dps/index.yml) (DPS). If you're unfamiliar with the process of autoprovisioning, review the [provisioning overview](../iot-dps/about-iot-dps.md#provisioning-process) before continuing.
The tasks are as follows:
iot-edge How To Store Data Blob https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-store-data-blob.md
Watch the video for quick introduction
This module comes with **deviceToCloudUpload** and **deviceAutoDelete** features.
-**deviceToCloudUpload** is a configurable functionality. This function automatically uploads the data from your local blob storage to Azure with intermittent internet connectivity support. It allows you to:
+The **deviceToCloudUpload** feature is a configurable functionality. This function automatically uploads the data from your local blob storage to Azure with intermittent internet connectivity support. It allows you to:
* Turn ON/OFF the deviceToCloudUpload feature. * Choose the order in which the data is copied to Azure like NewestFirst or OldestFirst.
This module uses block level upload, when your blob consists of blocks. Here are
* Your application updates some blocks of a previously uploaded block blob or appends new blocks to an append blob, this module uploads only the updated blocks and not the whole blob. * The module is uploading blob and internet connection goes away, when the connectivity is back again it uploads only the remaining blocks and not the whole blob.
-If an unexpected process termination (like power failure) happens during a blob upload, all blocks that were due for the upload will be uploaded again once the module comes back online.
+If an unexpected process termination (like power failure) happens during a blob upload, all blocks due for the upload are uploaded again once the module comes back online.
**deviceAutoDelete** is a configurable functionality. This function automatically deletes your blobs from the local storage when the specified duration (measured in minutes) expires. It allows you to: * Turn ON/OFF the deviceAutoDelete feature.
-* Specify the time in minutes (deleteAfterMinutes) after which the blobs will be automatically deleted.
+* Specify the time in minutes (deleteAfterMinutes) after which the blobs are automatically deleted.
* Choose the ability to retain the blob while it's uploading if the deleteAfterMinutes value expires. ## Prerequisites
Use the module's desired properties to set **deviceToCloudUploadProperties** and
### deviceToCloudUploadProperties
-The name of this setting is `deviceToCloudUploadProperties`. If you are using the IoT Edge simulator, set the values to the related environment variables for these properties, which you can find in the explanation section.
+The name of this setting is `deviceToCloudUploadProperties`. If you're using the IoT Edge simulator, set the values to the related environment variables for these properties, which you can find in the explanation section.
| Property | Possible Values | Explanation | | -- | -- | - |
-| uploadOn | true, false | Set to `false` by default. If you want to turn the feature on, set this field to `true`. <br><br> Environment variable: `deviceToCloudUploadProperties__uploadOn={false,true}` |
+| uploadOn | true, false | Set to `false` by default. If you want to turn on the feature, set this field to `true`. <br><br> Environment variable: `deviceToCloudUploadProperties__uploadOn={false,true}` |
| uploadOrder | NewestFirst, OldestFirst | Allows you to choose the order in which the data is copied to Azure. Set to `OldestFirst` by default. The order is determined by last modified time of Blob. <br><br> Environment variable: `deviceToCloudUploadProperties__uploadOrder={NewestFirst,OldestFirst}` |
-| cloudStorageConnectionString | | `"DefaultEndpointsProtocol=https;AccountName=<your Azure Storage Account Name>;AccountKey=<your Azure Storage Account Key>;EndpointSuffix=<your end point suffix>"` is a connection string that allows you to specify the storage account to which you want your data uploaded. Specify `Azure Storage Account Name`, `Azure Storage Account Key`, `End point suffix`. Add appropriate EndpointSuffix of Azure where data will be uploaded, it varies for Global Azure, Government Azure, and Microsoft Azure Stack. <br><br> You can choose to specify Azure Storage SAS connection string here. But you have to update this property when it expires. SAS permissions may include create access for containers and create, write, and add access for blobs. <br><br> Environment variable: `deviceToCloudUploadProperties__cloudStorageConnectionString=<connection string>` |
-| storageContainersForUpload | `"<source container name1>": {"target": "<target container name>"}`,<br><br> `"<source container name1>": {"target": "%h-%d-%m-%c"}`, <br><br> `"<source container name1>": {"target": "%d-%c"}` | Allows you to specify the container names you want to upload to Azure. This module allows you to specify both source and target container names. If you don't specify the target container name, it will automatically assign the container name as `<IoTHubName>-<IotEdgeDeviceID>-<ModuleName>-<SourceContainerName>`. You can create template strings for target container name, check out the possible values column. <br>* %h -> IoT Hub Name (3-50 characters). <br>* %d -> IoT Edge Device ID (1 to 129 characters). <br>* %m -> Module Name (1 to 64 characters). <br>* %c -> Source Container Name (3 to 63 characters). <br><br>Maximum size of the container name is 63 characters, while automatically assigning the target container name if the size of container exceeds 63 characters it will trim each section (IoTHubName, IotEdgeDeviceID, ModuleName, SourceContainerName) to 15 characters. <br><br> Environment variable: `deviceToCloudUploadProperties__storageContainersForUpload__<sourceName>__target=<targetName>` |
-| deleteAfterUpload | true, false | Set to `false` by default. When it is set to `true`, it will automatically delete the data when upload to cloud storage is finished. <br><br> **CAUTION**: If you are using append blobs, this setting will delete append blobs from local storage after a successful upload, and any future Append Block operations to those blobs will fail. Use this setting with caution, do not enable this if your application does infrequent append operations or does not support continuous append operations<br><br> Environment variable: `deviceToCloudUploadProperties__deleteAfterUpload={false,true}`. |
+| cloudStorageConnectionString | | `"DefaultEndpointsProtocol=https;AccountName=<your Azure Storage Account Name>;AccountKey=<your Azure Storage Account Key>;EndpointSuffix=<your end point suffix>"` is a connection string that allows you to specify the storage account to which you want your data uploaded. Specify `Azure Storage Account Name`, `Azure Storage Account Key`, `End point suffix`. Add the appropriate EndpointSuffix of Azure where data is uploaded; it varies for Global Azure, Government Azure, and Microsoft Azure Stack. <br><br> You can choose to specify an Azure Storage SAS connection string here, but you have to update this property when it expires. SAS permissions may include create access for containers and create, write, and add access for blobs. <br><br> Environment variable: `deviceToCloudUploadProperties__cloudStorageConnectionString=<connection string>` |
+| storageContainersForUpload | `"<source container name1>": {"target": "<target container name>"}`,<br><br> `"<source container name1>": {"target": "%h-%d-%m-%c"}`, <br><br> `"<source container name1>": {"target": "%d-%c"}` | Allows you to specify the container names you want to upload to Azure. This module allows you to specify both source and target container names. If you don't specify the target container name, it's automatically assigned a container name such as `<IoTHubName>-<IotEdgeDeviceID>-<ModuleName>-<SourceContainerName>`. You can create template strings for the target container name; check out the possible values column. <br>* %h -> IoT Hub Name (3-50 characters). <br>* %d -> IoT Edge Device ID (1 to 129 characters). <br>* %m -> Module Name (1 to 64 characters). <br>* %c -> Source Container Name (3 to 63 characters). <br><br>The maximum size of the container name is 63 characters. If the automatically assigned target container name exceeds 63 characters, each section (IoTHubName, IotEdgeDeviceID, ModuleName, SourceContainerName) is trimmed to 15 characters. <br><br> Environment variable: `deviceToCloudUploadProperties__storageContainersForUpload__<sourceName>__target=<targetName>` |
+| deleteAfterUpload | true, false | Set to `false` by default. When set to `true`, the data is automatically deleted when the upload to cloud storage is finished. <br><br> **CAUTION**: If you're using append blobs, this setting deletes append blobs from local storage after a successful upload, and any future Append Block operations to those blobs will fail. Use this setting with caution. Don't enable this setting if your application does infrequent append operations or doesn't support continuous append operations.<br><br> Environment variable: `deviceToCloudUploadProperties__deleteAfterUpload={false,true}`. |
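+
+For example (a hedged sketch for the IoT Edge simulator case mentioned above; the connection string and container names are placeholders), the equivalent environment variables could look like this:
+
+```bash
+deviceToCloudUploadProperties__uploadOn=true
+deviceToCloudUploadProperties__uploadOrder=OldestFirst
+deviceToCloudUploadProperties__cloudStorageConnectionString="DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>;EndpointSuffix=core.windows.net"
+deviceToCloudUploadProperties__storageContainersForUpload__cont1__target=uploadedcont1
+deviceToCloudUploadProperties__deleteAfterUpload=false
+```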
### deviceAutoDeleteProperties
-The name of this setting is `deviceAutoDeleteProperties`. If you are using the IoT Edge simulator, set the values to the related environment variables for these properties, which you can find in the explanation section.
+The name of this setting is `deviceAutoDeleteProperties`. If you're using the IoT Edge simulator, set the values to the related environment variables for these properties, which you can find in the explanation section.
| Property | Possible Values | Explanation | | -- | -- | - |
-| deleteOn | true, false | Set to `false` by default. If you want to turn the feature on, set this field to `true`. <br><br> Environment variable: `deviceAutoDeleteProperties__deleteOn={false,true}` |
-| deleteAfterMinutes | `<minutes>` | Specify the time in minutes. The module will automatically delete your blobs from local storage when this value expires. Current maximum minutes allowed is 35791. <br><br> Environment variable: `deviceAutoDeleteProperties__ deleteAfterMinutes=<minutes>` |
-| retainWhileUploading | true, false | By default it is set to `true`, and it will retain the blob while it is uploading to cloud storage if deleteAfterMinutes expires. You can set it to `false` and it will delete the data as soon as deleteAfterMinutes expires. Note: For this property to work uploadOn should be set to true. <br><br> **CAUTION**: If you are using append blobs, this setting will delete append blobs from local storage when the value expires, and any future Append Block operations to those blobs will fail. You may want to make sure the expiry value is large enough for the expected frequency of append operations performed by your application.<br><br> Environment variable: `deviceAutoDeleteProperties__retainWhileUploading={false,true}`|
+| deleteOn | true, false | Set to `false` by default. If you want to turn on the feature, set this field to `true`. <br><br> Environment variable: `deviceAutoDeleteProperties__deleteOn={false,true}` |
+| deleteAfterMinutes | `<minutes>` | Specify the time in minutes. The module automatically deletes your blobs from local storage when this value expires. The maximum value allowed is 35791 minutes. <br><br> Environment variable: `deviceAutoDeleteProperties__deleteAfterMinutes=<minutes>` |
+| retainWhileUploading | true, false | Set to `true` by default. When `true`, a blob is retained while it's uploading to cloud storage, even if `deleteAfterMinutes` expires. When set to `false`, the data is deleted as soon as `deleteAfterMinutes` expires. Note: For this property to work, `uploadOn` must be set to `true`. <br><br> **CAUTION**: If you use append blobs, this setting deletes append blobs from local storage when the value expires, and any future Append Block operations to those blobs fail. Make sure the expiry value is large enough for the expected frequency of append operations performed by your application.<br><br> Environment variable: `deviceAutoDeleteProperties__retainWhileUploading={false,true}`|
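For orientation, here's a sketch of how these auto-delete settings might look together in the module twin's desired properties. The values shown are illustrative only:

```json
"deviceAutoDeleteProperties": {
    "deleteOn": true,
    "deleteAfterMinutes": 120,
    "retainWhileUploading": true
}
```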
## Using SMB share as your local storage
Make sure the SMB share and IoT device are in mutually trusted domains.
You can run the `New-SmbGlobalMapping` PowerShell command to map the SMB share locally on the IoT device running Windows.
-Below are the configuration steps:
+The configuration steps:
```PowerShell
$creds = Get-Credential
New-SmbGlobalMapping -RemotePath \\contosofileserver\share1 -Credential $creds -LocalPath G:
```
-This command will use the credentials to authenticate with the remote SMB server. Then, map the remote share path to G: drive letter (can be any other available drive letter). The IoT device now have the data volume mapped to a path on the G: drive.
+This command uses the credentials to authenticate with the remote SMB server. It then maps the remote share path to the G: drive letter (you can use any other available drive letter). The IoT device now has the data volume mapped to a path on the G: drive.
Make sure the user on the IoT device can read and write to the remote SMB share.
For your deployment the value of `<storage mount>` can be **G:/ContainerData:C:/
## Granting directory access to container user on Linux
-If you have used [volume mount](https://docs.docker.com/storage/volumes/) for storage in your create options for Linux containers then you don't have to do any extra steps, but if you used [bind mount](https://docs.docker.com/storage/bind-mounts/) then these steps are required to run the service correctly.
+If you use [volume mount](https://docs.docker.com/storage/volumes/) for storage in your create options for Linux containers, then you don't have to do any extra steps, but if you use [bind mount](https://docs.docker.com/storage/bind-mounts/), then these steps are required to run the service correctly.
-Following the principle of least privilege to limit the access rights for users to bare minimum permissions they need to perform their work, this module includes a user (name: absie, ID: 11000) and a user group (name: absie, ID: 11000). If the container is started as **root** (default user is **root**), our service will be started as the low-privilege **absie** user.
+Following the principle of least privilege to limit the access rights for users to bare minimum permissions they need to perform their work, this module includes a user (name: absie, ID: 11000) and a user group (name: absie, ID: 11000). If the container is started as **root** (default user is **root**), our service is started as the low-privilege **absie** user.
-This behavior makes configuration of the permissions on host path binds crucial for the service to work correctly, otherwise the service will crash with access denied errors. The path that is used in directory binding needs to be accessible by the container user (example: absie 11000). You can grant the container user access to the directory by executing the commands below on the host:
+This behavior makes configuration of the permissions on host path binds crucial for the service to work correctly; otherwise, the service crashes with access denied errors. The path used in directory binding needs to be accessible by the container user (for example, absie 11000). You can grant the container user access to the directory by executing these commands on the host:
```terminal
sudo chown -R 11000:11000 <blob-dir>
sudo chown -R 11000:11000 /srv/containerdata
sudo chmod -R 700 /srv/containerdata
```
-If you need to run the service as a user other than **absie**, you can specify your custom user ID in createOptions under "User" property in your deployment manifest. In such case you need to use default or root group ID `0`.
+If you need to run the service as a user other than **absie**, you can specify your custom user ID in `createOptions` under the "User" property in your deployment manifest. In such a case, use the default or root group ID `0`.
```json "createOptions": {
The Azure Blob Storage documentation includes quickstart sample code in several
The following quickstart samples use languages that are also supported by IoT Edge, so you could deploy them as IoT Edge modules alongside the blob storage module:
* [.NET](../storage/blobs/storage-quickstart-blobs-dotnet.md)
- * The Azure Blob Storage on Iot Edge module v1.4.0 and earlier are compatible with WindowsAzure.Storage 9.3.3 SDK and v1.4.1 also supports Azure.Storage.Blobs 12.8.0 SDK.
+ * The Azure Blob Storage on IoT Edge module v1.4.0 and earlier are compatible with WindowsAzure.Storage 9.3.3 SDK and v1.4.1 also supports Azure.Storage.Blobs 12.8.0 SDK.
* [Python](../storage/blobs/storage-quickstart-blobs-python.md)
- * Versions before V2.1 of the Python SDK have a known issue where the module does not return blob creation time. Because of that issue, some methods like list blobs does not work. As a workaround, explicitly set the API version on the blob client to '2017-04-17'. Example: `block_blob_service._X_MS_VERSION = '2017-04-17'`
+ * Versions before V2.1 of the Python SDK have a known issue where the module doesn't return the blob creation time. Because of that issue, some methods like list blobs don't work. As a workaround, explicitly set the API version on the blob client to '2017-04-17'. Example: `block_blob_service._X_MS_VERSION = '2017-04-17'`
* [Append Blob Sample](https://github.com/Azure/azure-storage-python/blob/master/samples/blob/append_blob_usage.py)
* [Node.js](../storage/blobs/storage-quickstart-blobs-nodejs-legacy.md)
* [JS/HTML](../storage/blobs/storage-quickstart-blobs-javascript-client-libraries-legacy.md)
Unsupported:
Supported:
* Put block
-* Put and get block list
+* Put and get block list
Unsupported:
Here are the [release notes in docker hub](https://hub.docker.com/_/microsoft-az
Learn how to [Deploy Azure Blob Storage on IoT Edge](how-to-deploy-blob.md)
-Stay up-to-date with recent updates and announcement in the [Azure Blob Storage on IoT Edge blog](https://aka.ms/abs-iot-blogpost)
+Stay up-to-date with recent updates and announcements on the [Azure Blob Storage on IoT Edge release notes](https://hub.docker.com/_/microsoft-azure-blob-storage) page.
iot-edge How To Vs Code Develop Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-vs-code-develop-module.md
Install prerequisites specific to the language you're developing in:
# [Java](#tab/java) -- Install [Java SE Development Kit 10](/azure/developer/java/fundamentals/java-support-on-azure) and [Maven](https://maven.apache.org/). You'll need to [set the `JAVA_HOME` environment variable](https://docs.oracle.com/cd/E19182-01/820-7851/inst_cli_jdk_javahome_t/) to point to your JDK installation.
+- Install [Java SE Development Kit 10](/azure/developer/java/fundamentals/java-support-on-azure) and [Maven](https://maven.apache.org/). You need to [set the `JAVA_HOME` environment variable](https://docs.oracle.com/cd/E19182-01/820-7851/inst_cli_jdk_javahome_t/) to point to your JDK installation.
- Install [Java Extension Pack for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=vscjava.vscode-java-pack) # [Node.js](#tab/node) -- Install [Node.js](https://nodejs.org). You'll also want to install [Yeoman](https://www.npmjs.com/package/yo) and the [Azure IoT Edge Node.js Module Generator](https://www.npmjs.com/package/generator-azure-iot-edge-module).
+- Install [Node.js](https://nodejs.org). Also install [Yeoman](https://www.npmjs.com/package/yo) and the [Azure IoT Edge Node.js Module Generator](https://www.npmjs.com/package/generator-azure-iot-edge-module).
# [Python](#tab/python)
Install prerequisites specific to the language you're developing in:
-To test your module on a device, you'll need:
+To test your module on a device, you need:
- An active IoT Hub with at least one IoT Edge device. - A physical IoT Edge device or a virtual device. To create a virtual device in Azure, follow the steps in the quickstart for [Linux](quickstart-linux.md) or [Windows](quickstart.md).
After solution creation, there are four items within the solution:
- A **.vscode** folder contains configuration file *launch.json*. - A **modules** folder has subfolders for each module. Within the subfolder for each module, the *module.json* file controls how modules are built and deployed.-- An **.env** file lists your environment variables. The environment variable for the container registry is *localhost* by default. If Azure Container Registry is your registry, you'll need to set an Azure Container Registry username and password. For example,
+- An **.env** file lists your environment variables. The environment variable for the container registry is *localhost* by default. If Azure Container Registry is your registry, set an Azure Container Registry username and password. For example,
```env CONTAINER_REGISTRY_SERVER="myacr.azurecr.io"
Use Visual Studio Code and the [Azure IoT Edge](https://marketplace.visualstudio
1. Enter a name for your solution. 1. Select a module template for your preferred development language to be the first module in the solution. 1. Enter a name for your module. Choose a name that's unique within your container registry.
-1. Provide the name of the module's image repository. Visual Studio Code autopopulates the module name with **localhost:5000/<your module name\>**. Replace it with your own registry information. Use **localhost** if you use a local Docker registry for testing. If you use Azure Container Registry, then use the login server from your registry's settings. The login server looks like **_\<registry name\>_.azurecr.io**. Only replace the **localhost:5000** part of the string so that the final result looks like **\<*registry name*\>.azurecr.io/_\<your module name\>_**.
+1. Provide the name of the module's image repository. Visual Studio Code autopopulates the module name with **localhost:5000/<your module name\>**. Replace it with your own registry information. Use **localhost** if you use a local Docker registry for testing. If you use Azure Container Registry, then use the login server value from your registry's settings. The login server looks like **_\<registry name\>_.azurecr.io**. Only replace the **localhost:5000** part of the string so that the final result looks like **\<*registry name*\>.azurecr.io/_\<your module name\>_**.
![Provide Docker image repository](./media/how-to-develop-csharp-module/repository.png)
Visual Studio Code takes the information you provided, creates an IoT Edge solut
There are four items within the solution: - A **.vscode** folder contains debug configurations.-- A **modules** folder has subfolders for each module. Within the folder for each module there's a file, **module.json** that controls how modules are built and deployed. This file would need to be modified to change the module deployment container registry from localhost to a remote registry. At this point, you only have one module. But you can add more if needed-- An **.env** file lists your environment variables. The environment variable for the container registry is *localhost* by default. If Azure Container Registry is your registry, you'll need to set an Azure Container Registry username and password. For example,
+- A **modules** folder has subfolders for each module. Within the folder for each module, there's a file called **module.json** that controls how modules are built and deployed. Modify this file to change the module deployment container registry from localhost to a remote registry. At this point, you only have one module, but you can add more if needed.
+- An **.env** file lists your environment variables. The environment variable for the container registry is *localhost* by default. If Azure Container Registry is your registry, set an Azure Container Registry username and password. For example,
```env CONTAINER_REGISTRY_SERVER="myacr.azurecr.io"
The IoT Edge extension defaults to the latest stable version of the IoT Edge run
::: zone-end
-## Add additional modules
+## Add more modules
To add more modules to your solution, change to the *modules* directory.
modules/*&lt;your module name&gt;*/**main.py**
-The sample modules are designed so that you can build the solution, push it to your container registry, and deploy it to a device to start testing without modifying any code. The sample module takes input from a source (in this case, the *SimulatedTemperatureSensor* module that simulates data) and pipes it to IoT Hub.
+The sample modules are designed so that you can build the solution, push it to your container registry, and deploy it to a device. This process lets you start testing without modifying any code. The sample module takes input from a source (in this case, the *SimulatedTemperatureSensor* module that simulates data) and pipes it to IoT Hub.
When you're ready to customize the template with your own code, use the [Azure IoT Hub SDKs](../iot-hub/iot-hub-devguide-sdks.md) to build modules that address the key needs for IoT solutions such as security, device management, and reliability.
Your default solution contains two modules, one is a simulated temperature senso
Currently, debugging in attach mode is supported only as follows: -- C# modules, including those for Azure Functions, support debugging in Linux amd64 containers
+- C# modules, including modules for Azure Functions, support debugging in Linux amd64 containers
- Node.js modules support debugging in Linux amd64 and arm32v7 containers, and Windows amd64 containers - Java modules support debugging in Linux amd64 and arm32v7 containers
On your development machine, you can start an IoT Edge simulator instead of inst
1. Select **Start Debugging** or press **F5**. Select the process to attach to.
-1. In Visual Studio Code Debug view, you'll see the variables in the left panel.
+1. In Visual Studio Code Debug view, you see the variables in the left panel.
1. To stop the debugging session, first select the Stop button or press **Shift + F5**, and then select **Azure IoT Edge: Stop IoT Edge Simulator** from the command palette.
In Visual Studio Code, open the *deployment.debug.template.json* deployment mani
::: zone pivot="iotedge-dev-cli"
-1. If you're using an Azure Container Registry to store your module image, you'll need to add your credentials to **deployment.debug.template.json** in the *edgeAgent* settings. For example,
+1. If you're using an Azure Container Registry to store your module image, add your credentials to **deployment.debug.template.json** in the *edgeAgent* settings. For example:
```json "modulesContent": {
In Visual Studio Code, open the *deployment.debug.template.json* deployment mani
"createOptions": "{\"HostConfig\":{\"PortBindings\":{\"5671/tcp\":[{\"HostPort\":\"5671\"}],\"8883/tcp\":[{\"HostPort\":\"8883\"}],\"443/tcp\":[{\"HostPort\":\"443\"}]}}}" ```
- For example, the *filtermodule* configuration should be similar to the following:
+ For example, the *filtermodule* configuration should be similar to:
```json "filtermodule": {
For details on how to use Remote SSH debugging in Visual Studio Code, see [Remot
In the Visual Studio Code Debug view, select the debug configuration file for your module. By default, the **.debug** Dockerfile, module's container `createOptions` settings, and `launch.json` file are configured to use *localhost*.
-Select **Start Debugging** or select **F5**. Select the process to attach to. In the Visual Studio Code Debug view, you'll see the variables in the left panel.
+Select **Start Debugging** or select **F5**. Select the process to attach to. In the Visual Studio Code Debug view, you see variables in the left panel.
## Debug using Docker Remote SSH
The Docker and Moby engines support SSH connections to containers allowing you t
docker ps ```
- The output should list the containers running on the remote device similar to the following:
+ The output should list the containers running on the remote device, similar to this example:
```output PS C:\> docker ps
The Docker and Moby engines support SSH connections to containers allowing you t
edgeAgent ```
-1. In the *.vscode* directory, add a new configuration to **launch.json** by opening the file in Visual Studio Code. Select **Add configuration** then choose the matching remote attach template for your module. For example, the following configuration is for .NET Core. Change the value for the *-H* parameter in *PipeArgs* to your device DNS name or IP address.
+1. In the *.vscode* directory, add a new configuration to **launch.json** by opening the file in Visual Studio Code. Select **Add configuration**, then choose the matching remote attach template for your module. For example, the following configuration is for .NET Core. Change the value for the *-H* parameter in *PipeArgs* to your device DNS name or IP address.
```json "configurations": [
The Docker and Moby engines support SSH connections to containers allowing you t
1. In Visual Studio Code Debug view, select the debug configuration *Remote Debug IoT Edge Module (.NET Core)*. 1. Select **Start Debugging** or select **F5**. Select the process to attach to.
-1. In the Visual Studio Code Debug view, you'll see the variables in the left panel.
+1. In the Visual Studio Code Debug view, you see the variables in the left panel.
1. In Visual Studio Code, set breakpoints in your custom module. 1. When a breakpoint is hit, you can inspect variables, step through code, and debug your module.
iot-edge Tutorial Develop For Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-develop-for-linux.md
[!INCLUDE [iot-edge-version-all-supported](includes/iot-edge-version-all-supported.md)]
-Use Visual Studio Code to develop and deploy code to devices running IoT Edge.
+Use [Visual Studio Code](https://code.visualstudio.com/) to develop and deploy code to devices running IoT Edge.
In the [Deploy code to a Linux device](quickstart-linux.md) quickstart, you created an IoT Edge device and deployed a module from the Azure Marketplace. This tutorial walks through developing and deploying your own code to an IoT Edge device. This article is a useful prerequisite for the other tutorials, which go into more detail about specific programming languages or Azure services.
In this tutorial, you learn how to:
A development machine:
-* You can use your own computer or a virtual machine.
-* Make sure your development machine supports [nested virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization). This capability is necessary for running a container engine, which you'll install in the next section.
+* Use your own computer or a virtual machine.
+* Your development machine must support [nested virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization) for running a container engine, which you'll install in the next section.
* Most operating systems that can run a container engine can be used to develop IoT Edge modules for Linux devices. This tutorial uses a Windows computer, but points out known differences on macOS or Linux. * Install [Git](https://git-scm.com/), to pull module template packages later in this tutorial. * [C# for Visual Studio Code (powered by OmniSharp) extension](https://marketplace.visualstudio.com/items?itemName=ms-dotnettools.csharp).
A development machine:
An Azure IoT Edge device:
-* We recommend that you don't run IoT Edge on your development machine, but instead use a separate device. This distinction between development machine and IoT Edge device more accurately mirrors a true deployment scenario, and helps to keep the different concepts straight.
+* We recommend that you don't run IoT Edge on your development machine; instead, use a separate device. This distinction between the development machine and the IoT Edge device simulates a true deployment scenario and helps keep the different concepts straight.
* If you don't have a second device available, use the quickstart article [Deploy code to a Linux Device](quickstart-linux.md) to create an IoT Edge device in Azure. Cloud resources:
Cloud resources:
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] > [!TIP]
-> For guidance on interactive debugging in Visual Studio Code or Visual Studio 2019:
+> For guidance on interactive debugging in Visual Studio Code or Visual Studio 2022:
>* [Use Visual Studio Code to develop and debug modules for Azure IoT Edge](how-to-vs-code-develop-module.md)
->* [Use Visual Studio 2019 to develop and debug modules for Azure IoT Edge](how-to-visual-studio-develop-module.md)
+>* [Use Visual Studio 2022 to develop and debug modules for Azure IoT Edge](how-to-visual-studio-develop-module.md)
> >This tutorial teaches the development steps for Visual Studio Code.
This tutorial targets devices running IoT Edge with Linux containers. You can us
The following table lists the supported development scenarios for **Linux containers** in Visual Studio Code and Visual Studio.
-| | Visual Studio Code | Visual Studio 2017/2019 |
+| | Visual Studio Code | Visual Studio 2019/2022 |
| - | | | | **Linux device architecture** | Linux AMD64 <br> Linux ARM32 <br> Linux ARM64 | Linux AMD64 <br> Linux ARM32 <br> Linux ARM64 | | **Azure services** | Azure Functions <br> Azure Stream Analytics <br> Azure Machine Learning | | | **Languages** | C <br> C# <br> Java <br> Node.js <br> Python | C <br> C# |
-| **More information** | [Azure IoT Edge for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-edge) <br> [Azure IoT Hub](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit)| [Azure IoT Edge Tools for Visual Studio 2017](https://marketplace.visualstudio.com/items?itemName=vsc-iot.vsiotedgetools) <br> [Azure IoT Edge Tools for Visual Studio 2019](https://marketplace.visualstudio.com/items?itemName=vsc-iot.vs16iotedgetools) |
+| **More information** | [Azure IoT Edge for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-edge) <br> [Azure IoT Hub](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit)| [Azure IoT Edge Tools for Visual Studio 2019](https://marketplace.visualstudio.com/items?itemName=vsc-iot.vs16iotedgetools) <br> [Azure IoT Edge Tools for Visual Studio 2022](https://marketplace.visualstudio.com/items?itemName=vsc-iot.vs17iotedgetools) |
## Install container engine
Use the IoT extensions for Visual Studio Code to develop IoT Edge modules. These
1. Install [Visual Studio Code](https://code.visualstudio.com/) on your development machine.
-2. Once the installation is finished, select **View** > **Extensions**.
+2. Once the installation is finished, open Visual Studio Code and select **View** > **Extensions**.
3. Search for **Azure IoT Edge** and **Azure IoT Hub**, which are extensions that help you interact with IoT Hub and IoT devices, as well as develop IoT Edge modules.
-4. Select **Install**. Each included extension installs individually.
+4. On each extension, select **Install**.
5. When the extensions are done installing, open the command palette by selecting **View** > **Command Palette**.
Use the IoT extensions for Visual Studio Code to develop IoT Edge modules. These
9. At the bottom of the explorer section, expand the collapsed **Azure IoT Hub / Devices** menu. You should see the devices and IoT Edge devices associated with the IoT hub that you selected through the command palette.
- ![View devices in your IoT hub](./media/tutorial-develop-for-linux/view-iot-hub-devices.png)
[!INCLUDE [iot-edge-create-container-registry](includes/iot-edge-create-container-registry.md)]
For this tutorial, we use the C# module template because it is the most commonly
### Create a project template
-In the Visual Studio Code command palette, search for and select **Azure IoT Edge: New IoT Edge Solution**. Follow the prompts and use the following values to create your solution:
+In the Visual Studio Code command palette, search for and select **Azure IoT Edge: New IoT Edge Solution**. Follow the prompts to create your solution:
- | Field | Value |
- | -- | -- |
- | Select folder | Choose the location on your development machine for Visual Studio Code to create the solution files. |
- | Provide a solution name | Enter a descriptive name for your solution or accept the default **EdgeSolution**. |
- | Select module template | Choose **C# Module**. |
- | Provide a module name | Accept the default **SampleModule**. |
- | Provide Docker image repository for the module | An image repository includes the name of your container registry and the name of your container image. Your container image is prepopulated from the name you provided in the last step. Replace **localhost:5000** with the **Login server** value from your Azure container registry. You can retrieve the Login server value from the Overview page of your container registry in the Azure portal. <br><br> The final image repository looks like \<registry name\>.azurecr.io/samplemodule. |
+1. Select folder: choose the location on your development machine for Visual Studio Code to create the solution files.
+1. Provide a solution name: enter a descriptive name for your solution or accept the default **EdgeSolution**.
+1. Select a module template: choose **C# Module**.
+1. Provide a module name: accept the default **SampleModule**.
+1. Provide Docker image repository for the module: an image repository includes the name of your container registry and the name of your container image. Your container image is prepopulated from the name you provided in the last step. Replace **localhost:5000** with the **Login server** value from your Azure container registry. You can retrieve the Login server value from the Overview page of your container registry in the Azure portal.
- ![Provide Docker image repository](./media/tutorial-develop-for-linux/image-repository.png)
+ The final image repository looks like:
+
+ \<registry name\>.azurecr.io/samplemodule.
+
+ :::image type="content" source="./media/tutorial-develop-for-linux/image-repository.png" alt-text="Screenshot showing where to provide a Docker image repository in the command palette.":::
Once your new solution loads in the Visual Studio Code window, take a moment to familiarize yourself with the files that it created:
* The **.vscode** folder contains a file called **launch.json**, which is used for debugging modules.
* The **modules** folder contains a folder for each module in your solution. Right now, that should only be **SampleModule**, or whatever name you gave to the module. The SampleModule folder contains the main program code, the module metadata, and several Docker files.
* The **.env** file holds the credentials to your container registry. These credentials are shared with your IoT Edge device so that it has access to pull the container images.
-* The **deployment.debug.template.json** file and **deployment.template.json** file are templates that help you create a deployment manifest. A *deployment manifest* is a file that defines exactly which modules you want deployed on a device, how they should be configured, and how they can communicate with each other and the cloud. The template files use pointers for some values. When you transform the template into a true deployment manifest, the pointers are replaced with values taken from other solution files. Locate the two common placeholders in your deployment template:
-
- * In the registry credentials section, the address is auto-filled from the information you provided when you created the solution. However, the username and password reference the variables stored in the .env file. This configuration is for security, as the .env file is git ignored, but the deployment template is not.
- * In the SampleModule section, the container image isn't filled in even though you provided the image repository when you created the solution. This placeholder points to the **module.json** file inside the SampleModule folder. If you go to that file, you'll see that the image field does contain the repository, but also a tag value that is made up of the version and the platform of the container. You can iterate the version manually as part of your development cycle, and you select the container platform using a switcher that we introduce later in this section.
+* The **deployment.debug.template.json** file and **deployment.template.json** file are templates that help you create a deployment manifest. A *deployment manifest* is a file that defines exactly which modules you want deployed on a device, how they should be configured, and how they can communicate with each other and the cloud. The template files use pointers for some values. When you transform the template into a true deployment manifest, the pointers are replaced with values taken from other solution files.
+* Open the **deployment.template.json** file and locate two common placeholders:
+ * In the `registryCredentials` section, the address is auto-filled from the information you provided when you created the solution. However, the username and password reference the variables stored in the .env file. This configuration is for security, as the .env file is git ignored, but the deployment template is not.
+ * In the `SampleModule` section, the container image isn't filled in even though you provided the image repository when you created the solution. This placeholder points to the **module.json** file inside the SampleModule folder. If you go to that file, you'll see that the image field does contain the repository, but also a tag value that is made up of the version and the platform of the container. You can iterate the version manually as part of your development cycle, and you select the container platform using a switcher that we introduce later in this section.
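To make that last point concrete, here's a rough sketch of what a **module.json** file can look like. The exact fields and Dockerfile paths vary by template version, so treat this as an illustration rather than the file your template generates:

```json
{
    "$schema-version": "0.0.1",
    "image": {
        "repository": "<registry name>.azurecr.io/samplemodule",
        "tag": {
            "version": "0.0.1",
            "platforms": {
                "amd64": "./Dockerfile.amd64",
                "amd64.debug": "./Dockerfile.amd64.debug",
                "arm32v7": "./Dockerfile.arm32v7"
            }
        },
        "buildOptions": []
    }
}
```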
### Set IoT Edge runtime version
After selecting a new runtime version, your deployment manifest is dynamically u
The environment file stores the credentials for your container registry and shares them with the IoT Edge runtime. The runtime needs these credentials to pull your container images onto the IoT Edge device. >[!NOTE]
->If you didn't replace the **localhost:5000** value with the login server value from your Azure container registry, in the [**Create a project template**](#create-a-project-template) step, the **.env** file and the `registryCredentials` section of the deployment manifest will be missing. If that section is missing, return to the **Provide Docker image repository for the module** prompt to see how to replace the **localhost:5000** value.
+>If you didn't replace the **localhost:5000** value with the login server value from your Azure container registry, in the [**Create a project template**](#create-a-project-template) step, the **.env** file and the `registryCredentials` section of the deployment manifest will be missing. If that section is missing, return to the **Provide Docker image repository for the module** step in the **Create a project template** section to see how to replace the **localhost:5000** value.
The IoT Edge extension tries to pull your container registry credentials from Azure and populate them in the environment file. Check to see if your credentials are already included. If not, add them now:
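As an illustration only, a populated **.env** file might contain entries along these lines. The exact variable names that the extension generates can differ (for example, they may include your registry name), so treat these as placeholders:

```env
CONTAINER_REGISTRY_SERVER="myacr.azurecr.io"
CONTAINER_REGISTRY_USERNAME="<registry username>"
CONTAINER_REGISTRY_PASSWORD="<registry password>"
```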
The IoT Edge extension tries to pull your container registry credentials from Az
Currently, Visual Studio Code can develop C# modules for Linux AMD64 and ARM32v7 devices. You need to select which architecture you're targeting with each solution, because that affects how the container is built and runs. The default is Linux AMD64.
-1. Open the command palette and search for **Azure IoT Edge: Set Default Target Platform for Edge Solution**, or select the shortcut icon in the side bar at the bottom of the window.
+1. Open the command palette and search for **Azure IoT Edge: Set Default Target Platform for Edge Solution**, or select the shortcut icon at the bottom of the window.
- ![Select architecture icon in side bar](./media/tutorial-develop-for-linux/select-architecture.png)
+ :::image type="content" source="./media/tutorial-develop-for-linux/select-architecture.png" alt-text="Screenshot showing the location of the architecture icon at the bottom of the Visual Studio Code window." lightbox="./media/tutorial-develop-for-linux/select-architecture.png":::
2. In the command palette, select the target architecture from the list of options. For this tutorial, we're using an Ubuntu virtual machine as the IoT Edge device, so we'll keep the default **amd64**.
The sample C# code that comes with the project template uses the [ModuleClient C
3. The [SetInputMessageHandlerAsync](/dotnet/api/microsoft.azure.devices.client.moduleclient.setinputmessagehandlerasync) method sets up an input queue to receive incoming messages. Review this method and see how it initializes an input queue called **input1**.
- ![Find the input name in SetInputMessageCallback constructor](./media/tutorial-develop-for-linux/declare-input-queue.png)
+ :::image type="content" source="./media/tutorial-develop-for-linux/declare-input-queue.png" alt-text="Screenshot showing where to find the input name in the SetInputMessageCallback constructor." lightbox="./media/tutorial-develop-for-linux/declare-input-queue.png":::
4. Next, find the **SendEventAsync** method.
-5. The [SendEventAsync](/dotnet/api/microsoft.azure.devices.client.moduleclient.sendeventasync) method processes received messages and sets up an output queue to pass them along. Review this method and see that it initializes an output queue called **output1**.
+ The [SendEventAsync](/dotnet/api/microsoft.azure.devices.client.moduleclient.sendeventasync) method processes received messages and sets up an output queue to pass them along. Review this method and see that it initializes an output queue called **output1**.
- ![Find the output name in SendEventToOutputAsync](./media/tutorial-develop-for-linux/declare-output-queue.png)
+ :::image type="content" source="./media/tutorial-develop-for-linux/declare-output-queue.png" alt-text="Screenshot showing where to find the output name in SendEventAsync method." lightbox="./media/tutorial-develop-for-linux/declare-output-queue.png":::
6. Open the **deployment.template.json** file.
-7. Find the **modules** property of the $edgeAgent desired properties.
+7. Find the **modules** property nested in **$edgeAgent**.
There should be two modules listed here. One is the **SimulatedTemperatureSensor** module, which is included in all the templates by default to provide simulated temperature data that you can use to test your modules. The other is the **SampleModule** module that you created as part of this solution.
-8. At the bottom of the file, find the desired properties for the **$edgeHub** module.
+8. At the bottom of the file, find **properties.desired** within the **$edgeHub** module.
One of the functions of the IoT Edge hub module is to route messages between all the modules in a deployment. Review the values in the **routes** property. One route, **SampleModuleToIoTHub**, uses a wildcard character (**\***) to indicate any messages coming from any output queues in the SampleModule module. These messages go into *$upstream*, which is a reserved name that indicates IoT Hub. The other route, **sensorToSampleModule**, takes messages coming from the SimulatedTemperatureSensor module and routes them to the *input1* input queue that you saw initialized in the SampleModule code.
- ![Review routes in deployment.template.json](./media/tutorial-develop-for-linux/deployment-routes.png)
+ :::image type="content" source="./media/tutorial-develop-for-linux/deployment-routes.png" alt-text="Screenshot showing routes in the deployment.template.json file." lightbox="./media/tutorial-develop-for-linux/deployment-routes.png":::
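For reference, here's a sketch of how these two routes are typically expressed in the template's routing syntax. The SimulatedTemperatureSensor output name shown here is an assumption, not taken from this article:

```json
"routes": {
    "SampleModuleToIoTHub": "FROM /messages/modules/SampleModule/outputs/* INTO $upstream",
    "sensorToSampleModule": "FROM /messages/modules/SimulatedTemperatureSensor/outputs/temperatureOutput INTO BrokeredEndpoint(\"/modules/SampleModule/inputs/input1\")"
}
```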
## Build and push your solution
Provide your container registry credentials to Docker so that it can push your c
1. Open the Visual Studio Code integrated terminal by selecting **View** > **Terminal**.
-2. Sign in to Docker with the Azure Container registry credentials that you saved after creating the registry.
+2. Sign in to Docker with the Azure Container Registry (ACR) credentials that you saved after creating the registry.
```cmd/sh docker login -u <ACR username> -p <ACR password> <ACR login server>
Visual Studio Code now has access to your container registry, so it's time to tu
1. In the Visual Studio Code explorer, right-click the **deployment.template.json** file and select **Build and Push IoT Edge Solution**.
- ![Build and push IoT Edge modules](./media/tutorial-develop-for-linux/build-and-push-modules.png)
+ :::image type="content" source="./media/tutorial-develop-for-linux/build-and-push-modules.png" alt-text="Screenshot showing the right-click menu option Build and Push IoT Edge Solution." lightbox="./media/tutorial-develop-for-linux/build-and-push-modules.png":::
The build and push command starts three operations. First, it creates a new folder in the solution called **config** that holds the full deployment manifest, built out of information in the deployment template and other solution files. Second, it runs `docker build` to build the container image based on the appropriate dockerfile for your target architecture. Then, it runs `docker push` to push the image repository to your container registry.
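For orientation, those last two operations are roughly equivalent to running commands like the following yourself. The image tag and Dockerfile path here are illustrative; the actual values come from **module.json** and your selected target platform, and the exact commands the extension runs may differ:

```cmd/sh
docker build -t myacr.azurecr.io/samplemodule:0.0.1-amd64 -f ./modules/SampleModule/Dockerfile.amd64 ./modules/SampleModule
docker push myacr.azurecr.io/samplemodule:0.0.1-amd64
```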
key-vault Quick Create Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/quick-create-java.md
ms.devlang: java
Get started with the Azure Key Vault Certificate client library for Java. Follow the steps below to install the package and try out example code for basic tasks.
+> [!TIP]
+> If you're working with Azure Key Vault Certificates resources in a Spring application, we recommend that you consider [Spring Cloud Azure](/azure/developer/java/spring-framework/) as an alternative. Spring Cloud Azure is an open-source project that provides seamless Spring integration with Azure services. To learn more about Spring Cloud Azure, and to see an example using Key Vault Certificates, see [Enable HTTPS in Spring Boot with Azure Key Vault certificates](/azure/developer/java/spring-framework/configure-spring-boot-starter-java-app-with-azure-key-vault-certificates).
+ Additional resources: - [Source code](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/keyvault/azure-security-keyvault-certificates)
key-vault Tutorial Python Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/tutorial-python-virtual-machine.md
On the virtual machine, create a Python file called **sample.py**. Edit the file
from azure.keyvault.secrets import SecretClient
from azure.identity import DefaultAzureCredential
-keyVaultName = "<your-unique-keyvault-name>"
-KVUri = f"https://{keyVaultName}.vault.azure.net"
-secretName = "mySecret"
+key_vault_name = "<your-unique-keyvault-name>"
+key_vault_uri = f"https://{key_vault_name}.vault.azure.net"
+secret_name = "mySecret"
credential = DefaultAzureCredential()
-client = SecretClient(vault_url=KVUri, credential=credential)
-retrieved_secret = client.get_secret(secretName)
+client = SecretClient(vault_url=key_vault_uri, credential=credential)
+retrieved_secret = client.get_secret(secret_name)
-print(f"The value of secret '{secretName}' in '{keyVaultName}' is: '{retrieved_secret.value}'")
+print(f"The value of secret '{secret_name}' in '{key_vault_name}' is: '{retrieved_secret.value}'")
```

## Run the sample Python app
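To try the sample on the virtual machine, install the client libraries that match the imports above and run the script. The package names are the standard PyPI names and aren't taken from this article, so treat them as an assumption:

```terminal
# Install the client libraries used by sample.py (assumed package names)
pip install azure-keyvault-secrets azure-identity

# Run the sample
python sample.py
```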
az group delete -g myResourceGroup
## Next steps
-[Azure Key Vault REST API](/rest/api/keyvault/)
+[Azure Key Vault REST API](/rest/api/keyvault/)
key-vault Quick Create Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/quick-create-java.md
ms.devlang: java
Get started with the Azure Key Vault Secret client library for Java. Follow these steps to install the package and try out example code for basic tasks.
+> [!TIP]
+> If you're working with Azure Key Vault Secrets resources in a Spring application, we recommend that you consider [Spring Cloud Azure](/azure/developer/java/spring-framework/) as an alternative. Spring Cloud Azure is an open-source project that provides seamless Spring integration with Azure services. To learn more about Spring Cloud Azure, and to see an example using Key Vault Secrets, see [Load a secret from Azure Key Vault in a Spring Boot application](/azure/developer/java/spring-framework/configure-spring-boot-starter-java-app-with-azure-key-vault).
+ Additional resources: - [Source code](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/keyvault/azure-security-keyvault-secrets)
load-testing Concept Load Testing Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/concept-load-testing-concepts.md
A test engine is computing infrastructure, managed by Microsoft that runs the Ap
The test engines are hosted in the same location as your Azure Load Testing resource. You can configure the Azure region when you create the Azure load testing resource.
-While the test script runs, Azure Load Testing collects and aggregates the Apache JMeter worker logs from all test engine instances. You can [download the logs for analyzing errors during the load test](./how-to-find-download-logs.md).
+While the test script runs, Azure Load Testing collects and aggregates the Apache JMeter worker logs from all test engine instances. You can [download the logs for analyzing errors during the load test](./how-to-troubleshoot-failing-test.md).
### Test run
load-testing How To Compare Multiple Test Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-compare-multiple-test-runs.md
When there's a performance issue, you can use the server-side metrics to analyze
## Next steps - Learn more about [exporting the load test results for reporting](./how-to-export-test-results.md).-- Learn more about [troubleshooting load test execution errors](./how-to-find-download-logs.md).
+- Learn more about [troubleshooting load test execution errors](./how-to-troubleshoot-failing-test.md).
- Learn more about [configuring automated performance testing with CI/CD](./tutorial-identify-performance-regression-with-cicd.md).
load-testing How To Configure User Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-configure-user-properties.md
Alternately, you also specify properties in the JMeter user interface. The follo
:::image type="content" source="media/how-to-configure-user-properties/jmeter-user-properties.png" alt-text="Screenshot that shows how to reference user properties in the JMeter user interface.":::
-You can [download the JMeter errors logs](./how-to-find-download-logs.md) to troubleshoot errors during the load test.
+You can [download the JMeter errors logs](./how-to-troubleshoot-failing-test.md) to troubleshoot errors during the load test.
## Next steps - Learn more about [JMeter properties that Azure Load Testing overrides](./resource-jmeter-property-overrides.md). - Learn more about [parameterizing a load test by using environment variables and secrets](./how-to-parameterize-load-tests.md).-- Learn more about [troubleshooting load test execution errors](./how-to-find-download-logs.md).
+- Learn more about [troubleshooting load test execution errors](./how-to-troubleshoot-failing-test.md).
load-testing How To Export Test Results https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-export-test-results.md
Previously updated : 03/31/2022 Last updated : 02/10/2023 # Export test results from Azure Load Testing for use in third-party tools
-In this article, you'll learn how to download the test results from Azure Load Testing in the Azure portal. You might use these results for reporting in third-party tools.
+In this article, you learn how to download the test results from Azure Load Testing in the Azure portal. You might use these results for reporting in third-party tools or for diagnosing test failures. Azure Load Testing generates the test results in comma-separated values (CSV) file format, and provides details of each application request for the load test.
-The test results contain comma-separated values (CSV) file(s) with details of each application request. See [Apache JMeter CSV log format](https://jmeter.apache.org/usermanual/listeners.html#csvlogformat) and the [Apache JMeter Glossary](https://jmeter.apache.org/usermanual/glossary.html) for details about the different fields.
-
-You can also use the test results to diagnose errors during a load test. The `responseCode` and `responseMessage` fields give you more information about failed requests. For more information about investigating errors, see [Troubleshoot test execution errors](./how-to-find-download-logs.md).
+You can also use the test results to diagnose errors during a load test. The `responseCode` and `responseMessage` fields give you more information about failed requests. For more information about investigating errors, see [Troubleshoot test execution errors](./how-to-troubleshoot-failing-test.md).
You can generate the Apache JMeter dashboard from the CSV log file by following the steps in the [Apache JMeter documentation on generating dashboards](https://jmeter.apache.org/usermanual/generating-dashboard.html#report).
You can generate the Apache JMeter dashboard from the CSV log file following the
- An Azure account with an active subscription. If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. - An Azure Load Testing resource that has a completed test run. If you need to create an Azure Load Testing resource, see [Create and run a load test](./quickstart-create-and-run-load-test.md).
+## Test results file
+
+Azure Load Testing generates a test results CSV file for each [test engine instance](./concept-load-testing-concepts.md#test-engine). Learn how you can [scale out your load test](./how-to-high-scale-load.md).
+
+Azure Load Testing uses the [Apache JMeter CSV log format](https://jmeter.apache.org/usermanual/listeners.html#csvlogformat). For more information about the different fields, see the [JMeter Glossary in the Apache JMeter documentation](https://jmeter.apache.org/usermanual/glossary.html).
+
+You can find the details of each application request for the load test run in the test results file. The following snippet shows a sample test result:
+
+```output
+timeStamp,elapsed,label,responseCode,responseMessage,threadName,dataType,success,failureMessage,bytes,sentBytes,grpThreads,allThreads,URL,Latency,IdleTime,Connect
+1676040230680,104,Homepage,200,OK,172.18.33.7-Thread Group 1-5,text,true,,1607,133,5,5,https://www.example.com/,104,0,100
+1676040230681,101,Homepage,200,OK,172.18.33.7-Thread Group 1-3,text,true,,1591,133,5,5,https://www.example.com/,101,0,93
+1676040230680,101,Homepage,200,OK,172.18.33.7-Thread Group 1-1,text,true,,1591,133,5,5,https://www.example.com/,98,0,94
+```
+ ## Access and download load test results
-In this section, you'll retrieve and download the Azure Load Testing results file from the Azure portal.
+# [Azure portal](#tab/portal)
+
+To download the test results for a test run in the Azure portal:
1. In the [Azure portal](https://portal.azure.com), go to your Azure Load Testing resource.
In this section, you'll retrieve and download the Azure Load Testing results fil
The folder contains a separate CSV file for every test engine, with details of the requests that the test engine executed during the load test.
+# [GitHub Actions](#tab/github)
+
+When you run a load test as part of your CI/CD pipeline, Azure Load Testing generates a test results file. Follow these steps to publish these test results and attach them to your CI/CD pipeline run:
+
+1. Go to your GitHub repository, and select **Code**.
+
+1. In the **Code** window, select your GitHub Actions workflow YAML file in the `.github/workflows` folder.
+
+ :::image type="content" source="./media/how-to-export-test-results/github-repository-workflow-definition-file.png" alt-text="Screenshot that shows the folder that contains the GitHub Actions workflow definition file." lightbox="./media/how-to-export-test-results/github-repository-workflow-definition-file.png":::
+
+1. Edit the workflow file and add the `actions/upload-artifact` action after the `azure/load-testing` action in the workflow file.
+
+ Azure Load Testing places the test results in the `loadTest` folder of the GitHub Actions workspace.
+
+ ```yml
+ - name: 'Azure Load Testing'
+ uses: azure/load-testing@v1
+ with:
+ loadTestConfigFile: 'SampleApp.yaml'
+ loadTestResource: ${{ env.LOAD_TEST_RESOURCE }}
+ resourceGroup: ${{ env.LOAD_TEST_RESOURCE_GROUP }}
+
+ - uses: actions/upload-artifact@v2
+ with:
+ name: loadTestResults
+ path: ${{ github.workspace }}/loadTest
+ ```
+
+1. After your GitHub Actions workflow completes, you can select the test results from the **Artifacts** section on the **Summary** page of the workflow run.
+
+ :::image type="content" source="./media/how-to-export-test-results/github-actions-run-summary.png" alt-text="Screenshot that shows the GitHub Actions workflow summary page, highlighting the test results in the Artifacts section." lightbox="./media/how-to-export-test-results/github-actions-run-summary.png":::
+
+# [Azure Pipelines](#tab/pipelines)
+
+When you run a load test as part of your CI/CD pipeline, Azure Load Testing generates a test results file. Follow these steps to publish these test results and attach them to your CI/CD pipeline run:
+
+1. In your Azure DevOps project, select **Pipelines** in the left navigation, and select your pipeline from the list.
+
+1. On the pipeline details page, select **Edit** to edit the workflow definition.
+
+1. Edit the workflow file and add the `publish` task after the `AzureLoadTest` task in the workflow file.
+
+ Azure Load Testing places the test results in the `loadTest` folder of the Azure Pipelines default working directory.
+
+ ```yml
+ - task: AzureLoadTest@1
+ inputs:
+ azureSubscription: $(serviceConnection)
+ loadTestConfigFile: 'SampleApp.yaml'
+ resourceGroup: $(loadTestResourceGroup)
+ loadTestResource: $(loadTestResource)
+
+ - publish: $(System.DefaultWorkingDirectory)/loadTest
+ artifact: results
+ ```
+1. After your Azure Pipelines workflow completes, you can select the test results from the **Stages** section on the **Summary** page of the workflow run.
+
+ You can find and download the test results in the **Results** folder.
+
+ :::image type="content" source="./media/how-to-export-test-results/azure-pipelines-run-summary.png" alt-text="Screenshot that shows the Azure Pipelines workflow summary page, highlighting the test results in the Stages section." lightbox="./media/how-to-export-test-results/azure-pipelines-run-summary.png":::
++ ## Next steps -- Learn more about [Troubleshooting test execution errors](./how-to-find-download-logs.md).
+- Learn more about [Troubleshooting test execution errors](./how-to-troubleshoot-failing-test.md).
- For information about comparing test results, see [Compare multiple test results](./how-to-compare-multiple-test-runs.md). - To learn about performance test automation, see [Configure automated performance testing](./tutorial-identify-performance-regression-with-cicd.md).
load-testing How To Find Download Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-find-download-logs.md
- Title: Troubleshoot load test errors-
-description: Learn how you can diagnose and troubleshoot errors in Azure Load Testing. Download and analyze the Apache JMeter worker logs in the Azure portal.
---- Previously updated : 03/23/2022---
-# Troubleshoot load test errors by downloading Apache JMeter logs
-
-Learn how to diagnose and troubleshoot errors while running a load test with Azure Load Testing. Download the Apache JMeter worker logs or load test results for detailed logging information.
-
-When you start a load test, the Azure Load Testing test engines run your Apache JMeter script. Errors can occur at different levels. For example, during the execution of the JMeter script, while connecting to the application endpoint, or in the test engine instance.
-
-You can use different sources of information to diagnose these errors:
--- [Download the Apache JMeter worker logs](#download-apache-jmeter-worker-logs) to investigate issues with JMeter and the test script execution.-- [Export the load test result](./how-to-export-test-results.md) and analyze the response code and response message of each HTTP request.-
-There might also be problems with the application endpoint itself. If you host the application on Azure, you can [configure server-side monitoring](./how-to-monitor-server-side-metrics.md) to get detailed insights about the application components.
-
-## Load test error indicators
-
-After running a load test, there are multiple error indicators available:
--- The test run **Status** information is **Failed**.-
- :::image type="content" source="media/how-to-find-download-logs/dashboard-test-failed.png" alt-text="Screenshot that shows the load test dashboard, highlighting status information for a failed test.":::
--- The test run statistics shows a non-zero **Error percentage** value.-- The **Errors** graph in the client-side metrics shows errors.-
- :::image type="content" source="media/how-to-find-download-logs/dashboard-errors.png" alt-text="Screenshot that shows the load test dashboard, highlighting the error information.":::
-
-## Prerequisites
--- An Azure account with an active subscription. If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. -- An Azure load testing resource that has a completed test run. If you need to create an Azure load testing resource, see [Create and run a load test](./quickstart-create-and-run-load-test.md). -
-## Download Apache JMeter worker logs
-
-When you run a load test, the Azure Load Testing test engines execute your Apache JMeter test script. During the load test, Apache JMeter stores detailed logging in the worker node logs. You can download these JMeter worker logs for each test run in the Azure portal.
-
-For example, if there's a problem with your JMeter script, the load test status will be **Failed**. In the worker logs you might find additional information about the cause of the problem.
-
-To download the worker logs for an Azure Load Testing test run, follow these steps:
-
-1. In the [Azure portal](https://portal.azure.com), go to your Azure Load Testing resource.
-
-1. Select **Tests** to view the list of tests, and then select your load test.
-
- :::image type="content" source="media/how-to-find-download-logs/test-list.png" alt-text="Screenshot that shows the list of load tests for an Azure Load Test resource.":::
-
- >[!TIP]
- > To limit the number of tests, use the search box and the **Time range** filter.
-
-1. Select a test run from the list to view the test run dashboard.
-
- :::image type="content" source="media/how-to-find-download-logs/test-run.png" alt-text="Screenshot that shows a list of test runs for the selected load test.":::
-
-1. On the dashboard, select **Download**, and then select **Logs**.
-
- :::image type="content" source="media/how-to-find-download-logs/logs.png" alt-text="Screenshot that shows how to download the test log files from the test run details page.":::
-
- The browser should now start downloading the JMeter worker node log file *worker.log*.
-
-1. You can use a text editor to open the log file.
-
- :::image type="content" source="media/how-to-find-download-logs/jmeter-log.png" alt-text="Screenshot that shows the JMeter log file content.":::
-
- The *worker.log* file can help you diagnose the root cause of a failing load test. In the previous screenshot, you can see that the test failed because a file is missing.
-
-## Next steps
-- Learn how to [Export the load test result](./how-to-export-test-results.md).
-- Learn how to [Monitor server-side application metrics](./how-to-monitor-server-side-metrics.md).
-- Learn how to [Get detailed insights for Azure App Service based applications](./how-to-appservice-insights.md).
-- Learn how to [Compare multiple load test runs](./how-to-compare-multiple-test-runs.md).
load-testing How To High Scale Load https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-high-scale-load.md
For example, if application latency is 20 milliseconds (0.02 second), and you're
To achieve a target number of requests per second, configure the total number of virtual users for your load test. > [!NOTE]
-> Apache JMeter only reports requests that made it to the server and back, either successful or not. If Apache JMeter is unable to connect to your application, the actual number of requests per second will be lower than the maximum value. Possible causes might be that the server is too busy to handle the request, or that an TLS/SSL certificate is missing. To diagnose connection problems, you can check the **Errors** chart in the load testing dashboard and [download the load test log files](./how-to-find-download-logs.md).
+> Apache JMeter only reports requests that made it to the server and back, whether successful or not. If Apache JMeter is unable to connect to your application, the actual number of requests per second will be lower than the maximum value. Possible causes might be that the server is too busy to handle the request, or that a TLS/SSL certificate is missing. To diagnose connection problems, you can check the **Errors** chart in the load testing dashboard and [download the load test log files](./how-to-troubleshoot-failing-test.md).
## Test engine instances and virtual users
load-testing How To Parameterize Load Tests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-parameterize-load-tests.md
Previously updated : 03/22/2022 Last updated : 01/15/2023
load-testing How To Troubleshoot Failing Test https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-troubleshoot-failing-test.md
+
+ Title: Troubleshoot load test errors
+
+description: Learn how you can diagnose and troubleshoot errors in Azure Load Testing. Download and analyze the Apache JMeter worker logs in the Azure portal.
++++ Last updated : 02/15/2023+++
+# Troubleshoot failing load tests in Azure Load Testing
+
+Learn how to diagnose and troubleshoot errors while running a load test with Azure Load Testing. Download the Apache JMeter worker logs or the load test results for detailed logging information. Alternatively, you can configure server-side metrics to identify issues in specific Azure application components.
+
+Azure Load Testing runs your Apache JMeter script on the [test engine instances](./concept-load-testing-concepts.md#test-engine). During a load test run, errors might occur at different stages. For example, the JMeter test script could have an error that prevents the test from starting. Or there might be a problem connecting to the application endpoint, which results in a large number of failed requests in the load test.
+
+Azure Load Testing provides different sources of information to diagnose these errors:
+
+- [Download the Apache JMeter worker logs](#download-apache-jmeter-worker-logs) to investigate issues with JMeter and the test script execution.
+- [Diagnose failing tests using test results](#diagnose-failing-tests-using-test-results) and analyze the response code and response message of each HTTP request.
+- [Diagnose failing tests using server-side metrics](#diagnose-failing-tests-using-server-side-metrics) to identify issues with specific Azure application components.
+
+## Prerequisites
+
+- An Azure account with an active subscription. If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+- An Azure load testing resource that has a completed test run. If you need to create an Azure load testing resource, see [Create and run a load test](./quickstart-create-and-run-load-test.md).
+
+## Identify load test errors
+
+You can identify errors in your load test in the following ways:
+
+# [Azure portal](#tab/portal)
+
+- The test run status is failed.
+
+ You can view the test run status in the list of test runs for your load test, or in **Test details** in the load test dashboard for your test run.
+
+ :::image type="content" source="media/how-to-find-download-logs/dashboard-test-failed.png" alt-text="Screenshot that shows the load test dashboard, highlighting status information for a failed test." lightbox="media/how-to-find-download-logs/dashboard-test-failed.png":::
+
+- The test run has a non-zero error percentage value.
+
+ If the test error percentage is below the default threshold, your test run shows as succeeded, even though there are errors. You can add [test fail criteria](./how-to-define-test-criteria.md) based on the error percentage (see the sample configuration after these tabs).
+
+ You can view the error percentage in the **Statistics** section of the load test dashboard for your test run.
+
+- The errors chart in the client-side metrics in the load test dashboard shows errors.
+
+ :::image type="content" source="media/how-to-find-download-logs/dashboard-errors.png" alt-text="Screenshot that shows the load test dashboard, highlighting the error information." lightbox="media/how-to-find-download-logs/dashboard-errors.png":::
+
+# [GitHub Actions](#tab/github)
+
+- The test run status is failed.
+
+ You can view the test run status in GitHub Actions for your repository, on the **Summary** page, or drill down into the workflow run details.
+
+ :::image type="content" source="media/how-to-find-download-logs/github-actions-summary-failed-test.png" alt-text="Screenshot that shows the summary page for a GitHub Actions workflow run, highlighting the failed load test job." lightbox="media/how-to-find-download-logs/github-actions-summary-failed-test.png":::
+
+- The test run has a non-zero error percentage value.
+
+ If the test error percentage is below the default threshold, your test run shows as succeeded, even though there are errors. You can add [test fail criteria](./how-to-define-test-criteria.md) based on the error percentage.
+
+ You can view the error percentage in GitHub Actions, in the workflow run logging information.
+
+ :::image type="content" source="media/how-to-find-download-logs/github-actions-log-error-percentage.png" alt-text="Screenshot that shows the GitHub Actions workflow logs, highlighting the error statistics information for a load test run." lightbox="media/how-to-find-download-logs/github-actions-log-error-percentage.png":::
+
+- The test run log contains errors.
+
+ When there's a problem running the load test, the test run log might contain details about the root cause.
+
+ You can view the list of errors in GitHub Actions, on the workflow run **Summary** page, in the **Annotations** section. From this section, you can drill down into the workflow run details to view the error details.
+
+# [Azure Pipelines](#tab/pipelines)
+
+- The test run status is failed.
+
+ You can view the test run status in Azure Pipelines, on the pipeline run **Summary** page, or drill down into the pipeline run details.
+
+ :::image type="content" source="media/how-to-find-download-logs/azure-pipelines-summary-failed-test.png" alt-text="Screenshot that shows the summary page for an Azure Pipelines run, highlighting the failed load test stage.":::
+
+- The test run has a non-zero error percentage value.
+
+ If the test error percentage is below the default threshold, your test run shows as succeeded, even though there are errors. You can add [test fail criteria](./how-to-define-test-criteria.md) based on the error percentage.
+
+ You can view the error percentage in Azure Pipelines, in the pipeline run logging information.
+
+ :::image type="content" source="media/how-to-find-download-logs/azure-pipelines-log-error-percentage.png" alt-text="Screenshot that shows the Azure Pipelines run logs, highlighting the error statistics information for a load test run." lightbox="media/how-to-find-download-logs/azure-pipelines-log-error-percentage.png":::
+
+- The test run log contains errors.
+
+ When there's a problem running the load test, the test run log might contain details about the root cause.
+
+ You can view the list of errors in Azure Pipelines, on the pipeline run **Summary** page, in the **Errors** section. From this section, you can drill down into the pipeline run details to view the error details.
+++
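For reference, here's a minimal sketch of what such a fail criterion might look like in the load test configuration YAML. The threshold values are illustrative, so adjust them to your workload and check the fail criteria article linked above for the exact syntax.

```yaml
# Partial load test configuration (illustrative thresholds)
engineInstances: 1
failureCriteria:
  - percentage(error) > 20      # fail the test run if more than 20% of requests fail
  - avg(response_time_ms) > 500 # fail if the average response time exceeds 500 ms
```

With a criterion like this in place, a test run that would otherwise show as succeeded despite errors is marked as failed, which also makes the problem visible in CI/CD pipelines.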
+## Download Apache JMeter worker logs
+
+When you run a load test, the Azure Load Testing test engines execute your Apache JMeter test script. During the load test, Apache JMeter stores detailed logging in the worker node logs. You can download these JMeter worker logs for each test run in the Azure portal. Azure Load Testing generates a worker log for each [test engine instance](./concept-load-testing-concepts.md#test-engine).
+
+For example, if there's a problem with your JMeter script, the load test status is **Failed**. In the worker logs you might find additional information about the cause of the problem.
+
+To download the worker logs for an Azure Load Testing test run, follow these steps:
+
+1. In the [Azure portal](https://portal.azure.com), go to your Azure Load Testing resource.
+
+1. Select **Tests** to view the list of tests, and then select your load test from the list.
+
+ :::image type="content" source="media/how-to-find-download-logs/test-list.png" alt-text="Screenshot that shows the list of load tests for an Azure Load Test resource.":::
+
+1. Select a test run from the list to view the test run dashboard.
+
+1. On the dashboard, select **Download**, and then select **Logs**.
+
+ The browser should now start downloading a zipped folder that contains the JMeter worker node log file for each [test engine instance](./concept-load-testing-concepts.md#test-engine).
+
+ :::image type="content" source="media/how-to-find-download-logs/logs.png" alt-text="Screenshot that shows how to download the test log files from the test run details page.":::
+
+1. You can use any zip tool to extract the folder and access the log files.
+
+ The *worker.log* file can help you diagnose the root cause of a failing load test. In the screenshot, you can see that the test failed because of a missing file.
+
+ :::image type="content" source="media/how-to-find-download-logs/jmeter-log.png" alt-text="Screenshot that shows the JMeter log file content.":::
+
+## Diagnose failing tests using test results
+
+The worker logs don't provide details about individual requests. To diagnose load tests that have failed requests, for example because the application endpoint isn't available, use the test results to get detailed information about each application request.
+
+1. Follow these steps to [download the test results for a load test run](./how-to-export-test-results.md).
+
+1. Open the test results `.csv` file in an editor of your choice.
+
+1. Use the information in the `responseCode` and `responseMessage` fields to determine the root cause of failing application requests.
+
+ In the following example, the test run failed because the application endpoint was not available (`java.net.UnknownHostException`):
+
+ ```output
+ timeStamp,elapsed,label,responseCode,responseMessage,threadName,dataType,success,failureMessage,bytes,sentBytes,grpThreads,allThreads,URL,Latency,IdleTime,Connect
+ 1676471293632,13,Home page,Non HTTP response code: java.net.UnknownHostException,Non HTTP response message: backend.contoso.com: Name does not resolve,172.18.44.4-Thread Group 1-1,text,false,,2470,0,1,1,https://backend.contoso.com/blabla,0,0,13
+ 1676471294339,0,Home page,Non HTTP response code: java.net.UnknownHostException,Non HTTP response message: backend.contoso.com,172.18.44.4-Thread Group 1-1,text,false,,2201,0,1,1,https://backend.contoso.com/blabla,0,0,0
+ 1676471294346,0,Home page,Non HTTP response code: java.net.UnknownHostException,Non HTTP response message: backend.contoso.com,172.18.44.4-Thread Group 1-1,text,false,,2201,0,1,1,https://backend.contoso.com/blabla,0,0,0
+ 1676471294350,0,Home page,Non HTTP response code: java.net.UnknownHostException,Non HTTP response message: backend.contoso.com,172.18.44.4-Thread Group 1-1,text,false,,2201,0,1,1,https://backend.contoso.com/blabla,0,0,0
+ 1676471294354,0,Home page,Non HTTP response code: java.net.UnknownHostException,Non HTTP response message: backend.contoso.com,172.18.44.4-Thread Group 1-1,text,false,,2201,0,1,1,https://backend.contoso.com/blabla,0,0,0
+ ```
+
+## Diagnose failing tests using server-side metrics
+
+For Azure-hosted applications, you can configure your load test to monitor resource metrics for your Azure application components. For example, a load test run might produce failed requests because an application component, such as a database, is throttling requests.
+
+Learn how you can [monitor server-side application metrics in Azure Load Testing](./how-to-monitor-server-side-metrics.md).
+
+For application endpoints that you host on Azure App Service, you can [use App Service Insights to get additional insights](./how-to-appservice-insights.md) about the application behavior.
+
+## Next steps
+
+- Learn how to [Export the load test result](./how-to-export-test-results.md).
+- Learn how to [Monitor server-side application metrics](./how-to-monitor-server-side-metrics.md).
+- Learn how to [Get detailed insights for Azure App Service based applications](./how-to-appservice-insights.md).
+- Learn how to [Compare multiple load test runs](./how-to-compare-multiple-test-runs.md).
load-testing How To Use Jmeter Plugins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-use-jmeter-plugins.md
You can add a plugin JAR file when you create a new load test, or anytime when y
For plugins from https://jmeter-plugins.org, you don't need to upload the JAR file. Azure Load Testing automatically configures these plugins for you.
+> [!NOTE]
+> We recommend that you build the executable JAR using Java 17.
+ # [Azure portal](#tab/portal) Follow these steps to upload a JAR file by using the Azure portal:
To reference the plugin JAR file in the test configuration YAML file:
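For example, a sketch of how the plugin JAR might be referenced in the test configuration (the file names are illustrative; check the test configuration reference for the exact field layout):

```yaml
# Partial test configuration (illustrative file names)
testPlan: sampletest.jmx
configurationFiles:
  - my-jmeter-plugin.jar   # plugin JAR uploaded alongside the JMeter script
engineInstances: 1
```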
## Next steps -- Learn how to [Download JMeter logs to troubleshoot a load test](./how-to-find-download-logs.md).
+- Learn how to [Download JMeter logs to troubleshoot a load test](./how-to-troubleshoot-failing-test.md).
- Learn how to [Monitor server-side application metrics](./how-to-monitor-server-side-metrics.md). - Learn how to [Automate load tests with CI/CD](./tutorial-identify-performance-regression-with-cicd.md).
load-testing Overview What Is Azure Load Testing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/overview-what-is-azure-load-testing.md
You can trigger Azure Load Testing from Azure Pipelines or GitHub Actions workfl
Azure Load Testing test engines abstract the required infrastructure for [running a high-scale load test](./how-to-high-scale-load.md). The test engines run the Apache JMeter script to simulate a large number of virtual users simultaneously accessing your application endpoints. When you create a load test based on a URL, Azure Load Testing automatically generates a JMeter test script for you. To scale out the load test, you can configure the number of test engines.
-Azure Load Testing uses Apache JMeter version 5.4.3 for running load tests. You can use Apache JMeter plugins from https://jmeter-plugins.org or [upload your own plugin code](./how-to-use-jmeter-plugins.md).
+Azure Load Testing uses Apache JMeter version 5.5 for running load tests. You can use Apache JMeter plugins from https://jmeter-plugins.org or [upload your own plugin code](./how-to-use-jmeter-plugins.md).
The application can be hosted anywhere: in Azure, on-premises, or in other clouds. To load test services that have no public endpoint, [deploy Azure Load Testing in a virtual network](./how-to-test-private-endpoint.md).
load-testing Resource Limits Quotas Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/resource-limits-quotas-capacity.md
Azure Load Testing captures metrics, test results, and logs for each test run. T
| Server-side metrics | 90 days | Learn how to [configure server-side metrics](./how-to-monitor-server-side-metrics.md). | | Client-side metrics | 365 days | | | Test results | 6 months | Learn how to [export test results](./how-to-export-test-results.md). |
-| Test log files | 6 months | Learn how to [download the logs for troubleshooting tests](./how-to-find-download-logs.md). |
+| Test log files | 6 months | Learn how to [download the logs for troubleshooting tests](./how-to-troubleshoot-failing-test.md). |
## Request quota increases
machine-learning How To Access Azureml Behind Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-azureml-behind-firewall.md
__Inbound traffic__
| Source | Source<br>ports | Destination | Destination<b>ports| Purpose | | -- |:--:| -- |:--:| -- |
-| `AzureLoadBalancer` | Any | `VirtualNetwork` | 44224 | Inbound to compute instance/cluster. __Only needed if the instance/cluster is configured to use a public IP address__. |
+| `AzureMachineLearning` | Any | `VirtualNetwork` | 44224 | Inbound to compute instance/cluster. __Only needed if the instance/cluster is configured to use a public IP address__. |
> [!TIP] > A network security group (NSG) is created by default for this traffic. For more information, see [Default security rules](../virtual-network/network-security-groups-overview.md#inbound).
machine-learning How To Deploy Mlflow Models Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-mlflow-models-online-endpoints.md
Use the following steps to deploy an MLflow model with a custom scoring script.
  image: mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04
  conda_file: mlflow/sklearn-diabetes/environment/conda.yml
code_configuration:
- source: mlflow/sklearn-diabetes/src
+ code: mlflow/sklearn-diabetes/src
  scoring_script: score.py
instance_type: Standard_F2s_v2
instance_count: 1
marketplace Preferred Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/preferred-solution.md
For partners, the Microsoft preferred solution badge aligns offers published to
Until July 2021, publishers with at least one co-sell ready offer were eligible to receive the Microsoft preferred solution badge for all offers published to the commercial marketplace. Starting in August 2021, to improve discovery of offers that have achieved co-sell incentive status, the preferred solution badge is awarded only to offers that meet the business and technical requirements to earn an Azure IP co-sell incentive or the Business Applications co-sell incentive.
+## How often are offers badged? 
+
+Badges are updated every 30 days. Allow a minimum of 45 days before filing a ticket with the support team.
+ ## Next steps - To configure an offer for co-sell, see [Configure Co-sell for a commercial marketplace offer](/partner-center/co-sell-configure?context=/azure/marketplace/context/context) - For information about co-sell incentive status, see [Requirements for Azure IP Co-sell incentive status](/partner-center/co-sell-requirements?context=/azure/marketplace/context/context) or [Business Applications Co-sell incentive status](/partner-center/co-sell-requirements?context=/azure/marketplace/context/context)+
mysql How To Decide On Right Migration Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/migrate/how-to-decide-on-right-migration-tools.md
To help you select the right tools for migrating to Azure Database for MySQL, co
| Migration Scenario | Tool(s) | Details | More information | |--||||
-| Single to Flexible Server (Azure portal) | Database Migration Service (DMS) and the Azure portal | [Tutorial: DMS with the Azure portal (offline)](../../dms/tutorial-mysql-azure-single-to-flex-offline-portal.md) | Recommended |
+| Single to Flexible Server (Azure portal) | Database Migration Service (classic) and the Azure portal | [Tutorial: DMS (classic) with the Azure portal (offline)](../../dms/tutorial-mysql-azure-single-to-flex-offline-portal.md) | Recommended |
| Single to Flexible Server (Azure CLI) | [Custom shell script](https://github.com/Azure/azure-mysql/tree/master/azuremysqltomysqlmigrate) | [Migrate from Azure Database for MySQL - Single Server to Flexible Server in five easy steps!](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/migrate-from-azure-database-for-mysql-single-server-to-flexible/ba-p/2674057) | The [script](https://github.com/Azure/azure-mysql/tree/master/azuremysqltomysqlmigrate) also moves other server components such as security settings and server parameter configurations. | | MySQL databases (>= 1 TB) to Azure Database for MySQL | Dump and Restore using **MyDumper/MyLoader** + High Compute VM | [Migrate large databases to Azure Database for MySQL using mydumper/myloader](concepts-migrate-mydumper-myloader.md) | [Best Practices for migrating large databases to Azure Database for MySQL](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/best-practices-for-migrating-large-databases-to-azure-database/ba-p/1362699) |
-| MySQL databases (< 1 TB) to Azure Database for MySQL | Database Migration Service (DMS) and the Azure portal | [Migrate MySQL databases to Azure Database for MySQL using DMS](../../dms/tutorial-mysql-azure-mysql-offline-portal.md) | If network bandwidth between source and target is good (e.g: High-speed express route), use Azure DMS (database migration service) |
+| MySQL databases (< 1 TB) to Azure Database for MySQL | Database Migration Service (classic) and the Azure portal | [Migrate MySQL databases to Azure Database for MySQL using DMS (classic)](../../dms/tutorial-mysql-azure-mysql-offline-portal.md) | If network bandwidth between source and target is good (for example, a high-speed ExpressRoute connection), use Azure Database Migration Service (DMS) |
| Amazon RDS for MySQL databases (< 1 TB) to Azure Database for MySQL | MySQL Workbench | [Migrate Amazon RDS for MySQL databases ( < 1 TB) to Azure Database for MySQL using MySQL Workbench](../single-server/how-to-migrate-rds-mysql-workbench.md) | If you have low network bandwidth between source and Azure, use **Mydumper/Myloader + High compute VM** to take advantage of compression settings to efficiently move data over low speed networks | | Import and export MySQL databases (< 1 TB) in Azure Database for MySQL | mysqldump or MySQL Workbench Import/Export utility | [Import and export - Azure Database for MySQL](../single-server/concepts-migrate-import-export.md) | Use the **mysqldump** and **MySQL Workbench Export/Import** utility tool to perform offline migrations for smaller databases. |
To help you select the right tools for migrating to Azure Database for MySQL - F
| Migration Scenario | Tool(s) | Details | More information | |--||||
-| Single to Flexible Server (Azure portal) | Database Migration Service (DMS) | [Tutorial: DMS with the Azure portal (online)](../../dms/tutorial-mysql-Azure-single-to-flex-online-portal.md) | Recommended |
+| Single to Flexible Server (Azure portal) | Database Migration Service (classic) | [Tutorial: DMS (classic) with the Azure portal (online)](../../dms/tutorial-mysql-Azure-single-to-flex-online-portal.md) | Recommended |
| Single to Flexible Server | Mydumper/Myloader with Data-in replication | [Migrate Azure Database for MySQL ΓÇô Single Server to Azure Database for MySQL ΓÇô Flexible Server with open-source tools](how-to-migrate-single-flexible-minimum-downtime.md) | N/A | | Azure Database for MySQL Flexible Server Data-in replication | **Mydumper/Myloader with Data-in replication** | [Configure Data-in replication - Azure Database for MySQL Flexible Server](../flexible-server/how-to-data-in-replication.md) | N/A |
mysql Quickstart Mysql Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/quickstart-mysql-github-actions.md
Previously updated : 10/26/2022 Last updated : 02/15/2023 # Quickstart: Use GitHub Actions to connect to Azure MySQL
The file has two sections:
## Copy the MySQL connection string
-In the Azure portal, go to your Azure Database for MySQL server and open **Settings** > **Connection strings**. Copy the **ADO.NET** connection string. Replace the placeholder values for `your_database` and `your_password`. The connection string will look similar to the following.
+In the Azure portal, go to your Azure Database for MySQL server and open **Settings** > **Connection strings**. Copy the **ADO.NET** connection string. Replace the placeholder values for `your_database` and `your_password`. The connection string looks similar to the following.
> [!IMPORTANT] >
You'll use the connection string as a GitHub secret.
branches: [ main ] ```
-4. Rename your workflow `MySQL for GitHub Actions` and add the checkout and login actions. These actions will check out your site code and authenticate with Azure using the `AZURE_CREDENTIALS` GitHub secret you created earlier.
+4. Rename your workflow `MySQL for GitHub Actions` and add the checkout and login actions. These actions check out your site code and authenticate with Azure using the `AZURE_CREDENTIALS` GitHub secret you created earlier.
# [Service principal](#tab/userlevel)
You'll use the connection string as a GitHub secret.
sql-file: './data.sql' ```
-6. Complete your workflow by adding an action to sign out of Azure. Here's the completed workflow. The file will appear in the `.github/workflows` folder of your repository.
+6. Complete your workflow by adding an action to sign out of Azure. Here's the completed workflow. The file appears in the `.github/workflows` folder of your repository.
# [Service principal](#tab/userlevel)
mysql Whats Happening To Mysql Single Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/whats-happening-to-mysql-single-server.md
After years of evolving the Azure Database for MySQL - Single Server service, it
Azure Database for MySQL - Flexible Server is a fully managed production-ready database service designed for more granular control and flexibility over database management functions and configuration settings. For more information about Flexible Server, visit **[Azure Database for MySQL - Flexible Server](../flexible-server/overview.md)**.
-If you currently have an Azure Database for MySQL - Single Server service hosting production servers, we're glad to let you know that you can migrate your Azure Database for MySQL - Single Server servers to the Azure Database for MySQL - Flexible Server service free of cost using Azure Database Migration Service. Review the different ways to migrate using Azure Data Migration Service in the section below.
+If you currently have an Azure Database for MySQL - Single Server service hosting production servers, you can migrate your Azure Database for MySQL - Single Server servers to the Azure Database for MySQL - Flexible Server service free of cost by using Azure Database Migration Service (classic). Review the different ways to migrate using Azure Database Migration Service (classic) in the section below.
## Migrate from Single Server to Flexible Server
-Learn how to migrate from Azure Database for MySQL - Single Server to Azure Database for MySQL - Flexible Server using the Azure Database Migration Service (DMS).
+Learn how to migrate from Azure Database for MySQL - Single Server to Azure Database for MySQL - Flexible Server by using Azure Database Migration Service (classic).
| Scenario | Tool(s) | Details | |-|||
-| Offline | Database Migration Service (DMS) and the Azure portal | [Tutorial: DMS with the Azure portal (offline)](../../dms/tutorial-mysql-azure-single-to-flex-offline-portal.md) |
-| Online | Database Migration Service (DMS) and the Azure portal | [Tutorial: DMS with the Azure portal (online)](../../dms/tutorial-mysql-Azure-single-to-flex-online-portal.md) |
+| Offline | Database Migration Service (classic) and the Azure portal | [Tutorial: DMS (classic) with the Azure portal (offline)](../../dms/tutorial-mysql-azure-single-to-flex-offline-portal.md) |
+| Online | Database Migration Service (classic) and the Azure portal | [Tutorial: DMS (classic) with the Azure portal (online)](../../dms/tutorial-mysql-Azure-single-to-flex-online-portal.md) |
For more information on migrating from Single Server to Flexible Server using other migration tools, visit [Select the right tools for migration to Azure Database for MySQL](../migrate/how-to-decide-on-right-migration-tools.md).
To upgrade to Azure Database for MySQL Flexible Server, it's important to know w
| Single Server configuration not supported for migration | How and when to migrate? | ||--| | Single servers with Private Link enabled | Private Link is on the road map for next year. You can also choose to migrate now and perform wNet injection via a point-in-time restore operation to move to private access network connectivity method. |
-| Single servers with Cross-Region Read Replicas enabled | Cross-Region Read Replicas for flexible servers are on the road map for later this year (for paired region) and next year (for any cross-region), post, which you can migrate your single server. |
-| Single server deployed in regions where flexible server isn't supported (Learn more about regions [here](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?regions=all&products=mysql)). | Azure Database Migration Service (DMS) supports cross-region migration. Deploy your target flexible server in a suitable region and migrate using DMS. |
+| Single servers with Cross-Region Read Replicas enabled | Cross-Region Read Replicas for flexible server (for paired region) is in private preview, and you can start migrating your single server. Cross-Region Read Replicas for flexible server (for any cross-region) is on the road map for later this year, after which you can migrate your single server. |
+| Single server deployed in regions where flexible server isn't supported (Learn more about regions [here](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?regions=all&products=mysql)). | Azure Database Migration Service (classic) supports cross-region migration. Deploy your target flexible server in a suitable region and migrate using DMS (classic). |
## Frequently Asked Questions (FAQs)
A. You will still be able to create read replicas for your existing single serve
**Q. Are there additional costs associated with performing the migration?**
-A. When running the migration, you pay for the target flexible server and the source single server. The configuration and compute of the target flexible server determines the additional costs incurred. For more information, see, [Pricing](https://azure.microsoft.com/pricing/details/mysql/flexible-server/). Once you've decommissioned the source single server post successful migration, you only pay for your running flexible server. There are no costs incurred while running the migration through the Azure Database Migration Service migration tooling.
+A. When running the migration, you pay for the target flexible server and the source single server. The configuration and compute of the target flexible server determines the additional costs incurred. For more information, see, [Pricing](https://azure.microsoft.com/pricing/details/mysql/flexible-server/). Once you've decommissioned the source single server post successful migration, you only pay for your running flexible server. There are no costs incurred while running the migration through the Azure Database Migration Service (classic) migration tooling.
**Q. Will my billing be affected by running Flexible Server as compared to Single Server?**
A. Flexible serverΓÇÖs zone-redundant deployment provides 99.99% availability wi
**Q. What migration options are available to help me migrate my single server to a flexible server?**
-A. You can use Azure Database Migration Service (DMS) to run [online](../../dms/tutorial-mysql-Azure-single-to-flex-online-portal.md) or [offline](../../dms/tutorial-mysql-azure-single-to-flex-offline-portal.md) migrations (recommended). In addition, you can use community tools such as m[ydumper/myloader together with Data-in replication](../migrate/how-to-migrate-single-flexible-minimum-downtime.md) to perform migrations.
+A. You can use Database Migration Service (classic) to run [online](../../dms/tutorial-mysql-Azure-single-to-flex-online-portal.md) or [offline](../../dms/tutorial-mysql-azure-single-to-flex-offline-portal.md) migrations (recommended). In addition, you can use community tools such as [mydumper/myloader together with Data-in replication](../migrate/how-to-migrate-single-flexible-minimum-downtime.md) to perform migrations.
**Q. My single server is deployed in a region that doesnΓÇÖt support flexible server. How should I proceed with migration?**
-A. Azure Database Migration Service supports cross-region migration, so you can select a suitable region for your target flexible server and then proceed with DMS migration.
+A. Azure Database Migration Service (classic) supports cross-region migration, so you can select a suitable region for your target flexible server and then proceed with DMS (classic) migration.
**Q. I have private link configured for my single server, and this feature is not currently supported in Flexible Server. How do I migrate?**
A. Flexible Server support for private link is on our road map as our highest pr
**Q. I have cross-region read replicas configured for my single server, and this feature is not currently supported in Flexible Server. How do I migrate?**
-A. Flexible Server support for cross-region read replicas is on our roadmap as our highest priority. Launch of the feature is planned in Q4 2022 (for paired region) and Q1 2023 (for any cross-region), and you have ample time to initiate your Single Server to Flexible Server migrations with cross-region read replicas configured.
+A. Flexible Server support for cross-region read replicas is on our roadmap as our highest priority. Cross-Region Read Replicas for flexible server (for paired region) is in private preview, and you can start migrating your single server. Cross-Region Read Replicas for flexible server (for any cross-region) is on the road map for later this year, after which you can migrate your single server.
**Q. Is there an option to rollback a Single Server to Flexible Server migration?**
You can also reach out to the Azure Database for MySQL product team at <AskAzur
> [!Warning] > This article is not for Azure Database for MySQL - Flexible Server users. It is for Azure Database for MySQL - Single Server customers who need to upgrade to MySQL - Flexible Server.
-Visit the **[FAQ](../../dms/faq-mysql-single-to-flex.md)** for information about using the Azure Database Migration Service (DMS) for Azure Database for MySQL - Single Server to Flexible Server migrations.
+Visit the **[FAQ](../../dms/faq-mysql-single-to-flex.md)** for information about using the Azure Database Migration Service (classic) for Azure Database for MySQL - Single Server to Flexible Server migrations.
We know migrating services can be a frustrating experience, and we apologize in advance for any inconvenience this might cause you. You can choose what scenario best works for you and your environment. ## Next steps -- [Frequently Asked Questions about DMS migrations](../../dms/faq-mysql-single-to-flex.md)
+- [Frequently Asked Questions about DMS (classic) migrations](../../dms/faq-mysql-single-to-flex.md)
- [Select the right tools for migration to Azure Database for MySQL](../migrate/how-to-decide-on-right-migration-tools.md) - [What is Flexible Server](../flexible-server/overview.md)
network-watcher Network Watcher Troubleshoot Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-troubleshoot-overview.md
Title: Introduction to VPN troubleshoot
+ Title: VPN troubleshoot overview
-description: This page provides an overview of Azure Network Watcher VPN troubleshoot capability.
+description: Learn about Azure Network Watcher VPN troubleshoot capability.
Previously updated : 03/31/2022 Last updated : 02/15/2023 -+
-# Introduction to virtual network gateway troubleshooting in Azure Network Watcher
+# VPN troubleshoot overview
-Virtual network gateways provide connectivity between on-premises resources and other virtual networks within Azure. Monitoring gateways and their connections are critical to ensuring communication is not broken. Network Watcher provides the capability to troubleshoot gateways and connections. The capability can be called through the portal, PowerShell, Azure CLI, or REST API. When called, Network Watcher diagnoses the health of the gateway, or connection, and returns the appropriate results. The request is a long running transaction. The results are returned once the diagnosis is complete.
+Virtual network gateways provide connectivity between on-premises resources and Azure Virtual Networks. Monitoring virtual network gateways and their connections is critical to ensure communication isn't broken. Azure Network Watcher provides the capability to troubleshoot virtual network gateways and their connections. The capability can be called through the Azure portal, Azure PowerShell, Azure CLI, or REST API. When called, Network Watcher diagnoses the health of the gateway, or connection, and returns the appropriate results. The request is a long running transaction. The results are returned once the diagnosis is complete.
-![Screenshot shows Network Watcher V P N Diagnostics.][2]
+
+## Supported Gateway types
+
+The following table lists which gateways and connections are supported with Network Watcher troubleshooting:
+
+| Gateway or connection | Supported |
+|||
+|**Gateway types** | |
+|VPN | Supported |
+|ExpressRoute | Not Supported |
+|**VPN types** | |
+|Route Based | Supported|
+|Policy Based | Not Supported|
+|**Connection types**||
+|IPSec| Supported|
+|VNet2VNet| Supported|
+|ExpressRoute| Not Supported|
+|VPNClient| Not Supported|
## Results
The following list is the values returned with the troubleshoot API:
* **startTime** - This value is the time the troubleshoot API call started. * **endTime** - This value is the time when the troubleshooting ended.
-* **code** - This value is UnHealthy, if there is a single diagnosis failure.
+* **code** - This value is UnHealthy, if there's a single diagnosis failure.
* **results** - Results is a collection of results returned on the Connection or the virtual network gateway. * **id** - This value is the fault type. * **summary** - This value is a summary of the fault.
The following list is the values returned with the troubleshoot API:
* **actionUri** - This value provides the URI to documentation on how to act. * **actionUriText** - This value is a short description of the action text.
-The following tables show the different fault types (id under results from the preceding list) that are available and if the fault creates logs.
+The following tables show the different fault types (the **id** value under **results** in the preceding list) that are available and whether the fault creates logs.
### Gateway | Fault Type | Reason | Log| |||| | NoFault | When no error is detected |Yes|
-| GatewayNotFound | Cannot find gateway or gateway is not provisioned |No|
+| GatewayNotFound | Can't find gateway or gateway isn't provisioned |No|
| PlannedMaintenance | Gateway instance is under maintenance |No| | UserDrivenUpdate | This fault occurs when a user update is in progress. The update could be a resize operation. | No | | VipUnResponsive | This fault occurs when the primary instance of the gateway can't be reached due to a health probe failure. | No |
-| PlatformInActive | There is an issue with the platform. | No|
-| ServiceNotRunning | The underlying service is not running. | No|
+| PlatformInActive | There's an issue with the platform. | No|
+| ServiceNotRunning | The underlying service isn't running. | No|
| NoConnectionsFoundForGateway | No connections exist on the gateway. This fault is only a warning.| No|
-| ConnectionsNotConnected | Connections are not connected. This fault is only a warning.| Yes|
+| ConnectionsNotConnected | Connections aren't connected. This fault is only a warning.| Yes|
| GatewayCPUUsageExceeded | The current gateway CPU usage is > 95%. | Yes | ### Connection
The following tables show the different fault types (id under results from the p
| Fault Type | Reason | Log| |||| | NoFault | When no error is detected |Yes|
-| GatewayNotFound | Cannot find gateway or gateway is not provisioned |No|
+| GatewayNotFound | Can't find gateway or gateway isn't provisioned |No|
| PlannedMaintenance | Gateway instance is under maintenance |No| | UserDrivenUpdate | This fault occurs when a user update is in progress. The update could be a resize operation. | No | | VipUnResponsive | This fault occurs when the primary instance of the gateway can't be reached due to a health probe failure. | No | | ConnectionEntityNotFound | Connection configuration is missing | No | | ConnectionIsMarkedDisconnected | The connection is marked "disconnected" |No|
-| ConnectionNotConfiguredOnGateway | The underlying service does not have the connection configured. | Yes |
+| ConnectionNotConfiguredOnGateway | The underlying service doesn't have the connection configured. | Yes |
| ConnectionMarkedStandby | The underlying service is marked as standby.| Yes| | Authentication | Preshared key mismatch | Yes|
-| PeerReachability | The peer gateway is not reachable. | Yes|
-| IkePolicyMismatch | The peer gateway has IKE policies that are not supported by Azure. | Yes|
+| PeerReachability | The peer gateway isn't reachable. | Yes|
+| IkePolicyMismatch | The peer gateway has IKE policies that aren't supported by Azure. | Yes|
| WfpParse Error | An error occurred parsing the WFP log. |Yes|
-## Supported Gateway types
-
-The following table lists which gateways and connections are supported with Network Watcher troubleshooting:
-
-| Gateway or connection | Supported |
-|||
-|**Gateway types** | |
-|VPN | Supported |
-|ExpressRoute | Not Supported |
-|**VPN types** | |
-|Route Based | Supported|
-|Policy Based | Not Supported|
-|**Connection types**||
-|IPSec| Supported|
-|VNet2Vnet| Supported|
-|ExpressRoute| Not Supported|
-|VPNClient| Not Supported|
## Log files The resource troubleshooting log files are stored in a storage account after resource troubleshooting is finished. The following image shows the example contents of a call that resulted in an error.
-![zip file][1]
> [!NOTE] > 1. In some cases, only a subset of the logs files is written to storage.
-> 2. For newer Gateway versions, the IkeErrors.txt, Scrubbed-wfpdiag.txt and wfpdiag.txt.sum have been replaced by an IkeLogs.txt file that contains the whole IKE activity (not just errors).
+> 2. For newer gateway versions, the IkeErrors.txt, Scrubbed-wfpdiag.txt and wfpdiag.txt.sum have been replaced by an IkeLogs.txt file that contains the whole IKE activity (not just errors).
-For instructions on downloading files from Azure storage accounts, refer to [Get started with Azure Blob storage using .NET](../storage/blobs/storage-quickstart-blobs-dotnet.md). Another tool that can be used is Storage Explorer. More information about Storage Explorer can be found here at the following link: [Storage Explorer](https://storageexplorer.com/)
+For instructions on downloading files from Azure storage accounts, see [Download a block blob](../storage/blobs/storage-quickstart-blobs-portal.md#download-a-block-blob). You can also use Azure Storage Explorer. For more information, see [Use Azure Storage Explorer to download blobs](../storage/blobs/quickstart-storage-explorer.md#download-blobs).
### ConnectionStats.txt
Error: On-prem device sent invalid payload.
The **Scrubbed-wfpdiag.txt** log file contains the wfp log. This log contains logging of packet drop and IKE/AuthIP failures.
-The following example shows the contents of the Scrubbed-wfpdiag.txt file. In this example, the shared key of a Connection was not correct as can be seen from the third line from the bottom. The following example is just a snippet of the entire log, as the log can be lengthy depending on the issue.
+The following example shows the contents of the Scrubbed-wfpdiag.txt file. In this example, the pre-shared key of a Connection wasn't correct as can be seen from the third line from the bottom. The following example is just a snippet of the entire log, as the log can be lengthy depending on the issue.
``` ...
Elapsed Time 330 sec
``` ## Considerations
-* Only one troubleshoot operation can be run at a time per subscription. To run another troubleshoot operation, wait for the previous one to complete. Triggering more operations while a previous one hasn't completed will cause subsequent operations to fail.
-* CLI Bug: If you are using Azure CLI to run the command, the VPN Gateway and the Storage account need to be in same resource group. Customers with the resources in different resource groups can use PowerShell or the Azure portal instead.
-
+* Only one VPN troubleshoot operation can be run at a time per subscription. To run another VPN troubleshoot operation, wait for the previous one to complete. Triggering a new operation while a previous one hasn't completed causes the subsequent operations to fail.
+* CLI Bug: If you're using Azure CLI to run the command, the VPN Gateway and the Storage account need to be in same resource group. Customers with the resources in different resource groups can use PowerShell or the Azure portal instead.
## Next steps
-To learn how to diagnose a problem with a gateway or gateway connection, see [Diagnose communication problems between networks](diagnose-communication-problem-between-networks.md).
-<!--Image references-->
-
-[1]: ./media/network-watcher-troubleshoot-overview/gateway-tenant-worker-logs-new.png
-[2]: ./media/network-watcher-troubleshoot-overview/portal.png
+To learn how to diagnose a problem with a virtual network gateway or gateway connection, see [Diagnose communication problems between networks](diagnose-communication-problem-between-networks.md).
open-datasets Dataset Genomics Data Lake https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/open-datasets/dataset-genomics-data-lake.md
The Genomics Data Lake is hosted in the West US 2 and West Central US Azure regi
| [OpenCravat](dataset-open-cravat.md) | OpenCravat: Open Custom Ranked Analysis of Variants Toolkit | | [ENCODE](dataset-encode.md) | ENCODE: Encyclopedia of DNA Elements | | [GATK Resource Bundle](dataset-gatk-resource-bundle.md) | GATK Resource bundle |
-| [TCGA Open Data](dataset-tcga.md) | TCGA Open Data |
## Next steps
postgresql Concepts Data Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-data-encryption.md
Some of the reasons why server state can become *Inaccessible* are:
> [!NOTE]
-> CLI examples below are based on 2.43.0 version of Azure Database for PostgreSQL - Flexible Server CLI libraries, which are in preview and may be subject to changes.
+> The CLI examples below are based on version 2.45.0 of the Azure Database for PostgreSQL - Flexible Server CLI libraries.
## Setup Customer Managed Key during Server Creation
The following are current limitations for configuring the customer-managed key i
- Once enabled, CMK encryption can't be removed. If customer desires to remove this feature, it can only be done via restore of the server to non-CMK server. -- No support for Geo backup enabled servers- - No support for Azure HSM Key Vault ## Next steps
postgresql Concepts Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-extensions.md
You can now enable pg_hint_plan in your Postgres database. Connect to the database and use the following command:

```sql
CREATE EXTENSION pg_hint_plan;
```
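After the extension is created, you can pass hints to the planner as a special comment placed in front of a query. A minimal sketch, assuming a hypothetical table named `t`:

```sql
/*+ SeqScan(t) */
EXPLAIN SELECT * FROM t WHERE id = 42;  -- forces a sequential scan on t for this query
```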
+## pg_buffercache
+
+You can use `pg_buffercache` to study the contents of *shared_buffers*. Using [this extension](https://www.postgresql.org/docs/current/pgbuffercache.html), you can tell whether a particular relation is cached in *shared_buffers*. It can help you troubleshoot caching-related performance issues.
+
+This extension is part of the PostgreSQL contrib package and is easy to install:
+
+```sql
+CREATE EXTENSION pg_buffercache;
+```
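After installing the extension, you can query the `pg_buffercache` view to see which relations occupy the most space in *shared_buffers*. The following query is a minimal sketch that assumes the default 8-kB block size; the relation names in your output depend on your workload.

```sql
-- Top 10 relations by number of cached buffers in the current database
SELECT c.relname,
       count(*) AS cached_buffers,
       pg_size_pretty(count(*) * 8192) AS cached_size  -- assumes 8 kB blocks
FROM pg_buffercache b
JOIN pg_class c ON b.relfilenode = pg_relation_filenode(c.oid)
JOIN pg_database d ON b.reldatabase = d.oid AND d.datname = current_database()
GROUP BY c.relname
ORDER BY cached_buffers DESC
LIMIT 10;
```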
+++ ## Next steps
postgresql Concepts Read Replicas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-read-replicas.md
Scaling vCores or between General Purpose and Memory Optimized:
## Next steps * Learn how to [create and manage read replicas in the Azure portal](how-to-read-replicas-portal.md).
+* Learn about [Cross-region replication with VNET](concepts-networking.md#replication-across-azure-regions-and-virtual-networks-with-private-networking).
[//]: # (* Learn how to [create and manage read replicas in the Azure CLI and REST API]&#40;how-to-read-replicas-cli.md&#41;.)
postgresql Concepts Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-security.md
When you're running Azure Database for PostgreSQL - Flexible Server, you have tw
Best way to manage PostgreSQL database access permissions at scale is using the concept of [roles](https://www.postgresql.org/docs/current/user-manag.html). A role can be either a database user or a group of database users, moreover roles can own the database objects and assign privileges on those objects to other roles to control who has access to which objects. It is also possible to grant membership in a role to another role, thus allowing the member role to use privileges assigned to another role. PostgreSQL lets you grant permissions directly to the database users. As a good security practice, it can be recommended that you create roles with specific sets of permissions based on minimum application and access requirements and then assign the appropriate roles to each user. The roles should be used to enforce a *least privilege model* for accessing database objects.
-While you're creating the Azure Database for PostgreSQL server, you provide credentials for an **administrator role**. This administrator role can be used to create more [PostgreSQL roles](https://www.postgresql.org/docs/current/user-manag.html). The administrator role should never be used by the application.
+The Azure Database for PostgreSQL server is created with three default roles defined. You can see these roles by running the following command:
+```sql
+SELECT rolname FROM pg_roles;
+```
+* azure_pg_admin.
+* azuresu.
+* administrator role.
+While you're creating the Azure Database for PostgreSQL server, you provide credentials for an **administrator role**. This administrator role can be used to create more [PostgreSQL roles](https://www.postgresql.org/docs/current/user-manag.html).
For example, below we can create an example role called *demouser*:

```sql
postgres=> create role demouser with password 'password123';
```
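Building on that, here's a minimal sketch of assigning a least-privilege set of permissions to the new role; the database, schema, and login role names are illustrative.

```sql
-- Allow the role to connect and read a single schema only
GRANT CONNECT ON DATABASE mydb TO demouser;
GRANT USAGE ON SCHEMA app TO demouser;
GRANT SELECT ON ALL TABLES IN SCHEMA app TO demouser;

-- Optionally, grant membership in demouser to an application login role
GRANT demouser TO app_user;
```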
+The **administrator role** should never be used by the application.
+
+In cloud-based PaaS environments, access to a PostgreSQL superuser account is restricted to cloud operators for control plane operations only. Therefore, the **azure_pg_admin** account is added to the database as a pseudo-superuser account. Your administrator role is a member of the **azure_pg_admin** role.
+However, the server admin account is not part of the **azuresu** role, which has superuser privileges and is used to perform control plane operations. Since this service is a managed PaaS service, only Microsoft is part of the superuser role.
+ You can periodically audit the list of roles in your server. For example, you can connect using `psql` client and query the `pg_roles` table which lists all the roles along with privileges such as create additional roles, create databases, replication etc.
rolbypassrls | f
rolconfig | oid | 24827 +++ ``` [Audit logging](concepts-audit.md) is also available with Flexible Server to track activity in your databases.
postgresql How To Create Server Customer Managed Key Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-create-server-customer-managed-key-cli.md
Last updated 12/10/2022
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)] > [!NOTE]
-> CLI examples below are based on 2.43.0 version of Azure Database for PostgreSQL - Flexible Server CLI libraries, which are in preview and may be subject to changes.
+> The CLI examples below are based on version 2.45.0 of the Azure Database for PostgreSQL - Flexible Server CLI libraries.
In this article, you learn how to create and manage Azure Database for PostgreSQL - Flexible Server with data encrypted by Customer Managed Keys using the Azure CLI. To learn more about Customer Managed Keys (CMK) feature with Azure Database for PostgreSQL - Flexible Server, see the [overview](concepts-data-encryption.md).
postgresql How To Read Replicas Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-read-replicas-portal.md
The **Read Replica Lag** metric shows the time since the last replayed transacti
:::image type="content" source="./media/how-to-read-replicas-portal/metrics_read_replica_lag.png" alt-text=" screenshot of the Metrics blade showing Read Replica Lag metric.":::
- :::image-end:::
- 3. For your **Aggregation**, select **Max**. ## Next steps * Learn more about [read replicas in Azure Database for PostgreSQL](concepts-read-replicas.md).
-[//]: # (* Learn how to [create and manage read replicas in the Azure CLI and REST API]&#40;how-to-read-replicas-cli.md&#41;.)
+[//]: # (* Learn how to [create and manage read replicas in the Azure CLI and REST API]&#40;how-to-read-replicas-cli.md&#41;.)
postgresql How To Deploy Github Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-deploy-github-action.md
Previously updated : 06/24/2022 Last updated : 02/15/2023 # Quickstart: Use GitHub Actions to connect to Azure PostgreSQL
Get started with [GitHub Actions](https://docs.github.com/en/actions) by using a
## Prerequisites
-You will need:
+You'll need:
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - A GitHub repository with sample data (`data.sql`). If you don't have a GitHub account, [sign up for free](https://github.com/join). - An Azure Database for PostgreSQL server.
The file has two sections:
## Copy the PostgreSQL connection string
-In the Azure portal, go to your Azure Database for PostgreSQL server and open **Settings** > **Connection strings**. Copy the **ADO.NET** connection string. Replace the placeholder values for `your_database` and `your_password`. The connection string will look similar to this.
+In the Azure portal, go to your Azure Database for PostgreSQL server and open **Settings** > **Connection strings**. Copy the **ADO.NET** connection string. Replace the placeholder values for `your_database` and `your_password`. The connection string looks similar to this.
> [!IMPORTANT] > - For Single server use ```user=adminusername@servername``` . Note the ```@servername``` is required.
In the Azure portal, go to your Azure Database for PostgreSQL server and open **
psql host={servername.postgres.database.azure.com} port=5432 dbname={your_database} user={adminusername} password={your_database_password} sslmode=require ```
-You will use the connection string as a GitHub secret.
+You'll use the connection string as a GitHub secret.
## Configure the GitHub secrets
You will use the connection string as a GitHub secret.
branches: [ main ] ```
-1. Rename your workflow `PostgreSQL for GitHub Actions` and add the checkout and login actions. These actions will checkout your site code and authenticate with Azure using the GitHub secret(s) you created earlier.
+1. Rename your workflow `PostgreSQL for GitHub Actions` and add the checkout and login actions. These actions check out your site code and authenticate with Azure using the GitHub secret(s) you created earlier.
# [Service principal](#tab/userlevel)
You will use the connection string as a GitHub secret.
sql-file: './data.sql' ```
-3. Complete your workflow by adding an action to logout of Azure. Here is the completed workflow. The file will appear in the `.github/workflows` folder of your repository.
+3. Complete your workflow by adding an action to log out of Azure. Here's the completed workflow. The file appears in the `.github/workflows` folder of your repository.
# [Service principal](#tab/userlevel)
purview Register Scan Azure Databricks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-databricks.md
Previously updated : 01/30/2023 Last updated : 02/16/2023
When scanning Azure Databricks source, Microsoft Purview supports:
- Tables including the columns, foreign keys, unique constraints, and storage description - Views including the columns and storage description -- Fetching relationship between external tables and Azure Data Lake Storage Gen2/Azure Blob assets. -- Fetching static lineage on assets relationships among tables and views.
+- Fetching relationship between external tables and Azure Data Lake Storage Gen2/Azure Blob assets (external locations).
+- Fetching static lineage between tables and views based on the view definition.
-This connector brings metadata from Databricks metastore. Comparing to scan via [Hive Metastore connector](register-scan-hive-metastore-source.md) in case you use it to scan Azure Databricks earlier:
+This connector brings in metadata from the Databricks metastore. Compared with scanning via the [Hive Metastore connector](register-scan-hive-metastore-source.md), if you used it to scan Azure Databricks earlier:
- You can directly set up a scan for Azure Databricks workspaces without direct HMS access. It uses a Databricks personal access token for authentication and connects to a cluster to perform the scan. - The Databricks workspace info is captured.
From the Databricks workspace asset, you can find the associated Hive Metastore
Refer to the [supported capabilities](#supported-capabilities) section on the supported Azure Databricks scenarios. For more information about lineage in general, see [data lineage](concept-data-lineage.md) and [lineage user guide](catalog-lineage-user-guide.md).
-Go to the Hive table/view asset -> lineage tab, you can see the asset relationship when applicable. For relationship between table and external storage assets, you'll see Hive Table asset and the storage asset are directly connected bi-directionally, as they mutually impact each other.
+Go to the Hive table/view asset -> lineage tab to see the asset relationship when applicable. For the relationship between a table and external storage assets, you'll see the Hive table asset and the storage asset connected directly and bi-directionally, as they mutually impact each other. If you use a mount point in the create table statement, you need to provide the mount point information in [scan settings](#scan) to extract such relationships.
:::image type="content" source="media/register-scan-azure-databricks/lineage.png" alt-text="Screenshot that shows Azure Databricks lineage example." border="true":::
reliability Reliability Guidance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-guidance-overview.md
Azure reliability guidance is a collection of service-specific reliability guide
[Azure SQL](/azure/azure-sql/database/high-availability-sla?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| [Azure Storage: Blob Storage](../storage/common/storage-disaster-recovery-guidance.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| [Azure Virtual Machine Scale Sets](../virtual-machines/availability.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|
-[Azure Virtual Machines](../virtual-machines/availability.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|
+[Azure Virtual Machines](../virtual-machines/virtual-machines-reliability.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|
[Azure Virtual Network](../vpn-gateway/create-zone-redundant-vnet-gateway.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| [Azure VPN Gateway](../vpn-gateway/about-zone-redundant-vnet-gateways.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|
resource-mover Common Questions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/common-questions.md
Azure Resource Mover is currently available as follows:
Using Resource Mover, you can currently move the following resources across regions: -- Azure VMs and associated disks
+- Azure VMs and associated disks (Azure Spot VMs are not currently supported)
- NICs - Availability sets - Azure virtual networks
sap Cal S4h https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/cal-s4h.md
vm-linux Previously updated : 11/14/2022 Last updated : 02/15/2023
security Feature Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/feature-availability.md
The following tables display the current Microsoft Sentinel feature availability
| - [Entity insights](../../sentinel/enable-entity-behavior-analytics.md) | GA | Public Preview | |- [SOC incident audit metrics](../../sentinel/manage-soc-with-incident-metrics.md) | GA | GA | | - [Incident advanced search](../../sentinel/investigate-cases.md#search-for-incidents) |GA |GA |
-| - [Microsoft 365 Defender incident integration](../../sentinel/microsoft-365-defender-sentinel-integration.md) |Public Preview |Public Preview|
+| - [Microsoft 365 Defender incident integration](../../sentinel/microsoft-365-defender-sentinel-integration.md) | GA | GA |
| - [Microsoft Teams integrations](../../sentinel/collaborate-in-microsoft-teams.md) |Public Preview |Not Available | |- [Bring Your Own ML (BYO-ML)](../../sentinel/bring-your-own-ml.md) | Public Preview | Public Preview | |- [Search large datasets](../../sentinel/investigate-large-datasets.md) | Public Preview | Not Available |
sentinel Connect Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-aws.md
This connector is available in two versions: the legacy connector for CloudTrail
- [Amazon Virtual Private Cloud (VPC)](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html) - [VPC Flow Logs](https://docs.aws.amazon.com/vpc/latest/userguide/flow-logs.html) - [Amazon GuardDuty](https://docs.aws.amazon.com/guardduty/latest/ug/what-is-guardduty.html) - [Findings](https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_findings.html) - [AWS CloudTrail](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-user-guide.html) - [Management](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-management-events-with-cloudtrail.html) and [data](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-data-events-with-cloudtrail.html) events
+- [AWS CloudWatch](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/WhatIsCloudWatch.html) - [CloudWatch logs](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html)
> [!IMPORTANT] >
Microsoft recommends using the automatic setup script to deploy this connector.
### Prerequisites -- You must have an **S3 bucket** to which you will ship the logs from your AWS services - VPC, GuardDuty, or CloudTrail.
+- You must have an **S3 bucket** to which you will ship the logs from your AWS services - VPC, GuardDuty, CloudTrail, or CloudWatch.
- Create an [S3 storage bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-bucket-overview.html) in AWS.
The manual setup consists of the following steps:
- [Create a trail for a single account](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-create-a-trail-using-the-console-first-time.html). - [Create a trail spanning multiple accounts across an organization](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/creating-trail-organization.html).
+- [Export your CloudWatch log data to an S3 bucket](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/S3Export.html).
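As a hedged sketch of the export step above, the following Python snippet uses boto3's CloudWatch Logs client to create an export task into the S3 bucket that the connector reads from. The log group name, bucket name, and time range are placeholder assumptions; the article itself relies on the linked AWS procedure rather than this exact code.

```python
import time
import boto3

logs = boto3.client("logs")  # assumes AWS credentials and region are already configured

now_ms = int(time.time() * 1000)

# Export the last 24 hours of a log group to the S3 bucket Microsoft Sentinel ingests from.
logs.create_export_task(
    taskName="sentinel-cloudwatch-export",      # placeholder task name
    logGroupName="/example/app-logs",           # placeholder log group
    fromTime=now_ms - 24 * 60 * 60 * 1000,      # timestamps are in milliseconds
    to=now_ms,
    destination="my-sentinel-ingest-bucket",    # placeholder S3 bucket
    destinationPrefix="cloudwatch",
)
```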
+ #### Create a Simple Queue Service (SQS) in AWS If you haven't yet [created an SQS queue](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-configure-create-queue.html), do so now.
sentinel Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/whats-new.md
See these [important announcements](#announcements) about recent changes to feat
## February 2023
+- [New CloudWatch data type for the AWS S3 connector (Preview)](#new-cloudwatch-data-type-for-the-aws-s3-connector)
- [Audit and monitor the health of your analytics rules (Preview)](#audit-and-monitor-the-health-of-your-analytics-rules-preview) - [New behavior for alert grouping in analytics rules](#new-behavior-for-alert-grouping-in-analytics-rules) (in [Announcements](#announcements) section below) - [Microsoft 365 Defender data connector is now generally available](#microsoft-365-defender-data-connector-is-now-generally-available) - [Advanced scheduling for analytics rules (Preview)](#advanced-scheduling-for-analytics-rules-preview)
+### New CloudWatch data type for the AWS S3 connector
+
+The Microsoft Sentinel AWS S3 connector now supports [CloudWatch logs](connect-aws.md) in addition to the already supported CloudTrail, VPC Flow, and GuardDuty logs. Logs from AWS CloudWatch provide operational information from different AWS sources, which helps Microsoft Sentinel customers with AWS footprints better understand and operate their AWS systems and applications.
+
+The CloudWatch data type supports the same data transformation functions as the other data types within the AWS S3 connector. Learn how to [transform your data for CloudWatch](../azure-monitor/logs/tutorial-workspace-transformations-portal.md).
+ ### Audit and monitor the health of your analytics rules (Preview) Microsoft Sentinel's **health monitoring feature is now available for analytics rules** in addition to automation rules, playbooks, and data connectors. Also now available for the first time, and currently only for analytics rules, is Microsoft Sentinel's **audit feature**. The audit feature collects information about any changes made to Sentinel resources (analytics rules) so that you can discover any unauthorized actions or tampering with the service.
service-bus-messaging Message Sequencing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/message-sequencing.md
The absolute arrival order matters, for example, in business scenarios in which
The time-stamping capability acts as a neutral and trustworthy authority that accurately captures the UTC time of arrival of a message, reflected in the **EnqueuedTimeUtc** property. The value is useful if a business scenario depends on deadlines, such as whether a work item was submitted on a certain date before midnight, but the processing is far behind the queue backlog. > [!NOTE]
-> Sequence number on its own guarantees the queuing order of messages, but not the extraction order, which requires [sessions](message-sessions.md).
+> Sequence number on its own guarantees the queuing order and the extraction order of messages, but not the processing order, which requires [sessions](message-sessions.md).
+>
+> Say there are three messages in the queue and two consumers. Consumer 1 picks up message 1. Consumer 2 picks up message 2. Consumer 2 finishes processing message 2 and picks up message 3 while consumer 1 isn't done with processing message 1 yet. Consumer 2 finishes processing message 3, but consumer 1 is still not done with processing message 1. Finally, consumer 1 completes processing message 1. So, the messages are processed in this order: message 2, message 3, and message 1. If you need messages 1, 2, and 3 to be processed in order, you need to use sessions.
+>
+> So, if messages just need to be retrieved in order, you don't need to use sessions. If messages need to be processed in order, use sessions. The same session ID should be set on messages that belong together, which could be messages 1, 4, and 8 in one set, and messages 2, 3, and 6 in another set.
+>
+> For more information, see [Service Bus message sessions](message-sessions.md).
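To make that distinction concrete, here's a minimal sketch in Python using the azure-servicebus package that inspects the broker-assigned sequence number and enqueue time on received messages. The connection string and queue name are placeholders you'd replace with your own.

```python
from azure.servicebus import ServiceBusClient

CONN_STR = "<your-service-bus-connection-string>"  # placeholder
QUEUE_NAME = "<your-queue>"                        # placeholder

with ServiceBusClient.from_connection_string(CONN_STR) as client:
    with client.get_queue_receiver(QUEUE_NAME, max_wait_time=5) as receiver:
        for msg in receiver:
            # sequence_number reflects the order in which the broker accepted the message;
            # enqueued_time_utc is the broker-assigned UTC arrival time (EnqueuedTimeUtc).
            print(msg.sequence_number, msg.enqueued_time_utc, str(msg))
            receiver.complete_message(msg)
```

Even though messages are handed out in sequence-number order, nothing here constrains how long each consumer takes to process them, which is why ordered processing needs sessions.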
## Scheduled messages
service-bus-messaging Message Sessions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/message-sessions.md
Title: Azure Service Bus message sessions | Microsoft Docs description: This article explains how to use sessions to enable joint and ordered handling of unbounded sequences of related messages. Previously updated : 10/25/2022 Last updated : 02/14/2023 # Message sessions
Azure Service Bus sessions enable joint and ordered handling of unbounded sequen
> The basic tier of Service Bus doesn't support sessions. The standard and premium tiers support sessions. For differences between these tiers, see [Service Bus pricing](https://azure.microsoft.com/pricing/details/service-bus/). ## First-in, first out (FIFO) pattern
-To realize a FIFO guarantee in Service Bus, use sessions. Service Bus isn't prescriptive about the nature of the relationship between messages, and also doesn't define a particular model for determining where a message sequence starts or ends.
+To realize a FIFO guarantee in processing messages in Service Bus queues or subscriptions, use sessions. Service Bus isn't prescriptive about the nature of the relationship between messages, and also doesn't define a particular model for determining where a message sequence starts or ends.
Any sender can create a session when submitting messages into a topic or queue by setting the **session ID** property to some application-defined identifier that's unique to the session. At the **AMQP 1.0** protocol level, this value maps to the **group-id** property.
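As a hedged illustration of that point, the following Python sketch (azure-servicebus, placeholder names) stamps three related messages with the same application-defined session ID so they map to one session:

```python
from azure.servicebus import ServiceBusClient, ServiceBusMessage

CONN_STR = "<your-service-bus-connection-string>"   # placeholder
QUEUE_NAME = "<your-session-enabled-queue>"         # placeholder

with ServiceBusClient.from_connection_string(CONN_STR) as client:
    with client.get_queue_sender(QUEUE_NAME) as sender:
        # All three messages share the session ID "order-1041" (an arbitrary example value),
        # so a single session receiver handles them as one FIFO sequence.
        sender.send_messages(
            [ServiceBusMessage(f"step {i}", session_id="order-1041") for i in range(1, 4)]
        )
```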
The methods for managing session state, SetState and GetState, can be found on t
Session state remains as long as it isn't cleared up (returning **null**), even if all messages in a session are consumed.
-The session state held in a queue or in a subscription counts towards that entity's storage quota. When the application is finished with a session, it is therefore recommended for the application to clean up its retained state to avoid external management cost.
+The session state held in a queue or in a subscription counts towards that entity's storage quota. When the application is finished with a session, it's therefore recommended for the application to clean up its retained state to avoid external management cost.
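Here's a minimal sketch of working with session state from Python (azure-servicebus), assuming a session-enabled queue and the placeholder session ID used above; the final call clears the state so it no longer counts against the entity's storage quota.

```python
from azure.servicebus import ServiceBusClient

CONN_STR = "<your-service-bus-connection-string>"   # placeholder
QUEUE_NAME = "<your-session-enabled-queue>"         # placeholder

with ServiceBusClient.from_connection_string(CONN_STR) as client:
    with client.get_queue_receiver(QUEUE_NAME, session_id="order-1041") as receiver:
        current = receiver.session.get_state()       # bytes, or None if never set
        print("current session state:", current)
        receiver.session.set_state(b"step-2-done")   # store a small processing checkpoint
        # When the application is finished with the session, clear the retained state.
        receiver.session.set_state(None)
```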
### Impact of delivery count
-The definition of delivery count per message in the context of sessions varies slightly from the definition in the absence of sessions. Here is a table summarizing when the delivery count is incremented.
+The definition of delivery count per message in the context of sessions varies slightly from the definition in the absence of sessions. Here's a table summarizing when the delivery count is incremented.
| Scenario | Is the message's delivery count incremented | |-|| | Session is accepted, but the session lock expires (due to timeout) | Yes |
-| Session is accepted, the messages within the session aren't completed (even if they are locked), and the session is closed | No |
+| Session is accepted, the messages within the session aren't completed (even if they're locked), and the session is closed | No |
| Session is accepted, messages are completed, and then the session is explicitly closed | N/A (It's the standard flow. Here messages are removed from the session) | ## Request-response pattern
Multiple applications can send their requests to a single request queue, with a
> [!NOTE] > The application that sends the initial requests should know about the session ID and use it to accept the session so that the session on which it is expecting the response is locked. It's a good idea to use a GUID that uniquely identifies the instance of the application as a session id. There should be no session handler or a timeout specified on the session receiver for the queue to ensure that responses are available to be locked and processed by specific receivers.
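The following Python sketch (azure-servicebus) is one hedged way to wire up that pattern: the requester generates a GUID, sends it as `reply_to_session_id`, and locks that session on the reply queue. Queue names are placeholders, and the responder side, which copies the value into `session_id` on the reply message, is omitted.

```python
import uuid
from azure.servicebus import ServiceBusClient, ServiceBusMessage

CONN_STR = "<your-service-bus-connection-string>"   # placeholder
REQUEST_QUEUE = "<request-queue>"                   # placeholder
REPLY_QUEUE = "<session-enabled-reply-queue>"       # placeholder

reply_session_id = str(uuid.uuid4())  # uniquely identifies this requester instance

with ServiceBusClient.from_connection_string(CONN_STR) as client:
    with client.get_queue_sender(REQUEST_QUEUE) as sender:
        sender.send_messages(
            ServiceBusMessage("do some work", reply_to_session_id=reply_session_id)
        )

    # Lock the reply session so only this requester receives the response.
    with client.get_queue_receiver(REPLY_QUEUE, session_id=reply_session_id, max_wait_time=30) as receiver:
        for reply in receiver:
            print("reply:", str(reply))
            receiver.complete_message(reply)
```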
+## Sequencing vs. sessions
+[Sequence number](message-sequencing.md) on its own guarantees the queuing order and the extraction order of messages, but not the processing order, which requires sessions.
+
+Say there are three messages in the queue and two consumers. Consumer 1 picks up message 1. Consumer 2 picks up message 2. Consumer 2 finishes processing message 2 and picks up message 3 while consumer 1 isn't done with processing message 1 yet. Consumer 2 finishes processing message 3, but consumer 1 is still not done with processing message 1. Finally, consumer 1 completes processing message 1. So, the messages are processed in this order: message 2, message 3, and message 1. If you need messages 1, 2, and 3 to be processed in order, you need to use sessions.
+
+So, if messages just need to be retrieved in order, you don't need to use sessions. If messages need to be processed in order, use sessions. The same session ID should be set on messages that belong together, which could be messages 1, 4, and 8 in one set, and messages 2, 3, and 6 in another set.
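To round out the example above, here's a minimal receive-side sketch in Python (azure-servicebus, placeholder names): accepting a specific session locks it to one consumer, so the messages that share that session ID are processed strictly in the order they were enqueued.

```python
from azure.servicebus import ServiceBusClient

CONN_STR = "<your-service-bus-connection-string>"   # placeholder
QUEUE_NAME = "<your-session-enabled-queue>"         # placeholder

with ServiceBusClient.from_connection_string(CONN_STR) as client:
    # Only one receiver can hold the lock on session "order-1041" at a time,
    # so its messages can't be interleaved across competing consumers.
    with client.get_queue_receiver(QUEUE_NAME, session_id="order-1041", max_wait_time=5) as receiver:
        for msg in receiver:
            print("processing in order:", str(msg))
            receiver.complete_message(msg)
```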
+ ## Next steps You can enable message sessions while creating a queue using Azure portal, PowerShell, CLI, Resource Manager template, .NET, Java, Python, and JavaScript. For more information, see [Enable message sessions](enable-message-sessions.md). Try the samples in the language of your choice to explore Azure Service Bus features. --- [Azure Service Bus client library samples for .NET (latest)](/samples/azure/azure-sdk-for-net/azuremessagingservicebus-samples/)-- [Azure Service Bus client library samples for Java (latest)](/samples/azure/azure-sdk-for-java/servicebus-samples/)-- [Azure Service Bus client library samples for Python](/samples/azure/azure-sdk-for-python/servicebus-samples/)-- [Azure Service Bus client library samples for JavaScript](/samples/azure/azure-sdk-for-js/service-bus-javascript/)-- [Azure Service Bus client library samples for TypeScript](/samples/azure/azure-sdk-for-js/service-bus-typescript/)-
-Find samples for the older .NET and Java client libraries below:
-- [Azure Service Bus client library samples for .NET (legacy)](https://github.com/Azure/azure-service-bus/tree/master/samples/DotNet/Microsoft.Azure.ServiceBus/)-- [Azure Service Bus client library samples for Java (legacy)](https://github.com/Azure/azure-service-bus/tree/master/samples/Java/azure-servicebus/MessageBrowse)-
+- .NET
+ - [Sending and receiving session messages](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/servicebus/Azure.Messaging.ServiceBus/samples/Sample03_SendReceiveSessions.md)
+ - [Using the session processor](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/servicebus/Azure.Messaging.ServiceBus/samples/Sample05_SessionProcessor.md)
+- Java
+ - [Send messages to a session](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/servicebus/azure-messaging-servicebus/src/samples/java/com/azure/messaging/servicebus/SendSessionMessageAsyncSample.java)
+ - [Receive messages from the first available session](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/servicebus/azure-messaging-servicebus/src/samples/java/com/azure/messaging/servicebus/ReceiveSingleSessionAsyncSample.java)
+ - [Receive messages from a specific session](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/servicebus/azure-messaging-servicebus/src/samples/java/com/azure/messaging/servicebus/ReceiveNamedSessionAsyncSample.java)
+ - [Process all session messages using a processor](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/servicebus/azure-messaging-servicebus/src/samples/java/com/azure/messaging/servicebus/ServiceBusSessionProcessorSample.java)
+- Python
+ - [Send and receive messages from a session-enabled queue](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/servicebus/azure-servicebus/samples/sync_samples/session_send_receive.py)
+ - [Receive messages from multiple available sessions in parallel with a thread pool](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/servicebus/azure-servicebus/samples/sync_samples/session_pool_receive.py)
+- JavaScript
+ - [Send to and receive messages from session enabled queues or subscriptions](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/servicebus/service-bus/samples/v7/javascript/session.js)
+ - [Continually read through all available sessions](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/servicebus/service-bus/samples/v7/javascript/advanced/sessionRoundRobin.js)
+ - [Use session state](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/servicebus/service-bus/samples/v7/javascript/advanced/sessionState.js)
+
+
[1]: ./media/message-sessions/sessions.png
service-bus-messaging Service Bus Dotnet Get Started With Queues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-dotnet-get-started-with-queues.md
Navigate to your Service Bus namespace in the Azure portal, and select **Delete*
See the following documentation and samples: -- [Abstract away infrastructure concerns with higher-level frameworks like NServiceBus](/azure/service-bus-messaging/build-message-driven-apps-nservicebus) - [Azure Service Bus client library for .NET - Readme](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/servicebus/Azure.Messaging.ServiceBus) - [Samples on GitHub](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/servicebus/Azure.Messaging.ServiceBus/samples) - [.NET API reference](/dotnet/api/azure.messaging.servicebus)
+- [Abstract away infrastructure concerns with higher-level frameworks like NServiceBus](/azure/service-bus-messaging/build-message-driven-apps-nservicebus)
## Next steps
service-fabric How To Managed Cluster Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-availability-zones.md
Requirements:
``` If you run into any problems, reach out to support for assistance.
+## Enable FastZonalUpdate on Service Fabric managed clusters (preview)
+Service Fabric managed clusters support faster cluster and application upgrades by reducing the maximum number of upgrade domains (UDs) per availability zone. The default configuration currently allows up to 15 UDs for a node type that spans multiple availability zones, and such a large number of UDs slows upgrade velocity. With the new configuration, the maximum number of UDs is reduced, which results in faster updates while keeping the safety of the upgrades intact.
+
+The update should be done via ARM template by setting the zonalUpdateMode property to "fast" and then modifying a node type attribute, such as adding a node to and then removing a node from each node type (see required steps 2 and 3 below). The Service Fabric managed cluster resource apiVersion should be 2022-10-01-preview or later.
+
+1. Modify the ARM template with the new property mentioned above.
+```json
+ "resources": [
+ {
+ "type": "Microsoft.ServiceFabric/managedClusters",
+ "apiVersion": "2022-10-01-preview",
+ '''
+ "properties": {
+ '''
+ "zonalResiliency": true,
+        "zonalUpdateMode": "fast",
+ ...
+ }
+ }]
+```
+2. Add a node to the node type in your cluster by following the procedure to [modify node type](how-to-managed-cluster-modify-node-type.md).
+
+3. Remove a node from the node type in your cluster by following the procedure to [modify node type](how-to-managed-cluster-modify-node-type.md).
+ [sf-architecture]: ./media/service-fabric-cross-availability-zones/sf-cross-az-topology.png [sf-architecture]: ./media/service-fabric-cross-availability-zones/sf-cross-az-topology.png [sf-multi-az-arch]: ./media/service-fabric-cross-availability-zones/sf-multi-az-topology.png
service-fabric Service Fabric Cluster Fabric Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-fabric-settings.md
The following is a list of Fabric settings that you can customize, organized by
|BodyChunkSize |Uint, default is 16384 |Dynamic| Gives the size of for the chunk in bytes used to read the body. | |CrlCheckingFlag|uint, default is 0x40000000 |Dynamic| Flags for application/service certificate chain validation; e.g. CRL checking 0x10000000 CERT_CHAIN_REVOCATION_CHECK_END_CERT 0x20000000 CERT_CHAIN_REVOCATION_CHECK_CHAIN 0x40000000 CERT_CHAIN_REVOCATION_CHECK_CHAIN_EXCLUDE_ROOT 0x80000000 CERT_CHAIN_REVOCATION_CHECK_CACHE_ONLY Setting to 0 disables CRL checking Full list of supported values is documented by dwFlags of CertGetCertificateChain: https://msdn.microsoft.com/library/windows/desktop/aa376078(v=vs.85).aspx | |DefaultHttpRequestTimeout |Time in seconds. default is 120 |Dynamic|Specify timespan in seconds. Gives the default request timeout for the http requests being processed in the http app gateway. |
-|ForwardClientCertificate|bool, default is FALSE|Dynamic|When set to false, reverse proxy will not request for the client certificate.When set to true, reverse proxy will request for the client certificate during the TLS handshake and forward the base64 encoded PEM format string to the service in a header named X-Client-Certificate.The service can fail the request with appropriate status code after inspecting the certificate data. If this is true and client does not present a certificate, reverse proxy will forward an empty header and let the service handle the case. Reverse proxy will act as a transparent layer. To learn more, see [Set up client certificate authentication](service-fabric-reverseproxy-configure-secure-communication.md#setting-up-client-certificate-authentication-through-the-reverse-proxy). |
+|ForwardClientCertificate|bool, default is FALSE|Dynamic|When set to false, reverse proxy won't request the client certificate. When set to true, reverse proxy will request the client certificate during the TLS handshake and forward the base64 encoded PEM format string to the service in a header named X-Client-Certificate. The service can fail the request with an appropriate status code after inspecting the certificate data. If this is true and the client doesn't present a certificate, reverse proxy will forward an empty header and let the service handle the case. Reverse proxy will act as a transparent layer. To learn more, see [Set up client certificate authentication](service-fabric-reverseproxy-configure-secure-communication.md#setting-up-client-certificate-authentication-through-the-reverse-proxy). |
|GatewayAuthCredentialType |string, default is "None" |Static| Indicates the type of security credentials to use at the http app gateway endpoint Valid values are None/X509. | |GatewayX509CertificateFindType |string, default is "FindByThumbprint" |Dynamic| Indicates how to search for certificate in the store specified by GatewayX509CertificateStoreName Supported value: FindByThumbprint; FindBySubjectName. | |GatewayX509CertificateFindValue | string, default is "" |Dynamic| Search filter value used to locate the http app gateway certificate. This certificate is configured on the https endpoint and can also be used to verify the identity of the app if needed by the services. FindValue is looked up first; and if that does not exist; FindValueSecondary is looked up. |
The following is a list of Fabric settings that you can customize, organized by
|ApplicationLogsFormatVersion |Int, default is 0 | Dynamic |Version for application logs format. Supported values are 0 and 1. Version 1 includes more fields from the ETW event record than version 0. | |AuditHttpRequests |Bool, default is false | Dynamic | Turn HTTP auditing on or off. The purpose of auditing is to see the activities that have been performed against the cluster; including who initiated the request. Note that this is a best attempt logging; and trace loss may occur. HTTP requests with "User" authentication is not recorded. | |CaptureHttpTelemetry|Bool, default is true | Dynamic | Turn HTTP telemetry on or off. The purpose of telemetry is for Service Fabric to be able to capture telemetry data to help plan future work and identify problem areas. Telemetry does not record any personal data or the request body. Telemetry captures all HTTP requests unless otherwise configured. |
-|ClusterId |String | Dynamic |The unique id of the cluster. This is generated when the cluster is created. |
+|ClusterId |String | Dynamic |The unique ID of the cluster. This is generated when the cluster is created. |
|ConsumerInstances |String | Dynamic |The list of DCA consumer instances. | |DiskFullSafetySpaceInMB |Int, default is 1024 | Dynamic |Remaining disk space in MB to protect from use by DCA. | |EnableCircularTraceSession |Bool, default is false | Static |Flag indicates whether circular trace sessions should be used. |
The following is a list of Fabric settings that you can customize, organized by
|ForwarderPoolStartPort|Int, default is 16700|Static|The start address for the forwarding pool that is used for recursive queries.| |InstanceCount|int, default is -1|Static|Default value is -1 which means that DnsService is running on every node. OneBox needs this to be set to 1 since DnsService uses well known port 53, so it cannot have multiple instances on the same machine.| |IsEnabled|bool, default is FALSE|Static|Enables/Disables DnsService. DnsService is disabled by default and this config needs to be set to enable it. |
-|PartitionPrefix|string, default is "--"|Static|Controls the partition prefix string value in DNS queries for partitioned services. The value : <ul><li>Should be RFC-compliant as it will be part of a DNS query.</li><li>Should not contain a dot, '.', as dot interferes with DNS suffix behavior.</li><li>Should not be longer than 5 characters.</li><li>Cannot be an empty string.</li><li>If the PartitionPrefix setting is overridden, then PartitionSuffix must be overridden, and vice-versa.</li></ul>For more information, see [Service Fabric DNS Service.](service-fabric-dnsservice.md).|
-|PartitionSuffix|string, default is ""|Static|Controls the partition suffix string value in DNS queries for partitioned services.The value : <ul><li>Should be RFC-compliant as it will be part of a DNS query.</li><li>Should not contain a dot, '.', as dot interferes with DNS suffix behavior.</li><li>Should not be longer than 5 characters.</li><li>If the PartitionPrefix setting is overridden, then PartitionSuffix must be overridden, and vice-versa.</li></ul>For more information, see [Service Fabric DNS Service.](service-fabric-dnsservice.md). |
+|PartitionPrefix|string, default is "--"|Static|Controls the partition prefix string value in DNS queries for partitioned services. The value: <ul><li>Should be RFC-compliant as it will be part of a DNS query.</li><li>Should not contain a dot, '.', as dot interferes with DNS suffix behavior.</li><li>Should not be longer than 5 characters.</li><li>Cannot be an empty string.</li><li>If the PartitionPrefix setting is overridden, then PartitionSuffix must be overridden, and vice-versa.</li></ul>For more information, see [Service Fabric DNS Service.](service-fabric-dnsservice.md).|
+|PartitionSuffix|string, default is ""|Static|Controls the partition suffix string value in DNS queries for partitioned services. The value: <ul><li>Should be RFC-compliant as it will be part of a DNS query.</li><li>Should not contain a dot, '.', as dot interferes with DNS suffix behavior.</li><li>Should not be longer than 5 characters.</li><li>If the PartitionPrefix setting is overridden, then PartitionSuffix must be overridden, and vice-versa.</li></ul>For more information, see [Service Fabric DNS Service.](service-fabric-dnsservice.md). |
|RecursiveQueryParallelMaxAttempts|Int, default is 0|Static|The number of times parallel queries will be attempted. Parallel queries are executed after the max attempts for serial queries have been exhausted.| |RecursiveQueryParallelTimeout|TimeSpan, default is Common::TimeSpan::FromSeconds(5)|Static|The timeout value in seconds for each attempted parallel query.|
-|RecursiveQuerySerialMaxAttempts|Int, default is 2|Static|The number of serial queries that will be attempted, at most. If this number is higher than the amount of forwarding DNS servers, querying will stop once all the servers have been attempted exactly once.|
+|RecursiveQuerySerialMaxAttempts|Int, default is 2|Static|The number of serial queries that will be attempted, at most. If this number is higher than the number of forwarding DNS servers, querying will stop once all the servers have been attempted exactly once.|
|RecursiveQuerySerialTimeout|TimeSpan, default is Common::TimeSpan::FromSeconds(5)|Static|The timeout value in seconds for each attempted serial query.| |TransientErrorMaxRetryCount|Int, default is 3|Static|Controls the number of times SF DNS will retry when a transient error occurs while calling SF APIs (e.g. when retrieving names and endpoints).| |TransientErrorRetryIntervalInMillis|Int, default is 0|Static|Sets the delay in milliseconds between retries for when SF DNS calls SF APIs.|
The following is a list of Fabric settings that you can customize, organized by
|ExpectedNodeDeactivationDuration|TimeSpan, default is Common::TimeSpan::FromSeconds(60.0 \* 30)|Dynamic|Specify timespan in seconds. This is the expected duration for a node to complete deactivation in. | |ExpectedNodeFabricUpgradeDuration|TimeSpan, default is Common::TimeSpan::FromSeconds(60.0 \* 30)|Dynamic|Specify timespan in seconds. This is the expected duration for a node to be upgraded during Windows Fabric upgrade. | |ExpectedReplicaUpgradeDuration|TimeSpan, default is Common::TimeSpan::FromSeconds(60.0 \* 30)|Dynamic|Specify timespan in seconds. This is the expected duration for all the replicas to be upgraded on a node during application upgrade. |
-|IgnoreReplicaRestartWaitDurationWhenBelowMinReplicaSetSize|bool, default is FALSE|Dynamic|If IgnoreReplicaRestartWaitDurationWhenBelowMinReplicaSetSize is set to:<br>- false : Windows Fabric will wait for fixed time specified in ReplicaRestartWaitDuration for a replica to come back up.<br>- true : Windows Fabric will wait for fixed time specified in ReplicaRestartWaitDuration for a replica to come back up if partition is above or at Min Replica Set Size. If partition is below Min Replica Set Size new replica will be created right away.|
+|IgnoreReplicaRestartWaitDurationWhenBelowMinReplicaSetSize|bool, default is FALSE|Dynamic|If IgnoreReplicaRestartWaitDurationWhenBelowMinReplicaSetSize is set to:<br>- false: Windows Fabric will wait for fixed time specified in ReplicaRestartWaitDuration for a replica to come back up.<br>- true: Windows Fabric will wait for fixed time specified in ReplicaRestartWaitDuration for a replica to come back up if partition is above or at Min Replica Set Size. If partition is below Min Replica Set Size new replica will be created right away.|
|IsSingletonReplicaMoveAllowedDuringUpgrade|bool, default is TRUE|Dynamic|If set to true; replicas with a target replica set size of 1 will be permitted to move during upgrade. | |MaxInstanceCloseDelayDurationInSeconds|uint, default is 1800|Dynamic|Maximum value of InstanceCloseDelay that can be configured to be used for FabricUpgrade/ApplicationUpgrade/NodeDeactivations | |MinReplicaSetSize|int, default is 3|Not Allowed|This is the minimum replica set size for the FM. If the number of active FM replicas drops below this value; the FM will reject changes to the cluster until at least the min number of replicas is recovered |
The following is a list of Fabric settings that you can customize, organized by
| **Parameter** | **Allowed Values** | **Upgrade Policy** | **Guidance or Short Description** | | | | | |
-|CompletedActionKeepDurationInSeconds | Int, default is 604800 |Static| This is approximately how long to keep actions that are in a terminal state. This also depends on StoredActionCleanupIntervalInSeconds; since the work to cleanup is only done on that interval. 604800 is 7 days. |
+|CompletedActionKeepDurationInSeconds | Int, default is 604800 |Static| This is approximately how long to keep actions that are in a terminal state. This also depends on StoredActionCleanupIntervalInSeconds; since the work to clean up is only done on that interval. 604800 is 7 days. |
|DataLossCheckPollIntervalInSeconds|int, default is 5|Static|This is the time between the checks the system performs while waiting for data loss to happen. The number of times the data loss number will be checked per internal iteration is DataLossCheckWaitDurationInSeconds/this. | |DataLossCheckWaitDurationInSeconds|int, default is 25|Static|The total amount of time; in seconds; that the system will wait for data loss to happen. This is internally used when the StartPartitionDataLossAsync() api is called. | |MinReplicaSetSize |Int, default is 0 |Static|The MinReplicaSetSize for FaultAnalysisService. |
The following is a list of Fabric settings that you can customize, organized by
|DiskSpaceHealthReportingIntervalWhenCloseToOutOfDiskSpace |TimeSpan, default is Common::TimeSpan::FromMinutes(5)|Dynamic|Specify timespan in seconds. The time interval between checking of disk space for reporting health event when disk is close to out of space. | |DiskSpaceHealthReportingIntervalWhenEnoughDiskSpace |TimeSpan, default is Common::TimeSpan::FromMinutes(15)|Dynamic|Specify timespan in seconds. The time interval between checking of disk space for reporting health event when there is enough space on disk. | |EnableImageStoreHealthReporting |bool, default is TRUE |Static|Config to determine whether file store service should report its health. |
-|FreeDiskSpaceNotificationSizeInKB|int64, default is 25\*1024 |Dynamic|The size of free disk space below which health warning may occur. Minimum value of this config and FreeDiskSpaceNotificationThresholdPercentage config are used to determine sending of the health warning. |
+|FreeDiskSpaceNotificationSizeInKB|int64, default is 25\*1024 |Dynamic|The size of free disk space below which health warning may occur. Minimum values of this config and FreeDiskSpaceNotificationThresholdPercentage config are used to determine sending of the health warning. |
|FreeDiskSpaceNotificationThresholdPercentage|double, default is 0.02 |Dynamic|The percentage of free disk space below which health warning may occur. Minimum value of this config and FreeDiskSpaceNotificationInMB config are used to determine sending of health warning. | |GenerateV1CommonNameAccount| bool, default is TRUE|Static|Specifies whether to generate an account with user name V1 generation algorithm. Starting with Service Fabric version 6.1; an account with v2 generation is always created. The V1 account is necessary for upgrades from/to versions that do not support V2 generation (prior to 6.1).| |MaxCopyOperationThreads | Uint, default is 0 |Dynamic| The maximum number of parallel files that secondary can copy from primary. '0' == number of cores. |
The following is a list of Fabric settings that you can customize, organized by
|DeploymentMaxRetryInterval| TimeSpan, default is Common::TimeSpan::FromSeconds(3600)|Dynamic| Specify timespan in seconds. Max retry interval for the deployment. On every continuous failure the retry interval is calculated as Min( DeploymentMaxRetryInterval; Continuous Failure Count * DeploymentRetryBackoffInterval) | |DeploymentRetryBackoffInterval| TimeSpan, default is Common::TimeSpan::FromSeconds(10)|Dynamic|Specify timespan in seconds. Back-off interval for the deployment failure. On every continuous deployment failure the system will retry the deployment for up to the MaxDeploymentFailureCount. The retry interval is a product of continuous deployment failure and the deployment backoff interval. | |DisableContainers|bool, default is FALSE|Static|Config for disabling containers - used instead of DisableContainerServiceStartOnContainerActivatorOpen which is deprecated config |
-|DisableDockerRequestRetry|bool, default is FALSE |Dynamic| By default SF communicates with DD (docker dameon) with a timeout of 'DockerRequestTimeout' for each http request sent to it. If DD does not responds within this time period; SF resends the request if top level operation still has remaining time. With hyperv container; DD sometimes take much more time to bring up the container or deactivate it. In such cases DD request times out from SF perspective and SF retries the operation. Sometimes this seems to adds more pressure on DD. This config allows to disable this retry and wait for DD to respond. |
+|DisableDockerRequestRetry|bool, default is FALSE |Dynamic| By default SF communicates with DD (Docker daemon) with a timeout of 'DockerRequestTimeout' for each http request sent to it. If DD doesn't respond within this time period; SF resends the request if the top level operation still has remaining time. With Hyper-V containers; DD sometimes takes much more time to bring up the container or deactivate it. In such cases the DD request times out from the SF perspective and SF retries the operation. Sometimes this seems to add more pressure on DD. This config allows you to disable this retry and wait for DD to respond. |
|DisableLivenessProbes | wstring, default is L"" | Static | Config to disable Liveness probes in cluster. You can specify any non-empty value for SF to disable probes. | |DisableReadinessProbes | wstring, default is L"" | Static | Config to disable Readiness probes in cluster. You can specify any non-empty value for SF to disable probes. | |DnsServerListTwoIps | Bool, default is FALSE | Static | This flags adds the local dns server twice to help alleviate intermittent resolve issues. |
The following is a list of Fabric settings that you can customize, organized by
| **Parameter** | **Allowed Values** | **Upgrade Policy** | **Guidance or Short Description** | | | | | | |AutomaticUnprovisionInterval|TimeSpan, default is Common::TimeSpan::FromMinutes(5)|Dynamic|Specify timespan in seconds. The cleanup interval for allowed for unregister application type during automatic application type cleanup.|
-|AzureStorageMaxConnections | Int, default is 5000 |Dynamic|The maximum number of concurrent connections to azure storage. |
+|AzureStorageMaxConnections | Int, default is 5000 |Dynamic|The maximum number of concurrent connections to Azure storage. |
|AzureStorageMaxWorkerThreads | Int, default is 25 |Dynamic|The maximum number of worker threads in parallel. | |AzureStorageOperationTimeout | Time in seconds, default is 6000 |Dynamic|Specify timespan in seconds. Time out for xstore operation to complete. | |CleanupApplicationPackageOnProvisionSuccess|bool, default is true |Dynamic|Enables or disables the automatic cleanup of application package on successful provision.
The following is a list of Fabric settings that you can customize, organized by
|PreferredLocationConstraintPriority | Int, default is 2| Dynamic|Determines the priority of preferred location constraint: 0: Hard; 1: Soft; 2: Optimization; negative: Ignore | |PreferredPrimaryDomainsConstraintPriority| Int, default is 1 | Dynamic| Determines the priority of preferred primary domain constraint: 0: Hard; 1: Soft; negative: Ignore | |PreferUpgradedUDs|bool,default is FALSE|Dynamic|Turns on and off logic which prefers moving to already upgraded UDs. Starting with SF 7.0, the default value for this parameter is changed from TRUE to FALSE.|
-|PreventTransientOvercommit | Bool, default is false | Dynamic|Determines should PLB immediately count on resources that will be freed up by the initiated moves. By default; PLB can initiate move out and move in on the same node which can create transient overcommit. Setting this parameter to true will prevent those kinds of overcommits and on-demand defrag (aka placementWithMove) will be disabled. |
+|PreventTransientOvercommit | Bool, default is false | Dynamic|Determines whether PLB should immediately count on resources that will be freed up by the initiated moves. By default; PLB can initiate move out and move in on the same node which can create transient overcommit. Setting this parameter to true will prevent those kinds of overcommits and on-demand defrag (also known as placementWithMove) will be disabled. |
|ScaleoutCountConstraintPriority | Int, default is 0 |Dynamic| Determines the priority of scaleout count constraint: 0: Hard; 1: Soft; negative: Ignore. | |SubclusteringEnabled|Bool, default is FALSE | Dynamic |Acknowledge subclustering when calculating standard deviation for balancing | |SubclusteringReportingPolicy| Int, default is 1 |Dynamic|Defines how and if the subclustering health reports are sent: 0: Do not report; 1: Warning; 2: OK |
The following is a list of Fabric settings that you can customize, organized by
## Security | **Parameter** | **Allowed Values** |**Upgrade Policy**| **Guidance or Short Description** | | | | | |
-|AADCertEndpointFormat|string, default is ""|Static|AAD Cert Endpoint Format, default Azure Commercial, specified for non-default environment such as Azure Government "https:\//login.microsoftonline.us/{0}/federationmetadata/2007-06/federationmetadata.xml" |
+|AADCertEndpointFormat|string, default is ""|Static|Azure Active Directory Cert Endpoint Format, default Azure Commercial, specified for non-default environment such as Azure Government "https:\//login.microsoftonline.us/{0}/federationmetadata/2007-06/federationmetadata.xml" |
|AADClientApplication|string, default is ""|Static|Native Client application name or ID representing Fabric Clients | |AADClusterApplication|string, default is ""|Static|Web API application name or ID representing the cluster |
-|AADLoginEndpoint|string, default is ""|Static|AAD Login Endpoint, default Azure Commercial, specified for non-default environment such as Azure Government "https:\//login.microsoftonline.us" |
+|AADLoginEndpoint|string, default is ""|Static|Azure Active Directory Login Endpoint, default Azure Commercial, specified for non-default environment such as Azure Government "https:\//login.microsoftonline.us" |
|AADTenantId|string, default is ""|Static|Tenant ID (GUID) | |AcceptExpiredPinnedClusterCertificate|bool, default is FALSE|Dynamic|Flag indicating whether to accept expired cluster certificates declared by thumbprint Applies only to cluster certificates; so as to keep the cluster alive. | |AdminClientCertThumbprints|string, default is ""|Dynamic|Thumbprints of certificates used by clients in admin role. It is a comma-separated name list. |
-|AADTokenEndpointFormat|string, default is ""|Static|AAD Token Endpoint, default Azure Commercial, specified for non-default environment such as Azure Government "https:\//login.microsoftonline.us/{0}" |
+|AADTokenEndpointFormat|string, default is ""|Static|Azure Active Directory Token Endpoint, default Azure Commercial, specified for non-default environment such as Azure Government "https:\//login.microsoftonline.us/{0}" |
|AdminClientClaims|string, default is ""|Dynamic|All possible claims expected from admin clients; the same format as ClientClaims; this list internally gets added to ClientClaims; so no need to also add the same entries to ClientClaims. | |AdminClientIdentities|string, default is ""|Dynamic|Windows identities of fabric clients in admin role; used to authorize privileged fabric operations. It is a comma-separated list; each entry is a domain account name or group name. For convenience; the account that runs fabric.exe is automatically assigned admin role; so is group ServiceFabricAdministrators. | |AppRunAsAccountGroupX509Folder|string, default is /home/sfuser/sfusercerts |Static|Folder where AppRunAsAccountGroup X509 certificates and private keys are located |
The following is a list of Fabric settings that you can customize, organized by
| **Parameter** | **Allowed Values** | **Upgrade Policy** | **Guidance or Short Description** | | | | | |
+|BlockAccessToWireServer|bool, default is FALSE| Static |Blocks access to ports of the WireServer endpoint from Docker containers deployed as Service Fabric applications. This parameter is supported for Service Fabric clusters deployed on Azure Virtual Machines, Windows and Linux, and defaults to 'false' (access is permitted).|
|ContainerNetworkName|string, default is ""| Static |The network name to use when setting up a container network.| |ContainerNetworkSetup|bool, default is FALSE (Linux) and default is TRUE (Windows)| Static |Whether to set up a container network.| |FabricDataRoot |String | Not Allowed |Service Fabric data root directory. Default for Azure is d:\svcfab (Only for Standalone Deployments)|
The following is a list of Fabric settings that you can customize, organized by
| **Parameter** | **Allowed Values** | **Upgrade Policy** | **Guidance or Short Description** | | | | | |
-|Providers |string, default is "DSTS" |Static|Comma separated list of token validation providers to enable (valid providers are: DSTS; AAD). Currently only a single provider can be enabled at any time. |
+|Providers |string, default is "DSTS" |Static|Comma separated list of token validation providers to enable (valid providers are: DSTS; Azure Active Directory). Currently only a single provider can be enabled at any time. |
## Trace/Etw
service-fabric Service Fabric Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-support.md
Report Azure Service Fabric issues at the [Service Fabric GitHub](https://github
The `azure-service-fabric` tag on [StackOverflow][stackoverflow] is used for asking general questions about how the platform works and how you may use it to accomplish certain tasks.
+## Service Fabric community Q & A schedule
+Join the community call on the following days to hear about new feature releases and key updates, and to get answers to your questions from the Service Fabric team.
+
+| Schedule |
+| |
+| March 30, 2023 |
+| May 25, 2023 |
+| July 27, 2023|
+| September 28, 2023|
+| January 25, 2024 |
+| March 28, 2024 |
+ ## Stay informed of updates and new releases <div class='icon is-large'>
site-recovery Site Recovery Retain Ip Azure Vm Failover https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-retain-ip-azure-vm-failover.md
Here's what the network architecture looks like before failover.
- **Subnet 1**: 10.1.1.0/24 - **Subnet 2**: 10.1.2.0/24 - **Subnet 3**: 10.1.3.0/24, utilizing an Azure virtual network with address space 10.1.0.0/16. This virtual network is named **Source VNet**
- - The secondary (target) region is Azure Southeast Asia:
+- The secondary (target) region is Azure Southeast Asia:
- Southeast Asia has a recovery VNet (**Recovery VNet**) identical to **Source VNet**. - VMs in East Asia are connected to an on-premises datacenter with Azure ExpressRoute or site-to-site VPN. - To reduce RTO, Company B provisions gateways on Recovery VNet in Azure Southeast Asia prior to failover.
spring-apps How To Bind Postgres https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-bind-postgres.md
Last updated 09/26/2022 -+ # Bind an Azure Database for PostgreSQL to your application in Azure Spring Apps
spring-apps How To Deploy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-deploy-powershell.md
ms.devlang: azurepowershell Last updated 2/15/2022-+ # Create and deploy applications by using PowerShell
spring-apps How To Deploy With Custom Container Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-deploy-with-custom-container-image.md
-+ Last updated 4/28/2022
spring-apps How To Elastic Diagnostic Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-elastic-diagnostic-settings.md
To configure diagnostics settings, use the following steps:
1. Enter a name for the setting, choose **Send to partner solution**, then select **Elastic** and an Elastic deployment where you want to send the logs. 1. Select **Save**. > [!NOTE] > There might be a gap of up to 15 minutes between when logs are emitted and when they appear in your Elastic deployment.
spring-apps How To Enterprise Deploy Static File https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-enterprise-deploy-static-file.md
Last updated 10/19/2022-+ # Deploy web static files
spring-apps Reference Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/reference-architecture.md
Title: Azure Spring Apps reference architecture -+ description: This reference architecture is a foundation using a typical enterprise hub and spoke design for the use of Azure Spring Apps.
spring-apps Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/security-controls-policy.md
-+ # Azure Policy Regulatory Compliance controls for Azure Spring Apps
spring-apps Tutorial Managed Identities Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/tutorial-managed-identities-functions.md
description: Use managed identity to invoke Azure Functions from an Azure Spring
-+ Last updated 07/10/2020
static-web-apps Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/custom-domain.md
The following table includes links to articles that demonstrate how to configure
<sup>1</sup> Some registrars like GoDaddy and Google don't support domain records that affect how you configure your apex domain. Consider using [Azure DNS](custom-domain-azure-dns.md) with these registrars to set up your apex domain. > [!NOTE]
-> Adding a custom domain to a [preview environment](preview-environments.md) is not supported. Unicode domains, including Pynocode domains and the `xn--` prefix are also not supported.
+> Adding a custom domain to a [preview environment](preview-environments.md) is not supported. Unicode domains, including Punycode domains and the `xn--` prefix are also not supported.
## About domains
storage Monitor Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/monitor-blob-storage.md
The following example shows how to read metric data on the metric supporting mul
## Analyzing logs
-You can access resource logs either as a blob in a storage account, as event data, or through Log Analytic queries. For information about how to find those logs, see [Azure resource logs](../../azure-monitor/essentials/resource-logs.md).
+You can access resource logs either as a blob in a storage account, as event data, or through Log Analytics queries. For information about how to find those logs, see [Azure resource logs](../../azure-monitor/essentials/resource-logs.md).
All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](../../azure-monitor/essentials/resource-logs-schema.md). The schema for Azure Blob Storage resource logs is found in [Azure Blob Storage monitoring data reference](monitor-blob-storage-reference.md).
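For example, here's a minimal sketch of pulling those resource logs out of a Log Analytics workspace with the azure-monitor-query package; the workspace ID is a placeholder, and the query assumes the logs are routed to the StorageBlobLogs table.

```python
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# Pull the last day of non-successful blob operations from the StorageBlobLogs table.
response = client.query_workspace(
    workspace_id="<your-log-analytics-workspace-id>",   # placeholder
    query="StorageBlobLogs | where StatusText != 'Success' | take 10",
    timespan=timedelta(days=1),
)

for table in response.tables:
    for row in table.rows:
        print(row)
```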
storage Network File System Protocol Support How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/network-file-system-protocol-support-how-to.md
Previously updated : 06/21/2021 Last updated : 02/14/2023
Currently, the only way to secure the data in your storage account is by using a
To secure the data in your account, see these recommendations: [Network security recommendations for Blob storage](security-recommendations.md#networking).
+> [!IMPORTANT]
+> The NFS 3.0 protocol uses ports 111 and 2048. If you're connecting from an on-premises network, make sure that your client allows outgoing communication through those ports. If you have granted access to specific VNets, make sure that any network security groups associated with those VNets don't contain security rules that block incoming communication through those ports.
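If you want a quick sanity check of that connectivity from a client, here's a hedged sketch using only the Python standard library; the host name is a placeholder for your storage account's blob endpoint, and a failed connection may also indicate an intermediate firewall rather than a VNet rule.

```python
import socket

HOST = "<storage-account>.blob.core.windows.net"   # placeholder endpoint

# The NFS 3.0 protocol needs outbound TCP access to ports 111 and 2048.
for port in (111, 2048):
    try:
        with socket.create_connection((HOST, port), timeout=5):
            print(f"port {port}: reachable")
    except OSError as err:
        print(f"port {port}: blocked or unreachable ({err})")
```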
+ ## Step 3: Create and configure a storage account To mount a container by using NFS 3.0, you must create a storage account. You can't enable existing accounts.
storage Network File System Protocol Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/network-file-system-protocol-support.md
Previously updated : 01/24/2023 Last updated : 02/14/2023
A client can connect over a public or a [private endpoint](../common/storage-pri
This can be done by using [VPN Gateway](../../vpn-gateway/vpn-gateway-about-vpngateways.md) or an [ExpressRoute gateway](../../expressroute/expressroute-howto-add-gateway-portal-resource-manager.md) along with [Gateway transit](/azure/architecture/reference-architectures/hybrid-networking/vnet-peering#gateway-transit). > [!IMPORTANT]
-> If you're connecting from an on-premises network, make sure that your client allows outgoing communication through ports 111 and 2048. The NFS 3.0 protocol uses these ports.
+> The NFS 3.0 protocol uses ports 111 and 2048. If you're connecting from an on-premises network, make sure that your client allows outgoing communication through those ports. If you have granted access to specific VNets, make sure that any network security groups associated with those VNets don't contain security rules that block incoming communication through those ports.
<a id="azure-storage-features-not-yet-supported"></a>
storage Snapshots Manage Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/snapshots-manage-dotnet.md
Title: Create and manage a blob snapshot in .NET
description: Learn how to use the .NET client library to create a read-only snapshot of a blob to back up blob data at a given moment in time. - + Last updated 08/27/2020
storage Soft Delete Blob Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/soft-delete-blob-overview.md
Previously updated : 02/06/2023 Last updated : 02/14/2023
Version 2017-07-29 and higher of the Azure Storage REST API support blob soft de
### How deletions are handled when soft delete is enabled
-When blob soft delete is enabled, deleting a blob marks that blob as soft-deleted. No snapshot is created. When the retention period expires, the soft-deleted blob is permanently deleted.
+When blob soft delete is enabled, deleting a blob marks that blob as soft-deleted. No snapshot is created. When the retention period expires, the soft-deleted blob is permanently deleted. In accounts that have a hierarchical namespace, the access control list of a blob is unaffected and remains intact if the blob is restored.
If a blob has snapshots, the blob can't be deleted unless the snapshots are also deleted. When you delete a blob and its snapshots, both the blob and snapshots are marked as soft-deleted. No new snapshots are created.
For premium storage accounts, soft-deleted snapshots don't count toward the per-
### Restoring soft-deleted objects
-You can restore soft-deleted blobs or directories (in a hierarchical namespace) by calling the [Undelete Blob](/rest/api/storageservices/undelete-blob) operation within the retention period. The **Undelete Blob** operation restores a blob and any soft-deleted snapshots associated with it. Any snapshots that were deleted during the retention period are restored.
+You can restore soft-deleted blobs or directories (in a hierarchical namespace) by calling the [Undelete Blob](/rest/api/storageservices/undelete-blob) operation within the retention period. The **Undelete Blob** operation restores a blob and any soft-deleted snapshots associated with it. Any snapshots that were deleted during the retention period are restored. In accounts that have a hierarchical namespace, the access control list of a blob is restored along with the blob.
In accounts that have a hierarchical namespace, the **Undelete Blob** operation can also be used to restore a soft-deleted directory and all its contents. If you rename a directory that contains soft-deleted blobs, those soft-deleted blobs become disconnected from the directory. If you want to restore those blobs, you'll have to revert the directory to its original name or create a separate directory that uses the original directory name. Otherwise, you'll receive an error when you attempt to restore those soft-deleted blobs. You also cannot restore a directory or a blob to a file path where a directory or blob of that name already exists. For example, if you delete a.txt (1) and upload a new file also named a.txt (2), you cannot restore the soft-deleted a.txt (1) until the active a.txt (2) has either been deleted or renamed. You cannot access the contents of a soft-deleted directory until the directory has been undeleted.
storage Storage Blob Event Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-event-overview.md
Title: Reacting to Azure Blob storage events
description: Use Azure Event Grid to subscribe and react to Blob storage events. Understand the event model, filtering events, and practices for consuming events. Previously updated : 06/13/2022 Last updated : 02/15/2023
First, subscribe an endpoint to an event. Then, when an event is triggered, the
See the [Blob storage events schema](../../event-grid/event-schema-blob-storage.md?toc=/azure/storage/blobs/toc.json) article to view:
-> [!div class="checklist"]
-> - A complete list of Blob storage events and how each event is triggered.
-> - An example of the data the Event Grid would send for each of these events.
-> - The purpose of each key value pair that appears in the data.
+- A complete list of Blob storage events and how each event is triggered.
+
+- An example of the data the Event Grid would send for each of these events.
+
+- The purpose of each key value pair that appears in the data.
## Filtering events
To match events from blobs created in specific container sharing a blob suffix,
## Practices for consuming events Applications that handle Blob storage events should follow a few recommended practices:
-> [!div class="checklist"]
-> - As multiple subscriptions can be configured to route events to the same event handler, it is important not to assume events are from a particular source, but to check the topic of the message to ensure that it comes from the storage account you are expecting.
-> - Similarly, check that the eventType is one you are prepared to process, and do not assume that all events you receive will be the types you expect.
-> - As messages can arrive after some delay, use the etag fields to understand if your information about objects is still up-to-date. To learn how to use the etag field, see [Managing concurrency in Blob storage](./concurrency-manage.md?toc=/azure/storage/blobs/toc.json#managing-concurrency-in-blob-storage).
-> - As messages can arrive out of order, use the sequencer fields to understand the order of events on any particular object. The sequencer field is a string value that represents the logical sequence of events for any particular blob name. You can use standard string comparison to understand the relative sequence of two events on the same blob name.
-> - Storage events guarantees at-least-once delivery to subscribers, which ensures that all messages are outputted. However due to retries between backend nodes and services or availability of subscriptions, duplicate messages may occur. To learn more about message delivery and retry, see [Event Grid message delivery and retry](../../event-grid/delivery-and-retry.md).
-> - Use the blobType field to understand what type of operations are allowed on the blob, and which client library types you should use to access the blob. Valid values are either `BlockBlob` or `PageBlob`.
-> - Use the url field with the `CloudBlockBlob` and `CloudAppendBlob` constructors to access the blob.
-> - Ignore fields you don't understand. This practice will help keep you resilient to new features that might be added in the future.
-> - If you want to ensure that the **Microsoft.Storage.BlobCreated** event is triggered only when a Block Blob is completely committed, filter the event for the `CopyBlob`, `PutBlob`, `PutBlockList` or `FlushWithClose` REST API calls. These API calls trigger the **Microsoft.Storage.BlobCreated** event only after data is fully committed to a Block Blob. To learn how to create a filter, see [Filter events for Event Grid](../../event-grid/how-to-filter-events.md).
+
+- As multiple subscriptions can be configured to route events to the same event handler, it is important not to assume events are from a particular source, but to check the topic of the message to ensure that it comes from the storage account you are expecting.
+
+- Similarly, check that the eventType is one you are prepared to process, and do not assume that all events you receive will be the types you expect.
+
+- There is no service level agreement around the time it takes for a message to arrive. It's not uncommon for messages to arrive anywhere from 30 minutes to two hours. As messages can arrive after some delay, use the etag fields to understand if your information about objects is still up-to-date. To learn how to use the etag field, see [Managing concurrency in Blob storage](./concurrency-manage.md?toc=/azure/storage/blobs/toc.json#managing-concurrency-in-blob-storage).
+
+- As messages can arrive out of order, use the sequencer fields to understand the order of events on any particular object. The sequencer field is a string value that represents the logical sequence of events for any particular blob name. You can use standard string comparison to understand the relative sequence of two events on the same blob name (see the sketch after this list for one way to compare sequencer values).
+
+- Storage events guarantee at-least-once delivery to subscribers, which ensures that all messages are delivered. However, due to retries between backend nodes and services, or the availability of subscriptions, duplicate messages may occur. To learn more about message delivery and retry, see [Event Grid message delivery and retry](../../event-grid/delivery-and-retry.md).
+
+- Use the blobType field to understand what type of operations are allowed on the blob, and which client library types you should use to access the blob. Valid values are either `BlockBlob` or `PageBlob`.
+
+- Use the url field with the `CloudBlockBlob` and `CloudAppendBlob` constructors to access the blob.
+
+- Ignore fields you don't understand. This practice will help keep you resilient to new features that might be added in the future.
+
+- If you want to ensure that the **Microsoft.Storage.BlobCreated** event is triggered only when a Block Blob is completely committed, filter the event for the `CopyBlob`, `PutBlob`, `PutBlockList` or `FlushWithClose` REST API calls. These API calls trigger the **Microsoft.Storage.BlobCreated** event only after data is fully committed to a Block Blob. To learn how to create a filter, see [Filter events for Event Grid](../../event-grid/how-to-filter-events.md).
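For example, a minimal C# sketch of a few of these checks (validating the topic and event type, and comparing sequencer values) might look like the following. The expected topic suffix, the `lastSequencerSeen` value, and the method name are placeholders for illustration; only the `topic`, `eventType`, and `data.sequencer` fields come from the Blob storage event schema.

```csharp
using System;
using System.Text.Json;

public static class BlobEventChecks
{
    // Placeholder: the resource ID suffix of the storage account you expect events from.
    private const string ExpectedTopicSuffix =
        "/providers/Microsoft.Storage/storageAccounts/mystorageaccount";

    // Returns true if this handler should process the event.
    // 'eventJson' is one event object from the Event Grid payload.
    // 'lastSequencerSeen' is the sequencer of the last event processed for the same blob name (or null).
    public static bool ShouldProcess(JsonElement eventJson, string lastSequencerSeen)
    {
        // Don't assume the event came from the storage account you expect: check the topic.
        string topic = eventJson.GetProperty("topic").GetString() ?? string.Empty;
        if (!topic.EndsWith(ExpectedTopicSuffix, StringComparison.OrdinalIgnoreCase))
            return false;

        // Check that the event type is one you're prepared to handle.
        string eventType = eventJson.GetProperty("eventType").GetString() ?? string.Empty;
        if (eventType != "Microsoft.Storage.BlobCreated")
            return false;

        // Events can arrive out of order: use an ordinal string comparison of the
        // sequencer values to ignore stale or duplicate events for the same blob name.
        string sequencer = eventJson.GetProperty("data").GetProperty("sequencer").GetString() ?? string.Empty;
        if (lastSequencerSeen != null && string.CompareOrdinal(sequencer, lastSequencerSeen) <= 0)
            return false;

        return true;
    }
}
```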
## Feature support
storage Storage Blobs Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-introduction.md
To learn how to create a storage account, see [Create a storage account](../comm
A container organizes a set of blobs, similar to a directory in a file system. A storage account can include an unlimited number of containers, and a container can store an unlimited number of blobs.
-A container name must be a valid DNS name, as it forms part of the unique URI used to address the container or its blobs. Follow these rules when naming a container:
+A container name must be a valid DNS name, as it forms part of the unique URI (Uniform Resource Identifier) used to address the container or its blobs. Follow these rules when naming a container:
- Container names can be between 3 and 63 characters long. - Container names must start with a letter or number, and can contain only lowercase letters, numbers, and the dash (-) character.
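As an illustration only, here's a small C# check covering just the rules listed above (length, first character, and allowed characters). It's a sketch, not the service's validation: the full naming rules include additional restrictions (for example, on consecutive dashes) that aren't checked here.

```csharp
using System;
using System.Text.RegularExpressions;

public static class ContainerNameCheck
{
    // Checks only the rules listed above: 3-63 characters, starting with a lowercase
    // letter or number, and containing only lowercase letters, numbers, and dashes.
    public static bool MatchesListedRules(string name) =>
        Regex.IsMatch(name, "^[a-z0-9][a-z0-9-]{2,62}$");

    public static void Main()
    {
        Console.WriteLine(MatchesListedRules("my-container-01")); // True
        Console.WriteLine(MatchesListedRules("My_Container"));    // False (uppercase and underscore)
    }
}
```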
storage Storage Blobs List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-list.md
Previously updated : 03/28/2022- Last updated : 02/14/2023 ms.devlang: csharp
Blob name: FolderA/FolderB/FolderC/blob3.txt
### List blob versions or snapshots
-To list blob versions or snapshots, specify the [BlobStates](/dotnet/api/azure.storage.blobs.models.blobstates) parameter with the **Version** or **Snapshot** field. Versions and snapshots are listed from oldest to newest.
+To list blob versions or snapshots, specify the [BlobStates](/dotnet/api/azure.storage.blobs.models.blobstates) parameter with the **Version** or **Snapshot** field. Versions and snapshots are listed from oldest to newest.
The following code example shows how to list blob versions.
-```csharp
-private static void ListBlobVersions(BlobContainerClient blobContainerClient,
- string blobName)
-{
- // Call the listing operation, specifying that blob versions are returned.
- // Use the blob name as the prefix.
- var blobVersions = blobContainerClient.GetBlobs
- (BlobTraits.None, BlobStates.Version, prefix: blobName)
- .OrderByDescending(version => version.VersionId);
-
- // Construct the URI for each blob version.
- foreach (var version in blobVersions)
- {
- BlobUriBuilder blobUriBuilder = new BlobUriBuilder(blobContainerClient.Uri)
- {
- BlobName = version.Name,
- VersionId = version.VersionId
- };
-
- if ((bool)version.IsLatestVersion.GetValueOrDefault())
- {
- Console.WriteLine("Current version: {0}", blobUriBuilder);
- }
- else
- {
- Console.WriteLine("Previous version: {0}", blobUriBuilder);
- }
- }
-}
-```
## Resources
storage Versioning Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/versioning-enable.md
Previously updated : 01/25/2023 Last updated : 02/14/2023
To list a blob's versions in the Azure portal:
:::image type="content" source="media/versioning-enable/portal-list-blob-versions.png" alt-text="Screenshot showing how to list blob versions in the Azure portal":::
+1. Toggle the **Show deleted versions** button to display soft-deleted versions. If blob soft delete is enabled for the storage account, then any soft-deleted versions that are still within the soft-delete retention interval will appear in the list.
+
+ :::image type="content" source="media/versioning-enable/portal-list-deleted-versions.png" alt-text="Screenshot showing how to list soft-deleted versions in Azure portal.":::
+ # [PowerShell](#tab/powershell) To list a blob's versions with PowerShell, call the [Get-AzStorageBlob](/powershell/module/az.storage/get-azstorageblob) command with the `-IncludeVersion` parameter:
N/A
-## Modify a blob to trigger a new version
-
-The following code example shows how to trigger the creation of a new version with the Azure Storage client library for .NET, version [12.5.1](https://www.nuget.org/packages/Azure.Storage.Blobs/12.5.1) or later. Before running this example, make sure you have enabled versioning for your storage account.
-
-The example creates a block blob, and then updates the blob's metadata. Updating the blob's metadata triggers the creation of a new version. The example retrieves the initial version and the current version, and shows that only the current version includes the metadata.
--
-## List blob versions with .NET
-
-To list blob versions or snapshots with the .NET v12 client library, specify the [BlobStates](/dotnet/api/azure.storage.blobs.models.blobstates) parameter with the **Version** field.
-
-The following code example shows how to list blobs version with the Azure Storage client library for .NET, version [12.5.1](https://www.nuget.org/packages/Azure.Storage.Blobs/12.5.1) or later. Before running this example, make sure you have enabled versioning for your storage account.
-- ## Next steps - [Blob versioning](versioning-overview.md)
storage Versioning Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/versioning-overview.md
Title: Blob versioning
-description: Blob storage versioning automatically maintains previous versions of an object and identifies them with timestamps. You can restore a previous version of a blob to recover your data if it is erroneously modified or deleted.
+description: Blob storage versioning automatically maintains previous versions of an object and identifies them with timestamps. You can restore a previous version of a blob to recover your data if it's erroneously modified or deleted.
Previously updated : 01/25/2023 Last updated : 02/14/2023
# Blob versioning
-You can enable Blob storage versioning to automatically maintain previous versions of an object. When blob versioning is enabled, you can access earlier versions of a blob to recover your data if it is modified or deleted.
+You can enable Blob storage versioning to automatically maintain previous versions of an object. When blob versioning is enabled, you can access earlier versions of a blob to recover your data if it's modified or deleted.
## Recommended data protection configuration Blob versioning is part of a comprehensive data protection strategy for blob data. For optimal protection for your blob data, Microsoft recommends enabling all of the following data protection features: -- Blob versioning, to automatically maintain previous versions of a blob. When blob versioning is enabled, you can restore an earlier version of a blob to recover your data if it is erroneously modified or deleted. To learn how to enable blob versioning, see [Enable and manage blob versioning](versioning-enable.md).
+- Blob versioning, to automatically maintain previous versions of a blob. When blob versioning is enabled, you can restore an earlier version of a blob to recover your data if it's erroneously modified or deleted. To learn how to enable blob versioning, see [Enable and manage blob versioning](versioning-enable.md).
- Container soft delete, to restore a container that has been deleted. To learn how to enable container soft delete, see [Enable and manage soft delete for containers](soft-delete-container-enable.md). - Blob soft delete, to restore a blob, snapshot, or version that has been deleted. To learn how to enable blob soft delete, see [Enable and manage soft delete for blobs](soft-delete-blob-enable.md).
A version captures the state of a blob at a given point in time. Each version is
A version ID can identify the current version or a previous version. A blob can have only one current version at a time.
-When you create a new blob, a single version exists, and that version is the current version. When you modify an existing blob, the current version becomes a previous version. A new version is created to capture the updated state, and that new version is the current version. When you delete a blob, the current version of the blob becomes a previous version, and there is no longer a current version. Any previous versions of the blob persist.
+When you create a new blob, a single version exists, and that version is the current version. When you modify an existing blob, the current version becomes a previous version. A new version is created to capture the updated state, and that new version is the current version. When you delete a blob, the current version of the blob becomes a previous version, and there's no longer a current version. Any previous versions of the blob persist.
The following diagram shows how versions are created on write operations, and how a previous version may be promoted to be the current version: :::image type="content" source="media/versioning-overview/blob-versioning-diagram.png" alt-text="Diagram showing how blob versioning works":::
-Blob versions are immutable. You cannot modify the content or metadata of an existing blob version.
+Blob versions are immutable. You can't modify the content or metadata of an existing blob version.
Having a large number of versions per blob can increase the latency for blob listing operations. Microsoft recommends maintaining fewer than 1000 versions per blob. You can use lifecycle management to automatically delete old versions. For more information about lifecycle management, see [Optimize costs by automating Azure Blob Storage access tiers](./lifecycle-management-overview.md).
-Blob versioning is available for standard general-purpose v2, premium block blob, and legacy Blob storage accounts. Storage accounts with a hierarchical namespace enabled for use with Azure Data Lake Storage Gen2 are not currently supported.
+Blob versioning is available for standard general-purpose v2, premium block blob, and legacy Blob storage accounts. Storage accounts with a hierarchical namespace enabled for use with Azure Data Lake Storage Gen2 aren't currently supported.
Version 2019-10-10 and higher of the Azure Storage REST API supports blob versioning.
The following diagram shows how write operations affect blob versions. For simpl
> [!NOTE] > A blob that was created prior to versioning being enabled for the storage account does not have a version ID. When that blob is modified, the modified blob becomes the current version, and a version is created to save the blob's state before the update. The version is assigned a version ID that is its creation time.
-When blob versioning is enabled for a storage account, all write operations on block blobs trigger the creation of a new version, with the exception of the [Put Block](/rest/api/storageservices/put-block) operation.
+When blob versioning is enabled for a storage account, all write operations on block blobs trigger the creation of a new version, except for the [Put Block](/rest/api/storageservices/put-block) operation.
-For page blobs and append blobs, only a subset of write operations trigger the creation of a version. These operations include:
+For page blobs and append blobs, only a subset of write operations triggers the creation of a version. These operations include:
- [Put Blob](/rest/api/storageservices/put-blob) - [Put Block List](/rest/api/storageservices/put-block-list) - [Set Blob Metadata](/rest/api/storageservices/set-blob-metadata) - [Copy Blob](/rest/api/storageservices/copy-blob)
-The following operations do not trigger the creation of a new version. To capture changes from those operations, take a manual snapshot:
+The following operations don't trigger the creation of a new version. To capture changes from those operations, take a manual snapshot:
- [Put Page](/rest/api/storageservices/put-page) (page blob) - [Append Block](/rest/api/storageservices/append-block) (append blob)
-All versions of a blob must be of the same blob type. If a blob has previous versions, you cannot overwrite a blob of one type with another type unless you first delete the blob and all of its versions.
+All versions of a blob must be of the same blob type. If a blob has previous versions, you can't overwrite a blob of one type with another type unless you first delete the blob and all of its versions.
### Versioning on delete operations
-When you call the [Delete Blob](/rest/api/storageservices/delete-blob) operation without specifying a version ID, the current version becomes a previous version, and there is no longer a current version. All existing previous versions of the blob are preserved.
+When you call the [Delete Blob](/rest/api/storageservices/delete-blob) operation without specifying a version ID, the current version becomes a previous version, and there's no longer a current version. All existing previous versions of the blob are preserved.
The following diagram shows the effect of a delete operation on a versioned blob:
To automate the process of moving block blobs to the appropriate tier, use blob
To learn how to enable or disable blob versioning, see [Enable and manage blob versioning](versioning-enable.md).
-Disabling blob versioning does not delete existing blobs, versions, or snapshots. When you turn off blob versioning, any existing versions remain accessible in your storage account. No new versions are subsequently created.
+Disabling blob versioning doesn't delete existing blobs, versions, or snapshots. When you turn off blob versioning, any existing versions remain accessible in your storage account. No new versions are subsequently created.
-After versioning is disabled, modifying the current version creates a blob that is not a version. All subsequent updates to the blob will overwrite its data without saving the previous state. All existing versions persist as previous versions.
+After versioning is disabled, modifying the current version creates a blob that isn't a version. All subsequent updates to the blob overwrite its data without saving the previous state. All existing versions persist as previous versions.
You can read or delete versions using the version ID after versioning is disabled. You can also list a blob's versions after versioning is disabled. Object replication relies on blob versioning. Before you can disable blob versioning, you must delete any object replication policies on the account. For more information about object replication, see [Object replication for block blobs](object-replication-overview.md).
-The following diagram shows how modifying a blob after versioning is disabled creates a blob that is not versioned. Any existing versions associated with the blob persist.
+The following diagram shows how modifying a blob after versioning is disabled creates a blob that isn't versioned. Any existing versions associated with the blob persist.
## Blob versioning and soft delete
-Blob versioning and blob soft delete are part of the recommended data protection configuration for storage accounts. For more information about Microsoft's recommendations for data protection, see [Recommended data protection configuration](#recommended-data-protection-configuration) in this article, as well as [Data protection overview](data-protection-overview.md).
+Blob versioning and blob soft delete are part of the recommended data protection configuration for storage accounts. For more information about Microsoft's recommendations for data protection, see [Recommended data protection configuration](#recommended-data-protection-configuration) in this article, and [Data protection overview](data-protection-overview.md).
### Overwriting a blob
-If blob versioning and blob soft delete are both enabled for a storage account, then overwriting a blob automatically creates a new version. The new version is not soft-deleted and is not removed when the soft-delete retention period expires. No soft-deleted snapshots are created.
+If blob versioning and blob soft delete are both enabled for a storage account, then overwriting a blob automatically creates a new version. The new version isn't soft-deleted and isn't removed when the soft-delete retention period expires. No soft-deleted snapshots are created.
### Deleting a blob or version
-If versioning and soft delete are both enabled for a storage account, then when you delete a blob, the current version of the blob becomes a previous version. No new version is created and no soft-deleted snapshots are created. The soft delete retention period is not in effect for the deleted blob.
+If versioning and soft delete are both enabled for a storage account, then when you delete a blob, the current version of the blob becomes a previous version. No new version is created and no soft-deleted snapshots are created. The soft delete retention period isn't in effect for the deleted blob.
-Soft delete offers additional protection for deleting blob versions. When you delete a previous version of the blob, that version is soft-deleted. The soft-deleted version is preserved until the soft delete retention period elapses, at which point it is permanently deleted.
+Soft delete offers additional protection for deleting blob versions. When you delete a previous version of the blob, that version is soft-deleted. The soft-deleted version is preserved until the soft delete retention period elapses, at which point it's permanently deleted.
To delete a previous version of a blob, call the **Delete Blob** operation and specify the version ID.
The following diagram shows what happens when you delete a blob or a blob versio
### Restoring a soft-deleted version
-You can use the [Undelete Blob](/rest/api/storageservices/undelete-blob) operation to restore soft-deleted versions during the soft delete retention period. The **Undelete Blob** operation always restores all soft-deleted versions of the blob. It is not possible to restore only a single soft-deleted version.
+You can use the [Undelete Blob](/rest/api/storageservices/undelete-blob) operation to restore soft-deleted versions during the soft delete retention period. The **Undelete Blob** operation always restores all soft-deleted versions of the blob. It isn't possible to restore only a single soft-deleted version.
-Restoring soft-deleted versions with the **Undelete Blob** operation does not promote any version to be the current version. To restore the current version, first restore all soft-deleted versions, and then use the [Copy Blob](/rest/api/storageservices/copy-blob) operation to copy a previous version to a new current version.
+Restoring soft-deleted versions with the **Undelete Blob** operation doesn't promote any version to be the current version. To restore the current version, first restore all soft-deleted versions, and then use the [Copy Blob](/rest/api/storageservices/copy-blob) operation to copy a previous version to a new current version.
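As a rough .NET sketch of that flow (assuming versioning and soft delete are both enabled; the method name and client objects are placeholders, and the copy may require additional authorization, such as a SAS on the source URI, depending on how your requests are authorized):

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

public static class RestoreVersionSketch
{
    // Restores a blob's soft-deleted versions, then copies the newest previous
    // version over the blob to make it the current version again.
    public static async Task RestorePreviousVersionAsync(
        BlobContainerClient containerClient, string blobName)
    {
        BlobClient blobClient = containerClient.GetBlobClient(blobName);

        // Undelete restores all soft-deleted versions of the blob, but it
        // doesn't promote any of them to be the current version.
        await blobClient.UndeleteAsync();

        // List the blob's versions and pick the newest one that isn't current.
        // Version IDs are timestamps, so ordinal ordering reflects creation order.
        var previousVersion = containerClient
            .GetBlobs(states: BlobStates.Version, prefix: blobName)
            .Where(item => item.Name == blobName && item.IsLatestVersion != true)
            .OrderBy(item => item.VersionId, StringComparer.Ordinal)
            .LastOrDefault();

        if (previousVersion == null)
            return;

        // Copy the chosen version to a new current version.
        Uri versionUri = blobClient.WithVersion(previousVersion.VersionId).Uri;
        await blobClient.StartCopyFromUriAsync(versionUri);
    }
}
```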
The following diagram shows how to restore soft-deleted blob versions with the **Undelete Blob** operation, and how to restore the current version of the blob with the **Copy Blob** operation.
A blob snapshot is a read-only copy of a blob that's taken at a specific point i
### Snapshot a blob when versioning is enabled
-Although it is not recommended, you can take a snapshot of a blob that is also versioned. If you cannot update your application to stop taking snapshots of blobs when you enable versioning, your application can support both snapshots and versions.
+Although it isn't recommended, you can take a snapshot of a blob that is also versioned. If you can't update your application to stop taking snapshots of blobs when you enable versioning, your application can support both snapshots and versions.
When you take a snapshot of a versioned blob, a new version is created at the same time that the snapshot is created. A new current version is also created when a snapshot is taken.
The following table shows the permission required on a SAS to delete a blob vers
## Pricing and billing
-Enabling blob versioning can result in additional data storage charges to your account. When designing your application, it is important to be aware of how these charges might accrue so that you can minimize costs.
+Enabling blob versioning can result in additional data storage charges to your account. When designing your application, it's important to be aware of how these charges might accrue so that you can minimize costs.
Blob versions, like blob snapshots, are billed at the same rate as active data. How versions are billed depends on whether you have explicitly set the tier for the current or previous versions of a blob (or snapshots). For more information about blob tiers, see [Hot, Cool, and Archive access tiers for blob data](access-tiers-overview.md).
-If you have not changed a blob or version's tier, then you are billed for unique blocks of data across that blob, its versions, and any snapshots it may have. For more information, see [Billing when the blob tier has not been explicitly set](#billing-when-the-blob-tier-has-not-been-explicitly-set).
+If you haven't changed a blob or version's tier, then you're billed for unique blocks of data across that blob, its versions, and any snapshots it may have. For more information, see [Billing when the blob tier has not been explicitly set](#billing-when-the-blob-tier-has-not-been-explicitly-set).
-If you have changed a blob or version's tier, then you are billed for the entire object, regardless of whether the blob and version are eventually in the same tier again. For more information, see [Billing when the blob tier has been explicitly set](#billing-when-the-blob-tier-has-been-explicitly-set).
+If you've changed a blob or version's tier, then you're billed for the entire object, regardless of whether the blob and version are eventually in the same tier again. For more information, see [Billing when the blob tier has been explicitly set](#billing-when-the-blob-tier-has-been-explicitly-set).
> [!NOTE] > Enabling versioning for data that is frequently overwritten may result in increased storage capacity charges and increased latency during listing operations. To mitigate these concerns, store frequently overwritten data in a separate storage account with versioning disabled.
For more information about billing details for blob snapshots, see [Blob snapsho
### Billing when the blob tier has not been explicitly set
-If you have not explicitly set the blob tier for any versions of a blob, then you are charged for unique blocks or pages across all versions, and any snapshots it may have. Data that is shared across blob versions is charged only once. When a blob is updated, then data in the new current version diverges from the data stored in previous versions, and the unique data is charged per block or page.
+If you haven't explicitly set the blob tier for any versions of a blob, then you're charged for unique blocks or pages across all versions, and any snapshots it may have. Data that is shared across blob versions is charged only once. When a blob is updated, then data in the new current version diverges from the data stored in previous versions, and the unique data is charged per block or page.
-When you replace a block within a block blob, that block is subsequently charged as a unique block. This is true even if the block has the same block ID and the same data as it has in the previous version. After the block is committed again, it diverges from its counterpart in the previous version, and you will be charged for its data. The same holds true for a page in a page blob that's updated with identical data.
+When you replace a block within a block blob, that block is subsequently charged as a unique block. This is true even if the block has the same block ID and the same data as it has in the previous version. After the block is committed again, it diverges from its counterpart in the previous version, and you'll be charged for its data. The same holds true for a page in a page blob that's updated with identical data.
-Blob storage does not have a means to determine whether two blocks contain identical data. Each block that is uploaded and committed is treated as unique, even if it has the same data and the same block ID. Because charges accrue for unique blocks, it's important to keep in mind that updating a blob when versioning is enabled will result in additional unique blocks and additional charges.
+Blob storage doesn't have a means to determine whether two blocks contain identical data. Each block that is uploaded and committed is treated as unique, even if it has the same data and the same block ID. Because charges accrue for unique blocks, it's important to keep in mind that updating a blob when versioning is enabled will result in additional unique blocks and additional charges.
When blob versioning is enabled, call update operations on block blobs so that they update the least possible number of blocks. The write operations that permit fine-grained control over blocks are [Put Block](/rest/api/storageservices/put-block) and [Put Block List](/rest/api/storageservices/put-block-list). The [Put Blob](/rest/api/storageservices/put-blob) operation, on the other hand, replaces the entire contents of a blob and so may lead to additional charges.
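For example, here's a hedged .NET sketch of replacing a single block and recommitting the existing block list (names and parameters are placeholders; it assumes the blob was originally uploaded as explicitly staged and committed blocks):

```csharp
using System.IO;
using System.Linq;
using Azure.Storage.Blobs.Models;
using Azure.Storage.Blobs.Specialized;

public static class UpdateSingleBlockSketch
{
    // Replaces one block of an existing block blob, then recommits the same
    // ordered block list so that only the restaged block becomes new unique data.
    public static void ReplaceBlock(
        BlockBlobClient blockBlobClient, string base64BlockIdToReplace, byte[] newContent)
    {
        // Get the currently committed block IDs, in order.
        BlockList blockList = blockBlobClient.GetBlockList(BlockListTypes.Committed).Value;
        string[] blockIds = blockList.CommittedBlocks
            .Select(block => block.Name)
            .ToArray();

        // Stage new data for just the block that changed (Put Block).
        using var stream = new MemoryStream(newContent);
        blockBlobClient.StageBlock(base64BlockIdToReplace, stream);

        // Recommit the full ordered list (Put Block List). Blocks that weren't
        // restaged keep their committed data; the restaged block uses the new data.
        blockBlobClient.CommitBlockList(blockIds);
    }
}
```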
-The following scenarios demonstrate how charges accrue for a block blob and its versions when the blob tier has not been explicitly set.
+The following scenarios demonstrate how charges accrue for a block blob and its versions when the blob tier hasn't been explicitly set.
#### Scenario 1
-In scenario 1, the blob has a previous version. The blob has not been updated since the version was created, so charges are incurred only for unique blocks 1, 2, and 3.
+In scenario 1, the blob has a previous version. The blob hasn't been updated since the version was created, so charges are incurred only for unique blocks 1, 2, and 3.
![Diagram 1 showing billing for unique blocks in base blob and previous version.](./media/versioning-overview/versions-billing-scenario-1.png) #### Scenario 2
-In scenario 2, one block (block 3 in the diagram) in the blob has been updated. Even though the updated block contains the same data and the same ID, it is not the same as block 3 in the previous version. As a result, the account is charged for four blocks.
+In scenario 2, one block (block 3 in the diagram) in the blob has been updated. Even though the updated block contains the same data and the same ID, it isn't the same as block 3 in the previous version. As a result, the account is charged for four blocks.
![Diagram 2 showing billing for unique blocks in base blob and previous version.](./media/versioning-overview/versions-billing-scenario-2.png) #### Scenario 3
-In scenario 3, the blob has been updated, but the version has not. Block 3 was replaced with block 4 in the current blob, but the previous version still reflects block 3. As a result, the account is charged for four blocks.
+In scenario 3, the blob has been updated, but the version hasn't. Block 3 was replaced with block 4 in the current blob, but the previous version still reflects block 3. As a result, the account is charged for four blocks.
![Diagram 3 showing billing for unique blocks in base blob and previous version.](./media/versioning-overview/versions-billing-scenario-3.png) #### Scenario 4
-In scenario 4, the current version has been completely updated and contains none of its original blocks. As a result, the account is charged for all eight unique blocks &mdash; four in the current version, and four combined in the two previous versions. This scenario can occur if you are writing to a blob with the [Put Blob](/rest/api/storageservices/put-blob) operation, because it replaces the entire contents of the blob.
+In scenario 4, the current version has been completely updated and contains none of its original blocks. As a result, the account is charged for all eight unique blocks: four in the current version, and four combined in the two previous versions. This scenario can occur if you're writing to a blob with the [Put Blob](/rest/api/storageservices/put-blob) operation, because it replaces the entire contents of the blob.
![Diagram 4 showing billing for unique blocks in base blob and previous version.](./media/versioning-overview/versions-billing-scenario-4.png) ### Billing when the blob tier has been explicitly set
-If you have explicitly set the blob tier for a blob or version (or snapshot), then you are charged for the full content length of the object in the new tier, regardless of whether it shares blocks with an object in the original tier. You are also charged for the full content length of the oldest version in the original tier. Any other previous versions or snapshots that remain in the original tier are charged for unique blocks that they may share, as described in [Billing when the blob tier has not been explicitly set](#billing-when-the-blob-tier-has-not-been-explicitly-set).
+If you have explicitly set the blob tier for a blob or version (or snapshot), then you're charged for the full content length of the object in the new tier, regardless of whether it shares blocks with an object in the original tier. You're also charged for the full content length of the oldest version in the original tier. Any other previous versions or snapshots that remain in the original tier are charged for unique blocks that they may share, as described in [Billing when the blob tier has not been explicitly set](#billing-when-the-blob-tier-has-not-been-explicitly-set).
#### Moving a blob to a new tier
-The following table describes the billing behavior for a blob or version when it is moved to a new tier.
+The following table describes the billing behavior for a blob or version when it's moved to a new tier.
-| When blob tier is set… | Then you are billed for... |
+| When blob tier is set… | Then you're billed for... |
|-|-| | Explicitly on a version, whether current or previous | The full content length of that version. Versions that don't have an explicitly set tier are billed only for unique blocks.<sup>1</sup> | | To archive | The full content length of all versions and snapshots.<sup>1</sup>. |
-<sup>1</sup>If there are other previous versions or snapshots that have not been moved from their original tier, those versions or snapshots are charged based on the number of unique blocks they contain, as described in [Billing when the blob tier has not been explicitly set](#billing-when-the-blob-tier-has-not-been-explicitly-set).
+<sup>1</sup>If there are other previous versions or snapshots that haven't been moved from their original tier, those versions or snapshots are charged based on the number of unique blocks they contain, as described in [Billing when the blob tier has not been explicitly set](#billing-when-the-blob-tier-has-not-been-explicitly-set).
The following diagram illustrates how objects are billed when a versioned blob is moved to a different tier. :::image type="content" source="media/versioning-overview/versioning-billing-tiers.png" alt-text="Diagram showing how objects are billed when a versioned blob is explicitly tiered.":::
-Explicitly setting the tier for a blob, version, or snapshot cannot be undone. If you move a blob to a new tier and then move it back to its original tier, you are charged for the full content length of the object even if it shares blocks with other objects in the original tier.
+Explicitly setting the tier for a blob, version, or snapshot can't be undone. If you move a blob to a new tier and then move it back to its original tier, you're charged for the full content length of the object even if it shares blocks with other objects in the original tier.
Operations that explicitly set the tier of a blob, version, or snapshot include:
storage Versions Manage Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/versions-manage-dotnet.md
+
+ Title: Create and list blob versions in .NET
+
+description: Learn how to use the .NET client library to create a previous version of a blob.
+++++ Last updated : 02/14/2023+
+ms.devlang: csharp
+++
+# Create and list blob versions in .NET
+
+Blob versioning automatically creates a previous version of a blob anytime it's modified or deleted. When blob versioning is enabled, you can restore an earlier version of a blob to recover your data if it's erroneously modified or deleted.
+
+For optimal data protection, Microsoft recommends enabling both blob versioning and blob soft delete for your storage account. For more information, see [Blob versioning](versioning-overview.md) and [Soft delete for blobs](soft-delete-blob-overview.md).
+
+## Modify a blob to trigger a new version
+
+The following code example shows how to trigger the creation of a new version with the Azure Storage client library for .NET, version [12.5.1](https://www.nuget.org/packages/Azure.Storage.Blobs/12.5.1) or later. Before running this example, make sure you have enabled versioning for your storage account.
+
+The example creates a block blob, and then updates the blob's metadata. Updating the blob's metadata triggers the creation of a new version. The example retrieves the initial version and the current version, and shows that only the current version includes the metadata.
++
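A minimal sketch of that pattern (not the article's include sample; the container client, blob name, and metadata values are placeholders):

```csharp
using System.Collections.Generic;
using System.IO;
using System.Text;
using System.Threading.Tasks;
using Azure.Storage.Blobs;

public static class CreateBlobVersionSketch
{
    // Uploads a block blob, then updates its metadata. With versioning enabled,
    // the metadata update creates a new current version; the initial state is
    // kept as a previous version that doesn't include the metadata.
    public static async Task CreateVersionAsync(BlobContainerClient containerClient)
    {
        BlobClient blobClient = containerClient.GetBlobClient("sample-blob.txt");

        // Upload the blob. This creates the initial (current) version.
        using var content = new MemoryStream(Encoding.UTF8.GetBytes("sample data"));
        await blobClient.UploadAsync(content, overwrite: true);

        // Updating metadata triggers the creation of a new version.
        var metadata = new Dictionary<string, string> { { "category", "sample" } };
        await blobClient.SetMetadataAsync(metadata);
    }
}
```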
+## List blob versions
+
+To list blob versions, specify the [BlobStates](/dotnet/api/azure.storage.blobs.models.blobstates) parameter with the **Version** field. Versions are listed from oldest to newest.
+
+The following code example shows how to list blob versions.
++
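A minimal sketch of such a listing (again, placeholder names rather than the article's include sample):

```csharp
using System;
using System.Linq;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

public static class ListBlobVersionsSketch
{
    // Lists the versions of a single blob, using the blob name as the prefix.
    public static void ListVersions(BlobContainerClient containerClient, string blobName)
    {
        var versions = containerClient
            .GetBlobs(BlobTraits.None, BlobStates.Version, prefix: blobName)
            .Where(item => item.Name == blobName);

        // Versions are returned from oldest to newest.
        foreach (BlobItem item in versions)
        {
            string label = item.IsLatestVersion == true ? "current" : "previous";
            Console.WriteLine($"{item.Name} ({label}): version {item.VersionId}");
        }
    }
}
```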
+## See also
+
+- [Blob versioning](versioning-overview.md)
+- [Enable and manage blob versioning](versioning-enable.md)
stream-analytics Cicd Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/cicd-autoscale.md
Title: Configure autoscale settings for a Stream Analytics job using ASA CI/CD tool
-description: This article shows how to configure autoscale settings for a Stream Analytics job using ASA CI/CD tool.
+ Title: Configure autoscale settings for a Stream Analytics job by using the CI/CD tool
+description: This article shows how to configure autoscale settings for a Stream Analytics job by using the CI/CD tool.
Last updated 02/08/2023
-# Configure autoscale settings for a Stream Analytics job using ASA CI/CD tool
+# Configure autoscale settings for a Stream Analytics job by using the CI/CD tool
-Streaming units (SUs) represent the computing resources that are allocated to execute a Stream Analytics job. The higher the number of SUs, the more CPU and memory resources are allocated to your job. The autoscale feature dynamically adjust SUs based on your rule definitions. You can configure autoscale settings for your Stream Analytics job in the Azure portal or using ASA CI/CD tool in your local machine.
+Streaming units (SUs) represent the computing resources that are allocated to run an Azure Stream Analytics job. The higher the number of SUs, the more CPU and memory resources are allocated to your job.
-This article explains how you can use ASA CI/CD tool to configure autoscale settings for Stream Analytics jobs. If you want to learn more about autoscaling jobs in the Azure portal, see [Autoscale streaming units (Preview)](stream-analytics-autoscale.md).
+The autoscale feature dynamically adjusts SUs based on your rule definitions. You can configure autoscale settings for your Stream Analytics job in the Azure portal or by using the Stream Analytics continuous integration and continuous delivery (CI/CD) tool on your local machine.
-The ASA CI/CD tool allows you to specify the maximum number of streaming units and configure set of rules for autoscaling your jobs. Then it determines to add SUs to handle increases in load or to reduce the number of SUs when computing resources are sitting idle.
+This article explains how you can use the Stream Analytics CI/CD tool to configure autoscale settings for Stream Analytics jobs. If you want to learn more about autoscaling jobs in the Azure portal, see [Autoscale streaming units (preview)](stream-analytics-autoscale.md).
-Examples of an autoscale setting:
-- If the maximum number of SUs is set to 12, it increases SUs when the average SU% utilization of the job over the last 2 minutes goes above 75%.
+The Stream Analytics CI/CD tool allows you to specify the maximum number of streaming units and configure a set of rules for autoscaling your jobs. Then it determines whether to add SUs (to handle increases in load) or reduce the number of SUs (when computing resources are sitting idle).
+
+Here's an example of an autoscale setting:
+
+- If the maximum number of SUs is set to 12, increase SUs when the average SU utilization of the job over the last 2 minutes goes above 75 percent.
## Prerequisites-- A Stream Analytics project in the local machine. If don't have one, follow this [guide](quick-create-visual-studio-code.md) to create one. -- Or you have a running ASA job in Azure.
-## How to configure autoscale settings?
+To complete the steps in this article, you need either:
+
+- A Stream Analytics project on the local machine. If you don't have one, follow [this guide](quick-create-visual-studio-code.md) to create one.
+- A running Stream Analytics job in Azure.
-### Scenario 1: configure for a local Stream Analytics project
+## Configure autoscale settings
-If you have a working Stream Analytics project in the local machine, follow the steps to configure autoscale settings:
+### Scenario 1: Configure settings for a local Stream Analytics project
+
+If you have a working Stream Analytics project on the local machine, follow these steps to configure autoscale settings:
+
+1. Open your Stream Analytics project in Visual Studio Code.
+2. On the **Terminal** panel, run the following command to install the Stream Analytics CI/CD tool:
-1. Open your Stream Analytics project in Visual Studio Code.
-2. On the **Terminal** panel, run the command to install ASA CI/CD tool.
```powershell npm install -g azure-streamanalytics-cicd ```
-
- Here's the list of command supported for `azure-streamanalytics-cicd`:
+
+ Here's the list of supported commands for `azure-streamanalytics-cicd`:
|Command |Description | |||
- |build |Generate standard ARM template for the given Azure Stream Analytics Visual Studio Code project.|
- |localrun |Run locally for the given Azure Stream Analytics Visual Studio Code project.|
- |test |Test for given Azure Stream Analytics Visual Studio Code project.|
- |addtestcase |Add test cases for the given Azure Stream Analytics Visual Studio Code project.|
- |autoscale |Generate autoscale setting ARM template file.|
- |help |Display more information on a specific command.|
-
-3. Build project.
+ |`build` |Generate a standard Azure Resource Manager template (ARM template) for a Stream Analytics project in Visual Studio Code.|
+ |`localrun` |Run locally for a Stream Analytics project in Visual Studio Code.|
+ |`test` |Test for a Stream Analytics project in Visual Studio Code.|
+ |`addtestcase` |Add test cases for a Stream Analytics project in Visual Studio Code.|
+ |`autoscale` |Generate an ARM template file for an autoscale setting.|
+ |`help` |Display more information on a specific command.|
+
+3. Build the project:
+ ```powershell azure-streamanalytics-cicd build --v2 --project ./asaproj.json --outputPath ./Deploy ```
- If the project is built successfully, you see 2 JSON files created under **Deploy** folder. One is the ARM template file and the other one is the parameter file.
+ If you build the project successfully, two JSON files are created under the *Deploy* folder. One is the ARM template file, and the other is the parameter file.
- ![Screenshot showing the files generated after building project.](./media/cicd-autoscale/build-project.png)
+ ![Screenshot that shows the files generated after building a project.](./media/cicd-autoscale/build-project.png)
> [!NOTE]
- > It is highly recommended to use the `--v2` option for the updated ARM template schema, which has fewer parameters yet retains the same functionality as the previous version. The old ARM template will be deprecated in the future, and only templates created using `build --v2` will receive updates or bug fixes.
+ > We highly recommend that you use the `--v2` option for the updated ARM template schema. The updated schema has fewer parameters yet retains the same functionality as the previous version.
+ >
+ > The old ARM template will be deprecated in the future. After that, only templates that were created via `build --v2` will receive updates or bug fixes.
-4. Configure autoscale setting.
- You need to add parameter keys and values using `azure-streamanalytics-cicd autoscale` command.
+4. Configure the autoscale setting. Add parameter keys and values by using the `azure-streamanalytics-cicd autoscale` command.
|Parameter key | Value | Example| |-|-|--|
- |capacity| maximum SUs (1, 3, 6 or multiples of 6 up to 396)|12|
- |metrics | metrics used for autoscale rules | ProcessCPUUsagePercentage ResourceUtilization|
- |targetJobName| project name| ClickStream-Filter|
- |outputPath| output path for ARM templates | ./Deploy|
-
- Example:
+ |`capacity`| Maximum SUs (1, 3, 6, or multiples of 6 up to 396)|`12`|
+ |`metrics` | Metrics used for autoscale rules | `ProcessCPUUsagePercentage` `ResourceUtilization`|
+ |`targetJobName`| Project name| `ClickStream-Filter`|
+ |`outputPath`| Output path for ARM templates | `./Deploy`|
+
+ Here's an example:
+ ```powershell azure-streamanalytics-cicd autoscale --capacity 12 --metrics ProcessCPUUsagePercentage ResourceUtilization --targetJobName ClickStream-Filter --outputPath ./Deploy ```
- If the autoscale setting is configured successfully, you see 2 JSON files created under **Deploy** folder. One is the ARM template file and the other one is the parameter file.
-
- ![Screenshot showing the autoscale files generated after configuring autoscale.](./media/cicd-autoscale/configure-autoscale.png)
-
- Here's the list of metrics you can use for defining autoscale rules:
-
- |Metrics | Description |
- |-|-|
- |ProcessCPUUsagePercentage | CPU % Utilization |
- |ResourceUtilization | SU/Memory % Utilization |
- |OutputWatermarkDelaySeconds | Watermark Delay |
- |InputEventsSourcesBacklogged | Backlogged Input Events |
- |DroppedOrAdjustedEvents | Out of order Events |
- |Errors | Runtime Errors |
- |InputEventBytes | Input Event Bytes |
- |LateInputEvents | Late Input Events |
- |InputEvents | Input Events |
- |EarlyInputEvents | Early Input Events |
- |InputEventsSourcesPerSecond | Input Sources Received |
- |OutputEvents | Output Events |
- |AMLCalloutRequests | Function Requests |
- |AMLCalloutFailedRequests | Failed Function Requests |
- |AMLCalloutInputEvents | Function Events |
- |ConversionErrors | Data Conversion Errors |
- |DeserializationError | Input Deserialization Errors |
-
- The default value for all metric threshold is **70**. If you want to set the metric threshold to another number, open **\*.AutoscaleSettingTemplate.parameters.json** file and change the **Threshold** value.
-
- ![Screenshot showing how to set metric threshold in parameter file.](./media/cicd-autoscale/set-metric-threshold.png)
-
- To learn more about defining autoscale rules, visit [here](https://learn.microsoft.com/azure/azure-monitor/autoscale/autoscale-understanding-settings).
-
-5. Deploy to Azure
- 1. Connect to Azure account.
+ If you configure the autoscale setting successfully, two JSON files are created under the *Deploy* folder. One is the ARM template file, and the other is the parameter file.
+
+ ![Screenshot that shows autoscale files generated after configuration of autoscale.](./media/cicd-autoscale/configure-autoscale.png)
+
+ Here's the list of metrics that you can use to define autoscale rules:
+
+ |Metric | Description |
+ ||-|
+ |`ProcessCPUUsagePercentage` | CPU utilization percentage |
+ |`ResourceUtilization` | SU or memory utilization percentage |
+ |`OutputWatermarkDelaySeconds` | Watermark delay |
+ |`InputEventsSourcesBacklogged` | Backlogged input events |
+ |`DroppedOrAdjustedEvents` | Out-of-order events |
+ |`Errors` | Runtime errors |
+ |`InputEventBytes` | Input event bytes |
+ |`LateInputEvents` | Late input events |
+ |`InputEvents` | Input events |
+ |`EarlyInputEvents` | Early input events |
+ |`InputEventsSourcesPerSecond` | Input sources received |
+ |`OutputEvents` | Output events |
+ |`AMLCalloutRequests` | Function requests |
+ |`AMLCalloutFailedRequests` | Failed function requests |
+ |`AMLCalloutInputEvents` | Function events |
+ |`ConversionErrors` | Data conversion errors |
+ |`DeserializationError` | Input deserialization error |
+
+ The default value for all metric thresholds is `70`. If you want to set the metric threshold to another number, open the *\*.AutoscaleSettingTemplate.parameters.json* file and change the `Threshold` value.
+
+ ![Screenshot that shows how to set the metric threshold in a parameter file.](./media/cicd-autoscale/set-metric-threshold.png)
+
+ To learn more about defining autoscale rules, see [Understand autoscale settings](/azure/azure-monitor/autoscale/autoscale-understanding-settings).
+
+5. Deploy to Azure.
+
+ 1. Connect to your Azure account:
+ ```powershell # Connect to Azure Connect-AzAccount
- # Set Azure subscription.
+ # Set the Azure subscription
Set-AzContext [SubscriptionID/SubscriptionName] ```
- 1. Deploy your Stream Analytics project.
+
+ 1. Deploy your Stream Analytics project:
+ ```powershell $templateFile = ".\Deploy\ClickStream-Filter.JobTemplate.json" $parameterFile = ".\Deploy\ClickStream-Filter.JobTemplate.parameters.json"
If you have a working Stream Analytics project in the local machine, follow the
-TemplateFile $templateFile ` -TemplateParameterFile $parameterFile ```
- 1. Deploy your autoscale settings.
+
+ 1. Deploy your autoscale settings:
+ ```powershell $templateFile = ".\Deploy\ClickStream-Filter.AutoscaleSettingTemplate.json" $parameterFile = ".\Deploy\ClickStream-Filter.AutoscaleSettingTemplate.parameters.json"
If you have a working Stream Analytics project in the local machine, follow the
-TemplateParameterFile $parameterFile ```
-Once your project is deployed successfully, you can view the autoscale settings in Azure portal.
+After you deploy your project successfully, you can view the autoscale settings in the Azure portal.
+### Scenario 2: Configure settings for a running Stream Analytics job in Azure
-### Scenario 2: Configure for a running ASA job in Azure
+If you have a Stream Analytics job running in Azure, you can use the Stream Analytics CI/CD tool in PowerShell to configure autoscale settings.
-If you have a Stream Analytics job running in Azure, you can use ASA CI/CD tool in the PowerShell to configure autoscale settings.
+Run the following command. Replace `$jobResourceId` with the resource ID of your Stream Analytics job.
-Replace **$jobResourceId** with your Stream Analytics job resource ID and run this command:
```powershell
azure-streamanalytics-cicd autoscale --capacity 12 --metrics ProcessCPUUsagePercentage ResourceUtilization --targetJobResourceId $jobResourceId --outputPath ./Deploy
```
-If configure successfully, you see ARM template and parameter files created in the current directory.
+If you configure the settings successfully, ARM template and parameter files are created in the current directory.
+
+Then you can deploy the autoscale settings to Azure by following the deployment steps in scenario 1.
-Then you can deploy the autoscale settings to Azure by following the Deployment steps in scenario 1.
+## Get help
-## Help
+For more information about autoscale settings, run this command in PowerShell:
-For more information about autoscale settings, run this command in PowerShell:
```powershell
azure-streamanalytics-cicd autoscale --help
```
-If you have any issues about the ASA CI/CD tool, you can report it [here](https://github.com/microsoft/vscode-asa/issues).
+If you have any problems with the Stream Analytics CI/CD tool, you can report them in [GitHub](https://github.com/microsoft/vscode-asa/issues).
stream-analytics Stream Analytics Build An Iot Solution Using Stream Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-build-an-iot-solution-using-stream-analytics.md
Title: Build an IoT solution by using Azure Stream Analytics description: Getting-started tutorial for the Stream Analytics IoT solution of a tollbooth scenario -+ Previously updated : 12/06/2018 Last updated : 02/15/2023
## Introduction In this solution, you learn how to use Azure Stream Analytics to get real-time insights from your data. Developers can easily combine streams of data, such as click-streams, logs, and device-generated events, with historical records or reference data to derive business insights. As a fully managed, real-time stream computation service that's hosted in Microsoft Azure, Azure Stream Analytics provides built-in resiliency, low latency, and scalability to get you up and running in minutes.
-After completing this solution, you are able to:
+After completing this solution, you're able to:
* Familiarize yourself with the Azure Stream Analytics portal. * Configure and deploy a streaming job.
You need the following prerequisites to complete this solution:
* An [Azure subscription](https://azure.microsoft.com/pricing/free-trial/) ## Scenario introduction: "Hello, Toll!"
-A toll station is a common phenomenon. You encounter them on many expressways, bridges, and tunnels across the world. Each toll station has multiple toll booths. At manual booths, you stop to pay the toll to an attendant. At automated booths, a sensor on top of each booth scans an RFID card that's affixed to the windshield of your vehicle as you pass the toll booth. It is easy to visualize the passage of vehicles through these toll stations as an event stream over which interesting operations can be performed.
+A toll station is a common phenomenon. You encounter them on many expressways, bridges, and tunnels across the world. Each toll station has multiple toll booths. At manual booths, you stop to pay the toll to an attendant. At automated booths, a sensor on top of each booth scans an RFID card that's affixed to the windshield of your vehicle as you pass the toll booth. It's easy to visualize the passage of vehicles through these toll stations as an event stream over which interesting operations can be performed.
![Picture of cars at toll booths](media/stream-analytics-build-an-iot-solution-using-stream-analytics/cars-in-toll-booth.jpg)
A toll station is a common phenomenon. You encounter them on many expressways, b
This solution works with two streams of data. Sensors installed in the entrance and exit of the toll stations produce the first stream. The second stream is a static lookup dataset that has vehicle registration data. ### Entry data stream
-The entry data stream contains information about cars as they enter toll stations. The exit data events are live streamed into an Event Hub queue from a Web App included in the sample app.
+The entry data stream contains information about cars as they enter toll stations. The entry data events are live streamed into an event hub from a Web App included in the sample app.
+```
| TollID | EntryTime | LicensePlate | State | Make | Model | VehicleType | VehicleWeight | Toll | Tag |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 |2014-09-10 12:01:00.000 |JNB 7001 |NY |Honda |CRV |1 |0 |7 | |
The entry data stream contains information about cars as they enter toll station
| 2 |2014-09-10 12:03:00.000 |XYZ 1003 |CT |Toyota |Corolla |1 |0 |4 | |
| 1 |2014-09-10 12:03:00.000 |BNJ 1007 |NY |Honda |CRV |1 |0 |5 |789123456 |
| 2 |2014-09-10 12:05:00.000 |CDE 1007 |NJ |Toyota |4x4 |1 |0 |6 |321987654 |
+```
-Here is a short description of the columns:
+Here's a short description of the columns:
| Column | Description |
| --- | --- |
Here is a short description of the columns:
| Tag |The e-Tag on the automobile that automates payment; blank where the payment was done manually |

### Exit data stream
-The exit data stream contains information about cars leaving the toll station. The exit data events are live streamed into an Event Hub queue from a Web App included in the sample app.
+The exit data stream contains information about cars leaving the toll station. The exit data events are live streamed into an event hub from a Web App included in the sample app.
| **TollId** | **ExitTime** | **LicensePlate** |
| --- | --- | --- |
The exit data stream contains information about cars leaving the toll station. T
| 1 |2014-09-10T12:08:00.0000000Z |BNJ 1007 |
| 2 |2014-09-10T12:07:00.0000000Z |CDE 1007 |
-Here is a short description of the columns:
+Here's a short description of the columns:
| Column | Description |
| --- | --- |
The solution uses a static snapshot of a commercial vehicle registration databas
| SNY 7188 |592133890 |0 |
| ELH 9896 |678427724 |1 |
-Here is a short description of the columns:
+Here's a short description of the columns:
| Column | Description |
| --- | --- |
Here is a short description of the columns:
| Expired |The registration status of the vehicle: 0 if vehicle registration is active, 1 if registration is expired |

## Set up the environment for Azure Stream Analytics
-To complete this solution, you need a Microsoft Azure subscription. If you do not have an Azure account, you can [request a free trial version](https://azure.microsoft.com/pricing/free-trial/).
+To complete this solution, you need a Microsoft Azure subscription. If you don't have an Azure account, you can [request a free trial version](https://azure.microsoft.com/pricing/free-trial/).
Be sure to follow the steps in the "Clean up your Azure account" section at the end of this article so that you can make the best use of your Azure credit.
There are several resources that can easily be deployed in a resource group toge
5. Select an Azure location.
-6. Specify an **Interval** as a number of seconds. This value is used in the sample web app, for how frequently to send data into Event Hub.
+6. Specify an **Interval** as a number of seconds. The sample web app uses this value to determine how frequently to send data to the event hub.
7. **Check** to agree to the terms and conditions.
There are several resources that can easily be deployed in a resource group toge
- One Azure Cosmos DB Account - One Azure Stream Analytics Job - One Azure Storage Account
- - One Azure Event Hub
+ - One Azure event hub
- Two Web Apps ## Examine the sample TollApp job
-1. Starting from the resource group in the previous section, select the Stream Analytics streaming job starting with the name **tollapp** (name contains random characters for uniqueness).
+1. Starting from the resource group in the previous section, select the Stream Analytics streaming job starting with the name `tollapp` (name contains random characters for uniqueness).
2. On the **Overview** page of the job, notice the **Query** box to view the query syntax.
There are several resources that can easily be deployed in a resource group toge
As you can see, Azure Stream Analytics uses a query language that's like SQL and adds a few extensions to specify time-related aspects of the query. For more details, read about [Time Management](/stream-analytics-query/time-management-azure-stream-analytics) and [Windowing](/stream-analytics-query/windowing-azure-stream-analytics) constructs used in the query. 3. Examine the Inputs of the TollApp sample job. Only the EntryStream input is used in the current query.
- - **EntryStream** input is an Event Hub connection that queues data representing each time a car enters a tollbooth on the highway. A web app that is part of the sample is creating the events, and that data is queued in this Event Hub. Note that this input is queried in the FROM clause of the streaming query.
- - **ExitStream** input is an Event Hub connection that queues data representing each time a car exits a tollbooth on the highway. This streaming input is used in later variations of the query syntax.
+ - **EntryStream** input is an event hub connection that queues data representing each time a car enters a tollbooth on the highway. A web app that is part of the sample is creating the events, and that data is queued in this event hub. Note that this input is queried in the FROM clause of the streaming query.
+ - **ExitStream** input is an event hub connection that queues data representing each time a car exits a tollbooth on the highway. This streaming input is used in later variations of the query syntax.
- **Registration** input is an Azure Blob storage connection, pointing to a static registration.json file, used for lookups as needed. This reference data input is used in later variations of the query syntax.
4. Examine the Outputs of the TollApp sample job.
Follow these steps to start the streaming job:
4. Expand the **tollAppDatabase** > **tollAppCollection** > **Documents**.
-5. In the list of ids, several docs are shown once the output is available.
+5. In the list of IDs, several docs are shown once the output is available.
-6. Select each id to review the JSON document. Notice each tollid, windowend time, and the count of cars from that window.
+6. Select each ID to review the JSON document. Notice the `tollid`, the window end time, and the count of cars for that window.
-7. After an additional three minutes, another set of four documents is available, one document per tollid.
+7. After an additional three minutes, another set of four documents is available, one document per `tollid`.
## Report total time for each car
AND DATEDIFF (minute, EntryStream, ExitStream ) BETWEEN 0 AND 15
### Review the total time in the output

Repeat the steps in the preceding section to review the Azure Cosmos DB output data from the streaming job. Review the latest JSON documents.
-For example, this document shows an example car with a certain license plate, the entrytime and exit time, and the DATEDIFF calculated durationinminutes field showing the toll booth duration as two minutes:
+For example, this document shows an example car with a certain license plate, its entry and exit times, and the DATEDIFF-calculated `durationinminutes` field showing the toll booth duration as two minutes:
```JSON
{
    "tollid": 4,
To scale up the streaming job to more streaming units:
4. Slide the **Streaming units** slider from 1 to 6. Streaming units define the amount of compute power that the job can receive. Select **Save**.
-5. **Start** the streaming job to demonstrate the additional scale. Azure Stream Analytics distributes work across more compute resources and achieve better throughput, partitioning the work across resources using the column designated in the PARTITION BY clause.
+5. **Start** the streaming job to demonstrate the additional scale. Azure Stream Analytics distributes work across more compute resources and achieves better throughput, partitioning the work across resources using the column designated in the PARTITION BY clause.
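
As a rough illustration of partitioned query syntax (a hedged sketch, not the TollApp sample's exact query; the input name, output name, and window size are assumptions):

```sql
-- Hypothetical partitioned aggregation: EntryStream, CosmosDB, and the 3-minute window are illustrative.
SELECT TollId, System.Timestamp() AS WindowEnd, COUNT(*) AS Count
INTO CosmosDB
FROM EntryStream TIMESTAMP BY EntryTime
PARTITION BY TollId
GROUP BY TumblingWindow(minute, 3), TollId
```

Because the partition column also appears in the GROUP BY clause, each partition can be aggregated independently on its own compute resources.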
## Monitor the job The **MONITOR** area contains statistics about the running job. First-time configuration is needed to use the storage account in the same region (name toll like the rest of this document).
You can access **Activity Logs** from the job dashboard **Settings** area as wel
3. Select **Delete resource group**. Type the name of the resource group to confirm deletion. ## Conclusion
-This solution introduced you to the Azure Stream Analytics service. It demonstrated how to configure inputs and outputs for the Stream Analytics job. Using the Toll Data scenario, the solution explained common types of problems that arise in the space of data in motion and how they can be solved with simple SQL-like queries in Azure Stream Analytics. The solution described SQL extension constructs for working with temporal data. It showed how to join data streams, how to enrich the data stream with static reference data, and how to scale out a query to achieve higher throughput.
+This solution introduced you to the Azure Stream Analytics service. It demonstrated how to configure inputs and outputs for the Stream Analytics job. By using the Toll Data scenario, the solution explained common types of problems that arise in the space of data in motion and how they can be solved with simple SQL-like queries in Azure Stream Analytics. The solution described SQL extension constructs for working with temporal data. It showed how to join data streams, how to enrich the data stream with static reference data, and how to scale out a query to achieve higher throughput.
-Although this solution provides a good introduction, it is not complete by any means. You can find more query patterns using the SAQL language at [Query examples for common Stream Analytics usage patterns](stream-analytics-stream-analytics-query-patterns.md).
+Although this solution provides a good introduction, it isn't complete by any means. You can find more query patterns using the SAQL language at [Query examples for common Stream Analytics usage patterns](stream-analytics-stream-analytics-query-patterns.md).
stream-analytics Stream Analytics Custom Path Patterns Blob Storage Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-custom-path-patterns-blob-storage-output.md
Title: Azure Stream Analytics custom blob output partitioning description: This article describes the custom DateTime path patterns and the custom field or attributes features for blob storage output from Azure Stream Analytics jobs. -+ Previously updated : 05/30/2021 Last updated : 02/15/2023
Custom field or input attributes improve downstream data-processing and reportin
### Partition key options
-The partition key, or column name, used to partition input data may contain any character that is accepted for [blob names](/rest/api/storageservices/Naming-and-Referencing-Containers--Blobs--and-Metadata). It is not possible to use nested fields as a partition key unless used in conjunction with aliases, but you can use certain characters to create a hierarchy of files. For example, you can use the following query to create a column that combines data from two other columns to make a unique partition key.
+The partition key, or column name, used to partition input data may contain any character that is accepted for [blob names](/rest/api/storageservices/Naming-and-Referencing-Containers--Blobs--and-Metadata). It isn't possible to use nested fields as a partition key unless used in conjunction with aliases, but you can use certain characters to create a hierarchy of files. For example, you can use the following query to create a column that combines data from two other columns to make a unique partition key.
```sql
SELECT name, id, CONCAT(name, "/", id) AS nameid
```
-The partition key must be NVARCHAR(MAX), BIGINT, FLOAT, or BIT (1.2 compatibility level or higher). DateTime, Array, and Records types are not supported, but could be used as partition keys if they are converted to Strings. For more information, see [Azure Stream Analytics Data types](/stream-analytics-query/data-types-azure-stream-analytics).
+The partition key must be NVARCHAR(MAX), BIGINT, FLOAT, or BIT (1.2 compatibility level or higher). DateTime, Array, and Records types aren't supported, but could be used as partition keys if they're converted to Strings. For more information, see [Azure Stream Analytics Data types](/stream-analytics-query/data-types-azure-stream-analytics).
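
For instance, here's a minimal sketch of converting a DateTime column to a string so that its alias can serve as the partition key; the column and alias names are illustrative, not part of any sample:

```sql
-- Hypothetical columns: eventTime is a DateTime, so it's cast to NVARCHAR(MAX); the alias can then
-- be referenced as {eventTimeKey} in the blob output path pattern.
SELECT
    deviceId,
    CAST(eventTime AS nvarchar(max)) AS eventTimeKey
INTO blobOutput
FROM input
```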
### Example
Suppose a job takes input data from live user sessions connected to an external
Similarly, if the job input was sensor data from millions of sensors where each sensor had a **sensor_id**, the Path Pattern would be **{sensor_id}** to partition each sensor data to different folders.
-Using the REST API, the output section of a JSON file used for that request may look like the following:
+When you use the REST API, the output section of a JSON file used for that request may look like the following image:
![REST API output](./media/stream-analytics-custom-path-patterns-blob-storage-output/stream-analytics-rest-output.png)
-Once the job starts running, the *clients* container may look like the following:
+Once the job starts running, the `clients` container may look like the following image:
![Clients container](./media/stream-analytics-custom-path-patterns-blob-storage-output/stream-analytics-clients-container.png)
-Each folder may contain multiple blobs where each blob contains one or more records. In the above example, there is a single blob in a folder labeled "06000000" with the following contents:
+Each folder may contain multiple blobs where each blob contains one or more records. In the above example, there's a single blob in a folder labeled "06000000" with the following contents:
![Blob contents](./media/stream-analytics-custom-path-patterns-blob-storage-output/stream-analytics-blob-contents.png)
Notice that each record in the blob has a **client_id** column matching the fold
2. If customers want to use more than one input field, they can create a composite key in query for custom path partition in blob output by using **CONCAT**. For example: **select concat (col1, col2) as compositeColumn into blobOutput from input**. Then they can specify **compositeColumn** as the custom path in blob storage.
-3. Partition keys are case insensitive, so partition keys like "John" and "john" are equivalent. Also, expressions cannot be used as partition keys. For example, **{columnA + columnB}** does not work.
+3. Partition keys are case insensitive, so partition keys like `John` and `john` are equivalent. Also, expressions can't be used as partition keys. For example, **{columnA + columnB}** doesn't work.
-4. When an input stream consists of records with a partition key cardinality under 8000, the records will be appended to existing blobs and only create new blobs when necessary. If the cardinality is over 8000 there is no guarantee existing blobs will be written to and new blobs won't be created for an arbitrary number of records with the same partition key.
+4. When an input stream consists of records with a partition key cardinality under 8000, the records are appended to existing blobs, and only create new blobs when necessary. If the cardinality is over 8000, there's no guarantee existing blobs will be written to, and new blobs won't be created for an arbitrary number of records with the same partition key.
-5. If the blob output is [configured as immutable](../storage/blobs/immutable-storage-overview.md), Stream Analytics will create a new blob each time data is sent.
+5. If the blob output is [configured as immutable](../storage/blobs/immutable-storage-overview.md), Stream Analytics creates a new blob each time data is sent.
## Custom DateTime path patterns
The following format specifier tokens can be used alone or in combination to ach
|{datetime:m}|Minutes from 0 to 60|6|
|{datetime:ss}|Seconds from 00 to 60|08|
-If you do not wish to use custom DateTime patterns, you can add the {date} and/or {time} token to the Path Prefix to generate a dropdown with built-in DateTime formats.
+If you don't wish to use custom DateTime patterns, you can add the {date} and/or {time} token to the Path Prefix to generate a dropdown with built-in DateTime formats.
![Stream Analytics old DateTime formats](./media/stream-analytics-custom-path-patterns-blob-storage-output/stream-analytics-old-date-time-formats.png)
stream-analytics Stream Analytics Quick Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-quick-create-portal.md
Title: Quickstart - Create a Stream Analytics job by using the Azure portal description: This quickstart shows you how to get started by creating a Stream Analytic job, configuring inputs, outputs, and defining a query. -+ Last updated 09/02/2022
# Quickstart: Create a Stream Analytics job by using the Azure portal
-This quickstart shows you how to create a Stream Analytics job in the Azure portal. In this quickstart, you define a Stream Analytics job that reads real-time streaming data and filters messages with a temperature greater than 27. Your Stream Analytics job will read data from IoT Hub, transform the data, and write the output data to a container in blob storage. The input data used in this quickstart is generated by a Raspberry Pi online simulator.
+This quickstart shows you how to create a Stream Analytics job in the Azure portal. In this quickstart, you define a Stream Analytics job that reads real-time streaming data and filters messages with a temperature greater than 27. Your Stream Analytics job reads data from IoT Hub, transforms the data, and writes the output data to a container in blob storage. The input data used in this quickstart is generated by a Raspberry Pi online simulator.
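
Conceptually, the filtering logic has roughly this shape (a hedged sketch only; the input and output alias names are placeholders, not the exact query you define later in this quickstart):

```sql
-- Illustrative aliases: IoTHubInput and BlobOutput stand in for the input and output you configure.
SELECT *
INTO BlobOutput
FROM IoTHubInput
WHERE Temperature > 27
```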
## Before you begin If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/).
Before defining the Stream Analytics job, you should prepare the input data. The
:::image type="content" source="./media/stream-analytics-quick-create-portal/create-iot-hub.png" alt-text="Screenshot showing the IoT Hub page for creation."::: 4. On the **Networking** page, select **Next: Management** at the bottom of the page. 1. On the **Management** page, for **Pricing and scale tier**, select **F1: Free tier**, if it's still available on your subscription. For more information, see [IoT Hub pricing](https://azure.microsoft.com/pricing/details/iot-hub/).
-6. Select **Review + create**. Review your IoT Hub information and click **Create**. Your IoT Hub might take a few minutes to create. You can monitor the progress in the **Notifications** pane.
+6. Select **Review + create**. Review your IoT Hub information and select **Create**. Your IoT Hub might take a few minutes to create. You can monitor the progress in the **Notifications** pane.
1. After the resource (IoT hub) is created, select **Go to resource** to navigate to the IoT Hub page. 1. On the **IoT Hub** page, select **Devices** on the left menu, and then select **+ Add device**. :::image type="content" source="./media/stream-analytics-quick-create-portal/add-device-button.png" lightbox="./media/stream-analytics-quick-create-portal/add-device-button.png" alt-text="Screenshot showing the Add device button on the Devices page.":::
-7. Enter a **Device ID** and click **Save**.
+7. Enter a **Device ID** and select **Save**.
:::image type="content" source="./media/stream-analytics-quick-create-portal/add-device-iot-hub.png" alt-text="Screenshot showing the Create a device page."::: 8. Once the device is created, you should see the device from the **IoT devices** list. Select **Refresh** button on the page if you don't see it.
Before defining the Stream Analytics job, you should prepare the input data. The
## Create blob storage 1. From the upper left-hand corner of the Azure portal, select **Create a resource** > **Storage** > **Storage account**.
-2. In the **Create storage account** pane, enter a storage account name, location, and resource group. Choose the same location and resource group as the IoT Hub you created. Then click **Review** at the bottom of the page.
+2. In the **Create storage account** pane, enter a storage account name, location, and resource group. Choose the same location and resource group as the IoT Hub you created. Then select **Review** at the bottom of the page.
:::image type="content" source="./media/stream-analytics-quick-create-portal/create-storage-account.png" alt-text="Screenshot showing the Create a storage account page."::: 3. On the **Review** page, review your settings, and select **Create** to create the account.
Before defining the Stream Analytics job, you should prepare the input data. The
## Configure job input
-In this section, you'll configure an IoT Hub device input to the Stream Analytics job. Use the IoT Hub you created in the previous section of the quickstart.
+In this section, you configure an IoT Hub device input to the Stream Analytics job. Use the IoT Hub you created in the previous section of the quickstart.
1. On the **Stream Analytics job** page, select **Input** under **Job topology** on the left menu. 1. On the **Inputs** page, select **Add stream input** > **IoT Hub**.
In this section, you'll configure an IoT Hub device input to the Stream Analytic
1. Open the [Raspberry Pi Azure IoT Online Simulator](https://azure-samples.github.io/raspberry-pi-web-simulator/). 2. Replace the placeholder in Line 15 with the Azure IoT Hub device connection string you saved in a previous section.
-3. Click **Run**. The output should show the sensor data and messages that are being sent to your IoT Hub.
+3. Select **Run**. The output should show the sensor data and messages that are being sent to your IoT Hub.
:::image type="content" source="./media/stream-analytics-quick-create-portal/ras-pi-connection-string.png" lightbox="./media/stream-analytics-quick-create-portal/ras-pi-connection-string.png" alt-text="Screenshot showing the **Raspberry Pi Azure IoT Online Simulator** page with the sample query.":::
synapse-analytics Get Started Analyze Sql On Demand https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/get-started-analyze-sql-on-demand.md
Title: 'Tutorial: Get started analyze data with a serverless SQL pool'
description: In this tutorial, you'll learn how to analyze data with a serverless SQL pool using data located in Spark databases. -+ -+ Previously updated : 11/18/2022 Last updated : 02/15/2023 # Analyze data with a serverless SQL pool
Every workspace comes with a pre-configured serverless SQL pool called **Built-i
FORMAT='PARQUET' ) AS [result] ```
-1. Click **Run**.
+1. Select **Run**.
Data exploration is just a simplified scenario where you can understand the basic characteristics of your data. Learn more about data exploration and analysis in this [tutorial](sql/tutorial-data-analyst.md).
However, as you continue data exploration, you might want to create some utility
``` > [!IMPORTANT]
- > Use a collation with `_UTF8` suffix to ensure that UTF-8 text is properly converted to `VARCHAR` columns. `Latin1_General_100_BIN2_UTF8` provides
- > the best performance in the queries that read data from Parquet files and Azure Cosmos DB containers.
+ > Use a collation with `_UTF8` suffix to ensure that UTF-8 text is properly converted to `VARCHAR` columns. `Latin1_General_100_BIN2_UTF8` provides the best performance in the queries that read data from Parquet files and Azure Cosmos DB containers. For more information on changing collations, refer to [Collation types supported for Synapse SQL](sql/reference-collation-types.md).
-1. Switch from master to `DataExplorationDB` using the following command. You can also use the UI control **use database** to switch your current database:
+1. Switch the database context from `master` to `DataExplorationDB` using the following command. You can also use the UI control **use database** to switch your current database:
```sql USE DataExplorationDB ```
-1. From the 'DataExplorationDB', create utility objects such as credentials and data sources.
+1. From `DataExplorationDB`, create utility objects such as credentials and data sources.
```sql CREATE EXTERNAL DATA SOURCE ContosoLake
However, as you continue data exploration, you might want to create some utility
> [!NOTE] > An external data source can be created without a credential. If a credential does not exist, the caller's identity will be used to access the external data source.
-1. Optionally, use the newly created 'DataExplorationDB' database to create a login for a user in DataExplorationDB that will access external data:
+1. Optionally, use the newly created `DataExplorationDB` database to create a login for a user in `DataExplorationDB` that will access external data:
```sql CREATE LOGIN data_explorer WITH PASSWORD = 'My Very Strong Password 1234!'; ```
- Next create a database user in 'DataExplorationDB' for the above login and grant the `ADMINISTER DATABASE BULK OPERATIONS` permission.
+ Next create a database user in `DataExplorationDB` for the above login and grant the `ADMINISTER DATABASE BULK OPERATIONS` permission.
```sql CREATE USER data_explorer FOR LOGIN data_explorer;
However, as you continue data exploration, you might want to create some utility
1. **Publish** your changes to the workspace.
-Data exploration database is just a simple placeholder where you can store your utility objects. Synapse SQL pool enables you to do much more and create a Logical Data Warehouse - a relational layer built on top of Azure data sources. Learn more about building Logical Data Warehouse in this [tutorial](sql/tutorial-data-analyst.md).
+Data exploration database is just a simple placeholder where you can store your utility objects. Synapse SQL pool enables you to do much more and create a Logical Data Warehouse - a relational layer built on top of Azure data sources. Learn more about [building a logical data warehouse in this tutorial](sql/tutorial-data-analyst.md).
## Next steps
synapse-analytics Sql Data Warehouse Reference Collation Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-reference-collation-types.md
description: Collation types supported for dedicated SQL pool (formerly SQL DW)
Previously updated : 12/04/2019 Last updated : 02/15/2023 --++ # Database collation support for dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics You can change the default database collation from the Azure portal when you create a new dedicated SQL pool (formerly SQL DW). This capability makes it even easier to create a new database using one of the 3800 supported database collations.
+This article applies to dedicated SQL pools (formerly SQL DW). For more information on dedicated SQL pools in Azure Synapse workspaces, see [Collation types supported for Synapse SQL](../sql/reference-collation-types.md).
+ Collations provide the locale, code page, sort order and character sensitivity rules for character-based data types. Once chosen, all columns and expressions requiring collation information inherit the chosen collation from the database setting. The default inheritance can be overridden by explicitly stating a different collation for a character-based data type. > [!NOTE]
Collations provide the locale, code page, sort order and character sensitivity r
## Changing collation
-To change the default collation, update to the Collation field in the provisioning experience.
-
-For example, if you wanted to change the default collation to case sensitive, you would simply rename the Collation from SQL_Latin1_General_CP1_CI_AS to SQL_Latin1_General_CP1_CS_AS.
-
-## List of unsupported collation types
-
-* Japanese_Bushu_Kakusu_140_BIN
-* Japanese_Bushu_Kakusu_140_BIN2
-* Japanese_Bushu_Kakusu_140_CI_AI_VSS
-* Japanese_Bushu_Kakusu_140_CI_AI_WS_VSS
-* Japanese_Bushu_Kakusu_140_CI_AI_KS_VSS
-* Japanese_Bushu_Kakusu_140_CI_AI_KS_WS_VSS
-* Japanese_Bushu_Kakusu_140_CI_AS_VSS
-* Japanese_Bushu_Kakusu_140_CI_AS_WS_VSS
-* Japanese_Bushu_Kakusu_140_CI_AS_KS_VSS
-* Japanese_Bushu_Kakusu_140_CI_AS_KS_WS_VSS
-* Japanese_Bushu_Kakusu_140_CS_AI_VSS
-* Japanese_Bushu_Kakusu_140_CS_AI_WS_VSS
-* Japanese_Bushu_Kakusu_140_CS_AI_KS_VSS
-* Japanese_Bushu_Kakusu_140_CS_AI_KS_WS_VSS
-* Japanese_Bushu_Kakusu_140_CS_AS_VSS
-* Japanese_Bushu_Kakusu_140_CS_AS_WS_VSS
-* Japanese_Bushu_Kakusu_140_CS_AS_KS_VSS
-* Japanese_Bushu_Kakusu_140_CS_AS_KS_WS_VSS
-* Japanese_Bushu_Kakusu_140_CI_AI
-* Japanese_Bushu_Kakusu_140_CI_AI_WS
-* Japanese_Bushu_Kakusu_140_CI_AI_KS
-* Japanese_Bushu_Kakusu_140_CI_AI_KS_WS
-* Japanese_Bushu_Kakusu_140_CI_AS
-* Japanese_Bushu_Kakusu_140_CI_AS_WS
-* Japanese_Bushu_Kakusu_140_CI_AS_KS
-* Japanese_Bushu_Kakusu_140_CI_AS_KS_WS
-* Japanese_Bushu_Kakusu_140_CS_AI
-* Japanese_Bushu_Kakusu_140_CS_AI_WS
-* Japanese_Bushu_Kakusu_140_CS_AI_KS
-* Japanese_Bushu_Kakusu_140_CS_AI_KS_WS
-* Japanese_Bushu_Kakusu_140_CS_AS
-* Japanese_Bushu_Kakusu_140_CS_AS_WS
-* Japanese_Bushu_Kakusu_140_CS_AS_KS
-* Japanese_Bushu_Kakusu_140_CS_AS_KS_WS
-* Japanese_XJIS_140_BIN
-* Japanese_XJIS_140_BIN2
-* Japanese_XJIS_140_CI_AI_VSS
-* Japanese_XJIS_140_CI_AI_WS_VSS
-* Japanese_XJIS_140_CI_AI_KS_VSS
-* Japanese_XJIS_140_CI_AI_KS_WS_VSS
-* Japanese_XJIS_140_CI_AS_VSS
-* Japanese_XJIS_140_CI_AS_WS_VSS
-* Japanese_XJIS_140_CI_AS_KS_VSS
-* Japanese_XJIS_140_CI_AS_KS_WS_VSS
-* Japanese_XJIS_140_CS_AI_VSS
-* Japanese_XJIS_140_CS_AI_WS_VSS
-* Japanese_XJIS_140_CS_AI_KS_VSS
-* Japanese_XJIS_140_CS_AI_KS_WS_VSS
-* Japanese_XJIS_140_CS_AS_VSS
-* Japanese_XJIS_140_CS_AS_WS_VSS
-* Japanese_XJIS_140_CS_AS_KS_VSS
-* Japanese_XJIS_140_CS_AS_KS_WS_VSS
-* Japanese_XJIS_140_CI_AI
-* Japanese_XJIS_140_CI_AI_WS
-* Japanese_XJIS_140_CI_AI_KS
-* Japanese_XJIS_140_CI_AI_KS_WS
-* Japanese_XJIS_140_CI_AS
-* Japanese_XJIS_140_CI_AS_WS
-* Japanese_XJIS_140_CI_AS_KS
-* Japanese_XJIS_140_CI_AS_KS_WS
-* Japanese_XJIS_140_CS_AI
-* Japanese_XJIS_140_CS_AI_WS
-* Japanese_XJIS_140_CS_AI_KS
-* Japanese_XJIS_140_CS_AI_KS_WS
-* Japanese_XJIS_140_CS_AS
-* Japanese_XJIS_140_CS_AS_WS
-* Japanese_XJIS_140_CS_AS_KS
-* Japanese_XJIS_140_CS_AS_KS_WS
-* SQL_EBCDIC1141_CP1_CS_AS
-* SQL_EBCDIC277_2_CP1_CS_AS
+To change the default collation, update the **Collation** field in the provisioning experience.
+
+For example, if you wanted to change the default collation to case sensitive, change the collation from `SQL_Latin1_General_CP1_CI_AS` to `SQL_Latin1_General_CP1_CS_AS`.
+
+## Collation support
+
+The following table shows which collation types are supported by which service.
+
+| Collation Type | Serverless SQL Pool | Dedicated SQL Pool - Database & Column Level | Dedicated SQL Pool - External Table (Native Support) | Dedicated SQL Pool - External Table (Hadoop/Polybase) |
+|:--:|:--:|:--:|:--:|:--:|
+| Non-UTF-8 Collations | Yes | Yes | Yes | Yes |
+| UTF-8 | Yes | Yes | No | No |
+| Japanese_Bushu_Kakusu_140_* | Yes | Yes | No | No |
+| Japanese_XJIS_140_* | Yes | Yes | No | No |
+| SQL_EBCDIC1141_CP1_CS_AS | No | No | No | No |
+| SQL_EBCDIC277_2_CP1_CS_AS | No | No | No | No |
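
For example, a supported collation can be applied at the column level when you create a table in a dedicated SQL pool. The following is a minimal sketch; the table name, column names, and distribution option are illustrative:

```sql
-- Hypothetical table: the Description column overrides the database default collation.
CREATE TABLE dbo.ProductDescriptions
(
    ProductId   INT NOT NULL,
    Description NVARCHAR(400) COLLATE Japanese_XJIS_140_CI_AS NULL
)
WITH (DISTRIBUTION = ROUND_ROBIN);
```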
## Checking the current collation
To check the current collation for the database, you can run the following T-SQL
SELECT DATABASEPROPERTYEX(DB_NAME(), 'Collation') AS Collation;
```
-When passed 'Collation' as the property parameter, the DatabasePropertyEx function returns the current collation for the database specified. For more information, see [DatabasePropertyEx](/sql/t-sql/functions/databasepropertyex-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true).
+When passed 'Collation' as the property parameter, the DatabasePropertyEx function returns the current collation for the database specified. For more information, see [DATABASEPROPERTYEX](/sql/t-sql/functions/databasepropertyex-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true).
++
+## Next steps
+
+Additional information on best practices for dedicated SQL pool and serverless SQL pool can be found in the following articles:
+
+- [Best practices for dedicated SQL pool](../sql/best-practices-dedicated-sql-pool.md)
+- [Best practices for serverless SQL pool](../sql/best-practices-serverless-sql-pool.md)
synapse-analytics Best Practices Serverless Sql Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/best-practices-serverless-sql-pool.md
Previously updated : 09/01/2022 Last updated : 02/15/2023
Some generic guidelines are:
- Make sure the storage and serverless SQL pool are in the same region. Storage examples include Azure Data Lake Storage and Azure Cosmos DB.
- Try to [optimize storage layout](#prepare-files-for-querying) by using partitioning and keeping your files in the range between 100 MB and 10 GB.
- If you're returning a large number of results, make sure you're using SQL Server Management Studio or Azure Data Studio and not Azure Synapse Studio. Azure Synapse Studio is a web tool that isn't designed for large result sets.
-- If you're filtering results by string column, try to use a `BIN2_UTF8` collation.
+- If you're filtering results by string column, try to use a `BIN2_UTF8` collation. For more information on changing collations, refer to [Collation types supported for Synapse SQL](reference-collation-types.md).
- Consider caching the results on the client side by using Power BI import mode or Azure Analysis Services, and periodically refresh them. Serverless SQL pools can't provide an interactive experience in Power BI Direct Query mode if you're using complex queries or processing a large amount of data.

## Client applications and network connections
The data types you use in your query affect performance and concurrency. You can
- Use the smallest data size that can accommodate the largest possible value.
  - If the maximum character value length is 30 characters, use a character data type of length 30.
  - If all character column values are of a fixed size, use **char** or **nchar**. Otherwise, use **varchar** or **nvarchar**.
- - If the maximum integer column value is 500, use **smallint** because it's the smallest data type that can accommodate this value. You can find integer data type ranges in [this article](/sql/t-sql/data-types/int-bigint-smallint-and-tinyint-transact-sql?view=azure-sqldw-latest&preserve-view=true).
+ - If the maximum integer column value is 500, use **smallint** because it's the smallest data type that can accommodate this value. For more information, see [integer data type ranges](/sql/t-sql/data-types/int-bigint-smallint-and-tinyint-transact-sql?view=azure-sqldw-latest&preserve-view=true).
- If possible, use **varchar** and **char** instead of **nvarchar** and **nchar**.
  - Use the **varchar** type with some UTF8 collation if you're reading data from Parquet, Azure Cosmos DB, Delta Lake, or CSV with UTF-8 encoding.
  - Use the **varchar** type without UTF8 collation if you're reading data from CSV non-Unicode files (for example, ASCII).
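
As an illustration of these guidelines, here's a minimal sketch of a `WITH` clause that right-sizes column types; the storage path and column names are assumptions:

```sql
-- Hypothetical Parquet source: each column is sized to its data instead of relying on inferred defaults.
SELECT TOP 10 *
FROM OPENROWSET(
        BULK 'https://contosolake.dfs.core.windows.net/curated/sales/*.parquet',
        FORMAT = 'PARQUET'
    )
    WITH (
        sales_region VARCHAR(30) COLLATE Latin1_General_100_BIN2_UTF8, -- longest value is 30 characters, UTF-8 source
        quantity     SMALLINT,                                         -- values stay well below 32,767
        unit_price   DECIMAL(10, 2)
    ) AS rows;
```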
The data types you use in your query affect performance and concurrency. You can
[Schema inference](query-parquet-files.md#automatic-schema-inference) helps you quickly write queries and explore data without knowing file schemas. The cost of this convenience is that inferred data types might be larger than the actual data types. This discrepancy happens when there isn't enough information in the source files to make sure the appropriate data type is used. For example, Parquet files don't contain metadata about maximum character column length. So serverless SQL pool infers it as varchar(8000).
-You can use [sp_describe_first_results_set](/sql/relational-databases/system-stored-procedures/sp-describe-first-result-set-transact-sql?view=sql-server-ver15&preserve-view=true) to check the resulting data types of your query.
+You can use the system stored procedure [sp_describe_first_results_set](/sql/relational-databases/system-stored-procedures/sp-describe-first-result-set-transact-sql?view=sql-server-ver15&preserve-view=true) to check the resulting data types of your query.
The following example shows how you can optimize inferred data types. This procedure is used to show the inferred data types:
synapse-analytics Develop Tables External Tables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-tables-external-tables.md
Previously updated : 02/15/2022 Last updated : 02/15/2023 - # Use external tables with Synapse SQL
The key differences between Hadoop and native external tables are presented in t
| Serverless SQL pool | Not available | Available | | Supported formats | Delimited/CSV, Parquet, ORC, Hive RC, and RC | Serverless SQL pool: Delimited/CSV, Parquet, and [Delta Lake](query-delta-lake-format.md)<br/>Dedicated SQL pool: Parquet (preview) | | [Folder partition elimination](#folder-partition-elimination) | No | Partition elimination is available only in the partitioned tables created on Parquet or CSV formats that are synchronized from Apache Spark pools. You might create external tables on Parquet partitioned folders, but the partitioning columns will be inaccessible and ignored, while the partition elimination will not be applied. Do not create [external tables on Delta Lake folders](create-use-external-tables.md#delta-tables-on-partitioned-folders) because they are not supported. Use [Delta partitioned views](create-use-views.md#delta-lake-partitioned-views) if you need to query partitioned Delta Lake data. |
-| [File elimination](#file-elimination) (predicate pushdown) | No | Yes in serverless SQL pool. For the string pushdown, you need to use `Latin1_General_100_BIN2_UTF8` collation on the `VARCHAR` columns to enable pushdown. |
+| [File elimination](#file-elimination) (predicate pushdown) | No | Yes in serverless SQL pool. For the string pushdown, you need to use `Latin1_General_100_BIN2_UTF8` collation on the `VARCHAR` columns to enable pushdown. For more information on collations, refer to [Collation types supported for Synapse SQL](reference-collation-types.md).|
| Custom format for location | No | Yes, using wildcards like `/year=*/month=*/day=*` for Parquet or CSV formats. Custom folder paths are not available in Delta Lake. In the serverless SQL pool you can also use recursive wildcards `/logs/**` to reference Parquet or CSV files in any sub-folder beneath the referenced folder. | | Recursive folder scan | Yes | Yes. In serverless SQL pools must be specified `/**` at the end of the location path. In Dedicated pool the folders are always scanned recursively. |
-| Storage authentication | Storage Access Key(SAK), AAD passthrough, Managed identity, Custom application Azure AD identity | [Shared Access Signature(SAS)](develop-storage-files-storage-access-control.md?tabs=shared-access-signature), [AAD passthrough](develop-storage-files-storage-access-control.md?tabs=user-identity), [Managed identity](develop-storage-files-storage-access-control.md?tabs=managed-identity), [Custom application Azure AD identity](develop-storage-files-storage-access-control.md?tabs=service-principal). |
+| Storage authentication | Storage Access Key(SAK), Azure Active Directory passthrough, Managed identity, custom application Azure Active Directory identity | [Shared Access Signature(SAS)](develop-storage-files-storage-access-control.md?tabs=shared-access-signature), [Azure Active Directory passthrough](develop-storage-files-storage-access-control.md?tabs=user-identity), [Managed identity](develop-storage-files-storage-access-control.md?tabs=managed-identity), [Custom application Azure AD identity](develop-storage-files-storage-access-control.md?tabs=service-principal). |
| Column mapping | Ordinal - the columns in the external table definition are mapped to the columns in the underlying Parquet files by position. | Serverless pool: by name. The columns in the external table definition are mapped to the columns in the underlying Parquet files by column name matching. <br/> Dedicated pool: ordinal matching. The columns in the external table definition are mapped to the columns in the underlying Parquet files by position.| | CETAS (exporting/transformation) | Yes | CETAS with the native tables as a target works only in the serverless SQL pool. You cannot use the dedicated SQL pools to export data using native tables. |
You can create external tables in Synapse SQL pools via the following steps:
### Folder partition elimination
-The native external tables in Synapse pools are able to ignore the files placed in the folders that are not relevant for the queries. If your files are stored in a folder hierarchy (for example - **/year=2020/month=03/day=16**) and the values for **year**, **month**, and **day** are exposed as the columns, the queries that contain filters like `year=2020` will read the files only from the subfolders placed within the **year=2020** folder. The files and folders placed in other folders (**year=2021** or **year=2022**) will be ignored in this query. This elimination is known as **partition elimination**.
+The native external tables in Synapse pools are able to ignore the files placed in the folders that are not relevant for the queries. If your files are stored in a folder hierarchy (for example - `/year=2020/month=03/day=16`) and the values for `year`, `month`, and `day` are exposed as the columns, the queries that contain filters like `year=2020` will read the files only from the subfolders placed within the `year=2020` folder. The files and folders placed in other folders (`year=2021` or `year=2022`) will be ignored in this query. This elimination is known as **partition elimination**.
The folder partition elimination is available in the native external tables that are synchronized from the Synapse Spark pools. If you have a partitioned data set and you would like to leverage the partition elimination with the external tables that you create, use [the partitioned views](create-use-views.md#partitioned-views) instead of the external tables.

### File elimination

Some data formats such as Parquet and Delta contain file statistics for each column (for example, min/max values for each column). The queries that filter data will not read the files where the required column values do not exist. The query will first explore min/max values for the columns used in the query predicate to find the files that do not contain the required data. These files will be ignored and eliminated from the query plan.
-This technique is also known as filter predicate pushdown and it can improve the performance of your queries. Filter pushdown is available in the serverless SQL pools on Parquet and Delta formats. To leverage filter pushdown for the string types, use the VARCHAR type with the `Latin1_General_100_BIN2_UTF8` collation.
+This technique is also known as filter predicate pushdown and it can improve the performance of your queries. Filter pushdown is available in the serverless SQL pools on Parquet and Delta formats. To leverage filter pushdown for the string types, use the VARCHAR type with the `Latin1_General_100_BIN2_UTF8` collation. For more information on collations, refer to [Collation types supported for Synapse SQL](reference-collation-types.md).
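
For example, here's a minimal sketch of a native external table whose string column is eligible for pushdown; the data source, file format, and object names are assumptions:

```sql
-- Hypothetical serverless SQL pool objects: ContosoLake and ParquetFormat are assumed to already exist.
CREATE EXTERNAL TABLE dbo.SalesEvents
(
    city   VARCHAR(50) COLLATE Latin1_General_100_BIN2_UTF8, -- BIN2 UTF-8 collation enables string pushdown
    amount FLOAT
)
WITH (
    LOCATION = 'events/*.parquet',
    DATA_SOURCE = ContosoLake,
    FILE_FORMAT = ParquetFormat
);

-- Files whose min/max statistics exclude 'Berlin' can be eliminated from the plan.
SELECT COUNT(*) AS row_count
FROM dbo.SalesEvents
WHERE city = 'Berlin';
```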
### Security
External tables access underlying Azure storage using the database scoped creden
- Data source without credential enables external tables to access publicly available files on Azure storage.
- Data source can have a credential that enables external tables to access only the files on Azure storage using SAS token or workspace Managed Identity
- For examples, see [the Develop storage files storage access control](develop-storage-files-storage-access-control.md#examples) article.
--
## CREATE EXTERNAL DATA SOURCE
-External data sources are used to connect to storage accounts. The complete documentation is outlined [here](/sql/t-sql/statements/create-external-data-source-transact-sql?view=azure-sqldw-latest&preserve-view=true).
+External data sources are used to connect to storage accounts. For more information, see [CREATE EXTERNAL DATA SOURCE](/sql/t-sql/statements/create-external-data-source-transact-sql?view=azure-sqldw-latest&preserve-view=true).
### Syntax for CREATE EXTERNAL DATA SOURCE
If you're retrieving data from the text file, store each missing value by using
- 0 if the column is defined as a numeric column. Decimal columns aren't supported and will cause an error.
- Empty string ("") if the column is a string column.
-- 1900-01-01 if the column is a date column.
+- "1900-01-01" if the column is a date column.
FALSE - Store all missing values as NULL. Any NULL values that are stored by using the word NULL in the delimited text file are imported as the string 'NULL'.
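
For context, a minimal sketch of where this option is set; the format name is illustrative:

```sql
-- Hypothetical external file format: USE_TYPE_DEFAULT = TRUE loads missing numeric values as 0,
-- missing strings as an empty string, and missing dates as 1900-01-01; FALSE loads them all as NULL.
CREATE EXTERNAL FILE FORMAT CsvWithTypeDefaults
WITH (
    FORMAT_TYPE = DELIMITEDTEXT,
    FORMAT_OPTIONS (
        FIELD_TERMINATOR = ',',
        USE_TYPE_DEFAULT = TRUE
    )
);
```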
synapse-analytics Query Delta Lake Format https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/query-delta-lake-format.md
Previously updated : 12/06/2022 Last updated : 02/15/2023
A serverless SQL pool can read Delta Lake files that are created using Apache Sp
Apache Spark pools in Azure Synapse enable data engineers to modify Delta Lake files using Scala, PySpark, and .NET. Serverless SQL pools help data analysts to create reports on Delta Lake files created by data engineers. > [!IMPORTANT]
-> Querying Delta Lake format using the serverless SQL pool is **Generally available** functionality. However, querying Spark Delta tables is still in public preview and not production ready. There are known issues that might happen if you query Delta tables created using the Spark pools. See the known issues in the [self-help page](resources-self-help-sql-on-demand.md#delta-lake).
+> Querying Delta Lake format using the serverless SQL pool is **Generally available** functionality. However, querying Spark Delta tables is still in public preview and not production ready. There are known issues that might happen if you query Delta tables created using the Spark pools. See the known issues in [Serverless SQL pool self-help](resources-self-help-sql-on-demand.md#delta-lake).
## Quickstart example
Make sure you can access your file. If your file is protected with SAS key or cu
> Ensure you are using a UTF-8 database collation (for example `Latin1_General_100_BIN2_UTF8`) because string values in Delta Lake files are encoded using UTF-8 encoding. > A mismatch between the text encoding in the Delta Lake file and the collation may cause unexpected conversion errors. > You can easily change the default collation of the current database using the following T-SQL statement:
-> `alter database current collate Latin1_General_100_BIN2_UTF8`
+> `ALTER DATABASE CURRENT COLLATE Latin1_General_100_BIN2_UTF8;`
+> For more information on collations, see [Collation types supported for Synapse SQL](reference-collation-types.md).
### Data source usage
With the explicit specification of the result set schema, you can minimize the t
### Query partitioned data
+
The data set provided in this sample is divided (partitioned) into separate subfolders.
+
Unlike [Parquet](query-parquet-files.md), you don't need to target specific partitions using the `FILEPATH` function. The `OPENROWSET` will identify partitioning columns in your Delta Lake folder structure and enable you to directly query data using these columns. This example shows fare amounts by year, month, and payment_type for the first three months of 2017.
synapse-analytics Query Parquet Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/query-parquet-files.md
Previously updated : 05/20/2020 Last updated : 02/15/2023
from openrowset(
    format = 'parquet') as rows
```
-Make sure that you can access this file. If your file is protected with SAS key or custom Azure identity, you would need to setup [server level credential for sql login](develop-storage-files-storage-access-control.md?tabs=shared-access-signature#server-scoped-credential).
+Make sure that you can access this file. If your file is protected with SAS key or custom Azure identity, you would need to set up [server level credential for sql login](develop-storage-files-storage-access-control.md?tabs=shared-access-signature#server-scoped-credential).
> [!IMPORTANT] > Ensure you are using a UTF-8 database collation (for example `Latin1_General_100_BIN2_UTF8`) because string values in PARQUET files are encoded using UTF-8 encoding. > A mismatch between the text encoding in the PARQUET file and the collation may cause unexpected conversion errors. > You can easily change the default collation of the current database using the following T-SQL statement:
-> `alter database current collate Latin1_General_100_BIN2_UTF8`'
+> `ALTER DATABASE CURRENT COLLATE Latin1_General_100_BIN2_UTF8;`
+> For more information on collations, see [Collation types supported for Synapse SQL](reference-collation-types.md).
-If you use the `Latin1_General_100_BIN2_UTF8` collation you will get an additional performance boost compared to the other collations. The `Latin1_General_100_BIN2_UTF8` collation is compatible with parquet string sorting rules. The SQL pool is able to eliminate some parts of the parquet files that will not contain data needed in the queries (file/column-segment pruning). If you use other collations, all data from the parquet files will be loaded into Synapse SQL and the filtering is happening within the SQL process. The `Latin1_General_100_BIN2_UTF8` collation has additional performance optimization that works only for parquet and CosmosDB. The downside is that you lose fine-grained comparison rules like case insensitivity.
+If you use the `Latin1_General_100_BIN2_UTF8` collation you will get an additional performance boost compared to the other collations. The `Latin1_General_100_BIN2_UTF8` collation is compatible with parquet string sorting rules. The SQL pool is able to eliminate some parts of the parquet files that will not contain data needed in the queries (file/column-segment pruning). If you use other collations, all data from the parquet files will be loaded into Synapse SQL and the filtering is happening within the SQL process. The `Latin1_General_100_BIN2_UTF8` collation has additional performance optimization that works only for parquet and Cosmos DB. The downside is that you lose fine-grained comparison rules like case insensitivity.
### Data source usage
from openrowset(
) as rows
```
-If a data source is protected with SAS key or custom identity you can configure [data source with database scoped credential](develop-storage-files-storage-access-control.md?tabs=shared-access-signature#database-scoped-credential).
+If a data source is protected with SAS key or custom identity, you can configure [data source with database scoped credential](develop-storage-files-storage-access-control.md?tabs=shared-access-signature#database-scoped-credential).
### Explicitly specify schema
from openrowset(
> Make sure that you are explicitly specifying some UTF-8 collation (for example `Latin1_General_100_BIN2_UTF8`) for all string columns in `WITH` clause or set some UTF-8 collation at database level.
> Mismatch between text encoding in the file and string column collation might cause unexpected conversion errors.
> You can easily change default collation of the current database using the following T-SQL statement:
-> `alter database current collate Latin1_General_100_BIN2_UTF8`
-> You can easily set collation on the colum types using the following definition:
+> `ALTER DATABASE CURRENT COLLATE Latin1_General_100_BIN2_UTF8;`
+> You can easily set collation on the column types, for example:
> `geo_id varchar(6) collate Latin1_General_100_BIN2_UTF8`
+> For more information on collations, see [Collation types supported for Synapse SQL](../sql/reference-collation-types.md).
-In the following sections you can see how to query various types of PARQUET files.
+In the following sections, you can see how to query various types of PARQUET files.
## Prerequisites
ORDER BY
You don't need to use the OPENROWSET WITH clause when reading Parquet files. Column names and data types are automatically read from Parquet files.
-The sample below shows the automatic schema inference capabilities for Parquet files. It returns the number of rows in September 2018 without specifying a schema.
+The following sample shows the automatic schema inference capabilities for Parquet files. It returns the number of rows in September 2018 without specifying a schema.
> [!NOTE] > You don't have to specify columns in the OPENROWSET WITH clause when reading Parquet files. In that case, serverless SQL pool query service will utilize metadata in the Parquet file and bind columns by name.
synapse-analytics Reference Collation Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/reference-collation-types.md
Previously updated : 04/15/2020 Last updated : 02/15/2023
# Database collation support for Synapse SQL in Azure Synapse Analytics
Collations provide the locale, code page, sort order, and character sensitivity rules for character-based data types. Once chosen, all columns and expressions requiring collation information inherit the chosen collation from the database setting. The default inheritance can be overridden by explicitly stating a different collation for a character-based data type.
+This article applies to dedicated SQL pools in Azure Synapse workspaces. For more information on dedicated SQL pools (formerly SQL DW), see [Collation types supported for dedicated SQL pool (formerly SQL DW)](../sql-data-warehouse/sql-data-warehouse-reference-collation-types.md).
+ You can change the default database collation from the Azure portal when you create a new dedicated SQL pool database. This capability makes it even easier to create a new database using one of the 3800 supported database collations. You can specify the default serverless SQL pool database collation at creation time by using the CREATE DATABASE statement.
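For example, a minimal sketch that creates a database with a UTF-8 default collation (the database name `mydb` is a placeholder):

```sql
-- Minimal sketch: string columns created in this database inherit the UTF-8 collation by default.
CREATE DATABASE mydb
    COLLATE Latin1_General_100_BIN2_UTF8;
```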
You can specify the default serverless SQL pool database collation at creation t
> In Azure Synapse Analytics, query text (including variables, constants, etc.) is always handled using the database-level collation, and not the server-level collation as in other SQL Server offerings.

## Change collation
-To change the default collation for dedicated SQL pool database, update to the Collation field in the provisioning experience. For example, if you wanted to change the default collation to case sensitive, you would rename the Collation from SQL_Latin1_General_CP1_CI_AS to SQL_Latin1_General_CP1_CS_AS.
+
+To change the default collation for a dedicated SQL pool database, update the **Collation** field in the provisioning experience. For example, if you wanted to change the default collation to case sensitive, you would change the collation from `SQL_Latin1_General_CP1_CI_AS` to `SQL_Latin1_General_CP1_CS_AS`.
To change the default collation for a serverless SQL pool database, you can use the ALTER DATABASE statement.
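For example, a minimal sketch that switches the current database to a case-sensitive collation (run it while connected to the database you want to change):

```sql
-- Minimal sketch: change the default collation of the current serverless SQL pool database.
ALTER DATABASE CURRENT COLLATE SQL_Latin1_General_CP1_CS_AS;
```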
-## List of unsupported collation types for Dedicated SQL pools
-* Japanese_Bushu_Kakusu_140_BIN
-* Japanese_Bushu_Kakusu_140_BIN2
-* Japanese_Bushu_Kakusu_140_CI_AI_VSS
-* Japanese_Bushu_Kakusu_140_CI_AI_WS_VSS
-* Japanese_Bushu_Kakusu_140_CI_AI_KS_VSS
-* Japanese_Bushu_Kakusu_140_CI_AI_KS_WS_VSS
-* Japanese_Bushu_Kakusu_140_CI_AS_VSS
-* Japanese_Bushu_Kakusu_140_CI_AS_WS_VSS
-* Japanese_Bushu_Kakusu_140_CI_AS_KS_VSS
-* Japanese_Bushu_Kakusu_140_CI_AS_KS_WS_VSS
-* Japanese_Bushu_Kakusu_140_CS_AI_VSS
-* Japanese_Bushu_Kakusu_140_CS_AI_WS_VSS
-* Japanese_Bushu_Kakusu_140_CS_AI_KS_VSS
-* Japanese_Bushu_Kakusu_140_CS_AI_KS_WS_VSS
-* Japanese_Bushu_Kakusu_140_CS_AS_VSS
-* Japanese_Bushu_Kakusu_140_CS_AS_WS_VSS
-* Japanese_Bushu_Kakusu_140_CS_AS_KS_VSS
-* Japanese_Bushu_Kakusu_140_CS_AS_KS_WS_VSS
-* Japanese_Bushu_Kakusu_140_CI_AI
-* Japanese_Bushu_Kakusu_140_CI_AI_WS
-* Japanese_Bushu_Kakusu_140_CI_AI_KS
-* Japanese_Bushu_Kakusu_140_CI_AI_KS_WS
-* Japanese_Bushu_Kakusu_140_CI_AS
-* Japanese_Bushu_Kakusu_140_CI_AS_WS
-* Japanese_Bushu_Kakusu_140_CI_AS_KS
-* Japanese_Bushu_Kakusu_140_CI_AS_KS_WS
-* Japanese_Bushu_Kakusu_140_CS_AI
-* Japanese_Bushu_Kakusu_140_CS_AI_WS
-* Japanese_Bushu_Kakusu_140_CS_AI_KS
-* Japanese_Bushu_Kakusu_140_CS_AI_KS_WS
-* Japanese_Bushu_Kakusu_140_CS_AS
-* Japanese_Bushu_Kakusu_140_CS_AS_WS
-* Japanese_Bushu_Kakusu_140_CS_AS_KS
-* Japanese_Bushu_Kakusu_140_CS_AS_KS_WS
-* Japanese_XJIS_140_BIN
-* Japanese_XJIS_140_BIN2
-* Japanese_XJIS_140_CI_AI_VSS
-* Japanese_XJIS_140_CI_AI_WS_VSS
-* Japanese_XJIS_140_CI_AI_KS_VSS
-* Japanese_XJIS_140_CI_AI_KS_WS_VSS
-* Japanese_XJIS_140_CI_AS_VSS
-* Japanese_XJIS_140_CI_AS_WS_VSS
-* Japanese_XJIS_140_CI_AS_KS_VSS
-* Japanese_XJIS_140_CI_AS_KS_WS_VSS
-* Japanese_XJIS_140_CS_AI_VSS
-* Japanese_XJIS_140_CS_AI_WS_VSS
-* Japanese_XJIS_140_CS_AI_KS_VSS
-* Japanese_XJIS_140_CS_AI_KS_WS_VSS
-* Japanese_XJIS_140_CS_AS_VSS
-* Japanese_XJIS_140_CS_AS_WS_VSS
-* Japanese_XJIS_140_CS_AS_KS_VSS
-* Japanese_XJIS_140_CS_AS_KS_WS_VSS
-* Japanese_XJIS_140_CI_AI
-* Japanese_XJIS_140_CI_AI_WS
-* Japanese_XJIS_140_CI_AI_KS
-* Japanese_XJIS_140_CI_AI_KS_WS
-* Japanese_XJIS_140_CI_AS
-* Japanese_XJIS_140_CI_AS_WS
-* Japanese_XJIS_140_CI_AS_KS
-* Japanese_XJIS_140_CI_AS_KS_WS
-* Japanese_XJIS_140_CS_AI
-* Japanese_XJIS_140_CS_AI_WS
-* Japanese_XJIS_140_CS_AI_KS
-* Japanese_XJIS_140_CS_AI_KS_WS
-* Japanese_XJIS_140_CS_AS
-* Japanese_XJIS_140_CS_AS_WS
-* Japanese_XJIS_140_CS_AS_KS
-* Japanese_XJIS_140_CS_AS_KS_WS
-* SQL_EBCDIC1141_CP1_CS_AS
-* SQL_EBCDIC277_2_CP1_CS_AS
-* UTF-8
+## Collation support
+
+The following table shows which collation types are supported by which service.
+
+| Collation Type | Serverless SQL Pool | Dedicated SQL Pool - Database & Column Level | Dedicated SQL Pool - External Table (Native Support) | Dedicated SQL Pool - External Table (Hadoop/Polybase) |
+|:--:|:--:|:--:|:--:|:--:|
+| Non-UTF-8 Collations | Yes | Yes | Yes | Yes |
+| UTF-8 | Yes | Yes | No | No |
+| Japanese_Bushu_Kakusu_140_* | Yes | Yes | No | No |
+| Japanese_XJIS_140_* | Yes | Yes | No | No |
+| SQL_EBCDIC1141_CP1_CS_AS | No | No | No | No |
+| SQL_EBCDIC277_2_CP1_CS_AS | No | No | No | No |
## Check the current collation
+
To check the current collation for the database, you can run the following T-SQL snippet:
+
```sql
SELECT DATABASEPROPERTYEX(DB_NAME(), 'Collation') AS Collation;
```
-When passed 'Collation' as the property parameter, the DatabasePropertyEx function returns the current collation for the database specified. You can learn more about the DatabasePropertyEx function on MSDN.
+
+When passed 'Collation' as the property parameter, the DatabasePropertyEx function returns the current collation for the database specified. For more information, see [DATABASEPROPERTYEX](/sql/t-sql/functions/databasepropertyex-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true).
## Next steps
Additional information on best practices for dedicated SQL pool and serverless SQL pool can be found in the following articles:
-- [Best Practices for dedicated SQL pool](./best-practices-dedicated-sql-pool.md)
+- [Best practices for dedicated SQL pool](./best-practices-dedicated-sql-pool.md)
- [Best practices for serverless SQL pool](./best-practices-serverless-sql-pool.md)
synapse-analytics Synapse Link For Sql Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/synapse-link-for-sql-known-issues.md
The following is the list of known limitations for Azure Synapse Link for SQL.
* The security configuration from the source database will **NOT** be reflected in the target dedicated SQL pool.
* Enabling Azure Synapse Link for SQL will create a new schema named `changefeed`. Don't use this schema, as it is reserved for system use.
* Source tables with collations that are unsupported by dedicated SQL pools, such as UTF-8 and certain Japanese collations, can't be replicated. Here are the [supported collations in Synapse SQL pool](../sql/reference-collation-types.md).
- * Additionally, some Thai language collations are currently supported by Azure Synapse Link for SQL. These unsupported collations include:
+ * Additionally, some Thai language collations are currently not supported by Azure Synapse Link for SQL. These unsupported collations include:
* Thai100CaseInsensitiveAccentInsensitiveKanaSensitive
* Thai100CaseInsensitiveAccentSensitiveSupplementaryCharacters
* Thai100CaseSensitiveAccentInsensitiveKanaSensitive
The following is the list of known limitations for Azure Synapse Link for SQL.
* Thai100CaseSensitiveAccentSensitiveKanaSensitive
* Thai100CaseSensitiveAccentSensitiveSupplementaryCharacters
* ThaiCaseSensitiveAccentInsensitiveWidthSensitive
+ * Currently, the collation **Latin1_General_BIN2** is not supported because of a known issue where the link can't be stopped and the underlying tables can't be removed from replication.
* Single row updates (including off-page storage) of > 370 MB are not supported.
* Currently, if the primary key column(s) of the table are not the first columns in the table, and columns to the left of primary key column(s) are deleted, replication may fail. To troubleshoot, see [Troubleshoot: Azure Synapse Link for SQL initia