Updates from: 06/11/2022 01:12:34
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Scenario Spa Acquire Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-spa-acquire-token.md
Previously updated : 04/2/2021 Last updated : 06/10/2022 #Customer intent: As an application developer, I want to know how to write a single-page application by using the Microsoft identity platform.
# Single-page application: Acquire a token to call an API
-The pattern for acquiring tokens for APIs with [MSAL.js](https://github.com/AzureAD/microsoft-authentication-library-for-js) is to first attempt a silent token request by using the `acquireTokenSilent` method. When this method is called, the library first checks the cache in browser storage to see if a valid token exists and returns it. When no valid token is in the cache, it attempts to use its refresh token to get the token. If the refresh token's 24-hour lifetime has expired, MSAL.js will open a hidden iframe to silently request a new authorization code, which it will exchange for a new, valid refresh token. For more information about single sign-on session and token lifetime values in Azure AD, see [Token lifetimes](active-directory-configurable-token-lifetimes.md).
+The pattern for acquiring tokens for APIs with [MSAL.js](https://github.com/AzureAD/microsoft-authentication-library-for-js) is to first attempt a silent token request by using the `acquireTokenSilent` method. When this method is called, the library first checks the cache in browser storage to see if a valid token exists and returns it. When no valid token is in the cache, it attempts to use its refresh token to get the token. If the refresh token's 24-hour lifetime has expired, MSAL.js will open a hidden iframe to silently request a new authorization code, which it will exchange for a new, valid refresh token. For more information about single sign-on (SSO) session and token lifetime values in Azure Active Directory (Azure AD), see [Token lifetimes](active-directory-configurable-token-lifetimes.md).
-The silent token requests to Azure AD might fail for reasons like a password change or updated conditional access policies. More often, failures are due to the refresh token's 24-hour lifetime expiring and [the browser blocking 3rd party cookies](reference-third-party-cookies-spas.md), which prevents the use of hidden iframes to continue authenticating the user. In these cases, you should invoke one of the interactive methods (which may prompt the user) to acquire tokens:
+The silent token requests to Azure AD might fail for reasons like a password change or updated conditional access policies. More often, failures are due to the refresh token's 24-hour lifetime expiring and [the browser blocking third-party cookies](reference-third-party-cookies-spas.md), which prevents the use of hidden iframes to continue authenticating the user. In these cases, you should invoke one of the interactive methods (which may prompt the user) to acquire tokens:
-* [Pop-up window](#acquire-a-token-with-a-pop-up-window), by using `acquireTokenPopup`
-* [Redirect](#acquire-a-token-with-a-redirect), by using `acquireTokenRedirect`
+- [Pop-up window](#acquire-a-token-with-a-pop-up-window), by using `acquireTokenPopup`
+- [Redirect](#acquire-a-token-with-a-redirect), by using `acquireTokenRedirect`
## Choose between a pop-up or redirect experience

The choice between a pop-up or redirect experience depends on your application flow:
-* If you don't want users to move away from your main application page during authentication, we recommend the pop-up method. Because the authentication redirect happens in a pop-up window, the state of the main application is preserved.
+- If you don't want users to move away from your main application page during authentication, we recommend the pop-up method. Because the authentication redirect happens in a pop-up window, the state of the main application is preserved.
-* If users have browser constraints or policies where pop-up windows are disabled, you can use the redirect method. Use the redirect method with the Internet Explorer browser, because there are [known issues with pop-up windows on Internet Explorer](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/internet-explorer.md#popups).
+- If users have browser constraints or policies where pop-up windows are disabled, you can use the redirect method. Use the redirect method with the Internet Explorer browser, because there are [known issues with pop-up windows on Internet Explorer](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/internet-explorer.md#popups).
-You can set the API scopes that you want the access token to include when it's building the access token request. Note that all requested scopes might not be granted in the access token. That depends on the user's consent.
+You can set the API scopes that you want the access token to include when building the access token request. Not all requested scopes might be granted in the access token; that depends on the user's consent.
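For illustration, here's a minimal sketch (not taken from the article; `mail.read` is just an assumed second scope) that requests two scopes and logs which ones were actually granted, using the `scopes` property of the MSAL.js v2 `AuthenticationResult`:

```javascript
// Hypothetical example: request two scopes, then inspect which were granted
const multiScopeRequest = {
  scopes: ["user.read", "mail.read"], // requested scopes
  account: publicClientApplication.getAllAccounts()[0],
};

publicClientApplication
  .acquireTokenSilent(multiScopeRequest)
  .then(function (result) {
    // result.scopes lists the scopes the token was actually granted,
    // which can be a subset of the requested scopes depending on consent
    console.log("Granted scopes:", result.scopes);
  })
  .catch(function (error) {
    console.log(error);
  });
```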
## Acquire a token with a pop-up window
The following code combines the previously described pattern with the methods for a pop-up experience:
const account = publicClientApplication.getAllAccounts()[0];
const accessTokenRequest = {
- scopes: ["user.read"],
- account: account
-}
+ scopes: ["user.read"],
+ account: account,
+};
-publicClientApplication.acquireTokenSilent(accessTokenRequest).then(function(accessTokenResponse) {
+publicClientApplication
+ .acquireTokenSilent(accessTokenRequest)
+ .then(function (accessTokenResponse) {
    // Acquire token silent success
    let accessToken = accessTokenResponse.accessToken;
    // Call your API with token
    callApi(accessToken);
-}).catch(function (error) {
+ })
+ .catch(function (error) {
    //Acquire token silent failure, and send an interactive request
    if (error instanceof InteractionRequiredAuthError) {
- publicClientApplication.acquireTokenPopup(accessTokenRequest).then(function(accessTokenResponse) {
- // Acquire token interactive success
- let accessToken = accessTokenResponse.accessToken;
- // Call your API with token
- callApi(accessToken);
- }).catch(function(error) {
- // Acquire token interactive failure
- console.log(error);
+ publicClientApplication
+ .acquireTokenPopup(accessTokenRequest)
+ .then(function (accessTokenResponse) {
+ // Acquire token interactive success
+ let accessToken = accessTokenResponse.accessToken;
+ // Call your API with token
+ callApi(accessToken);
+ })
+ .catch(function (error) {
+ // Acquire token interactive failure
+ console.log(error);
        });
    }
    console.log(error);
-});
+ });
```

# [JavaScript (MSAL.js v1)](#tab/javascript1)
The following code combines the previously described pattern with the methods for a pop-up experience:
```javascript
const accessTokenRequest = {
- scopes: ["user.read"]
-}
+ scopes: ["user.read"],
+};
-userAgentApplication.acquireTokenSilent(accessTokenRequest).then(function(accessTokenResponse) {
+userAgentApplication
+ .acquireTokenSilent(accessTokenRequest)
+ .then(function (accessTokenResponse) {
    // Acquire token silent success
    // Call API with token
    let accessToken = accessTokenResponse.accessToken;
-}).catch(function (error) {
+ })
+ .catch(function (error) {
    //Acquire token silent failure, and send an interactive request
    if (error.errorMessage.indexOf("interaction_required") !== -1) {
- userAgentApplication.acquireTokenPopup(accessTokenRequest).then(function(accessTokenResponse) {
- // Acquire token interactive success
- }).catch(function(error) {
- // Acquire token interactive failure
- console.log(error);
+ userAgentApplication
+ .acquireTokenPopup(accessTokenRequest)
+ .then(function (accessTokenResponse) {
+ // Acquire token interactive success
+ })
+ .catch(function (error) {
+ // Acquire token interactive failure
+ console.log(error);
        });
    }
    console.log(error);
-});
+ });
```

# [Angular (MSAL.js v2)](#tab/angular2)

The MSAL Angular wrapper provides the HTTP interceptor, which will automatically acquire access tokens silently and attach them to the HTTP requests to APIs.
-You can specify the scopes for APIs in the `protectedResourceMap` configuration option. `MsalInterceptor` will request these scopes when automatically acquiring tokens.
+You can specify the scopes for APIs in the `protectedResourceMap` configuration option. `MsalInterceptor` will request the specified scopes when automatically acquiring tokens.
```javascript
// In app.module.ts
-import { PublicClientApplication, InteractionType } from '@azure/msal-browser';
-import { MsalInterceptor, MsalModule } from '@azure/msal-angular';
+import { PublicClientApplication, InteractionType } from "@azure/msal-browser";
+import { MsalInterceptor, MsalModule } from "@azure/msal-angular";
@NgModule({
- declarations: [
- // ...
- ],
- imports: [
- // ...
- MsalModule.forRoot( new PublicClientApplication({
- auth: {
- clientId: 'Enter_the_Application_Id_Here',
- },
- cache: {
- cacheLocation: 'localStorage',
- storeAuthStateInCookie: isIE,
- }
- }), {
- interactionType: InteractionType.Popup,
- authRequest: {
- scopes: ['user.read']
- }
- }, {
- interactionType: InteractionType.Popup,
- protectedResourceMap: new Map([
- ['https://graph.microsoft.com/v1.0/me', ['user.read']]
- ])
- })
- ],
- providers: [
- {
- provide: HTTP_INTERCEPTORS,
- useClass: MsalInterceptor,
- multi: true
- }
- ],
- bootstrap: [AppComponent]
+ declarations: [
+ // ...
+ ],
+ imports: [
+ // ...
+ MsalModule.forRoot(
+ new PublicClientApplication({
+ auth: {
+ clientId: "Enter_the_Application_Id_Here",
+ },
+ cache: {
+ cacheLocation: "localStorage",
+ storeAuthStateInCookie: isIE,
+ },
+ }),
+ {
+ interactionType: InteractionType.Popup,
+ authRequest: {
+ scopes: ["user.read"],
+ },
+ },
+ {
+ interactionType: InteractionType.Popup,
+ protectedResourceMap: new Map([
+ ["https://graph.microsoft.com/v1.0/me", ["user.read"]],
+ ]),
+ }
+ ),
+ ],
+ providers: [
+ {
+ provide: HTTP_INTERCEPTORS,
+ useClass: MsalInterceptor,
+ multi: true,
+ },
+ ],
+ bootstrap: [AppComponent],
})
-export class AppModule { }
+export class AppModule {}
```

For success and failure of the silent token acquisition, MSAL Angular provides events that you can subscribe to. It's also important to remember to unsubscribe.
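As a hedged sketch (assuming `MsalBroadcastService` is injected; this isn't the article's exact snippet), a component can filter the event stream and clean up its subscription on destroy:

```javascript
// In app.component.ts
import { Component, OnDestroy, OnInit } from "@angular/core";
import { MsalBroadcastService } from "@azure/msal-angular";
import { EventMessage, EventType } from "@azure/msal-browser";
import { Subject } from "rxjs";
import { filter, takeUntil } from "rxjs/operators";

@Component({ selector: "app-root", template: "" })
export class AppComponent implements OnInit, OnDestroy {
  private readonly _destroying$ = new Subject<void>();

  constructor(private msalBroadcastService: MsalBroadcastService) {}

  ngOnInit(): void {
    this.msalBroadcastService.msalSubject$
      .pipe(
        // React only to successful token acquisitions
        filter((msg: EventMessage) => msg.eventType === EventType.ACQUIRE_TOKEN_SUCCESS),
        // Unsubscribe automatically when the component is destroyed
        takeUntil(this._destroying$)
      )
      .subscribe((msg: EventMessage) => {
        console.log(msg.payload);
      });
  }

  ngOnDestroy(): void {
    this._destroying$.next();
    this._destroying$.complete();
  }
}
```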
export class AppComponent implements OnInit {
Alternatively, you can explicitly acquire tokens by using the acquire-token methods as described in the core MSAL.js library.

# [Angular (MSAL.js v1)](#tab/angular1)

The MSAL Angular wrapper provides the HTTP interceptor, which will automatically acquire access tokens silently and attach them to the HTTP requests to APIs.
-You can specify the scopes for APIs in the `protectedResourceMap` configuration option. `MsalInterceptor` will request these scopes when automatically acquiring tokens.
+You can specify the scopes for APIs in the `protectedResourceMap` configuration option. `MsalInterceptor` will request the specified scopes when automatically acquiring tokens.
```javascript
// app.module.ts
You can specify the scopes for APIs in the `protectedResourceMap` configuration option. `MsalInterceptor` will request the specified scopes when automatically acquiring tokens.
  ],
  imports: [
    // ...
- MsalModule.forRoot({
- auth: {
- clientId: 'Enter_the_Application_Id_Here',
+ MsalModule.forRoot(
+ {
+ auth: {
+ clientId: "Enter_the_Application_Id_Here",
+ },
+ },
+ {
+ popUp: !isIE,
+ consentScopes: ["user.read", "openid", "profile"],
+ protectedResourceMap: [
+ ["https://graph.microsoft.com/v1.0/me", ["user.read"]],
+ ],
}
- },
- {
- popUp: !isIE,
- consentScopes: [
- 'user.read',
- 'openid',
- 'profile',
- ],
- protectedResourceMap: [
- ['https://graph.microsoft.com/v1.0/me', ['user.read']]
- ]
- })
+ ),
  ],
  providers: [
    {
      provide: HTTP_INTERCEPTORS,
      useClass: MsalInterceptor,
- multi: true
- }
+ multi: true,
+ },
],
- bootstrap: [AppComponent]
+ bootstrap: [AppComponent],
})
-export class AppModule { }
+export class AppModule {}
```

For success and failure of the silent token acquisition, MSAL Angular provides callbacks that you can subscribe to. It's also important to remember to unsubscribe.
Alternatively, you can explicitly acquire tokens by using the acquire-token methods as described in the core MSAL.js library.
The following code combines the previously described pattern with the methods for a pop-up experience:

```javascript
-import { InteractionRequiredAuthError, InteractionStatus } from "@azure/msal-browser";
+import {
+ InteractionRequiredAuthError,
+ InteractionStatus,
+} from "@azure/msal-browser";
import { AuthenticatedTemplate, useMsal } from "@azure/msal-react";

function ProtectedComponent() {
- const { instance, inProgress, accounts } = useMsal();
- const [apiData, setApiData] = useState(null);
--
- useEffect(() => {
- if (!apiData && inProgress === InteractionStatus.None) {
- const accessTokenRequest = {
- scopes: ["user.read"],
- account: accounts[0]
- }
- instance.acquireTokenSilent(accessTokenRequest).then((accessTokenResponse) => {
- // Acquire token silent success
+ const { instance, inProgress, accounts } = useMsal();
+ const [apiData, setApiData] = useState(null);
+
+ useEffect(() => {
+ if (!apiData && inProgress === InteractionStatus.None) {
+ const accessTokenRequest = {
+ scopes: ["user.read"],
+ account: accounts[0],
+ };
+ instance
+ .acquireTokenSilent(accessTokenRequest)
+ .then((accessTokenResponse) => {
+ // Acquire token silent success
+ let accessToken = accessTokenResponse.accessToken;
+ // Call your API with token
+ callApi(accessToken).then((response) => {
+ setApiData(response);
+ });
+ })
+ .catch((error) => {
+ if (error instanceof InteractionRequiredAuthError) {
+ instance
+ .acquireTokenPopup(accessTokenRequest)
+ .then(function (accessTokenResponse) {
+ // Acquire token interactive success
                let accessToken = accessTokenResponse.accessToken;
                // Call your API with token
- callApi(accessToken).then((response) => { setApiData(response) });
- }).catch((error) => {
- if (error instanceof InteractionRequiredAuthError) {
- instance.acquireTokenPopup(accessTokenRequest).then(function(accessTokenResponse) {
- // Acquire token interactive success
- let accessToken = accessTokenResponse.accessToken;
- // Call your API with token
- callApi(accessToken).then((response) => { setApiData(response) });
- }).catch(function(error) {
- // Acquire token interactive failure
- console.log(error);
- });
- }
+ callApi(accessToken).then((response) => {
+ setApiData(response);
+ });
+ })
+ .catch(function (error) {
+ // Acquire token interactive failure
console.log(error);
- })
- }
- }, [instance, accounts, inProgress, apiData]);
+ });
+ }
+ console.log(error);
+ });
+ }
+ }, [instance, accounts, inProgress, apiData]);
- return <p>Return your protected content here: {apiData}</p>
+ return <p>Return your protected content here: {apiData}</p>;
}

function App() {
- return (
- <AuthenticatedTemplate>
- <ProtectedComponent />
- </ AuthenticatedTemplate>
- )
+ return (
+ <AuthenticatedTemplate>
+ <ProtectedComponent />
+ </AuthenticatedTemplate>
+ );
}
```
-Alternatively, if you need to acquire a token outside of a React component you can call `acquireTokenSilent` but should not fallback to interaction if it fails. All interaction should take place underneath the `MsalProvider` component in your component tree.
+Alternatively, if you need to acquire a token outside of a React component, you can call `acquireTokenSilent` but shouldn't fall back to interaction if it fails. All interactions should take place underneath the `MsalProvider` component in your component tree.
```javascript
// MSAL.js v2 exposes several account APIs, logic to determine which account to use is the responsibility of the developer
const account = publicClientApplication.getAllAccounts()[0];
const accessTokenRequest = {
- scopes: ["user.read"],
- account: account
-}
+ scopes: ["user.read"],
+ account: account,
+};
// Use the same publicClientApplication instance provided to MsalProvider
-publicClientApplication.acquireTokenSilent(accessTokenRequest).then(function(accessTokenResponse) {
+publicClientApplication
+ .acquireTokenSilent(accessTokenRequest)
+ .then(function (accessTokenResponse) {
    // Acquire token silent success
    let accessToken = accessTokenResponse.accessToken;
    // Call your API with token
    callApi(accessToken);
-}).catch(function (error) {
+ })
+ .catch(function (error) {
    //Acquire token silent failure
    console.log(error);
-});
+ });
```
The following pattern is as described earlier but shown with a redirect method to acquire tokens:
```javascript
const redirectResponse = await publicClientApplication.handleRedirectPromise();
if (redirectResponse !== null) {
- // Acquire token silent success
- let accessToken = redirectResponse.accessToken;
- // Call your API with token
- callApi(accessToken);
+ // Acquire token silent success
+ let accessToken = redirectResponse.accessToken;
+ // Call your API with token
+ callApi(accessToken);
} else {
- // MSAL.js v2 exposes several account APIs, logic to determine which account to use is the responsibility of the developer
- const account = publicClientApplication.getAllAccounts()[0];
-
- const accessTokenRequest = {
- scopes: ["user.read"],
- account: account
- }
-
- publicClientApplication.acquireTokenSilent(accessTokenRequest).then(function(accessTokenResponse) {
- // Acquire token silent success
- // Call API with token
- let accessToken = accessTokenResponse.accessToken;
- // Call your API with token
- callApi(accessToken);
- }).catch(function (error) {
- //Acquire token silent failure, and send an interactive request
- console.log(error);
- if (error instanceof InteractionRequiredAuthError) {
- publicClientApplication.acquireTokenRedirect(accessTokenRequest);
- }
+ // MSAL.js v2 exposes several account APIs, logic to determine which account to use is the responsibility of the developer
+ const account = publicClientApplication.getAllAccounts()[0];
+
+ const accessTokenRequest = {
+ scopes: ["user.read"],
+ account: account,
+ };
+
+ publicClientApplication
+ .acquireTokenSilent(accessTokenRequest)
+ .then(function (accessTokenResponse) {
+ // Acquire token silent success
+ // Call API with token
+ let accessToken = accessTokenResponse.accessToken;
+ // Call your API with token
+ callApi(accessToken);
+ })
+ .catch(function (error) {
+ //Acquire token silent failure, and send an interactive request
+ console.log(error);
+ if (error instanceof InteractionRequiredAuthError) {
+ publicClientApplication.acquireTokenRedirect(accessTokenRequest);
+ }
    });
}
```
The following pattern is as described earlier but shown with a redirect method to acquire tokens:
```javascript
function authCallback(error, response) {
- // Handle redirect response
+ // Handle redirect response
}

userAgentApplication.handleRedirectCallback(authCallback);

const accessTokenRequest: AuthenticationParameters = {
- scopes: ["user.read"]
-}
+ scopes: ["user.read"],
+};
-userAgentApplication.acquireTokenSilent(accessTokenRequest).then(function(accessTokenResponse) {
+userAgentApplication
+ .acquireTokenSilent(accessTokenRequest)
+ .then(function (accessTokenResponse) {
    // Acquire token silent success
    // Call API with token
    let accessToken = accessTokenResponse.accessToken;
-}).catch(function (error) {
+ })
+ .catch(function (error) {
    //Acquire token silent failure, and send an interactive request
    console.log(error);
    if (error.errorMessage.indexOf("interaction_required") !== -1) {
- userAgentApplication.acquireTokenRedirect(accessTokenRequest);
+ userAgentApplication.acquireTokenRedirect(accessTokenRequest);
}
-});
+ });
```

## Request optional claims

You can use optional claims for the following purposes:

-- Include additional claims in tokens for your application.
+- Include extra claims in tokens for your application.
- Change the behavior of certain claims that Azure AD returns in tokens.
- Add and access custom claims for your application.
To request optional claims in `IdToken`, you can send a stringified claims object to the `claimsRequest` field of the request, as shown in the following example:
```javascript
var claims = {
- optionalClaims:
- {
- idToken: [
- {
- name: "auth_time",
- essential: true
- }
- ],
- }
+ optionalClaims: {
+ idToken: [
+ {
+ name: "auth_time",
+ essential: true,
+ },
+ ],
+ },
};
-
+ var request = {
- scopes: ["user.read"],
- claimsRequest: JSON.stringify(claims)
+ scopes: ["user.read"],
+ claimsRequest: JSON.stringify(claims),
};

myMSALObj.acquireTokenPopup(request);
This code is the same as described earlier, except we recommend bootstrapping the application with the `MsalRedirectComponent` to handle redirects:
```javascript
// In app.module.ts
-import { PublicClientApplication, InteractionType } from '@azure/msal-browser';
-import { MsalInterceptor, MsalModule, MsalRedirectComponent } from '@azure/msal-angular';
+import { PublicClientApplication, InteractionType } from "@azure/msal-browser";
+import {
+ MsalInterceptor,
+ MsalModule,
+ MsalRedirectComponent,
+} from "@azure/msal-angular";
@NgModule({
- declarations: [
- // ...
- ],
- imports: [
- // ...
- MsalModule.forRoot( new PublicClientApplication({
- auth: {
- clientId: 'Enter_the_Application_Id_Here',
- },
- cache: {
- cacheLocation: 'localStorage',
- storeAuthStateInCookie: isIE,
- }
- }), {
- interactionType: InteractionType.Redirect,
- authRequest: {
- scopes: ['user.read']
- }
- }, {
- interactionType: InteractionType.Redirect,
- protectedResourceMap: new Map([
- ['https://graph.microsoft.com/v1.0/me', ['user.read']]
- ])
- })
- ],
- providers: [
- {
- provide: HTTP_INTERCEPTORS,
- useClass: MsalInterceptor,
- multi: true
- }
- ],
- bootstrap: [AppComponent, MsalRedirectComponent]
+ declarations: [
+ // ...
+ ],
+ imports: [
+ // ...
+ MsalModule.forRoot(
+ new PublicClientApplication({
+ auth: {
+ clientId: "Enter_the_Application_Id_Here",
+ },
+ cache: {
+ cacheLocation: "localStorage",
+ storeAuthStateInCookie: isIE,
+ },
+ }),
+ {
+ interactionType: InteractionType.Redirect,
+ authRequest: {
+ scopes: ["user.read"],
+ },
+ },
+ {
+ interactionType: InteractionType.Redirect,
+ protectedResourceMap: new Map([
+ ["https://graph.microsoft.com/v1.0/me", ["user.read"]],
+ ]),
+ }
+ ),
+ ],
+ providers: [
+ {
+ provide: HTTP_INTERCEPTORS,
+ useClass: MsalInterceptor,
+ multi: true,
+ },
+ ],
+ bootstrap: [AppComponent, MsalRedirectComponent],
})
-export class AppModule { }
+export class AppModule {}
```

# [Angular (MSAL.js v1)](#tab/angular1)

This code is the same as described earlier.

# [React](#tab/react)
This code is the same as described earlier.
If `acquireTokenSilent` fails, fall back to `acquireTokenRedirect`. This method will initiate a full-frame redirect and the response will be handled when returning to the application. When this component is rendered after returning from the redirect, `acquireTokenSilent` should now succeed as the tokens will be pulled from the cache.

```javascript
-import { InteractionRequiredAuthError, InteractionStatus } from "@azure/msal-browser";
+import {
+ InteractionRequiredAuthError,
+ InteractionStatus,
+} from "@azure/msal-browser";
import { AuthenticatedTemplate, useMsal } from "@azure/msal-react";

function ProtectedComponent() {
- const { instance, inProgress, accounts } = useMsal();
- const [apiData, setApiData] = useState(null);
--
- useEffect(() => {
- const accessTokenRequest = {
- scopes: ["user.read"],
- account: accounts[0]
- }
- if (!apiData && inProgress === InteractionStatus.None) {
- instance.acquireTokenSilent(accessTokenRequest).then((accessTokenResponse) => {
- // Acquire token silent success
- let accessToken = accessTokenResponse.accessToken;
- // Call your API with token
- callApi(accessToken).then((response) => { setApiData(response) });
- }).catch((error) => {
- if (error instanceof InteractionRequiredAuthError) {
- instance.acquireTokenRedirect(accessTokenRequest);
- }
- console.log(error);
- })
- }
- }, [instance, accounts, inProgress, apiData]);
+ const { instance, inProgress, accounts } = useMsal();
+ const [apiData, setApiData] = useState(null);
- return <p>Return your protected content here: {apiData}</p>
+ useEffect(() => {
+ const accessTokenRequest = {
+ scopes: ["user.read"],
+ account: accounts[0],
+ };
+ if (!apiData && inProgress === InteractionStatus.None) {
+ instance
+ .acquireTokenSilent(accessTokenRequest)
+ .then((accessTokenResponse) => {
+ // Acquire token silent success
+ let accessToken = accessTokenResponse.accessToken;
+ // Call your API with token
+ callApi(accessToken).then((response) => {
+ setApiData(response);
+ });
+ })
+ .catch((error) => {
+ if (error instanceof InteractionRequiredAuthError) {
+ instance.acquireTokenRedirect(accessTokenRequest);
+ }
+ console.log(error);
+ });
+ }
+ }, [instance, accounts, inProgress, apiData]);
+
+ return <p>Return your protected content here: {apiData}</p>;
}

function App() {
- return (
- <AuthenticatedTemplate>
- <ProtectedComponent />
- </ AuthenticatedTemplate>
- )
+ return (
+ <AuthenticatedTemplate>
+ <ProtectedComponent />
+ </AuthenticatedTemplate>
+ );
}
```
-Alternatively, if you need to acquire a token outside of a React component you can call `acquireTokenSilent` but should not fallback to interaction if it fails. All interaction should take place underneath the `MsalProvider` component in your component tree.
+Alternatively, if you need to acquire a token outside of a React component, you can call `acquireTokenSilent` but shouldn't fall back to interaction if it fails. All interactions should take place underneath the `MsalProvider` component in your component tree.
```javascript
// MSAL.js v2 exposes several account APIs, logic to determine which account to use is the responsibility of the developer
const account = publicClientApplication.getAllAccounts()[0];
const accessTokenRequest = {
- scopes: ["user.read"],
- account: account
-}
+ scopes: ["user.read"],
+ account: account,
+};
// Use the same publicClientApplication instance provided to MsalProvider
-publicClientApplication.acquireTokenSilent(accessTokenRequest).then(function(accessTokenResponse) {
+publicClientApplication
+ .acquireTokenSilent(accessTokenRequest)
+ .then(function (accessTokenResponse) {
    // Acquire token silent success
    let accessToken = accessTokenResponse.accessToken;
    // Call your API with token
    callApi(accessToken);
-}).catch(function (error) {
+ })
+ .catch(function (error) {
    //Acquire token silent failure
    console.log(error);
-});
+ });
```
active-directory Groups Dynamic Membership https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-dynamic-membership.md
Previously updated : 06/03/2022 Last updated : 06/08/2022
A membership rule that automatically populates a group with users or devices is
- Operator
- Value
-The order of the parts within an expression are important to avoid syntax errors.
+The order of the parts within an expression is important to avoid syntax errors.
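For example, in the following expression (an assumed example shown for illustration), `user.department` is the property, `-eq` is the operator, and `"Sales"` is the value:

```
user.department -eq "Sales"
```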
## Supported properties
dirSyncEnabled |true false |user.dirSyncEnabled -eq true
| Properties | Allowed values | Usage |
| --- | --- | --- |
-| city |Any string value or *null* |(user.city -eq "value") |
-| country |Any string value or *null* |(user.country -eq "value") |
-| companyName | Any string value or *null* | (user.companyName -eq "value") |
-| department |Any string value or *null* |(user.department -eq "value") |
-| displayName |Any string value |(user.displayName -eq "value") |
-| employeeId |Any string value |(user.employeeId -eq "value")<br>(user.employeeId -ne *null*) |
-| facsimileTelephoneNumber |Any string value or *null* |(user.facsimileTelephoneNumber -eq "value") |
-| givenName |Any string value or *null* |(user.givenName -eq "value") |
-| jobTitle |Any string value or *null* |(user.jobTitle -eq "value") |
-| mail |Any string value or *null* (SMTP address of the user) |(user.mail -eq "value") |
-| mailNickName |Any string value (mail alias of the user) |(user.mailNickName -eq "value") |
-| memberOf | Any string value (valid group object ID) | (device.memberof -any (group.objectId -in ['value'])) |
-| mobile |Any string value or *null* |(user.mobile -eq "value") |
-| objectId |GUID of the user object |(user.objectId -eq "11111111-1111-1111-1111-111111111111") |
-| onPremisesDistinguishedName (preview)| Any string value or *null* |(user.onPremisesDistinguishedName -eq "value") |
-| onPremisesSecurityIdentifier | On-premises security identifier (SID) for users who were synchronized from on-premises to the cloud. |(user.onPremisesSecurityIdentifier -eq "S-1-1-11-1111111111-1111111111-1111111111-1111111") |
-| passwordPolicies |None DisableStrongPassword DisablePasswordExpiration DisablePasswordExpiration, DisableStrongPassword |(user.passwordPolicies -eq "DisableStrongPassword") |
-| physicalDeliveryOfficeName |Any string value or *null* |(user.physicalDeliveryOfficeName -eq "value") |
-| postalCode |Any string value or *null* |(user.postalCode -eq "value") |
-| preferredLanguage |ISO 639-1 code |(user.preferredLanguage -eq "en-US") |
-| sipProxyAddress |Any string value or *null* |(user.sipProxyAddress -eq "value") |
-| state |Any string value or *null* |(user.state -eq "value") |
-| streetAddress |Any string value or *null* |(user.streetAddress -eq "value") |
-| surname |Any string value or *null* |(user.surname -eq "value") |
-| telephoneNumber |Any string value or *null* |(user.telephoneNumber -eq "value") |
-| usageLocation |Two lettered country/region code |(user.usageLocation -eq "US") |
-| userPrincipalName |Any string value |(user.userPrincipalName -eq "alias@domain") |
-| userType |member guest *null* |(user.userType -eq "Member") |
+| city |Any string value or *null* | user.city -eq "value" |
+| country |Any string value or *null* | user.country -eq "value" |
+| companyName | Any string value or *null* | user.companyName -eq "value" |
+| department |Any string value or *null* | user.department -eq "value" |
+| displayName |Any string value | user.displayName -eq "value" |
+| employeeId |Any string value | user.employeeId -eq "value"<br>user.employeeId -ne *null* |
+| facsimileTelephoneNumber |Any string value or *null* | user.facsimileTelephoneNumber -eq "value" |
+| givenName |Any string value or *null* | user.givenName -eq "value" |
+| jobTitle |Any string value or *null* | user.jobTitle -eq "value" |
+| mail |Any string value or *null* (SMTP address of the user) | user.mail -eq "value" |
+| mailNickName |Any string value (mail alias of the user) | user.mailNickName -eq "value" |
+| memberOf | Any string value (valid group object ID) | user.memberof -any (group.objectId -in ['value']) |
+| mobile |Any string value or *null* | user.mobile -eq "value" |
+| objectId |GUID of the user object | user.objectId -eq "11111111-1111-1111-1111-111111111111" |
+| onPremisesDistinguishedName (preview)| Any string value or *null* | user.onPremisesDistinguishedName -eq "value" |
+| onPremisesSecurityIdentifier | On-premises security identifier (SID) for users who were synchronized from on-premises to the cloud. | user.onPremisesSecurityIdentifier -eq "S-1-1-11-1111111111-1111111111-1111111111-1111111" |
+| passwordPolicies |None<br>DisableStrongPassword<br>DisablePasswordExpiration<br>DisablePasswordExpiration, DisableStrongPassword | user.passwordPolicies -eq "DisableStrongPassword" |
+| physicalDeliveryOfficeName |Any string value or *null* | user.physicalDeliveryOfficeName -eq "value" |
+| postalCode |Any string value or *null* | user.postalCode -eq "value" |
+| preferredLanguage |ISO 639-1 code | user.preferredLanguage -eq "en-US" |
+| sipProxyAddress |Any string value or *null* | user.sipProxyAddress -eq "value" |
+| state |Any string value or *null* | user.state -eq "value" |
+| streetAddress |Any string value or *null* | user.streetAddress -eq "value" |
+| surname |Any string value or *null* | user.surname -eq "value" |
+| telephoneNumber |Any string value or *null* | user.telephoneNumber -eq "value" |
+| usageLocation |Two lettered country/region code | user.usageLocation -eq "US" |
+| userPrincipalName |Any string value | user.userPrincipalName -eq "alias@domain" |
+| userType |member<br>guest<br>*null* | user.userType -eq "Member" |
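As an illustration (an assumed rule, not from the article), several of the properties above can be combined with logical operators into a single membership rule:

```
(user.department -eq "Sales") -and (user.country -eq "US")
```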
### Properties of type string collection
-| Properties | Allowed values | Usage |
+| Properties | Allowed values | Example |
| | | |
-| otherMails |Any string value |(user.otherMails -contains "alias@domain") |
-| proxyAddresses |SMTP: alias@domain smtp: alias@domain |(user.proxyAddresses -contains "SMTP: alias@domain") |
+| otherMails |Any string value | user.otherMails -contains "alias@domain" |
+| proxyAddresses |SMTP: alias@domain<br>smtp: alias@domain | user.proxyAddresses -contains "SMTP: alias@domain" |
For the properties used for device rules, see [Rules for devices](#rules-for-devices).
The **-match** operator is used for matching any regular expression. Examples:
```
user.displayName -match "Da.*"
```
-Da, Dav, David evaluate to true, aDa evaluates to false.
+`Da`, `Dav`, and `David` evaluate to true; `aDa` evaluates to false.
```
user.displayName -match ".*vid"
```
-David evaluates to true, Da evaluates to false.
+`David` evaluates to true; `Da` evaluates to false.
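Patterns can also be combined; for instance, this assumed rule (not from the article) matches display names that both start with "Da" and end with "vid":

```
(user.displayName -match "Da.*") -and (user.displayName -match ".*vid")
```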
## Supported values
user.assignedPlans -any (assignedPlan.service -eq "SCO" -and assignedPlan.capabilityStatus -eq "Enabled")
#### Example 3
-The following expression selects all users who have no asigned service plan:
+The following expression selects all users who have no assigned service plan:
```
user.assignedPlans -all (assignedPlan.servicePlanId -eq "")
The following device attributes can be used.
Device attribute | Values | Example
-- | -- | --
- accountEnabled | true false | (device.accountEnabled -eq true)
- displayName | any string value |(device.displayName -eq "Rob iPhone")
- deviceOSType | any string value | (device.deviceOSType -eq "iPad") -or (device.deviceOSType -eq "iPhone")<br>(device.deviceOSType -contains "AndroidEnterprise")<br>(device.deviceOSType -eq "AndroidForWork")<br>(device.deviceOSType -eq "Windows")
- deviceOSVersion | any string value | (device.deviceOSVersion -eq "9.1")<br>(device.deviceOSVersion -startsWith "10.0.1")
- deviceCategory | a valid device category name | (device.deviceCategory -eq "BYOD")
- deviceManufacturer | any string value | (device.deviceManufacturer -eq "Samsung")
- deviceModel | any string value | (device.deviceModel -eq "iPad Air")
- deviceOwnership | Personal, Company, Unknown | (device.deviceOwnership -eq "Company")
- enrollmentProfileName | Apple Device Enrollment Profile name, Android Enterprise Corporate-owned dedicated device Enrollment Profile name, or Windows Autopilot profile name | (device.enrollmentProfileName -eq "DEP iPhones")
- isRooted | true false | (device.isRooted -eq true)
- managementType | MDM (for mobile devices) | (device.managementType -eq "MDM")
- memberOf | Any string value (valid group object ID) | (user.memberof -any (group.objectId -in ['value']))
- deviceId | a valid Azure AD device ID | (device.deviceId -eq "d4fe7726-5966-431c-b3b8-cddc8fdb717d")
- objectId | a valid Azure AD object ID | (device.objectId -eq "76ad43c9-32c5-45e8-a272-7b58b58f596d")
- devicePhysicalIds | any string value used by Autopilot, such as all Autopilot devices, OrderID, or PurchaseOrderID | (device.devicePhysicalIDs -any _ -contains "[ZTDId]") (device.devicePhysicalIds -any _ -eq "[OrderID]:179887111881") (device.devicePhysicalIds -any _ -eq "[PurchaseOrderId]:76222342342")
- systemLabels | any string matching the Intune device property for tagging Modern Workplace devices | (device.systemLabels -contains "M365Managed")
+ accountEnabled | true false | device.accountEnabled -eq true
+ displayName | any string value | device.displayName -eq "Rob iPhone"
+ deviceOSType | any string value | (device.deviceOSType -eq "iPad") -or (device.deviceOSType -eq "iPhone")<br>device.deviceOSType -contains "AndroidEnterprise"<br>device.deviceOSType -eq "AndroidForWork"<br>device.deviceOSType -eq "Windows"
+ deviceOSVersion | any string value | device.deviceOSVersion -eq "9.1"<br>device.deviceOSVersion -startsWith "10.0.1"
+ deviceCategory | a valid device category name | device.deviceCategory -eq "BYOD"
+ deviceManufacturer | any string value | device.deviceManufacturer -eq "Samsung"
+ deviceModel | any string value | device.deviceModel -eq "iPad Air"
+ deviceOwnership | Personal, Company, Unknown | device.deviceOwnership -eq "Company"
+ enrollmentProfileName | Apple Device Enrollment Profile name, Android Enterprise Corporate-owned dedicated device Enrollment Profile name, or Windows Autopilot profile name | device.enrollmentProfileName -eq "DEP iPhones"
+ isRooted | true false | device.isRooted -eq true
+ managementType | MDM (for mobile devices) | device.managementType -eq "MDM"
+ memberOf | Any string value (valid group object ID) | device.memberof -any (group.objectId -in ['value'])
+ deviceId | a valid Azure AD device ID | device.deviceId -eq "d4fe7726-5966-431c-b3b8-cddc8fdb717d"
+ objectId | a valid Azure AD object ID | device.objectId -eq "76ad43c9-32c5-45e8-a272-7b58b58f596d"
+ devicePhysicalIds | any string value used by Autopilot, such as all Autopilot devices, OrderID, or PurchaseOrderID | device.devicePhysicalIDs -any _ -contains "[ZTDId]"<br>device.devicePhysicalIds -any _ -eq "[OrderID]:179887111881"<br>device.devicePhysicalIds -any _ -eq "[PurchaseOrderId]:76222342342"
+ systemLabels | any string matching the Intune device property for tagging Modern Workplace devices | device.systemLabels -contains "M365Managed"
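As with user rules, device attributes can be combined; for example, this assumed rule (not from the article) targets company-owned Windows devices:

```
(device.deviceOSType -eq "Windows") -and (device.deviceOwnership -eq "Company")
```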
> [!NOTE]
> For deviceOwnership, when creating dynamic groups for devices, you need to set the value equal to "Company". On Intune, the device ownership is represented instead as Corporate. For more details, refer to [OwnerTypes](/intune/reports-ref-devices#ownertypes).
active-directory Groups Dynamic Rule Member Of https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-dynamic-rule-member-of.md
This feature can be used in the Azure AD portal, Microsoft Graph, and in PowerShell.
1. Example device rule: `device.memberof -any (group.objectId -in ['groupId', 'groupId'])`
1. Select **OK**.
1. Select **Create group**.
-
-## Next steps
-
-To report an issue, contact us in the [Teams channel](https://teams.microsoft.com/l/channel/19%3a39Q7HFuexXXE3Vh90woJRNQQBbZl1YyesJHIEquuQCw1%40thread.tacv2/General?groupId=bfd3bfb8-e0db-4e9e-9008-5d7ba8c996b0&tenantId=72f988bf-86f1-41af-91ab-2d7cd011db47).
active-directory Cross Cloud Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/cross-cloud-settings.md
To set up B2B collaboration between partner organizations in different Microsoft
After each organization has completed these steps, Azure AD B2B collaboration between the organizations is enabled.
+> [!NOTE]
+> B2B direct connect is not supported for collaboration with Azure AD tenants in a different Microsoft cloud.
+
## Before you begin

- **Obtain the partner's tenant ID.** To enable B2B collaboration with a partner's Azure AD organization in another Microsoft Azure cloud, you'll need the partner's tenant ID. Using an organization's domain name for lookup isn't available in cross-cloud scenarios.
active-directory Cross Tenant Access Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/cross-tenant-access-overview.md
To set up B2B collaboration, both organizations configure their Microsoft cloud
- Use B2B collaboration to invite a user in the partner tenant to access resources in your organization, including web line-of-business apps, SaaS apps, and SharePoint Online sites, documents, and files.
- Apply Conditional Access policies to the B2B collaboration user and opt to trust device claims (compliant claims and hybrid Azure AD joined claims) from the user's home tenant.
+> [!NOTE]
+> B2B direct connect is not supported for collaboration with Azure AD tenants in a different Microsoft cloud.
+
For configuration steps, see [Configure Microsoft cloud settings for B2B collaboration (Preview)](cross-cloud-settings.md).

> [!NOTE]
active-directory Identity Secure Score https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/identity-secure-score.md
Previously updated : 06/02/2021 Last updated : 06/09/2022
The secure score helps you to:
### Who can use the identity secure score?
-The identity secure score can be used by the following roles:
+To access identity secure score, you must be assigned one of the following roles in Azure Active Directory.
-- Global admin
-- Security admin
-- Security readers
+#### Read and write roles
+
+With read and write access, you can make changes and directly interact with identity secure score.
+
+* Global administrator
+* Security administrator
+* Exchange administrator
+* SharePoint administrator
+
+#### Read-only roles
+
+With read-only access, you aren't able to edit status for an improvement action.
+
+* Helpdesk administrator
+* User administrator
+* Service support administrator
+* Security reader
+* Security operator
+* Global reader
### How are controls scored?
active-directory How To Connect Install Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-install-prerequisites.md
For more information on setting the PowerShell execution policy, see [Set-Execut
### Azure AD Connect server

The Azure AD Connect server contains critical identity data. It's important that administrative access to this server is properly secured. Follow the guidelines in [Securing privileged access](/windows-server/identity/securing-privileged-access/securing-privileged-access).
-The Azure AD Connect server must be treated as a Tier 0 component as documented in the [Active Directory administrative tier model](/windows-server/identity/securing-privileged-access/securing-privileged-access-reference-material). We recommend hardening the Azure AD Connect server as a Control Plane asset by following the guidance provided in [Secure Privileged Access]( https://docs.microsoft.com/security/compass/overview)
+The Azure AD Connect server must be treated as a Tier 0 component as documented in the [Active Directory administrative tier model](/windows-server/identity/securing-privileged-access/securing-privileged-access-reference-material). We recommend hardening the Azure AD Connect server as a Control Plane asset by following the guidance provided in [Secure Privileged Access](/security/compass/overview).
To read more about securing your Active Directory environment, see [Best practices for securing Active Directory](/windows-server/identity/ad-ds/plan/security-best-practices/best-practices-for-securing-active-directory).
To read more about securing your Active Directory environment, see [Best practices for securing Active Directory](/windows-server/identity/ad-ds/plan/security-best-practices/best-practices-for-securing-active-directory).
### Harden your Azure AD Connect server

We recommend that you harden your Azure AD Connect server to decrease the security attack surface for this critical component of your IT environment. Following these recommendations will help to mitigate some security risks to your organization.

-- We recommend hardening the Azure AD Connect server as a Control Plane (formerly Tier 0) asset by following the guidance provided in [Secure Privileged Access]( https://docs.microsoft.com/security/compass/overview) and [Active Directory administrative tier model](/windows-server/identity/securing-privileged-access/securing-privileged-access-reference-material).
+- We recommend hardening the Azure AD Connect server as a Control Plane (formerly Tier 0) asset by following the guidance provided in [Secure Privileged Access](/security/compass/overview) and [Active Directory administrative tier model](/windows-server/identity/securing-privileged-access/securing-privileged-access-reference-material).
- Restrict administrative access to the Azure AD Connect server to only domain administrators or other tightly controlled security groups.
- Create a [dedicated account for all personnel with privileged access](/windows-server/identity/securing-privileged-access/securing-privileged-access). Administrators shouldn't be browsing the web, checking their email, and doing day-to-day productivity tasks with highly privileged accounts.
- Follow the guidance provided in [Securing privileged access](/windows-server/identity/securing-privileged-access/securing-privileged-access).
active-directory Admin Consent Workflow Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/admin-consent-workflow-overview.md
Previously updated : 03/30/2022 Last updated : 06/10/2022
active-directory Configure Admin Consent Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-admin-consent-workflow.md
Previously updated : 05/27/2022 Last updated : 06/10/2022
active-directory Secure Hybrid Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/secure-hybrid-access.md
The following partners offer pre-built solutions to support **conditional access
The following partners offer pre-built solutions and detailed guidance for integrating with Azure AD.
+- [AWS](../saas-apps/aws-clientvpn-tutorial.md)
+
+- [Check Point](../saas-apps/check-point-remote-access-vpn-tutorial.md)
+
- [Cisco AnyConnect](../saas-apps/cisco-anyconnect.md)

- [Fortinet](../saas-apps/fortigate-ssl-vpn-tutorial.md)
active-directory Howto Use Azure Monitor Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-use-azure-monitor-workbooks.md
Previously updated : 01/10/2022 Last updated : 06/10/2022

# How to use Azure Monitor workbooks for Azure Active Directory reports
-> [!IMPORTANT]
-> In order to optimize the underlying queries in this workbook, please click on "Edit", click on Settings icon and select the workspace where you want to run these queries. Workbooks by default will select all workspaces where you are routing your Azure AD logs.
+As an IT admin, you need powerful tools to turn the data about your Azure AD tenant into a visual representation that enables you to understand how your identity management environment is doing. Azure Monitor workbooks are an example of such a tool.
-Do you want to:
+This article gives you an overview of how you can use Azure Monitor workbooks for Azure Active Directory reports to analyze your Azure AD tenant.
-- Understand the effect of your [Conditional Access policies](../conditional-access/overview.md) on your users' sign-in experience?
-- Troubleshoot sign-in failures to get a better view of your organization's sign-in health and to resolve issues quickly?
+## What it is
-- Understand risky users and risk detections trends in your tenant?
+Azure AD tracks all activities in your Azure AD in the activity logs. The data in your Azure AD logs enables you to assess how your Azure AD is doing. The Azure Active Directory portal gives you access to three activity logs:
-- Know who's using legacy authentications to sign in to your environment? (By [blocking legacy authentication](../conditional-access/block-legacy-authentication.md), you can improve your tenant's protection.)
+- **[Sign-ins](concept-sign-ins.md)** - Information about sign-ins and how your resources are used by your users.
+- **[Audit](concept-audit-logs.md)** - Information about changes applied to your tenant such as users and group management or updates applied to your tenant's resources.
+- **[Provisioning](concept-provisioning-logs.md)** - Activities performed by the provisioning service, such as the creation of a group in ServiceNow or a user imported from Workday.
-- Do you need to understand the impact of Conditional Access policies in your tenant?
-- Would you like the ability to review: sign-in log queries, with a workbook
-that reports how many users were granted or denied access, as well as how many users bypassed
-Conditional Access policies when accessing resources?
+Using the access capabilities provided by the Azure portal, you can review the information that is tracked in your activity logs. This option is helpful if you need to do a quick investigation of an event with a limited scope. For example, a user had trouble signing in during a period of a few hours. In this scenario, reviewing the recent records of this user in the sign-in logs can help to shed light on this issue.
-- Interested in developing a deeper understanding of conditional access, with a workbook details per
-condition so that the impact of a policy can be contextualized per condition,
-including device platform, device state, client app, sign-in risk, location, and application?
+For one-off investigations with a limited scope, the Azure portal is often the easiest way to find the data you need. However, there are also business problems requiring a more complex analysis of the data in your activity logs. This is, for example, true if you're watching for trends in signals of interest. One common example for a scenario that requires a trend analysis is related to blocking legacy authentication in your Azure AD tenant.
-- Archive and report on more than one year of historical application role and [access package assignment activity](../governance/entitlement-management-logs-and-reporting.md)?
+Azure AD supports several of the most widely used authentication and authorization protocols including legacy authentication. Legacy authentication refers to basic authentication, a widely used industry-standard method for collecting user name and password information. Examples of applications that commonly or only use legacy authentication are:
-To help you to address these questions, Azure Active Directory provides workbooks for monitoring. [Azure Monitor workbooks](../../azure-monitor/visualize/workbooks-overview.md) combine text, analytics queries, metrics, and parameters into rich interactive reports.
+- Microsoft Office 2013 or older.
+- Apps using mail protocols like POP, IMAP, and SMTP AUTH.
+Typically, legacy authentication clients can't enforce any type of second factor authentication. However, multi-factor authentication (MFA) is a common requirement in many environments to provide a high level of protection.
-This article:
+How can you determine whether it is safe to block legacy authentication in an environment? Answering this question requires an analysis of the sign-ins in your environment for a certain timeframe. This is a scenario where Azure Monitor workbooks can help you.
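For example, a workbook visualization can be driven by a log query along these lines (a hedged sketch, not the built-in workbook's query; it assumes your sign-in logs are routed to a Log Analytics workspace and that the listed `ClientAppUsed` values cover the legacy protocols you care about):

```kusto
// Count sign-ins per legacy protocol over the last 30 days
SigninLogs
| where TimeGenerated > ago(30d)
| where ClientAppUsed in ("Exchange ActiveSync", "IMAP4", "POP3", "SMTP", "Other clients")
| summarize SignInCount = count() by ClientAppUsed
| order by SignInCount desc
```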
-- Assumes you're familiar with how to [Create interactive reports by using Monitor workbooks](../../azure-monitor/visualize/workbooks-overview.md).
+Workbooks provide a flexible canvas for data analysis and the creation of rich visual reports within the Azure portal. They allow you to tap into multiple data sources from across Azure, and combine them into unified interactive experiences.
-- Explains how to use Monitor workbooks to understand the effect of your Conditional Access policies, to troubleshoot sign-in failures, and to identify legacy authentications.
-
+With Azure Monitor workbooks, you can:
+- Query data from multiple sources in Azure
+- Visualize data for reporting and analysis
+- Combine multiple elements into a single interactive experience
-## Prerequisites
-
-To use Monitor workbooks, you need:
-- An Azure Active Directory tenant with a premium (P1 or P2) license. Learn how to [get a premium license](../fundamentals/active-directory-get-started-premium.md).
-- A [Log Analytics workspace](../../azure-monitor/logs/quick-create-workspace.md).
-- [Access](../../azure-monitor/logs/manage-access.md#azure-rbac) to the log analytics workspace
-- Following roles in Azure Active Directory (if you are accessing Log Analytics through Azure Active Directory portal)
- - Security administrator
- - Security reader
- - Report reader
- - Global administrator
-
-## Roles
-
-To access workbooks in Azure Active Directory, you must have access to the underlying [Log Analytics workspace](../../azure-monitor/logs/manage-access.md#azure-rbac) and be assigned to one of the following roles:
-- Global Reader
-- Reports Reader
-- Security Reader
-- Application Administrator
-- Cloud Application Administrator
-- Company Administrator
-- Security Administrator
-## Workbook access
-
-To access workbooks:
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-1. Navigate to **Azure Active Directory** > **Monitoring** > **Workbooks**.
-
-1. Select a report or template, or on the toolbar select **Open**.
-
-![Find the Azure Monitor workbooks in Azure AD](./media/howto-use-azure-monitor-workbooks/azure-monitor-workbooks-in-azure-ad.png)
-
-## Sign-in analysis
-
-To access the sign-in analysis workbook, in the **Usage** section, select **Sign-ins**.
-
-This workbook shows the following sign-in trends:
-- All sign-ins
-- Success
-- Pending user action
-- Failure
-You can filter each trend by the following categories:
-- Time range
-- Apps
-- Users
-![Sign-in analysis](./media/howto-use-azure-monitor-workbooks/43.png)
--
-For each trend, you get a breakdown by the following categories:
-- Location
- ![Sign-ins by location](./media/howto-use-azure-monitor-workbooks/45.png)
-- Device
- ![Sign-ins by device](./media/howto-use-azure-monitor-workbooks/46.png)
--
-## Sign-ins using legacy authentication
--
-To access the workbook for sign-ins that use [legacy authentication](../conditional-access/block-legacy-authentication.md), in the **Usage** section, select **Sign-ins using Legacy Authentication**.
+For more information, see [Azure Monitor workbooks](../../azure-monitor/visualize/workbooks-overview.md).
-This workbook shows the following sign-in trends:
-- All sign-ins
+## How does it help me?
-- Success
+Common scenarios for using workbooks include:
+- Get shareable, at-a-glance summary reports about your Azure AD tenant, and build your own custom reports.
-You can filter each trend by the following categories:
+- Find and diagnose sign-in failures, and get a trending view of your organization's sign-in health.
-- Time range
+- Monitor Azure AD logs for sign-ins, tenant administrator actions, provisioning, and risk together in a flexible, customizable format.
-- Apps
+- Watch trends in your tenant's usage of Azure AD features such as conditional access, self-service password reset, and more.
-- Users
+- Know who's using legacy authentications to sign in to your environment.
-- Protocols
+- Understand the effect of your conditional access policies on your users' sign-in experience.
-![Sign-ins by legacy authentication](./media/howto-use-azure-monitor-workbooks/47.png)
-For each trend, you get a breakdown by app and protocol.
-![Legacy-authentication sign-ins by app and protocol](./media/howto-use-azure-monitor-workbooks/48.png)
+## Who should use it?
+Typical personas for workbooks are:
+- **Reporting admin** - Someone who is responsible for creating reports on top of the available data and workbook templates.
-## Sign-ins by Conditional Access
+- **Tenant admins** - People who use the available reports to get insight and take action.
+- **Workbook template builder** - Someone who "graduates" from the role of reporting admin by turning a workbook into a template for others with similar needs to use as a basis for creating their own workbooks.
-To access the workbook for sign-ins by [Conditional Access policies](../conditional-access/overview.md), in the **Conditional Access** section, select **Sign-ins by Conditional Access**.
-This workbook shows the trends for disabled sign-ins. You can filter each trend by the following categories:
-- Time range
+## How to use it
-- Apps
+All workbooks are stored in the workbook [gallery](../../azure-monitor/visualize/workbooks-overview.md#gallery).
+You have two options for working with workbooks:
-- Users
+- Create a new workbook from scratch
+- Start with an existing workbook template from the gallery
-![Sign-ins using Conditional Access](./media/howto-use-azure-monitor-workbooks/49.png)
+By using an already existing template from the gallery, you can benefit from the work others have already invested into solving the same business problem as you.
-For disabled sign-ins, you get a breakdown by the Conditional Access status.
-
-![Screenshot shows Conditional access status and Recent sign-ins.](./media/howto-use-azure-monitor-workbooks/conditional-access-status.png)
--
-## Conditional Access Insights
-
-### Overview
-
-Workbooks contain sign-in log queries that can help IT administrators monitor the impact of Conditional Access policies in their tenant. You have the ability to report on how many users would have been granted or denied access. The workbook contains insights on how many users would have bypassed Conditional Access policies based on those users' attributes at the time of sign-in. It contains details per condition so that the impact of a policy can be contextualized per condition, including device platform, device state, client app, sign-in risk, location, and application.
-
-### Instructions
-To access the workbook for Conditional Access Insights, select the **Conditional Access Insights** workbook in the Conditional Access section.
-This workbook shows the expected impact of each Conditional Access policy in your tenant. Select one or more Conditional Access policies from the dropdown list and narrow the scope of the workbook by applying the following filters:
-- **Time Range**
-- **User**
-- **Apps**
-- **Data View**
-![Screenshot shows the Conditional Access pane where you can select a Conditional Access Policy.](./media/howto-use-azure-monitor-workbooks/access-insights.png)
-The Impact Summary shows the number of users or sign-ins for which the selected policies had a particular result. Total is the number of users or sign-ins for which the selected policies were evaluated in the selected Time Range. Click on a tile to filter the data in the workbook by that result type.
-
-![Screenshot shows tiles to use to filter results such as Total, Success, and Failure.](./media/howto-use-azure-monitor-workbooks/impact-summary.png)
-
-This workbook also shows the impact of the selected policies broken down by each of six conditions:
-- **Device state**
-- **Device platform**
-- **Client apps**
-- **Sign-in risk**
-- **Location**
-- **Applications**
-![Screenshot shows the details from the Total sign-ins filter.](./media/howto-use-azure-monitor-workbooks/device-platform.png)
-
-You can also investigate individual sign-ins, filtered by the parameters selected in the workbook. Search for individual users, sorted by sign-in frequency, and view their corresponding sign-in events.
-
-![Screenshot shows individual sign-ins you can review.](./media/howto-use-azure-monitor-workbooks/filtered.png)
-
-## Sign-ins by grant controls
-
-To access the workbook for sign-ins by [grant controls](../conditional-access/controls.md), in the **Conditional Access** section, select **Sign-ins by Grant Controls**.
-
-This workbook shows the following disabled sign-in trends:
-- Require MFA
-- Require terms of use
-- Require privacy statement
-- Other
-You can filter each trend by the following categories:
-- Time range
-- Apps
-- Users
-![Sign-ins by grant controls](./media/howto-use-azure-monitor-workbooks/50.png)
-For each trend, you get a breakdown by app and protocol.
-
-![Breakdown of recent sign-ins](./media/howto-use-azure-monitor-workbooks/51.png)
-## Sign-ins failure analysis
-
-Use the **Sign-ins failure analysis** workbook to troubleshoot errors with:
-- Sign-ins
-- Conditional Access policies
-- Legacy authentication
-To access the sign-ins by Conditional Access data, in the **Troubleshoot** section, select **Sign-ins using Legacy Authentication**.
-
-This workbook shows the following sign-in trends:
-- All sign-ins
-- Success
-- Pending action
-- Failure
+## Prerequisites
+To use Monitor workbooks, you need:
-You can filter each trend by the following categories:
+- An Azure Active Directory tenant with a premium (P1 or P2) license. Learn how to [get a premium license](../fundamentals/active-directory-get-started-premium.md).
-- Time range
+- A [Log Analytics workspace](../../azure-monitor/logs/quick-create-workspace.md).
-- Apps
+- [Access](../../azure-monitor/logs/manage-access.md#azure-rbac) to the Log Analytics workspace.
+- One of the following roles in Azure Active Directory (if you're accessing Log Analytics through the Azure Active Directory portal):
+ - Security administrator
+ - Security reader
+ - Report reader
+ - Global administrator
-- Users
+## Roles
-![Troubleshooting sign-ins](./media/howto-use-azure-monitor-workbooks/52.png)
+To access workbooks in Azure Active Directory, you must have access to the underlying [Log Analytics workspace](../../azure-monitor/logs/manage-access.md#azure-rbac) and be assigned to one of the following roles:
-To help you troubleshoot sign-ins, Azure Monitor gives you a breakdown by the following categories:
+- Global Reader
-- Top errors
+- Reports Reader
- ![Summary of top errors](./media/howto-use-azure-monitor-workbooks/53.png)
+- Security Reader
-- Sign-ins waiting on user action
+- Application Administrator
- ![Summary of sign-ins waiting on user action](./media/howto-use-azure-monitor-workbooks/54.png)
+- Cloud Application Administrator
+- Company Administrator
-## Identity Protection Risk Analysis
+- Security Administrator
-Use the **Identity Protection Risk Analysis** workbook in the **Usage** section to understand:
-- Distribution in risky users and risk detections by levels and types
-- Opportunities to better remediate risk
-- Where in the world risk is being detected
-You can filter the Risky Detections trends by:
-- Detection timing type
-- Risk level
-Real-time risk detections are those that can be detected at the point of authentication. These detections can be challenged by risky sign-in policies using Conditional Access to require multi-factor authentication.
-You can filter the Risky Users trends by:
-- Risk detail
-- Risk level
+## Workbook access
-If you have a high number of risky users where "no action" has been taken, consider enabling a Conditional Access policy to require secure password change when a user is high risk.
+To access workbooks:
-## Best practices
+1. Sign in to the [Azure portal](https://portal.azure.com).
-### Query partially succeeded
+1. Navigate to **Azure Active Directory** > **Monitoring** > **Workbooks**.
-After running a workbook, you might see the following error: "Query partially succeeded; results may be incomplete or incorrect"
+1. Select a report or template, or on the toolbar select **Open**.
-This error means that your query timed out in the database layer. In this case, the workbook still "succeeded" (it got results), but the results also contain an error or warning message indicating that some part of the query failed. Review your query and start troubleshooting by reducing its scope.
-For example, you could add or rearrange a where condition to reduce the amount of data the query has to process.
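As an illustration, a minimal Kusto sketch of narrowing a query's scope; it assumes the `SigninLogs` table is streamed to your Log Analytics workspace, and the seven-day window and failure filter are example choices, not requirements:

```kusto
// Filter early so downstream operators process less data.
SigninLogs
| where TimeGenerated > ago(7d)        // narrow the time range first
| where ResultType != "0"              // keep only failed sign-ins
| summarize FailureCount = count() by AppDisplayName
| top 10 by FailureCount
```

Placing the `where` clauses ahead of the `summarize` keeps the aggregation from scanning rows that would be discarded anyway.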
+![Find the Azure Monitor workbooks in Azure AD](./media/howto-use-azure-monitor-workbooks/azure-monitor-workbooks-in-azure-ad.png)
active-directory Workbook Legacy Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/workbook-legacy authentication.md
This workbook supports multiple filters:
- For guidance on blocking legacy authentication in your environment, see [Block legacy authentication to Azure AD with conditional access](../conditional-access/block-legacy-authentication.md).
-- Many email protocols that once relied on legacy authentication now support more secure modern authentication methods. If you see legacy email authentication protocols in this workbook, consider migrating to modern authentication for email instead. For more information, see [Deprecation of Basic authentication in Exchange Online](https://docs.microsoft.com/exchange/clients-and-mobile-in-exchange-online/deprecation-of-basic-authentication-exchange-online).
+- Many email protocols that once relied on legacy authentication now support more secure modern authentication methods. If you see legacy email authentication protocols in this workbook, consider migrating to modern authentication for email instead. For more information, see [Deprecation of Basic authentication in Exchange Online](/exchange/clients-and-mobile-in-exchange-online/deprecation-of-basic-authentication-exchange-online).
- Some clients can use either legacy authentication or modern authentication depending on client configuration. If you see "modern mobile/desktop client" or "browser" for a client in the Azure AD logs, it is using modern authentication. If it has a specific client or protocol name, such as "Exchange ActiveSync", it is using legacy authentication to connect to Azure AD. The client types in conditional access, and the Azure AD reporting page in the Azure portal demarcate modern authentication clients and legacy authentication clients for you, and only legacy authentication is captured in this workbook.
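If you want to explore this outside the workbook, here is a hedged sketch against the `SigninLogs` table (assuming it's routed to Log Analytics; the 30-day window is an example) that treats anything other than the two modern client types as legacy:

```kusto
// "Browser" and "Mobile Apps and Desktop Clients" indicate modern
// authentication; any other ClientAppUsed value is a legacy protocol.
SigninLogs
| where TimeGenerated > ago(30d)
| where ClientAppUsed !in ("Browser", "Mobile Apps and Desktop Clients")
| summarize SignInCount = count() by UserPrincipalName, ClientAppUsed
| sort by SignInCount desc
```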
This workbook supports multiple filters:
- To learn more about identity protection, see [What is identity protection](../identity-protection/overview-identity-protection.md).
- For more information about Azure AD workbooks, see [How to use Azure AD workbooks](howto-use-azure-monitor-workbooks.md).
active-directory Atlassian Cloud Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/atlassian-cloud-provisioning-tutorial.md
Title: 'Tutorial: Configure Atlassian Cloud for automatic user provisioning with Azure Active Directory | Microsoft Docs'
-description: Learn how to configure Azure Active Directory to automatically provision and de-provision user accounts to Atlassian Cloud.
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to Atlassian Cloud.
-
-writer: zhchia
-
+documentationcenter: ''
+
+writer: Thwimmer
+
+ms.assetid: 53b804ba-b632-4c4b-a77e-ec6468536898
+ms.devlang: na
Last updated 12/27/2019-+ # Tutorial: Configure Atlassian Cloud for automatic user provisioning
-The objective of this tutorial is to demonstrate the steps to be performed in Atlassian Cloud and Azure Active Directory (Azure AD) to configure Azure AD to automatically provision and de-provision users and/or groups to [Atlassian Cloud](https://www.atlassian.com/licensing/cloud). For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
+This tutorial describes the steps you need to perform in both Atlassian Cloud and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Atlassian Cloud](https://www.atlassian.com/cloud) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
## Capabilities supported
The scenario outlined in this tutorial assumes that you already have the followi
* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md).
* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application Administrator, Application Owner, or Global Administrator).
+* Make sure you're an admin for an Atlassian organization. See [Organization administration](https://support.atlassian.com/organization-administration/docs/explore-an-atlassian-organization).
+* Verify one or more of your domains in your organization. See [Domain verification](https://support.atlassian.com/user-management/docs/verify-a-domain-to-manage-accounts).
+* Subscribe to Atlassian Access from your organization. See [Atlassian Access security policies and features](https://support.atlassian.com/security-and-access-policies/docs/understand-atlassian-access).
* [An Atlassian Cloud tenant](https://www.atlassian.com/licensing/cloud) with an Atlassian Access subscription.
-* A user account in Atlassian Cloud with Admin permissions.
+* Make sure you're an admin for at least one Jira or Confluence site that you want to grant synced users access to.
-> [!NOTE]
-> This integration is also available to use from Azure AD US Government Cloud environment. You can find this application in the Azure AD US Government Cloud Application Gallery and configure it in the same way as you do from public cloud.
+ > [!NOTE]
+ > This integration is also available to use from Azure AD US Government Cloud environment. You can find this application in the Azure AD US Government Cloud Application Gallery and configure it in the same way as you do from public cloud.
## Step 1. Plan your provisioning deployment
1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
-2. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-3. Determine what data to [map between Azure AD and Atlassian Cloud](../app-provisioning/customize-application-attributes.md).
+1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and Atlassian Cloud](../app-provisioning/customize-application-attributes.md).
## Step 2. Configure Atlassian Cloud to support provisioning with Azure AD
+1. Navigate to [Atlassian Admin Console](http://admin.atlassian.com/). Select your organization if you have more than one.
+1. Select **Settings > User provisioning**.
+ ![Screenshot showing the User Provisioning tab.](media/atlassian-cloud-provisioning-tutorial/atlassian-select-settings.png)
+1. Select **Create a directory**.
+1. Enter a name to identify the user directory, for example Azure AD users, then select **Create**.
+ ![Screenshot showing the Create directory page.](media/atlassian-cloud-provisioning-tutorial/atlassian-create-directory.png)
+1. Copy the values for **Directory base URL** and **API key**. You'll need those for your identity provider configuration later.
-1. Navigate to [Atlassian Organization Manager](https://admin.atlassian.com) **> select the org > directory**.
-
- ![Screenshot of the Administration page with the Directory option called out.](./media/atlassian-cloud-provisioning-tutorial/select-directory.png)
-
-2. Click **User Provisioning** and click on **Create a directory**. Copy the **Directory base URL** and **Bearer Token** which will be entered in the **Tenant URL** and **Secret Token** fields in the Provisioning tab of your Atlassian Cloud application in the Azure AD portal respectively.
-
- ![Screenshot of the Administration page with the User provisioning option called out.](./media/atlassian-cloud-provisioning-tutorial/secret-token-1.png)
- ![Screenshot of the Create a token page.](./media/atlassian-cloud-provisioning-tutorial/secret-token-2.png)
- ![Screenshot of the demo time directory token page.](./media/atlassian-cloud-provisioning-tutorial/secret-token-3.png)
+ > [!NOTE]
+ > Make sure you store these values in a safe place, as we won't show them to you again.
+ ![Screenshot showing the API key page.](media/atlassian-cloud-provisioning-tutorial/atlassian-apikey.png)
+ Users and groups will automatically be provisioned to your organization. See the [user provisioning](https://support.atlassian.com/provisioning-users/docs/understand-user-provisioning) page for more details on how your users and groups sync to your organization.
## Step 3. Add Atlassian Cloud from the Azure AD application gallery

Add Atlassian Cloud from the Azure AD application gallery to start managing provisioning to Atlassian Cloud. If you have previously set up Atlassian Cloud for SSO, you can use the same application. However, it is recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
active-directory Blinq Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/blinq-provisioning-tutorial.md
The scenario outlined in this tutorial assumes that you already have the followi
[![Screenshot of the Blinq settings option.](media/blinq-provisioning-tutorial/blinq-settings.png)](media/blinq-provisioning-tutorial/blinq-settings.png#lightbox)
1. On the **Integrations** page you should see **Team Card Provisioning**, which contains a URL and Token. You will need to generate the token by clicking **Generate**.
-Copy the **URL** and **Token**. The URL and the Token are to be inserted into the **Tenant URL*** and **Secret Token** field in the Azure portal respectively.
+Copy the **URL** and **Token**. The URL and the Token are to be inserted into the **Tenant URL** and **Secret Token** field in the Azure portal respectively.
[![Screenshot of the Blinq integration page.](media/blinq-provisioning-tutorial/blinq-integrations-page.png)](media/blinq-provisioning-tutorial/blinq-integrations-page.png#lightbox)
active-directory Planview Leankit Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/planview-leankit-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Planview LeanKit'
+description: Learn how to configure single sign-on between Azure Active Directory and Planview LeanKit.
++++++++ Last updated : 06/09/2022++++
+# Tutorial: Azure AD SSO integration with Planview LeanKit
+
+In this tutorial, you'll learn how to integrate Planview LeanKit with Azure Active Directory (Azure AD). When you integrate Planview LeanKit with Azure AD, you can:
+
+* Control in Azure AD who has access to Planview LeanKit.
+* Enable your users to be automatically signed-in to Planview LeanKit with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Planview LeanKit single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Planview LeanKit supports **SP** and **IDP** initiated SSO.
+
+## Add Planview LeanKit from the gallery
+
+To configure the integration of Planview LeanKit into Azure AD, you need to add Planview LeanKit from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Planview LeanKit** in the search box.
+1. Select **Planview LeanKit** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Planview LeanKit
+
+Configure and test Azure AD SSO with Planview LeanKit using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Planview LeanKit.
+
+To configure and test Azure AD SSO with Planview LeanKit, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Planview LeanKit SSO](#configure-planview-leankit-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Planview LeanKit test user](#create-planview-leankit-test-user)** - to have a counterpart of B.Simon in Planview LeanKit that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Planview LeanKit** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type a URL using the following pattern:
+ `https://<HostName>.leankit.com`
+
+ b. In the **Reply URL** textbox, type a URL using the following pattern:
+ `https://<HostName>.leankit.com/Account/Membership/ExternalLogin`
+
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign on URL** text box, type a URL using the following pattern:
+ `https://<HostName>.leankit.com/login`
+
+ > [!Note]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [Planview LeanKit support team](mailto:support@leankit.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/certificatebase64.png "Certificate")
+
+1. On the **Set up Planview LeanKit** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate U R L.](common/copy-configuration-urls.png "Attributes")
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Planview LeanKit.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Planview LeanKit**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Planview LeanKit SSO
+
+To configure single sign-on on the **Planview LeanKit** side, you need to send the downloaded **Certificate (Base64)** and the appropriate copied URLs from the Azure portal to the [Planview LeanKit support team](mailto:support@leankit.com). They configure these settings so that the SAML SSO connection is set properly on both sides.
+
+### Create Planview LeanKit test user
+
+In this section, you create a user called Britta Simon in Planview LeanKit. Work with [Planview LeanKit support team](mailto:support@leankit.com) to add the users in the Planview LeanKit platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in the Azure portal. This will redirect to the Planview LeanKit Sign-on URL where you can initiate the login flow.
+
+* Go to Planview LeanKit Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the Planview LeanKit for which you set up the SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Planview LeanKit tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Planview LeanKit for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Planview LeanKit you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Salesforce Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/salesforce-provisioning-tutorial.md
The objective of this tutorial is to show the steps required to perform in Salesforce and Azure AD to automatically provision and de-provision user accounts from Azure AD to Salesforce.
+> [!Note]
+> Microsoft uses v28 of the Salesforce API for automatic provisioning. Microsoft is aware of the upcoming deprecation of v21 through v30 and is working with Salesforce to migrate to a supported version prior to the deprecation date. No customer action is required.
+>
## Prerequisites

The scenario outlined in this tutorial assumes that you already have the following items:
active-directory Skillcast Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/skillcast-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Skillcast'
+description: Learn how to configure single sign-on between Azure Active Directory and Skillcast.
++++++++ Last updated : 05/23/2022++++
+# Tutorial: Azure AD SSO integration with Skillcast
+
+In this tutorial, you'll learn how to integrate Skillcast with Azure Active Directory (Azure AD). When you integrate Skillcast with Azure AD, you can:
+
+* Control in Azure AD who has access to Skillcast.
+* Enable your users to be automatically signed-in to Skillcast with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Skillcast single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Skillcast supports **SP** initiated SSO.
+
+> [!NOTE]
+> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
+
+## Add Skillcast from the gallery
+
+To configure the integration of Skillcast into Azure AD, you need to add Skillcast from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Skillcast** in the search box.
+1. Select **Skillcast** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Skillcast
+
+Configure and test Azure AD SSO with Skillcast using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Skillcast.
+
+To configure and test Azure AD SSO with Skillcast, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Skillcast SSO](#configure-skillcast-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Skillcast test user](#create-skillcast-test-user)** - to have a counterpart of B.Simon in Skillcast that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Skillcast** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type the value:
+ `Skillcast-SP`
+
+ b. In the **Reply URL** textbox, type the URL:
+ `https://saml.e-learningportal.com/easyconnect/ACS/Post.aspx`
+
+ c. In the **Sign-on URL** text box, type a URL using the following pattern:
+ `http://<subdomain>.e-learningportal.com`
+
+ > [!Note]
+ > The value is not real. Update the value with the actual Sign-On URL. Contact [Skillcast Customer Success Team](https://support.skillcast.com/hc/en-gb/requests/new) to get the value. You can also refer to the patterns shown in the Basic SAML Configuration section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url** and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/copy-metadataurl.png "Certificate")
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Skillcast.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Skillcast**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Skillcast SSO
+
+To configure single sign-on on the **Skillcast** side, you need to send the **App Federation Metadata Url** to the [Skillcast Customer Success Team](https://support.skillcast.com/hc/en-gb/requests/new). They configure this setting so that the SAML SSO connection is set properly on both sides.
+
+### Create Skillcast test user
+
+In this section, you create a user called Britta Simon in Skillcast. Work with [Skillcast Customer Success Team](https://support.skillcast.com/hc/en-gb/requests/new) to add the users in the Skillcast platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click on **Test this application** in the Azure portal. This will redirect to the Skillcast Sign-on URL where you can initiate the login flow.
+
+* Go to Skillcast Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the Skillcast tile in the My Apps, this will redirect to Skillcast Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Skillcast you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
aks Azure Disk Csi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-disk-csi.md
Title: Use Container Storage Interface (CSI) driver for Azure Disk in Azure Kube
description: Learn how to use the Container Storage Interface (CSI) drivers for Azure disks in an Azure Kubernetes Service (AKS) cluster. Previously updated : 05/23/2022 Last updated : 05/31/2022
In addition to in-tree driver features, Azure disk CSI driver supports the follo
- `Premium_ZRS`, `StandardSSD_ZRS` disk types are supported, check more details about [Zone-redundant storage for managed disks](../virtual-machines/disks-redundancy.md)
- [Snapshot](#volume-snapshots)
- [Volume clone](#clone-volumes)
-- [Resize disk PV without downtime](#resize-a-persistent-volume-without-downtime)
+- [Resize disk PV without downtime(Preview)](#resize-a-persistent-volume-without-downtime-preview)
## Storage class driver dynamic disk parameters

|Name | Meaning | Available Value | Mandatory | Default value |
|--- | --- | --- | --- | --- |
|skuName | Azure disk storage account type (alias: `storageAccountType`)| `Standard_LRS`, `Premium_LRS`, `StandardSSD_LRS`, `UltraSSD_LRS`, `Premium_ZRS`, `StandardSSD_ZRS` | No | `StandardSSD_LRS`|
-|kind | Managed or unmanaged (blob based) disk | `managed` (`dedicated` and `shared` are deprecated) | No | `managed`|
|fsType | File System Type | `ext4`, `ext3`, `ext2`, `xfs`, `btrfs` for Linux, `ntfs` for Windows | No | `ext4` for Linux, `ntfs` for Windows|
|cachingMode | [Azure Data Disk Host Cache Setting](../virtual-machines/windows/premium-storage-performance.md#disk-caching) | `None`, `ReadOnly`, `ReadWrite` | No | `ReadOnly`|
|location | Specify Azure region where Azure disks will be created | `eastus`, `westus`, etc. | No | If empty, driver will use the same location name as current AKS cluster|
outfile
test.txt ```
-## Resize a persistent volume without downtime
+## Resize a persistent volume without downtime (Preview)
+> [!IMPORTANT]
+> Azure disk CSI driver supports resizing PVCs without downtime.
+> Follow this [link][expand-an-azure-managed-disk] to register the disk online resize feature.
You can request a larger volume for a PVC. Edit the PVC object, and specify a larger size. This change triggers the expansion of the underlying volume that backs the PV.
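As a sketch of what that edit looks like, assuming a claim named `pvc-azuredisk` (the name and sizes are placeholders for illustration, not values from this article):

```yaml
# Re-apply the claim with only spec.resources.requests.storage increased;
# shrinking a PVC is not supported.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-azuredisk
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: managed-csi
  resources:
    requests:
      storage: 15Gi   # previously 10Gi; the increase triggers expansion
```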
Filesystem Size Used Avail Use% Mounted on
/dev/sdc 9.8G 42M 9.8G 1% /mnt/azuredisk ```
-> [!IMPORTANT]
-> Azure disk CSI driver supports resizing PVCs without downtime in specific regions.
-> Follow this [link][expand-an-azure-managed-disk] to register the disk online resize feature.
-> If your cluster is not in the supported region list, you need to delete application first to detach disk on the node before expanding PVC.
- Expand the PVC by increasing the `spec.resources.requests.storage` field running the following command: ```console
aks Azure Disks Dynamic Pv https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-disks-dynamic-pv.md
description: Learn how to dynamically create a persistent volume with Azure disks in Azure Kubernetes Service (AKS) Previously updated : 09/21/2020 Last updated : 05/31/2022 #Customer intent: As a developer, I want to learn how to dynamically create and attach storage to pods in AKS.
For more information on Kubernetes volumes, see [Storage options for application
## Before you begin
-This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal].
+This article assumes that you have an existing AKS cluster running Kubernetes version 1.21 or later. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal].
You also need the Azure CLI version 2.0.59 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
Each AKS cluster includes four pre-created storage classes, two of them configur
* The *default* storage class provisions a standard SSD Azure disk.
  * Standard storage is backed by Standard SSDs and delivers cost-effective storage while still delivering reliable performance.
-* The *managed-premium* storage class provisions a premium Azure disk.
+* The *managed-csi-premium* storage class provisions a premium Azure disk.
  * Premium disks are backed by SSD-based high-performance, low-latency disks, perfect for VMs running production workloads. If the AKS nodes in your cluster use premium storage, select the *managed-premium* class.

If you use one of the default storage classes, you can't update the volume size after the storage class is created. To be able to update the volume size after a storage class is created, add the line `allowVolumeExpansion: true` to one of the default storage classes, or you can create your own custom storage class. Note that it's not supported to reduce the size of a PVC (to prevent data loss). You can edit an existing storage class by using the `kubectl edit sc` command.
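For example, a minimal custom storage class sketch that permits later size increases; the class name `managed-csi-expandable` is a placeholder, not a class this article defines:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: managed-csi-expandable   # placeholder name for this example
provisioner: disk.csi.azure.com
parameters:
  skuName: StandardSSD_LRS
reclaimPolicy: Delete
allowVolumeExpansion: true       # allows the PVC size to be increased later
```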
Use the [kubectl get sc][kubectl-get] command to see the pre-created storage cla
$ kubectl get sc NAME PROVISIONER AGE
-default (default) kubernetes.io/azure-disk 1h
-managed-premium kubernetes.io/azure-disk 1h
+default (default) disk.csi.azure.com 1h
+managed-csi disk.csi.azure.com 1h
``` > [!NOTE]
managed-premium kubernetes.io/azure-disk 1h
A persistent volume claim (PVC) is used to automatically provision storage based on a storage class. In this case, a PVC can use one of the pre-created storage classes to create a standard or premium Azure managed disk.
-Create a file named `azure-premium.yaml`, and copy in the following manifest. The claim requests a disk named `azure-managed-disk` that is *5GB* in size with *ReadWriteOnce* access. The *managed-premium* storage class is specified as the storage class.
+Create a file named `azure-pvc.yaml`, and copy in the following manifest. The claim requests a disk named `azure-managed-disk` that is *5GB* in size with *ReadWriteOnce* access. The *managed-csi* storage class is specified as the storage class.
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azure-managed-disk
spec:
  accessModes:
  - ReadWriteOnce
-  storageClassName: managed-premium
+  storageClassName: managed-csi
  resources:
    requests:
      storage: 5Gi
```

> [!TIP]
-> To create a disk that uses standard storage, use `storageClassName: default` rather than *managed-premium*.
+> To create a disk that uses premium storage, use `storageClassName: managed-csi-premium` rather than *managed-csi*.
-Create the persistent volume claim with the [kubectl apply][kubectl-apply] command and specify your *azure-premium.yaml* file:
+Create the persistent volume claim with the [kubectl apply][kubectl-apply] command and specify your *azure-pvc.yaml* file:
```console
-$ kubectl apply -f azure-premium.yaml
+$ kubectl apply -f azure-pvc.yaml
persistentvolumeclaim/azure-managed-disk created ```
Learn more about Kubernetes persistent volumes using Azure disks.
[az-feature-register]: /cli/azure/feature#az_feature_register [az-feature-list]: /cli/azure/feature#az_feature_list [az-provider-register]: /cli/azure/provider#az_provider_register
-[use-tags]: use-tags.md
+[use-tags]: use-tags.md
aks Azure Files Dynamic Pv https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-files-dynamic-pv.md
description: Learn how to dynamically create a persistent volume with Azure Files for use with multiple concurrent pods in Azure Kubernetes Service (AKS) Previously updated : 03/22/2021 Last updated : 05/31/2022 #Customer intent: As a developer, I want to learn how to dynamically create and attach storage using Azure Files to pods in AKS.
For more information on Kubernetes volumes, see [Storage options for application
## Before you begin
-This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal].
+This article assumes that you have an existing AKS cluster running Kubernetes version 1.21 or later. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal].
You also need the Azure CLI version 2.0.59 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
aks Cluster Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cluster-extensions.md
A conceptual overview of this feature is available in [Cluster extensions - Azur
## Prerequisites
+> [!IMPORTANT]
+> Ensure that your AKS cluster is created with a managed identity, as cluster extensions won't work with service principal-based clusters.
+>
+> For new clusters created with `az aks create`, managed identity is configured by default. For existing service principal-based clusters that need to be switched over to managed identity, it can be enabled by running `az aks update` with the `--enable-managed-identity` flag. For more information, see [Use managed identity][use-managed-identity].
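As a sketch, the switch-over looks like the following; the cluster and resource group names are placeholders:

```azurecli-interactive
# Switch an existing service principal-based cluster to a
# system-assigned managed identity (example names).
az aks update --resource-group myResourceGroup --name myAKSCluster --enable-managed-identity
```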
* An Azure subscription. If you don't have an Azure subscription, you can create a [free account](https://azure.microsoft.com/free).
* [Azure CLI](/cli/azure/install-azure-cli) version >= 2.16.0 installed.
az k8s-extension delete --name azureml --cluster-name <clusterName> --resource-g
[gitops-overview]: ../azure-arc/kubernetes/conceptual-gitops-flux2.md [k8s-extension-reference]: /cli/azure/k8s-extension [use-azure-ad-pod-identity]: ./use-azure-ad-pod-identity.md
+[use-managed-identity]: ./use-managed-identity.md
<!-- EXTERNAL -->
-[arc-k8s-regions]: https://azure.microsoft.com/global-infrastructure/services/?products=azure-arc&regions=all
+[arc-k8s-regions]: https://azure.microsoft.com/global-infrastructure/services/?products=azure-arc&regions=all
aks Csi Storage Drivers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/csi-storage-drivers.md
Title: Container Storage Interface (CSI) drivers in Azure Kubernetes Service (AKS)
-description: Learn how to enable the Container Storage Interface (CSI) drivers for Azure disks and Azure Files in an Azure Kubernetes Service (AKS) cluster.
+ Title: Container Storage Interface (CSI) drivers on Azure Kubernetes Service (AKS)
+description: Learn about and deploy the Container Storage Interface (CSI) drivers for Azure disks and Azure Files in an Azure Kubernetes Service (AKS) cluster
Last updated 05/23/2022
-# Container Storage Interface (CSI) drivers in Azure Kubernetes Service (AKS)
+# Container Storage Interface (CSI) drivers on Azure Kubernetes Service (AKS)
The Container Storage Interface (CSI) is a standard for exposing arbitrary block and file storage systems to containerized workloads on Kubernetes. By adopting and using CSI, Azure Kubernetes Service (AKS) can write, deploy, and iterate plug-ins to expose new or improve existing storage systems in Kubernetes without having to touch the core Kubernetes code and wait for its release cycles.

The CSI storage driver support on AKS allows you to natively use:
-- [**Azure disks**](azure-disk-csi.md) can be used to create a Kubernetes *DataDisk* resource. Disks can use Azure Premium Storage, backed by high-performance SSDs, or Azure Standard Storage, backed by regular HDDs or Standard SSDs. For most production and development workloads, use Premium Storage. Azure disks are mounted as *ReadWriteOnce* and are only available to a single pod. For storage volumes that can be accessed by multiple pods simultaneously, use Azure Files.
-- [**Azure Files**](azure-files-csi.md) can be used to mount an SMB 3.0/3.1 share backed by an Azure storage account to pods. With Azure Files, you can share data across multiple nodes and pods. Azure Files can use Azure Standard storage backed by regular HDDs or Azure Premium storage backed by high-performance SSDs.
+ - [**Azure disks**](azure-disk-csi.md) can be used to create a Kubernetes *DataDisk* resource. Disks can use Azure Premium Storage, backed by high-performance SSDs, or Azure Standard Storage, backed by regular HDDs or Standard SSDs. For most production and development workloads, use Premium Storage. Azure disks are mounted as *ReadWriteOnce*, which makes it available to one node in AKS. For storage volumes that can be accessed by multiple pods simultaneously, use Azure Files.
+ - [**Azure Files**](azure-files-csi.md) can be used to mount an SMB 3.0/3.1 share backed by an Azure storage account to pods. With Azure Files, you can share data across multiple nodes and pods. Azure Files can use Azure Standard storage backed by regular HDDs or Azure Premium storage backed by high-performance SSDs.
> [!IMPORTANT] > Starting with Kubernetes version 1.21, AKS only uses CSI drivers by default and CSI migration is enabled. Existing in-tree persistent volumes will continue to function. However, internally Kubernetes hands control of all storage management operations (previously targeting in-tree drivers) to CSI drivers.
->
-> *In-tree drivers* refers to the current storage drivers that are part of the core Kubernetes code opposed to the new CSI drivers, which are plug-ins.
+>
+> *In-tree drivers* refers to the storage drivers that are part of the core Kubernetes code opposed to the CSI drivers, which are plug-ins.
> [!NOTE]
-> Azure disk CSI driver v2 (preview) improves scalability and reduces pod failover latency. It uses shared disks to provision attachment replicas on multiple cluster nodes and integrates with the pod scheduler to ensure a node with an attachment replica is chosen on pod failover. Azure disk CSI driver v2 (preview) also provides the ability to fine tune performance. If you're interested in participating in the preview, submit a request: [https://aka.ms/DiskCSIv2Preview](https://aka.ms/DiskCSIv2Preview). This preview version is provided without a service level agreement, and you can occasionally expect breaking changes while in preview. The preview version isn't recommended for production workloads. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+> Azure disks CSI driver v2 (preview) improves scalability and reduces pod failover latency. It uses shared disks to provision attachment replicas on multiple cluster nodes and integrates with the pod scheduler to ensure a node with an attachment replica is chosen on pod failover. Azure disks CSI driver v2 (preview) also provides the ability to fine tune performance. If you're interested in participating in the preview, submit a request: [https://aka.ms/DiskCSIv2Preview](https://aka.ms/DiskCSIv2Preview). This preview version is provided without a service level agreement, and you can occasionally expect breaking changes while in preview. The preview version isn't recommended for production workloads. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-## Migrate custom in-tree storage classes to CSI
+> [!NOTE]
+> AKS provides the option to enable and disable the CSI drivers (preview) on new and existing clusters. CSI drivers are enabled by default on new clusters. Before running this command on an existing cluster, you should verify that there are no existing persistent volumes created by the Azure disks and Azure Files CSI drivers, and that there are no existing VolumeSnapshot, VolumeSnapshotClass, or VolumeSnapshotContent resources. This preview version is provided without a service level agreement, and you can occasionally expect breaking changes while in preview. The preview version isn't recommended for production workloads. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Prerequisites
+
+* An Azure subscription. If you don't have an Azure subscription, you can create a [free account](https://azure.microsoft.com/free).
+* [Azure CLI installed](/cli/azure/install-azure-cli).
+
+### Install the `aks-preview` Azure CLI
+
+You also need the *aks-preview* Azure CLI extension version 0.5.78 or later. Install the *aks-preview* Azure CLI extension by using the [az extension add][az-extension-add] command. Or install any available updates by using the [az extension update][az-extension-update] command.
+
+```azurecli-interactive
+# Install the aks-preview extension
+az extension add --name aks-preview
+# Update the extension to make sure you have the latest version installed
+az extension update --name aks-preview
+```
+
+## Disable CSI storage drivers on a new cluster
+
+`--disable-disk-driver` allows you to disable the CSI driver for [Azure disks][azure-disk-csi]. `--disable-file-driver` allows you to disable the CSI driver for [Azure Files][azure-files-csi]. `--disable-snapshot-controller` allows you to disable the [snapshot controller][snapshot-controller].
+
+To disable CSI storage drivers on a new cluster, use `--disable-disk-driver`, `--disable-file-driver`, and `--disable-snapshot-controller`.
+```azurecli-interactive
+az aks create -n myAKSCluster -g myResourceGroup --disable-disk-driver --disable-file-driver --disable-snapshot-controller
+```
+
+## Disable CSI storage drivers on an existing cluster
+To disable CSI storage drivers on an existing cluster, use `--disable-disk-driver`, `--disable-file-driver`, and `--disable-snapshot-controller`.
+
+```azurecli-interactive
+az aks update -n myAKSCluster -g myResourceGroup --disable-disk-driver --disable-file-driver --disable-snapshot-controller
+```
+
+## Enable CSI storage drivers on an existing cluster
+
+`--enable-disk-driver` allows you to enable the CSI driver for [Azure disks][azure-disk-csi]. `--enable-file-driver` allows you to enable the CSI driver for [Azure Files][azure-files-csi]. `--enable-snapshot-controller` allows you to enable the [snapshot controller][snapshot-controller].
+
+To enable CSI storage drivers on an existing cluster with CSI storage drivers disabled, use `--enable-disk-driver`, `--enable-file-driver`, and `--enable-snapshot-controller`.
+```azurecli-interactive
+az aks update -n myAKSCluster -g myResourceGroup --enable-disk-driver --enable-file-driver --enable-snapshot-controller
+```
+
+## Migrate custom in-tree storage classes to CSI
If you created in-tree driver storage classes, those storage classes continue to work since CSI migration is turned on after upgrading your cluster to 1.21.x. If you want to use CSI features you'll need to perform the migration.
-Migrating these storage classes involves deleting the existing ones, and re-creating them with the provisioner set to **disk.csi.azure.com** if using Azure disk storage, and **files.csi.azure.com** if using Azure Files.
+Migrating these storage classes involves deleting the existing ones, and re-creating them with the provisioner set to **disk.csi.azure.com** if using Azure disks, and **files.csi.azure.com** if using Azure Files.
### Migrate storage class provisioner
parameters:
storageAccountType: Premium_LRS ```
+The CSI storage system supports the same features as the in-tree drivers, so the only change needed would be the provisioner, as the sketch below shows.
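A minimal sketch of a re-created custom class, assuming the original in-tree class was named `custom-managed-premium` and used `Premium_LRS` (both assumptions for illustration):

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: custom-managed-premium    # reuse the name of the deleted class
provisioner: disk.csi.azure.com   # previously kubernetes.io/azure-disk
parameters:
  storageAccountType: Premium_LRS # existing parameters carry over unchanged
```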
+ ## Migrate in-tree persistent volumes > [!IMPORTANT] > If your in-tree persistent volume `reclaimPolicy` is set to **Delete**, you need to change its policy to **Retain** to persist your data. This can be achieved using a [patch operation on the PV](https://kubernetes.io/docs/tasks/administer-cluster/change-pv-reclaim-policy/). For example: > > ```console
-> $ kubectl patch pv pv-azuredisk --type merge --patch '{"spec": {"persistentVolumeReclaimPolicy": "Retain"}}'
+> kubectl patch pv pv-azuredisk --type merge --patch '{"spec": {"persistentVolumeReclaimPolicy": "Retain"}}'
> ``` ### Migrate in-tree Azure disk persistent volumes
-If you have in-tree Azure disk persistent volumes, get `diskURI` from in-tree persistent volumes and then follow this [guide][azure-disk-static-mount] to set up CSI driver persistent volumes.
+If you have in-tree Azure disks persistent volumes, get `diskURI` from the in-tree persistent volumes and then follow this [guide][azure-disk-static-mount] to set up CSI driver persistent volumes.
### Migrate in-tree Azure File persistent volumes
-If you have in-tree Azure File persistent volumes, get `secretName`, `shareName` from in-tree persistent volumes and then follow this [guide][azure-file-static-mount] to set up CSI driver persistent volumes.
+If you have in-tree Azure File persistent volumes, get `secretName` and `shareName` from the in-tree persistent volumes and then follow this [guide][azure-file-static-mount] to set up CSI driver persistent volumes.
## Next steps
If you have in-tree Azure File persistent volumes, get `secretName`, `shareName`
[kubernetes-storage-classes]: https://kubernetes.io/docs/concepts/storage/storage-classes/ [kubernetes-volumes]: https://kubernetes.io/docs/concepts/storage/persistent-volumes/ [managed-disk-pricing-performance]: https://azure.microsoft.com/pricing/details/managed-disks/
+[azure-disk-csi]: https://github.com/kubernetes-sigs/azuredisk-csi-driver
+[azure-files-csi]: https://github.com/kubernetes-sigs/azurefile-csi-driver
+[snapshot-controller]: https://kubernetes-csi.github.io/docs/snapshot-controller.html
<!-- LINKS - internal --> [azure-disk-volume]: azure-disk-volume.md
aks Keda Deploy Add On Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/keda-deploy-add-on-arm.md
Title: Deploy the Kubernetes Event-driven Autoscaling (KEDA) add-on by using an ARM template
+ Title: Install the Kubernetes Event-driven Autoscaling (KEDA) add-on by using an ARM template
description: Use an ARM template to deploy the Kubernetes Event-driven Autoscaling (KEDA) add-on to Azure Kubernetes Service (AKS).
Last updated 05/24/2022
-# Deploy the Kubernetes Event-driven Autoscaling (KEDA) add-on by using ARM template
+# Install the Kubernetes Event-driven Autoscaling (KEDA) add-on by using ARM template
This article shows you how to deploy the Kubernetes Event-driven Autoscaling (KEDA) add-on to Azure Kubernetes Service (AKS) by using an [ARM](../azure-resource-manager/templates/index.yml) template.
This article shows you how to deploy the Kubernetes Event-driven Autoscaling (KE
## Prerequisites
-> [!NOTE]
-> KEDA is currently only available in the `westcentralus` region.
- - An Azure subscription. If you don't have an Azure subscription, you can create a [free account](https://azure.microsoft.com/free). - [Azure CLI installed](/cli/azure/install-azure-cli).
When ready, refresh the registration of the *Microsoft.ContainerService* resourc
az provider register --namespace Microsoft.ContainerService ```
-## Deploy the KEDA add-on with Azure Resource Manager (ARM) templates
+## Install the KEDA add-on with Azure Resource Manager (ARM) templates
The KEDA add-on can be enabled by deploying an AKS cluster with an Azure Resource Manager template and specifying the `workloadAutoScalerProfile` field:
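The relevant fragment might look like the following sketch (only the field in question is shown; a complete `Microsoft.ContainerService/managedClusters` resource has many more properties):

```json
"properties": {
  "workloadAutoScalerProfile": {
    "keda": {
      "enabled": true
    }
  }
}
```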
The KEDA add-on can be enabled by deploying an AKS cluster with an Azure Resourc
To connect to the Kubernetes cluster from your local computer, you use [kubectl][kubectl], the Kubernetes command-line client.
-If you use the Azure Cloud Shell, `kubectl` is already installed. You can also install it locally using the [az aks install-cli][az aks install-cli] command:
+If you use the Azure Cloud Shell, `kubectl` is already installed. You can also install it locally using the [az aks install-cli][] command:
```azurecli az aks install-cli ```
-To configure `kubectl` to connect to your Kubernetes cluster, use the [az aks get-credentials][az aks get-credentials] command. The following example gets credentials for the AKS cluster named *MyAKSCluster* in the *MyResourceGroup*:
+To configure `kubectl` to connect to your Kubernetes cluster, use the [az aks get-credentials][] command. The following example gets credentials for the AKS cluster named *MyAKSCluster* in the *MyResourceGroup*:
```azurecli az aks get-credentials --resource-group MyResourceGroup --name MyAKSCluster
To learn more about KEDA CRDs, follow the official [KEDA documentation][keda-sca
## Clean Up
-To remove the resource group, and all related resources, use the [az group delete][az-group-delete] command:
+To remove the resource group and all related resources, use the [az group delete][az-group-delete] command:
```azurecli az group delete --name MyResourceGroup ```
+### Enabling add-on on clusters with self-managed open-source KEDA installations
+
+While Kubernetes only allows one metric server to be installed, you can in theory install KEDA multiple times. However, this isn't recommended, because only one installation will work.
+
+When the KEDA add-on is installed in an AKS cluster, the previous installation of open-source KEDA will be overridden and the add-on will take over.
+
+This means that the customization and configuration of the self-installed KEDA deployment will be lost and no longer applied.
+
+While there's a possibility that the existing autoscaling will keep working, there's a risk because it will be configured differently and won't support features such as managed identity.
+
+We recommend uninstalling any existing KEDA installation before enabling the KEDA add-on, because the add-on installation will succeed without any error even if one is already present.
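+For example, assuming the open-source installation was done with Helm under the release name `keda` in the `keda` namespace (adjust both to your setup), it could be removed with:
+
+```console
+helm uninstall keda --namespace keda
+```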
+
+The following error will be logged in the now-suppressed, non-participating KEDA operator pod, but the installation of the KEDA add-on will still complete:
+
+```output
+E0520 11:51:24.868081 1 leaderelection.go:330] error retrieving resource lock default/operator.keda.sh: config maps "operator.keda.sh" is forbidden: User "system:serviceaccount:default:keda-operator" can't get resource "config maps" in API group "" in the namespace "default"
+```
+## Next steps

This article showed you how to install the KEDA add-on on an AKS cluster, and then verify that it's installed and running. With the KEDA add-on installed on your cluster, you can [deploy a sample application][keda-sample] to start scaling apps.
aks Keda Deploy Add On Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/keda-deploy-add-on-cli.md
+
+ Title: Install the Kubernetes Event-driven Autoscaling (KEDA) add-on by using Azure CLI
+description: Use Azure CLI to deploy the Kubernetes Event-driven Autoscaling (KEDA) add-on to Azure Kubernetes Service (AKS).
++++ Last updated : 06/08/2022+++
+# Install the Kubernetes Event-driven Autoscaling (KEDA) add-on by using Azure CLI
+
+This article shows you how to install the Kubernetes Event-driven Autoscaling (KEDA) add-on to Azure Kubernetes Service (AKS) by using Azure CLI. The article includes steps to verify that it's installed and running.
+++
+## Prerequisites
+
+- An Azure subscription. If you don't have an Azure subscription, you can create a [free account](https://azure.microsoft.com/free).
+- [Azure CLI installed](/cli/azure/install-azure-cli).
+
+### Install the extension `aks-preview`
+
+Install the `aks-preview` Azure CLI extension to make sure you have the latest version of the AKS extension before installing the KEDA add-on.
+
+```azurecli
+az extension add --upgrade --name aks-preview
+```
+
+### Register the `AKS-KedaPreview` feature flag
+
+To use KEDA, you must enable the `AKS-KedaPreview` feature flag on your subscription.
+
+```azurecli
+az feature register --name AKS-KedaPreview --namespace Microsoft.ContainerService
+```
+
+You can check on the registration status by using the `az feature list` command:
+
+```azurecli-interactive
+az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/AKS-KedaPreview')].{Name:name,State:properties.state}"
+```
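+Once registration completes, the output should resemble the following (a sketch; exact formatting may vary):
+
+```output
+Name                                          State
+--------------------------------------------  ----------
+Microsoft.ContainerService/AKS-KedaPreview    Registered
+```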
+
+When ready, refresh the registration of the *Microsoft.ContainerService* resource provider by using the `az provider register` command:
+
+```azurecli-interactive
+az provider register --namespace Microsoft.ContainerService
+```
+
+## Install the KEDA add-on with Azure CLI
+
+To install the KEDA add-on, use `--enable-keda` when creating or updating a cluster.
+
+The following example creates a *myResourceGroup* resource group. Then it creates a *myAKSCluster* cluster with the KEDA add-on.
+
+```azurecli-interactive
+az group create --name myResourceGroup --location eastus
+
+az aks create \
+ --resource-group myResourceGroup \
+ --name myAKSCluster \
+ --enable-keda
+```
+
+For existing clusters, use `az aks update` with the `--enable-keda` option. The following code shows an example.
+
+```azurecli-interactive
+az aks update \
+ --resource-group myResourceGroup \
+ --name myAKSCluster \
+ --enable-keda
+```
+
+## Get the credentials for your cluster
+
+Get the credentials for your AKS cluster by using the `az aks get-credentials` command. The following example command gets the credentials for *myAKSCluster* in the *myResourceGroup* resource group:
+
+```azurecli-interactive
+az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
+```
+
+## Verify that the KEDA add-on is installed on your cluster
+
+To see if the KEDA add-on is installed on your cluster, verify that the `enabled` value is `true` for `keda` under `workloadAutoScalerProfile`.
+
+The following example shows the status of the KEDA add-on for *myAKSCluster* in *myResourceGroup*:
+
+```azurecli-interactive
+az aks show -g "myResourceGroup" --name myAKSCluster --query "workloadAutoScalerProfile.keda.enabled"
+```
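+If the add-on is enabled, the query returns `true`:
+
+```output
+true
+```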
+## Verify that KEDA is running on your cluster
+
+You can verify that KEDA is running on your cluster. Use `kubectl` to display the operator and metrics server installed in the AKS cluster under the kube-system namespace. For example:
+
+```azurecli-interactive
+kubectl get pods -n kube-system
+```
+
+The following example output shows that the KEDA operator and metrics API server are installed in the AKS cluster, along with their status.
+
+```output
+kubectl get pods -n kube-system
+
+keda-operator-********-k5rfv 1/1 Running 0 43m
+keda-operator-metrics-apiserver-*******-sj857 1/1 Running 0 43m
+```
+To verify your KEDA version, use `kubectl get crd/scaledobjects.keda.sh -o yaml`. For example:
+
+```azurecli-interactive
+kubectl get crd/scaledobjects.keda.sh -o yaml
+```
+The following example output shows the KEDA version in the `app.kubernetes.io/version` label:
+
+```yaml
+kind: CustomResourceDefinition
+metadata:
+ annotations:
+ controller-gen.kubebuilder.io/version: v0.8.0
+ creationTimestamp: "2022-06-08T10:31:06Z"
+ generation: 1
+ labels:
+ addonmanager.kubernetes.io/mode: Reconcile
+ app.kubernetes.io/component: operator
+ app.kubernetes.io/name: keda-operator
+ app.kubernetes.io/part-of: keda-operator
+ app.kubernetes.io/version: 2.7.0
+ name: scaledobjects.keda.sh
+ resourceVersion: "2899"
+ uid: 85b8dec7-c3da-4059-8031-5954dc888a0b
+spec:
+ conversion:
+ strategy: None
+ group: keda.sh
+ names:
+ kind: ScaledObject
+ listKind: ScaledObjectList
+ plural: scaledobjects
+ shortNames:
+ - so
+ singular: scaledobject
+ scope: Namespaced
+ # Redacted for simplicity
+```
+
+While KEDA provides various customization options, the KEDA add-on currently provides basic, common configuration.
+
+If you require other custom configurations, such as changing which namespaces are watched or tweaking the log level, you can edit the KEDA YAML manually and deploy it.
+
+However, when the installation is customized, no support is offered for the custom configuration.
+
+## Disable KEDA add-on from your AKS cluster
+
+When you no longer need the KEDA add-on in the cluster, use the `az aks update` command with the `--disable-keda` option. This command disables the KEDA workload autoscaler.
+
+```azurecli-interactive
+az aks update \
+ --resource-group myResourceGroup \
+ --name myAKSCluster \
+ --disable-keda
+```
+
+### Enabling add-on on clusters with self-managed open-source KEDA installations
+
+While Kubernetes only allows one metric server to be installed, you can in theory install KEDA multiple times. However, this isn't recommended, because only one installation will work.
+
+When the KEDA add-on is installed in an AKS cluster, the previous installation of open-source KEDA will be overridden and the add-on will take over.
+
+This means that the customization and configuration of the self-installed KEDA deployment will be lost and no longer applied.
+
+While there's a possibility that the existing autoscaling will keep working, there's a risk because it will be configured differently and won't support features such as managed identity.
+
+We recommend uninstalling any existing KEDA installation before enabling the KEDA add-on, because the add-on installation will succeed without any error even if one is already present.
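+For example, assuming the open-source installation was done with Helm under the release name `keda` in the `keda` namespace (adjust both to your setup), it could be removed with:
+
+```console
+helm uninstall keda --namespace keda
+```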
+
+The following error will be logged in the now-suppressed, non-participating KEDA operator pod, but the installation of the KEDA add-on will still complete:
+
+```output
+E0520 11:51:24.868081 1 leaderelection.go:330] error retrieving resource lock default/operator.keda.sh: config maps "operator.keda.sh" is forbidden: User "system:serviceaccount:default:keda-operator" can't get resource "config maps" in API group "" in the namespace "default"
+```
+
+## Next steps
+This article showed you how to install the KEDA add-on on an AKS cluster using Azure CLI. The steps to verify that the KEDA add-on is installed and running are included. With the KEDA add-on installed on your cluster, you can [deploy a sample application][keda-sample] to start scaling apps.
+
+[az-aks-create]: /cli/azure/aks#az-aks-create
+[az aks install-cli]: /cli/azure/aks#az-aks-install-cli
+[az aks get-credentials]: /cli/azure/aks#az-aks-get-credentials
+[az aks update]: /cli/azure/aks#az-aks-update
+[az-group-delete]: /cli/azure/group#az-group-delete
+
+[kubectl]: https://kubernetes.io/docs/user-guide/kubectl
+[keda]: https://keda.sh/
+[keda-scalers]: https://keda.sh/docs/scalers/
+[keda-sample]: https://github.com/kedacore/sample-dotnet-worker-servicebus-queue
+
aks Open Service Mesh Integrations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-service-mesh-integrations.md
Last updated 03/23/2022
The Open Service Mesh (OSM) add-on integrates with features provided by Azure as well as open source projects. > [!IMPORTANT]
-> Integrations with open source projects are not covered by the [AKS support policy][aks-support-policy].
+> Integrations with open source projects aren't covered by the [AKS support policy][aks-support-policy].
## Ingress
-Ingress allows for traffic external to the mesh to be routed to services within the mesh. With OSM, you can configure most ingress solutions to work with your mesh, but OSM works best with [Web Application Routing][web-app-routing], [NGINX ingress][osm-nginx], or [Contour ingress][osm-contour]. Open source projects integrating with OSM, including NGINX ingress and Contour ingress, are not covered by the [AKS support policy][aks-support-policy].
+Ingress allows for traffic external to the mesh to be routed to services within the mesh. With OSM, you can configure most ingress solutions to work with your mesh, but OSM works best with [Web Application Routing][web-app-routing], [NGINX ingress][osm-nginx], or [Contour ingress][osm-contour]. Open source projects integrating with OSM, including NGINX ingress and Contour ingress, aren't covered by the [AKS support policy][aks-support-policy].
-Using [Azure Gateway Ingress Controller (AGIC)][agic] for ingress with OSM is not supported and not recommended.
+Using the [Application Gateway Ingress Controller (AGIC)][agic] for ingress with OSM isn't supported and isn't recommended.
## Metrics observability
-Observability of metrics allows you to view the metrics of your mesh and the deployments in your mesh. With OSM, you can use [Prometheus and Grafana][osm-metrics] for metrics observability, but those integrations are not covered by the [AKS support policy][aks-support-policy].
+Observability of metrics allows you to view the metrics of your mesh and the deployments in your mesh. With OSM, you can use [Prometheus and Grafana][osm-metrics] for metrics observability, but those integrations aren't covered by the [AKS support policy][aks-support-policy].
You can also integrate OSM with [Azure Monitor][azure-monitor].
InsightsMetrics
## Automation and developer tools
-OSM can integrate with certain automation projects and developer tooling to help operators and developers build and release applications. For example, OSM integrates with [Flagger][osm-flagger] for progressive delivery and [Dapr][osm-dapr] for building applications. OSM's integration with Flagger and Dapr are not covered by the [AKS support policy][aks-support-policy].
+OSM can integrate with certain automation projects and developer tooling to help operators and developers build and release applications. For example, OSM integrates with [Flagger][osm-flagger] for progressive delivery and [Dapr][osm-dapr] for building applications. OSM's integrations with Flagger and Dapr aren't covered by the [AKS support policy][aks-support-policy].
## External authorization
-External authorization allows you to offload authorization of HTTP requests to an external service. OSM can use external authorization by integrating with [Open Policy Agent (OPA)][osm-opa], but that integration is not covered by the [AKS support policy][aks-support-policy].
+External authorization allows you to offload authorization of HTTP requests to an external service. OSM can use external authorization by integrating with [Open Policy Agent (OPA)][osm-opa], but that integration isn't covered by the [AKS support policy][aks-support-policy].
## Certificate management
-OSM has several types of certificates it uses to operate on your AKS cluster. OSM includes its own certificate manager called Tresor, which is used by default. Alternatively, OSM allows you to integrate with [Hashicorp Vault][osm-hashi-vault], [Tresor][osm-tresor], and [cert-manager][osm-cert-manager], but those integrations are not covered by the [AKS support policy][aks-support-policy].
--
+OSM has several types of certificates it uses to operate on your AKS cluster. OSM includes its own certificate manager called [Tresor][osm-tresor], which is used by default. Alternatively, OSM allows you to integrate with [Hashicorp Vault][osm-hashi-vault] and [cert-manager][osm-cert-manager], but those integrations aren't covered by the [AKS support policy][aks-support-policy].
[agic]: ../application-gateway/ingress-controller-overview.md [agic-aks]: ../application-gateway/tutorial-ingress-controller-add-on-existing.md
api-management Api Management Using With Internal Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-using-with-internal-vnet.md
# Connect to a virtual network in internal mode using Azure API Management With Azure virtual networks (VNets), Azure API Management can manage internet-inaccessible APIs using several VPN technologies to make the connection. For VNet connectivity options, requirements, and considerations, see [Using a virtual network with Azure API Management](virtual-network-concepts.md).
-This article explains how to set up VNet connectivity for your API Management instance in the *internal* mode, In this mode, you can only access the following API Management endpoints within a VNet whose access you control.
+This article explains how to set up VNet connectivity for your API Management instance in the *internal* mode. In this mode, you can only access the following API Management endpoints within a VNet whose access you control.
* The API gateway * The developer portal * Direct management
If you deploy 1 [capacity unit](api-management-capacity.md) of API Management in
If the destination endpoint has allow-listed only a fixed set of DIPs, connection failures will result if you add new units in the future. For this reason and since the subnet is entirely in your control, we recommend allow-listing the entire subnet in the backend. + ## <a name="network-configuration-issues"> </a>Common network configuration issues This section has moved. See [Virtual network configuration reference](virtual-network-reference.md).
api-management Api Management Using With Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-using-with-vnet.md
For more information and considerations, see [IP addresses of Azure API Manageme
[!INCLUDE [api-management-virtual-network-vip-dip](../../includes/api-management-virtual-network-vip-dip.md)] - ## <a name="network-configuration-issues"> </a>Common network configuration issues
api-management Rp Source Ip Address Change Mar2023 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/breaking-changes/rp-source-ip-address-change-mar2023.md
API Management publishes a _service tag_ that you can use to configure the NSG f
## What do I need to do?
-Update the NSG security rules that allow the API Management resource provider to communicate with your API Management instance. For detailed instructions on how to manage a NSG, review [Create, change, or delete a network security group] in the Azure Virtual Network documentation.
+Update the NSG security rules that allow the API Management resource provider to communicate with your API Management instance. For detailed instructions on how to manage an NSG, review [Create, change, or delete a network security group] in the Azure Virtual Network documentation.
1. Go to the [Azure portal](https://portal.azure.com) to view your NSGs. Search for and select **Network security groups**. 2. Select the name of the NSG associated with the virtual network hosting your API Management service.
Finally, check for any other systems that may impact the communication from the
<!-- Links --> [Configure NSG Rules]: ../api-management-using-with-internal-vnet.md#configure-nsg-rules [Virtual Network]: ../../virtual-network/index.yml
-[Force tunneling traffic]: ../virtual-network-reference.md#force-tunneling-traffic-to-on-premises-firewall-using-expressroute-or-network-virtual-appliance
-[Create, change, or delete a network security group]: /azure/virtual-network/manage-network-security-group
+[Force tunneling traffic]: ../api-management-using-with-internal-vnet.md#force-tunnel-traffic-to-on-premises-firewall-using-expressroute-or-network-virtual-appliance
+[Create, change, or delete a network security group]: ../../virtual-network/manage-network-security-group.md
api-management Virtual Network Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/virtual-network-reference.md
Enable publishing the [developer portal](api-management-howto-developer-portal.m
When adding virtual machines running Windows to the VNet, allow outbound connectivity on port `1688` to the [KMS endpoint](/troubleshoot/azure/virtual-machines/custom-routes-enable-kms-activation#solution) in your cloud. This configuration routes Windows VM traffic to the Azure Key Management Services (KMS) server to complete Windows activation.
-## Force tunneling traffic to on-premises firewall Using ExpressRoute or Network Virtual Appliance
- Commonly, you configure and define your own default route (0.0.0.0/0), forcing all traffic from the API Management subnet to flow through an on-premises firewall or to a network virtual appliance. This traffic flow breaks connectivity with Azure API Management, since outbound traffic is either blocked on-premises, or NAT'd to an unrecognizable set of addresses no longer working with various Azure endpoints. You can solve this issue via one of the following methods:
-
- * Enable [service endpoints][ServiceEndpoints] on the subnet in which the API Management service is deployed for:
- * Azure SQL (required only in the primary region if the API Management service is deployed to [multiple regions](api-management-howto-deploy-multi-region.md))
- * Azure Storage
- * Azure Event Hubs
- * Azure Key Vault (required when API Management is deployed on the v2 platform)
-
- By enabling endpoints directly from the API Management subnet to these services, you can use the Microsoft Azure backbone network, providing optimal routing for service traffic. If you use service endpoints with a force tunneled API Management, the above Azure services traffic isn't force tunneled. The other API Management service dependency traffic is force tunneled and can't be lost. If lost, the API Management service would not function properly.
-
- * All the control plane traffic from the internet to the management endpoint of your API Management service is routed through a specific set of inbound IPs, hosted by API Management. When the traffic is force tunneled, the responses will not symmetrically map back to these inbound source IPs. To overcome the limitation, set the destination of the following user-defined routes ([UDRs][UDRs]) to the "Internet", to steer traffic back to Azure. Find the set of inbound IPs for control plane traffic documented in [Control plane IP addresses](#control-plane-ip-addresses).
-
- * For other force tunneled API Management service dependencies, resolve the hostname and reach out to the endpoint. These include:
- - Metrics and Health Monitoring
- - Azure portal diagnostics
- - SMTP relay
- - Developer portal CAPTCHA
- - Azure KMS server
- ## Control plane IP addresses The following IP addresses are divided by **Azure Environment**. When allowing inbound requests, IP addresses marked with **Global** must be permitted, along with the **Region**-specific IP address. In some cases, two IP addresses are listed. Permit both IP addresses.
app-service Tutorial Send Email https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-send-email.md
Deploy an app with the language framework of your choice to App Service. To foll
![Screenshot that shows the splash page for the Logic Apps Designer with When an H T T P request is received highlighted.](./media/tutorial-send-email/receive-http-request.png) 1. In the dialog for **When an HTTP request is received**, select **Use sample payload to generate schema**.
- ![Screenshot that shows the When an H T T P request dialog box and the Use sample payload to generate schema option selected. ](./media/tutorial-send-email/generate-schema-with-payload.png)
+ ![Screenshot that shows the When an H T T P request dialog box and the Use sample payload to generate schema option selected.](./media/tutorial-send-email/generate-schema-with-payload.png)
1. Copy the following sample JSON into the textbox and select **Done**.
If you're testing this code on the sample app for [Build a Ruby and Postgres app
[Tutorial: Host a RESTful API with CORS in Azure App Service](app-service-web-tutorial-rest-api.md) [HTTP request/response reference for Logic Apps](../connectors/connectors-native-reqres.md) [Quickstart: Create your first workflow by using Azure Logic Apps - Azure portal](../logic-apps/quickstart-create-first-logic-app-workflow.md)-- [Environment variables and app settings reference](reference-app-settings.md)
+- [Environment variables and app settings reference](reference-app-settings.md)
application-gateway Application Gateway Create Probe Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-create-probe-portal.md
Previously updated : 07/09/2020 Last updated : 06/10/2022
> * [Azure Resource Manager PowerShell](application-gateway-create-probe-ps.md) > * [Azure Classic PowerShell](application-gateway-create-probe-classic-ps.md)
-In this article, you add a custom health probe to an existing application gateway through the Azure portal. Using the health probes, Azure Application Gateway monitors the health of the resources in the back-end pool.
+In this article, you add a custom health probe to an existing application gateway through the Azure portal. Azure Application Gateway uses these health probes to monitor the health of the resources in the back-end pool.
## Before you begin
-If you do not already have an application gateway, visit [Create an Application Gateway](./quick-create-portal.md) to create an application gateway to work with.
+If you don't already have an application gateway, visit [Create an Application Gateway](./quick-create-portal.md) to create an application gateway to work with.
## Create probe for Application Gateway v2 SKU
Probes are configured in a two-step process through the portal. The first step i
|||| |**Name**|customProbe|This value is a friendly name given to the probe that is accessible in the portal.| |**Protocol**|HTTP or HTTPS | The protocol that the health probe uses. |
- |**Host**|i.e contoso.com|This value is the name of the virtual host (different from the VM host name) running on the application server. The probe is sent to \<protocol\>://\<host name\>:\<port\>/\<urlPath\> This can also be the private IP of the server, or the public ip address, or the DNS entry of the public ip address. This will attempt to access the server when used with a file based path entry, and validate a specific file exists on the server as a health check.|
- |**Pick host name from backend HTTP settings**|Yes or No|Sets the *host* header in the probe to the host name from the HTTP settings to which this probe is associated to. Specially required in case of multi-tenant backends such as Azure app service. [Learn more](./configuration-http-settings.md#pick-host-name-from-back-end-address)|
- |**Pick port from backend HTTP settings**| Yes or No|Sets the *port* of the health probe to the port from HTTP settings to which this probe is associated to. If you choose no, you can enter a custom destination port to use |
+ |**Host**|For example, contoso.com|This value is the name of the virtual host (different from the VM host name) running on the application server. The probe is sent to \<protocol\>://\<host name\>:\<port\>/\<urlPath\>. This can also be the private IP address of the server, the public IP address, or the DNS entry of the public IP address. The probe will attempt to access the server when used with a file-based path entry, and validate that a specific file exists on the server as a health check.|
+ |**Pick host name from backend HTTP settings**|Yes or No|Sets the *host* header in the probe to the host name from the HTTP settings to which this probe is associated. Especially required for multi-tenant backends such as Azure App Service. [Learn more](./configuration-http-settings.md#pick-host-name-from-back-end-address)|
+ |**Pick port from backend HTTP settings**| Yes or No|Sets the *port* of the health probe to the port from the HTTP settings to which this probe is associated. If you choose No, you can enter a custom destination port to use. |
|**Port**| 1-65535 | Custom port to be used for the health probes |
- |**Path**|/ or any valid path|The remainder of the full url for the custom probe. A valid path starts with '/'. For the default path of http:\//contoso.com just use '/'. You can also input a server path to a file for a static health check instead of web based. File paths should be used while using public / private ip, or public ip dns entry as the hostname entry.|
- |**Interval (secs)**|30|How often the probe is run to check for health. It is not recommended to set the lower than 30 seconds.|
- |**Timeout (secs)**|30|The amount of time the probe waits before timing out. If a valid response is not received within this time-out period, the probe is marked as failed. The timeout interval needs to be high enough that an http call can be made to ensure the backend health page is available. Note that the time-out value should not be more than the ΓÇÿIntervalΓÇÖ value used in this probe setting or the ΓÇÿRequest timeoutΓÇÖ value in the HTTP setting which will be associated with this probe.|
+ |**Path**|/ or any valid path|The remainder of the full URL for the custom probe. A valid path starts with '/'. For the default path of http:\//contoso.com, just use '/'. You can also input a server path to a file for a static health check instead of a web-based one. File paths should be used when using a public or private IP address, or a public IP DNS entry, as the hostname entry.|
+ |**Interval (secs)**|30|How often the probe is run to check for health. It isn't recommended to set the interval lower than 30 seconds.|
+ |**Timeout (secs)**|30|The amount of time the probe waits before timing out. If a valid response isn't received within this time-out period, the probe is marked as failed. The timeout interval needs to be high enough that an HTTP call can be made to ensure the backend health page is available. The time-out value shouldn't be more than the 'Interval' value used in this probe setting or the 'Request timeout' value in the HTTP setting that will be associated with this probe.|
|**Unhealthy threshold**|3|Number of consecutive failed attempts to be considered unhealthy. The threshold can be set to 1 or more.| |**Use probe matching conditions**|Yes or No|By default, an HTTP(S) response with status code between 200 and 399 is considered healthy. You can change the acceptable range of backend response code or backend response body. [Learn more](./application-gateway-probe-overview.md#probe-matching)|
- |**HTTP Settings**|selection from dropdown|Probe will get associated with the HTTP setting(s) selected here and therefore, will monitor the health of that backend pool which is associated with the selected HTTP setting. It will use the same port for the probe request as the one being used in the selected HTTP setting. You can only choose those HTTP setting(s) which are not associated with any other custom probe. <br>Note that only those HTTP setting(s) are available for association which have the same protocol as the protocol chosen in this probe configuration and have the same state for the *Pick Host Name From Backend HTTP setting* switch.|
+ |**HTTP Settings**|selection from dropdown|The probe will be associated with the HTTP settings selected here, and will therefore monitor the health of the backend pool associated with the selected HTTP setting. It will use the same port for the probe request as the one used in the selected HTTP setting. You can only choose HTTP settings that aren't associated with any other custom probe. <br>Only HTTP settings that have the same protocol as the protocol chosen in this probe configuration, and the same state for the *Pick Host Name From Backend HTTP setting* switch, are available for association.|
> [!IMPORTANT]
- > The probe will monitor health of the backend only when it is associated with one or more HTTP Setting(s). It will monitor back-end resources of those back-end pools which are associated to the HTTP setting(s) to which this probe is associated with. The probe request will be sent as \<protocol\>://\<hostName\>:\<port\>/\<urlPath\>.
+ > The probe will monitor the health of the backend only when it's associated with one or more HTTP settings. It will monitor the back-end resources of the back-end pools associated with the HTTP settings to which this probe is associated. The probe request will be sent as \<protocol\>://\<hostName\>:\<port\>/\<urlPath\>.
### Test backend health with the probe After entering the probe properties, you can test the health of the back-end resources to verify that the probe configuration is correct and that the back-end resources are working as expected.
-1. Select **Test** and note the result of the probe. The Application gateway tests the health of all the backend resources in the backend pools associated with the HTTP Setting(s) used for this probe.
+1. Select **Test** and note the result of the probe. The application gateway tests the health of all the backend resources in the backend pools associated with the HTTP settings used for this probe.
![Test backend health][5] 2. If there are any unhealthy backend resources, then check the **Details** column to understand the reason for unhealthy state of the resource. If the resource has been marked unhealthy due to an incorrect probe configuration, then select the **Go back to probe** link and edit the probe configuration. Otherwise, if the resource has been marked unhealthy due to an issue with the backend, then resolve the issues with the backend resource and then test the backend again by selecting the **Go back to probe** link and select **Test**. > [!NOTE]
- > You can choose to save the probe even with unhealthy backend resources, but it is not recommended. This is because the Application Gateway will not forward requests to the backend servers from the backend pool which are determined to be unhealthy by the probe. In case there are no healthy resources in a backend pool, you will not be able to access your application and will get a HTTP 502 error.
+ > You can choose to save the probe even with unhealthy backend resources, but it isn't recommended. This is because the Application Gateway won't forward requests to backend servers that the probe determines to be unhealthy. If there are no healthy resources in a backend pool, you won't be able to access your application and will get an HTTP 502 error.
![View probe result][6]
Probes are configured in a two-step process through the portal. The first step i
|**Name**|customProbe|This value is a friendly name given to the probe that is accessible in the portal.| |**Protocol**|HTTP or HTTPS | The protocol that the health probe uses. | |**Host**|For example, contoso.com|This value is the name of the virtual host (different from the VM host name) running on the application server. The probe is sent to (protocol)://(host name):(port from httpsetting)/urlPath. This is applicable when multi-site is configured on Application Gateway. If the Application Gateway is configured for a single site, then enter '127.0.0.1'. You can also input a server path to a file for a static health check instead of a web-based one. File paths should be used when using a public or private IP address, or a public IP DNS entry, as the hostname entry.|
- |**Pick host name from backend HTTP settings**|Yes or No|Sets the *host* header in the probe to the host name of the back-end resource in the back-end pool associated with the HTTP Setting to which this probe is associated to. Specially required in case of multi-tenant backends such as Azure app service. [Learn more](./configuration-http-settings.md#pick-host-name-from-back-end-address)|
- |**Path**|/ or any valid path|The remainder of the full url for the custom probe. A valid path starts with '/'. For the default path of http:\//contoso.com just use '/' You can also input a server path to a file for a static health check instead of web based. File paths should be used while using public / private ip, or public ip dns entry as the hostname entry.|
- |**Interval (secs)**|30|How often the probe is run to check for health. It is not recommended to set the lower than 30 seconds.|
- |**Timeout (secs)**|30|The amount of time the probe waits before timing out. If a valid response is not received within this time-out period, the probe is marked as failed. The timeout interval needs to be high enough that an http call can be made to ensure the backend health page is available. Note that the time-out value should not be more than the ΓÇÿIntervalΓÇÖ value used in this probe setting or the ΓÇÿRequest timeoutΓÇÖ value in the HTTP setting which will be associated with this probe.|
+ |**Pick host name from backend HTTP settings**|Yes or No|Sets the *host* header in the probe to the host name of the back-end resource in the back-end pool associated with the HTTP setting to which this probe is associated. Especially required for multi-tenant backends such as Azure App Service. [Learn more](./configuration-http-settings.md#pick-host-name-from-back-end-address)|
+ |**Path**|/ or any valid path|The remainder of the full URL for the custom probe. A valid path starts with '/'. For the default path of http:\//contoso.com, just use '/'. You can also input a server path to a file for a static health check instead of a web-based one. File paths should be used when using a public or private IP address, or a public IP DNS entry, as the hostname entry.|
+ |**Interval (secs)**|30|How often the probe is run to check for health. It isn't recommended to set the interval lower than 30 seconds.|
+ |**Timeout (secs)**|30|The amount of time the probe waits before timing out. If a valid response isn't received within this time-out period, the probe is marked as failed. The timeout interval needs to be high enough that an HTTP call can be made to ensure the backend health page is available. The time-out value shouldn't be more than the 'Interval' value used in this probe setting or the 'Request timeout' value in the HTTP setting that will be associated with this probe.|
|**Unhealthy threshold**|3|Number of consecutive failed attempts to be considered unhealthy. The threshold can be set to 1 or more.| |**Use probe matching conditions**|Yes or No|By default, an HTTP(S) response with status code between 200 and 399 is considered healthy. You can change the acceptable range of backend response code or backend response body. [Learn more](./application-gateway-probe-overview.md#probe-matching)| > [!IMPORTANT]
- > The host name is not the same as server name. This value is the name of the virtual host running on the application server. The probe is sent to \<protocol\>://\<hostName\>:\<port from http settings\>/\<urlPath\>
+ > The host name isn't the same as the server name. This value is the name of the virtual host running on the application server. The probe is sent to \<protocol\>://\<hostName\>:\<port from http settings\>/\<urlPath\>
### Add probe to the gateway
-Now that the probe has been created, it is time to add it to the gateway. Probe settings are set on the backend http settings of the application gateway.
+Now that the probe has been created, it's time to add it to the gateway. Probe settings are set on the backend HTTP settings of the application gateway.
1. Click **HTTP settings** on the application gateway. To bring up the configuration blade, click the current backend HTTP settings listed in the window.
application-gateway Create Gateway Internal Load Balancer App Service Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/create-gateway-internal-load-balancer-app-service-environment.md
Title: Troubleshoot an Application Gateway in Azure – ILB ASE | Microsoft Docs
description: Learn how to troubleshoot an application gateway by using an Internal Load Balancer with an App Service Environment in Azure documentationCenter: na-+ editor: '' tags: ''
na Previously updated : 06/09/2020- Last updated : 06/10/2022+
-# Back-end server certificate is not allow listed for an application gateway using an Internal Load Balancer with an App Service Environment
+# Back-end server certificate isn't allow-listed for an application gateway using an Internal Load Balancer with an App Service Environment
-This article troubleshoots the following issue: A certificate isn't allow listed when you create an application gateway by using an Internal Load Balancer (ILB) together with an App Service Environment (ASE) at the back end when using end-to-end TLS in Azure.
+This article troubleshoots the following issue: A certificate isn't allow-listed when you create an application gateway by using an Internal Load Balancer (ILB) together with an App Service Environment (ASE) at the back end when using end-to-end TLS in Azure.
## Symptoms
When you create an application gateway by using an ILB with an ASE at the back e
- **Port:** 443 - **Custom Probe:** Hostname – test.appgwtestase.com - **Authentication Certificate:** .cer of test.appgwtestase.com-- **Backend Health:** Unhealthy – Backend server certificate is not allow listed with Application Gateway.
+- **Backend Health:** Unhealthy – Backend server certificate isn't allow-listed with Application Gateway.
**ASE configuration:**
When you access the application gateway, you receive the following error message
## Solution
-When you don't use a host name to access a HTTPS website, the back-end server will return the configured certificate on the default website, in case SNI is disabled. For an ILB ASE, the default certificate comes from the ILB certificate. If there are no configured certificates for the ILB, the certificate comes from the ASE App certificate.
+When you don't use a host name to access an HTTPS website, the back-end server will return the configured certificate on the default website if SNI is disabled. For an ILB ASE, the default certificate comes from the ILB certificate. If there are no configured certificates for the ILB, the certificate comes from the ASE App certificate.
-When you use a fully qualified domain name (FQDN) to access the ILB, the back-end server will return the correct certificate that's uploaded in the HTTP settings. If that is not the case , consider the following options:
+When you use a fully qualified domain name (FQDN) to access the ILB, the back-end server will return the correct certificate that's uploaded in the HTTP settings. If that isn't the case, consider the following options:
- Use FQDN in the back-end pool of the application gateway to point to the IP address of the ILB. This option only works if you have a private DNS zone or a custom DNS configured. Otherwise, you have to create an "A" record for a public DNS. - Use the uploaded certificate on the ILB or the default certificate (ILB certificate) in the HTTP settings. The application gateway gets the certificate when it accesses the ILB's IP for the probe. -- Use a wildcard certificate on the ILB and the back-end server, so that for all the websites, the certificate is common. However, this solution is possible only in case of subdomains and not if each of the websites require different hostnames.
+- Use a wildcard certificate on the ILB and the back-end server, so that the certificate is common for all the websites. However, this solution is possible only for subdomains, not if each of the websites requires a different hostname.
-- Clear the **Use for App service** option for the application gateway in case you are using the IP address of the ILB.
+- Clear the **Use for App service** option for the application gateway if you're using the IP address of the ILB.

To reduce overhead, you can upload the ILB certificate in the HTTP settings to make the probe path work. (This step is just for allow-listing. It won't be used for TLS communication.) You can retrieve the ILB certificate by accessing the ILB with its IP address from your browser over HTTPS, then exporting the TLS/SSL certificate in Base-64 encoded CER format and uploading the certificate to the respective HTTP settings.
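As an alternative to the browser export, a rough sketch using OpenSSL to retrieve the ILB's certificate in Base-64 (PEM) form (the IP address is a placeholder):

```console
echo | openssl s_client -connect <ILB-IP>:443 2>/dev/null | openssl x509 -outform pem > ilb-cert.cer
```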
application-gateway Ingress Controller Add Health Probes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-add-health-probes.md
Title: Add health probes to your AKS pods description: This article provides information on how to add health probes (readiness and/or liveness) to AKS pods with an Application Gateway. -+ Previously updated : 11/4/2019- Last updated : 06/10/2022+ # Add Health Probes to your service
Kubernetes API Reference:
> [!NOTE] > * `readinessProbe` and `livenessProbe` are supported when configured with `httpGet`. > * Probing on a port other than the one exposed on the pod is currently not supported.
-> * `HttpHeaders`, `InitialDelaySeconds`, `SuccessThreshold` are not supported.
+> * `HttpHeaders`, `InitialDelaySeconds`, `SuccessThreshold` aren't supported.
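+For reference, a sketch of a pod spec with a readiness probe of the supported `httpGet` shape (container name, image, path, and port are placeholders):
+
+```yaml
+spec:
+  containers:
+  - name: app            # placeholder
+    image: myapp:latest  # placeholder
+    ports:
+    - containerPort: 8080
+    readinessProbe:
+      httpGet:
+        path: /healthz   # placeholder health endpoint
+        port: 8080       # must be the port exposed on the pod
+      periodSeconds: 15
+      timeoutSeconds: 5
+```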
## Without `readinessProbe` or `livenessProbe`
-If the above probes are not provided, then Ingress Controller make an assumption that the service is reachable on `Path` specified for `backend-path-prefix` annotation or the `path` specified in the `ingress` definition for the service.
+If the above probes aren't provided, the Ingress Controller assumes that the service is reachable on the `Path` specified for the `backend-path-prefix` annotation, or on the `path` specified in the `ingress` definition for the service.
## Default Values for Health Probe
-For any property that can not be inferred by the readiness/liveness probe, Default values are set.
+For any property that can't be inferred by the readiness/liveness probe, default values are set.
| Application Gateway Probe Property | Default Value | |-|-|
application-gateway Ingress Controller Expose Service Over Http Https https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-expose-service-over-http-https.md
Title: Expose an AKS service over HTTP or HTTPS using Application Gateway description: This article provides information on how to expose an AKS service over HTTP or HTTPS using Application Gateway. -+ Previously updated : 11/4/2019- Last updated : 06/09/2022+ # Expose an AKS service over HTTP or HTTPS using Application Gateway
These tutorials help illustrate the usage of [Kubernetes Ingress Resources](http
## Prerequisites - Installed `ingress-azure` helm chart.
- - [**Greenfield Deployment**](ingress-controller-install-new.md): If you are starting from scratch, refer to these installation instructions, which outlines steps to deploy an AKS cluster with Application Gateway and install application gateway ingress controller on the AKS cluster.
+ - [**Greenfield Deployment**](ingress-controller-install-new.md): If you're starting from scratch, refer to these installation instructions, which outline the steps to deploy an AKS cluster with Application Gateway and install the application gateway ingress controller on the AKS cluster.
- [**Brownfield Deployment**](ingress-controller-install-existing.md): If you have an existing AKS cluster and Application Gateway, refer to these instructions to install application gateway ingress controller on the AKS cluster.-- If you want to use HTTPS on this application, you will need a x509 certificate and its private key.
+- If you want to use HTTPS on this application, you'll need an x509 certificate and its private key.
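+For testing, a self-signed certificate and the matching Kubernetes TLS secret could be created as follows (domain and secret name are placeholders; use a CA-issued certificate in production):
+
+```console
+openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
+  -keyout guestbook.key -out guestbook.crt \
+  -subj "/CN=guestbook.contoso.com"
+kubectl create secret tls guestbook-secret --key guestbook.key --cert guestbook.crt
+```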
## Deploy `guestbook` application
-The guestbook application is a canonical Kubernetes application that composes of a Web UI frontend, a backend and a Redis database. By default, `guestbook` exposes its application through a service with name `frontend` on port `80`. Without a Kubernetes Ingress Resource, the service is not accessible from outside the AKS cluster. We will use the application and setup Ingress Resources to access the application through HTTP and HTTPS.
+The guestbook application is a canonical Kubernetes application that consists of a Web UI frontend, a backend, and a Redis database. By default, `guestbook` exposes its application through a service named `frontend` on port `80`. Without a Kubernetes Ingress Resource, the service isn't accessible from outside the AKS cluster. We'll use the application and set up Ingress Resources to access the application through HTTP and HTTPS.
Follow the instructions below to deploy the guestbook application.
Now, the `guestbook` application has been deployed.
## Expose services over HTTP
-In order to expose the guestbook application, we will be using the following ingress resource:
+In order to expose the guestbook application, we'll be using the following ingress resource:
```yaml apiVersion: extensions/v1beta1
Save the above ingress resource as `ing-guestbook.yaml`.
1. Check the log of the ingress controller for deployment status.
-Now the `guestbook` application should be available. You can check this by visiting the
-public address of the Application Gateway.
+Now the `guestbook` application should be available. You can check availability by visiting the public address of the Application Gateway.
## Expose services over HTTPS
Now the `guestbook` application will be available on both HTTP and HTTPS only on
## Integrate with other services
-The following ingress will allow you to add additional paths into this ingress and redirect those paths to other
+The following ingress will allow you to add other paths into this ingress and redirect those paths to other
```yaml apiVersion: extensions/v1beta1
application-gateway Ingress Controller Install New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-install-new.md
Title: Creating an ingress controller with a new Application Gateway description: This article provides information on how to deploy an Application Gateway Ingress Controller with a new Application Gateway. -+ Previously updated : 11/4/2019- Last updated : 06/09/2022+ # How to Install an Application Gateway Ingress Controller (AGIC) Using a New Application Gateway
Alternatively, launch Cloud Shell from Azure portal using the following icon:
![Portal launch](./media/application-gateway-ingress-controller-install-new/portal-launch-icon.png)
-Your [Azure Cloud Shell](https://shell.azure.com/) already has all necessary tools. Should you
-choose to use another environment, please ensure the following command-line tools are installed:
+Your [Azure Cloud Shell](https://shell.azure.com/) already has all necessary tools. If you
+choose to use another environment, ensure the following command-line tools are installed:
* `az` - Azure CLI: [installation instructions](/cli/azure/install-azure-cli) * `kubectl` - Kubernetes command-line tool: [installation instructions](https://kubernetes.io/docs/tasks/tools/install-kubectl)
choose to use another environment, please ensure the following command-line tool
## Create an Identity
-Follow the steps below to create an Azure Active Directory (AAD) [service principal object](../active-directory/develop/app-objects-and-service-principals.md#service-principal-object). Please record the `appId`, `password`, and `objectId` values - these will be used in the following steps.
+Follow the steps below to create an Azure Active Directory (Azure AD) [service principal object](../active-directory/develop/app-objects-and-service-principals.md#service-principal-object). Record the `appId`, `password`, and `objectId` values; you'll use them in the following steps.
1. Create AD service principal ([Read more about Azure RBAC](../role-based-access-control/overview.md)): ```azurecli
This step will add the following components to your subscription:
- [Azure Kubernetes Service](../aks/intro-kubernetes.md) - [Application Gateway](./overview.md) v2-- [Virtual Network](../virtual-network/virtual-networks-overview.md) with 2 [subnets](../virtual-network/virtual-networks-overview.md)
+- [Virtual Network](../virtual-network/virtual-networks-overview.md) with two [subnets](../virtual-network/virtual-networks-overview.md)
- [Public IP Address](../virtual-network/ip-services/virtual-network-public-ip-address.md)-- [Managed Identity](../active-directory/managed-identities-azure-resources/overview.md), which will be used by [AAD Pod Identity](https://github.com/Azure/aad-pod-identity/blob/master/README.md)
+- [Managed Identity](../active-directory/managed-identities-azure-resources/overview.md), which will be used by [Azure AD Pod Identity](https://github.com/Azure/aad-pod-identity/blob/master/README.md)
1. Download the Azure Resource Manager template and modify the template as needed. ```bash wget https://raw.githubusercontent.com/Azure/application-gateway-kubernetes-ingress/master/deploy/azuredeploy.json -O template.json ```
-1. Deploy the Azure Resource Manager template using `az cli`. This may take up to 5 minutes.
+1. Deploy the Azure Resource Manager template using `az cli`. The deployment might take up to 5 minutes.
```azurecli resourceGroupName="MyResourceGroup" location="westus2"
This step will add the following components to your subscription:
## Set up Application Gateway Ingress Controller
-With the instructions in the previous section, we created and configured a new AKS cluster and
-an Application Gateway. We are now ready to deploy a sample app and an ingress controller to our new
-Kubernetes infrastructure.
+With the instructions in the previous section, we created and configured a new AKS cluster and an Application Gateway. We're now ready to deploy a sample app and an ingress controller to our new Kubernetes infrastructure.
### Setup Kubernetes Credentials

For the following steps, we need to set up the [kubectl](https://kubectl.docs.kubernetes.io/) command,
-which we will use to connect to our new Kubernetes cluster. [Cloud Shell](https://shell.azure.com/) has `kubectl` already installed. We will use `az` CLI to obtain credentials for Kubernetes.
+which we'll use to connect to our new Kubernetes cluster. [Cloud Shell](https://shell.azure.com/) has `kubectl` already installed. We'll use `az` CLI to obtain credentials for Kubernetes.
Get credentials for your newly deployed AKS ([read more](../aks/manage-azure-rbac.md#use-azure-rbac-for-kubernetes-authorization-with-kubectl)):
resourceGroupName=$(jq -r ".resourceGroupName.value" deployment-outputs.json)
az aks get-credentials --resource-group $resourceGroupName --name $aksClusterName ```
-### Install AAD Pod Identity
+### Install Azure AD Pod Identity
Azure Active Directory Pod Identity provides token-based access to [Azure Resource Manager (ARM)](../azure-resource-manager/management/overview.md).
- [AAD Pod Identity](https://github.com/Azure/aad-pod-identity) will add the following components to your Kubernetes cluster:
+ [Azure AD Pod Identity](https://github.com/Azure/aad-pod-identity) will add the following components to your Kubernetes cluster:
* Kubernetes [CRDs](https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/): `AzureIdentity`, `AzureAssignedIdentity`, `AzureIdentityBinding` * [Managed Identity Controller (MIC)](https://github.com/Azure/aad-pod-identity#managed-identity-controllermic) component * [Node Managed Identity (NMI)](https://github.com/Azure/aad-pod-identity#node-managed-identitynmi) component
-To install AAD Pod Identity to your cluster:
+To install Azure AD Pod Identity to your cluster:
- *Kubernetes RBAC enabled* AKS cluster
To install AAD Pod Identity to your cluster:
### Install Helm [Helm](../aks/kubernetes-helm.md) is a package manager for
-Kubernetes. We will leverage it to install the `application-gateway-kubernetes-ingress` package:
+Kubernetes. We'll use it to install the `application-gateway-kubernetes-ingress` package:
1. Install [Helm](../aks/kubernetes-helm.md) and run the following to add `application-gateway-kubernetes-ingress` helm package:
Kubernetes. We will leverage it to install the `application-gateway-kubernetes-i
- `appgw.resourceGroup`: Name of the Azure Resource Group in which Application Gateway was created. Example: `app-gw-resource-group` - `appgw.name`: Name of the Application Gateway. Example: `applicationgatewayd0f0` - `appgw.shared`: This boolean flag should be defaulted to `false`. Set to `true` should you need a [Shared Application Gateway](https://github.com/Azure/application-gateway-kubernetes-ingress/blob/072626cb4e37f7b7a1b0c4578c38d1eadc3e8701/docs/setup/install-existing.md#multi-cluster--shared-app-gateway).
- - `kubernetes.watchNamespace`: Specify the name space, which AGIC should watch. This could be a single string value, or a comma-separated list of namespaces.
+ - `kubernetes.watchNamespace`: Specify the namespace that AGIC should watch. The namespace value can be a single string value, or a comma-separated list of namespaces.
- `armAuth.type`: could be `aadPodIdentity` or `servicePrincipal`
- `armAuth.identityResourceID`: Resource ID of the Azure Managed Identity
- - `armAuth.identityClientId`: The Client ID of the Identity. See below for more information on Identity
+ - `armAuth.identityClientID`: The Client ID of the Identity. More information about **identityClientID** is provided below.
- `armAuth.secretJSON`: Only needed when Service Principal Secret type is chosen (when `armAuth.type` has been set to `servicePrincipal`)

> [!NOTE]
- > The `identityResourceID` and `identityClientID` are values that were created
- during the [Deploy Components](ingress-controller-install-new.md#deploy-components)
- steps, and could be obtained again using the following command:
+ > The `identityResourceID` and `identityClientID` are values that were created during the [Deploy Components](ingress-controller-install-new.md#deploy-components) steps, and could be obtained again using the following command:
> ```azurecli
> az identity show -g <resource-group> -n <identity-name>
> ```
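If you only need one of the two values, they can also be queried individually; this sketch assumes the standard `id` and `clientId` fields of the `az identity show` output:

```azurecli
az identity show -g <resource-group> -n <identity-name> --query id -o tsv        # identityResourceID
az identity show -g <resource-group> -n <identity-name> --query clientId -o tsv  # identityClientID
```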
application-gateway Ingress Controller Letsencrypt Certificate Application Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-letsencrypt-certificate-application-gateway.md
Title: Use LetsEncrypt.org certificates with Application Gateway description: This article provides information on how to obtain a certificate from LetsEncrypt.org and use it on your Application Gateway for AKS clusters. -+ Previously updated : 11/4/2019- Last updated : 06/10/2022+ # Use certificates with LetsEncrypt.org on Application Gateway for AKS clusters
-This section configures your AKS to leverage [LetsEncrypt.org](https://letsencrypt.org/) and automatically obtain a
-TLS/SSL certificate for your domain. The certificate will be installed on Application Gateway, which will perform
-SSL/TLS termination for your AKS cluster. The setup described here uses the
-[cert-manager](https://github.com/jetstack/cert-manager) Kubernetes add-on, which automates the creation and management of certificates.
+This section configures your AKS to use [LetsEncrypt.org](https://letsencrypt.org/) and automatically obtain a TLS/SSL certificate for your domain. The certificate will be installed on Application Gateway, which will perform SSL/TLS termination for your AKS cluster. The setup described here uses the [cert-manager](https://github.com/jetstack/cert-manager) Kubernetes add-on, which automates the creation and management of certificates.
Follow the steps below to install [cert-manager](https://docs.cert-manager.io) on your existing AKS cluster.
Follow the steps below to install [cert-manager](https://docs.cert-manager.io) o
2. ClusterIssuer Resource
- Create a `ClusterIssuer` resource. It is required by `cert-manager` to represent the `Lets Encrypt` certificate
+ Create a `ClusterIssuer` resource. It's required by `cert-manager` to represent the `Lets Encrypt` certificate
authority where the signed certificates will be obtained.
- By using the non-namespaced `ClusterIssuer` resource, cert-manager will issue certificates that can be consumed from
+ Using the non-namespaced `ClusterIssuer` resource, cert-manager will issue certificates that can be consumed from
multiple namespaces. `Let's Encrypt` uses the ACME protocol to verify that you control a given domain name and to issue you a certificate. More details on configuring `ClusterIssuer` properties [here](https://docs.cert-manager.io/en/latest/tasks/issuers/index.html). `ClusterIssuer` will instruct `cert-manager`
Follow the steps below to install [cert-manager](https://docs.cert-manager.io) o
Create an Ingress resource to expose the `guestbook` application using the Application Gateway with the Lets Encrypt Certificate.
- Ensure you Application Gateway has a public Frontend IP configuration with a DNS name (either using the
+ Ensure your Application Gateway has a public Frontend IP configuration with a DNS name (either using the
default `azure.com` domain, or provision an `Azure DNS Zone` service, and assign your own custom domain). Note the annotation `certmanager.k8s.io/cluster-issuer: letsencrypt-staging`, which tells cert-manager to process the tagged Ingress resource.
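As an illustrative sketch of such an Ingress (the resource names, the backend service `frontend`, and the DNS placeholder are assumptions; adjust them to match your deployment):

```bash
# Create an Ingress tagged for AGIC and cert-manager (names are illustrative).
kubectl apply -f - <<EOF
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: guestbook
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    certmanager.k8s.io/cluster-issuer: letsencrypt-staging
spec:
  tls:
    - hosts:
        - <your-dns-name>
      secretName: guestbook-secret-name
  rules:
    - host: <your-dns-name>
      http:
        paths:
          - backend:
              serviceName: frontend
              servicePort: 80
EOF
```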
Follow the steps below to install [cert-manager](https://docs.cert-manager.io) o
```

After a few seconds, you can access the `guestbook` service through the Application Gateway HTTPS URL using the automatically issued **staging** `Lets Encrypt` certificate.
- Your browser may warn you of an invalid cert authority. The staging certificate is issued by `CN=Fake LE Intermediate X1`. This is an indication that the system worked as expected and you are ready for your production certificate.
+ Your browser may warn you of an invalid certificate authority. The staging certificate is issued by `CN=Fake LE Intermediate X1`. This warning is an indication that the system worked as expected and you're ready for your production certificate.
4. Production Certificate
- Once your staging certificate is setup successfully you can switch to a production ACME server:
+ Once your staging certificate is set up successfully you can switch to a production ACME server:
1. Replace the staging annotation on your Ingress resource with: `certmanager.k8s.io/cluster-issuer: letsencrypt-prod`
1. Delete the existing staging `ClusterIssuer` you created in the previous step and create a new one by replacing the ACME server from the ClusterIssuer YAML above with `https://acme-v02.api.letsencrypt.org/directory`

5. Certificate Expiration and Renewal
- Before the `Lets Encrypt` certificate expires, `cert-manager` will automatically update the certificate in the Kubernetes secret store. At that point, Application Gateway Ingress Controller will apply the updated secret referenced in the ingress resources it is using to configure the Application Gateway.
+ Before the `Lets Encrypt` certificate expires, `cert-manager` will automatically update the certificate in the Kubernetes secret store. At that point, Application Gateway Ingress Controller will apply the updated secret referenced in the ingress resources it's using to configure the Application Gateway.
application-gateway Ingress Controller Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-troubleshoot.md
Title: Application Gateway Ingress Controller troubleshooting
-description: This article provides documentation on how to troubleshoot common questions and/or issues with the Application Gateway Ingress Controller.
+description: This article provides documentation on how to troubleshoot common questions and issues with the Application Gateway Ingress Controller.
-+ Previously updated : 06/18/2020- Last updated : 06/09/2022+ # Troubleshoot common questions or issues with Ingress Controller
The steps below assume:
- AGIC has been installed on the AKS cluster
- You already have an Application Gateway on a VNET shared with your AKS cluster
-To verify that the Application Gateway + AKS + AGIC installation is setup correctly, deploy
+To verify that the Application Gateway + AKS + AGIC installation is set up correctly, deploy
the simplest possible app:

```bash
spec:
EOF
```
-Copy and paste all lines at once from the
-script above into a [Azure Cloud Shell](https://shell.azure.com/). Please ensure the entire
-command is copied - starting with `cat` and including the last `EOF`.
+Copy and paste all lines at once from the script above into [Azure Cloud Shell](https://shell.azure.com/). Verify that the entire command is copied, starting with `cat` and including the last `EOF`.
![apply](./media/application-gateway-ingress-controller-troubleshooting/tsg--apply-config.png)

After a successful deployment of the app above, your AKS cluster will have a new Pod, Service, and an Ingress.
-Get the list of pods with [Cloud Shell](https://shell.azure.com/): `kubectl get pods -o wide`.
-We expect for a pod named 'test-agic-app-pod' to have been created. It will have an IP address. This address
-must be within the VNET of the Application Gateway, which is used with AKS.
+Get the list of pods with [Cloud Shell](https://shell.azure.com/): `kubectl get pods -o wide`. We expect a pod named 'test-agic-app-pod' to have been created. It will have an IP address. This address must be within the VNET of the Application Gateway, which is used with AKS.
![Screenshot of the Bash window in Azure Cloud Shell showing a list of pods that includes test-agic-app-pod in the list.](./media/application-gateway-ingress-controller-troubleshooting/tsg--get-pods.png)
-Get the list of
-'test-agic-app-service'.
+Get the list of services: `kubectl get services`. We expect a service named 'test-agic-app-service' to have been created.
![Screenshot of the Bash window in Azure Cloud Shell showing a list of services that includes test-agic-app-pod in the list.](./media/application-gateway-ingress-controller-troubleshooting/tsg--get-services.png)
-Get the list of the ingresses: `kubectl get ingress`. We expect an Ingress resource named
-'test-agic-app-ingress' to have been created. The resource will have a host name 'test.agic.contoso.com'.
+Get the list of the ingresses: `kubectl get ingress`. We expect an Ingress resource named 'test-agic-app-ingress' to have been created. The resource will have a host name 'test.agic.contoso.com'.
![Screenshot of the Bash window in Azure Cloud Shell showing a list of ingresses that includes test-agic-app-ingress in the list.](./media/application-gateway-ingress-controller-troubleshooting/tsg--get-ingress.png)
-One of the pods will be AGIC. `kubectl get pods` will show a list of pods, one of which will begin
-with 'ingress-azure'. Get all logs of that pod with `kubectl logs <name-of-ingress-controller-pod>`
-to verify that we have had a successful deployment. A successful deployment would have added the following
-lines to the log:
+One of the pods will be AGIC. `kubectl get pods` will show a list of pods, one of which will begin with 'ingress-azure'. Get all logs of that pod with `kubectl logs <name-of-ingress-controller-pod>` to verify that we've had a successful deployment. A successful deployment would have added the following lines to the log:
```
I0927 22:34:51.281437 1 process.go:156] Applied Application Gateway config in 20.461335266s
I0927 22:34:51.281585 1 process.go:165] cache: Updated with latest applied config.
I0927 22:34:51.282342 1 process.go:171] END AppGateway deployment
```
-Alternatively, from [Cloud Shell](https://shell.azure.com/) we can retrieve only the lines
-indicating successful Application Gateway configuration with
-`kubectl logs <ingress-azure-....> | grep 'Applied App Gateway config in'`, where
-`<ingress-azure....>` should be the exact name of the AGIC pod.
+Alternatively, from [Cloud Shell](https://shell.azure.com/) we can retrieve only the lines indicating successful Application Gateway configuration with `kubectl logs <ingress-azure-....> | grep 'Applied App Gateway config in'`, where `<ingress-azure....>` should be the exact name of the AGIC pod.
Application Gateway will have the following configuration applied:
Application Gateway will have the following configuration applied:
![backend_pool](./media/application-gateway-ingress-controller-troubleshooting/tsg--backendpools.png)
-Finally we can use the `cURL` command from within [Cloud Shell](https://shell.azure.com/) to
-establish an HTTP connection to the newly deployed app:
+Finally we can use the `cURL` command from within [Cloud Shell](https://shell.azure.com/) to establish an HTTP connection to the newly deployed app:
1. Use `kubectl get ingress` to get the Public IP address of Application Gateway
2. Use `curl -I -H 'Host: test.agic.contoso.com' <public-ip-address-from-previous-command>`
A result of `HTTP/1.1 200 OK` indicates that the Application Gateway + AKS + AGI
Application Gateway Ingress Controller (AGIC) continuously monitors the following Kubernetes resources: [Deployment](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#creating-a-deployment) or [Pod](https://kubernetes.io/docs/concepts/workloads/pods/pod/#what-is-a-pod), [Service](https://kubernetes.io/docs/concepts/services-networking/service/), [Ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/)
-The following must be in place for AGIC to function as expected:
+The following conditions must be in place for AGIC to function as expected:
1. AKS must have one or more healthy **pods**.
- Verify this from [Cloud Shell](https://shell.azure.com/) with `kubectl get pods -o wide --show-labels`
+ Verify this configuration from [Cloud Shell](https://shell.azure.com/) with `kubectl get pods -o wide --show-labels`
If you have a Pod with an `aspnetapp`, your output may look like this:

```bash
delyan@Azure:~$ kubectl get pods -o wide --show-labels
The following must be in place for AGIC to function as expected:
```

2. One or more **services**, referencing the pods above via matching `selector` labels.
- Verify this from [Cloud Shell](https://shell.azure.com/) with `kubectl get services -o wide`
+ Verify this configuration from [Cloud Shell](https://shell.azure.com/) with `kubectl get services -o wide`
```bash
delyan@Azure:~$ kubectl get services -o wide --show-labels
The following must be in place for AGIC to function as expected:
```

3. **Ingress**, annotated with `kubernetes.io/ingress.class: azure/application-gateway`, referencing the service above
- Verify this from [Cloud Shell](https://shell.azure.com/) with `kubectl get ingress -o wide --show-labels`
+ Verify this configuration from [Cloud Shell](https://shell.azure.com/) with `kubectl get ingress -o wide --show-labels`
```bash
delyan@Azure:~$ kubectl get ingress -o wide --show-labels
The following must be in place for AGIC to function as expected:
### Verify Observed Namespace
-* Get the existing namespaces in Kubernetes cluster. What namespace is your app
-running in? Is AGIC watching that namespace? Refer to the
-[Multiple Namespace Support](./ingress-controller-multiple-namespace-support.md#enable-multiple-namespace-support)
-documentation on how to properly configure observed namespaces.
+* Get the existing namespaces in Kubernetes cluster. What namespace is your app running in? Is AGIC watching that namespace? Refer to the [Multiple Namespace Support](./ingress-controller-multiple-namespace-support.md#enable-multiple-namespace-support) documentation on how to properly configure observed namespaces.
```bash
# What namespaces exist on your cluster
documentation on how to properly configure observed namespaces.
```
-* If the AGIC pod is not healthy (`STATUS` column from the command above is not `Running`):
+* If the AGIC pod isn't healthy (`STATUS` column from the command above isn't `Running`), then:
- get logs to understand why: `kubectl logs <pod-name>`
- - for the previous instance of the pod: `kubectl logs <pod-name> --previous`
+ - get logs for the previous instance of the pod: `kubectl logs <pod-name> --previous`
- describe the pod to get more context: `kubectl describe pod <pod-name>`
documentation on how to properly configure observed namespaces.
```
-* AGIC emits Kubernetes events for certain critical errors. You can view these:
+* AGIC emits Kubernetes events for certain critical errors. You can view these events:
- in your terminal via `kubectl get events --sort-by=.metadata.creationTimestamp`
- in your browser using the [Kubernetes Web UI (Dashboard)](https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/)

## Logging Levels
-AGIC has 3 logging levels. Level 1 is the default one and it shows minimal number of log lines.
-Level 5, on the other hand, would display all logs, including sanitized contents of config applied
-to ARM.
+AGIC has three logging levels. Level 1 is the default one and it shows minimal number of log lines. Level 5, on the other hand, would display all logs, including sanitized contents of config applied to ARM.
-The Kubernetes community has established 9 levels of logging for
-the [kubectl](https://kubernetes.io/docs/reference/kubectl/cheatsheet/#kubectl-output-verbosity-and-debugging) tool. In this
-repository we are utilizing 3 of these, with similar semantics:
+The Kubernetes community has established nine levels of logging for the [kubectl](https://kubernetes.io/docs/reference/kubectl/cheatsheet/#kubectl-output-verbosity-and-debugging) tool. In this repository, we're utilizing three of these levels, with similar semantics:
| Verbosity | Description |
The verbosity levels are adjustable via the `verbosityLevel` variable in the
[helm-config.yaml](#sample-helm-config-file) file. Increase verbosity level to `5` to get the JSON config dispatched to [ARM](../azure-resource-manager/management/overview.md):
- - add `verbosityLevel: 5` on a line by itself in [helm-config.yaml](#sample-helm-config-file) and re-install
+ - add `verbosityLevel: 5` on a line by itself in [helm-config.yaml](#sample-helm-config-file) and reinstall
- get logs with `kubectl logs <pod-name>`

### Sample Helm config file
application-gateway Ingress Controller Update Ingress Controller https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-update-ingress-controller.md
Title: Upgrade ingress controller with Helm description: This article provides information on how to upgrade an Application Gateway Ingress using Helm. -+ Previously updated : 11/4/2019- Last updated : 06/09/2022+ # How to upgrade Application Gateway Ingress Controller using Helm
The Azure Application Gateway Ingress Controller for Kubernetes (AGIC) can be upgraded using a Helm repository hosted on Azure Storage.
-Before we begin the upgrade procedure, ensure that you have added the required repository:
+Before beginning the upgrade procedure, ensure that you've added the required repository:
- View your currently added Helm repositories with:
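A minimal sketch of that check:

```bash
# List the repositories Helm currently knows about, then refresh the local index.
helm repo list
helm repo update
```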
Before we begin the upgrade procedure, ensure that you have added the required r
odd-billygoat 22 Fri Jun 21 15:56:06 2019 FAILED ingress-azure-0.7.0-rc1 0.7.0-rc1 default
```
- The Helm chart installation from the sample response above is named `odd-billygoat`. We will
- use this name for the rest of the commands. Your actual deployment name will most likely differ.
+ The Helm chart installation from the sample response above is named **odd-billygoat**. This name will be used for the commands. Your actual deployment name will be different.
1. Upgrade the Helm deployment to a new version:
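A sketch of the upgrade, reusing the sample release name from above (the chart reference and target version are illustrative):

```bash
# Upgrade the existing release to a newer chart version.
helm upgrade odd-billygoat application-gateway-kubernetes-ingress/ingress-azure --version 1.0.0
```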
Before we begin the upgrade procedure, ensure that you have added the required r
## Rollback
-Should the Helm deployment fail, you can rollback to a previous release.
+If the Helm deployment fails, you can roll back to a previous release.
1. Get the last known healthy release number:
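Again using the sample release name, a sketch:

```bash
# Show the revision history of the release, including each revision's status.
helm history odd-billygoat
```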
Should the Helm deployment fail, you can rollback to a previous release.
2 Fri Jun 21 15:56:06 2019 FAILED ingress-azure-xx xxxx
```
- From the sample output of the `helm history` command it looks like the last successful deployment of our `odd-billygoat` was revision `1`
+ Based on the sample output of the **helm history** command, the last successful deployment of our **odd-billygoat** was revision **1**.
-1. Rollback to the last successful revision:
+1. Roll back to the last successful revision:
```bash
helm rollback odd-billygoat 1
application-gateway Quick Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/quick-create-portal.md
description: In this quickstart, you learn how to use the Azure portal to create
Previously updated : 06/14/2021 Last updated : 06/10/2022
# Quickstart: Direct web traffic with Azure Application Gateway - Azure portal
-In this quickstart, you use the Azure portal to create an application gateway. Then you test it to make sure it works correctly.
-
-The application gateway directs application web traffic to specific resources in a backend pool. You assign listeners to ports, create rules, and add resources to a backend pool. For the sake of simplicity, this article uses a simple setup with a public front-end IP, a basic listener to host a single site on the application gateway, a basic request routing rule, and two virtual machines in the backend pool.
+In this quickstart, you use the Azure portal to create an [Azure Application Gateway](overview.md) and test it to make sure it works correctly. You'll assign listeners to ports, create rules, and add resources to a backend pool. For the sake of simplicity, this article uses a setup with a public front-end IP address, a basic listener to host a single site on the application gateway, a basic request routing rule, and two virtual machines (VMs) in the backend pool.
:::image type="content" source="media/quick-create-portal/application-gateway-qs-resources.png" alt-text="application gateway resources":::
-You can also complete this quickstart using [Azure PowerShell](quick-create-powershell.md) or [Azure CLI](quick-create-cli.md).
+For more information about the components of an application gateway, see [Application gateway components](application-gateway-components.md).
+You can also complete this quickstart using [Azure PowerShell](quick-create-powershell.md) or [Azure CLI](quick-create-cli.md).
## Prerequisites

-- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-
-## Sign in to the Azure portal
+An Azure account with an active subscription is required. If you don't already have an account, you can [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
Sign in to the [Azure portal](https://portal.azure.com) with your Azure account.
Sign in to the [Azure portal](https://portal.azure.com) with your Azure account.
You'll create the application gateway using the tabs on the **Create an application gateway** page.

1. On the Azure portal menu or from the **Home** page, select **Create a resource**. The **New** window appears.
-
2. Select **Networking** and then select **Application Gateway** in the **Featured** list.

### Basics tab
application-gateway Quick Create Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/quick-create-template.md
description: In this quickstart, you learn how to use a Resource Manager templat
Previously updated : 06/14/2021 Last updated : 06/10/2022
In this quickstart, you use an Azure Resource Manager template (ARM template) to
You can also complete this quickstart using the [Azure portal](quick-create-portal.md), [Azure PowerShell](quick-create-powershell.md), or [Azure CLI](quick-create-cli.md).
-
If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.

[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fdemos%2Fag-docs-qs%2Fazuredeploy.json)
applied-ai-services Concept Receipt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-receipt.md
You'll need a receipt document. You can use our [sample receipt document](https:
| Items.*.Date | Date | Item date | yyyy-mm-dd |
| Items.*.Description | String | Item description | |
| Items.*.TotalPrice | Number | Item total price | Two-decimal float |
-| Locale | String | Locale of the receipt, for example, en-US. | ISO language-county code |
| MerchantAddress | String | Listed address of merchant | |
| MerchantAliases | Array | | |
| MerchantAliases.* | String | Alternative name of merchant | |
applied-ai-services How To Cache Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/immersive-reader/how-to-cache-token.md
This article demonstrates how to cache the authentication token in order to impr
Import the **Microsoft.IdentityModel.Clients.ActiveDirectory** NuGet package, which is used to acquire a token. Next, use the following code to acquire an `AuthenticationResult`, using the authentication values you got when you [created the Immersive Reader resource](./how-to-create-immersive-reader.md).
+> [!IMPORTANT]
+> The [Microsoft.IdentityModel.Clients.ActiveDirectory](https://www.nuget.org/packages/Microsoft.IdentityModel.Clients.ActiveDirectory) NuGet package and Azure AD Authentication Library (ADAL) have been deprecated. No new features have been added since June 30, 2020. We strongly encourage you to upgrade; see the [migration guide](/azure/active-directory/develop/msal-migration) for more details.
+
+
```csharp
private async Task<AuthenticationResult> GetTokenAsync()
{
applied-ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/metrics-advisor/overview.md
Metrics Advisor will keep at most **10,000** time intervals ([what is an interva
| 60 (=hourly) | 416.67 |
| 1440 (=daily) | 10000.00 |
-ThereΓÇÖre also further limitations, please refer to [FAQ](faq.yml#what-are-the-data-retention-and-limitations-of-metrics-advisor-) for more details.
+There are also further limitations. Refer to [FAQ](faq.yml#what-are-the-data-retention-and-limitations-of-metrics-advisor-) for more details.
+
+## Use cases for Metrics Advisor
+
+* [Protect your organization's growth by using Azure Metrics Advisor](https://techcommunity.microsoft.com/t5/azure-ai/protect-your-organization-s-growth-by-using-azure-metrics/ba-p/2564682)
+* [Supply chain anomaly detection and root cause analysis with Azure Metric Advisor](https://techcommunity.microsoft.com/t5/azure-ai/supply-chain-anomaly-detection-and-root-cause-analysis-with/ba-p/2871920)
+* [Customer support: How Azure Metrics Advisor can help improve customer satisfaction](https://techcommunity.microsoft.com/t5/azure-ai-blog/customer-support-how-azure-metrics-advisor-can-help-improve/ba-p/3038907)
+* [Detecting methane leaks using Azure Metrics Advisor](https://techcommunity.microsoft.com/t5/ai-cognitive-services-blog/detecting-methane-leaks-using-azure-metrics-advisor/ba-p/3254005)
+* [AIOps with Azure Metrics Advisor - OpenDataScience.com](https://opendatascience.com/aiops-with-azure-metrics-advisor/)
## Next steps
attestation Claim Rule Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/claim-rule-functions.md
Title: Azure Attestation claim rule functions
-description: Claim rule concepts for Azure Attestation Policy.
+description: Learn about claim rule concepts for Azure Attestation policy and the improvements made to the policy language.
-# Claim Rule functions (supported in policy version 1.2+)
+# Claim rule functions supported in policy version 1.2+
-The attestation policy language is updated to enable querying of the incoming evidence in a JSON format. This section explains all the improvements made to the policy language.
+Azure Attestation policy language is updated to enable querying of the incoming evidence in a JSON format. This article explains all the improvements made to the policy language.
-One new operator and Six functions are introduced in this policy version.
+One new operator and six functions are introduced in this policy version.
## Functions
-Attestation evidence, which was earlier processed to generate specific incoming claims is now made available to the policy writer in the form of a JSON input. The policy language is also updated to use functions to extract the information. As specified in [Azure Attestation claim rule grammar](claim-rule-grammar.md), the language supports three valueTypeΓÇÖs and an implicit assignment operator.
+Attestation evidence, which was earlier processed to generate specific incoming claims, is now made available to the policy writer in the form of a JSON input. The policy language is also updated to use functions to extract the information. As specified in [Azure Attestation claim rule grammar](claim-rule-grammar.md), the language supports the following three value types and an implicit assignment operator:
- Boolean
- Integer
- String
- Claim property access
-The new function call expression can now be used to process the incoming claims set. The function call expression can be used with a claim property and is structured as:
+You can use the new function call expression to process the incoming claim set. Use the function call expression with a claim property. It's structured as:
```JSON
value=FunctionName((Expression (, Expression)*)?)
```
-The function call expression starts with the name of the function, followed by parentheses containing zero or more arguments separated by comma. Since the arguments of the function are expressions, the language allows specifying a function call as an argument to another function. The arguments are evaluated from left to right before the evaluation of the function begins.
+The function call expression starts with the name of the function, followed by parentheses that contain zero or more arguments separated by commas. Because the arguments of the function are expressions, the language allows specifying a function call as an argument to another function. The arguments are evaluated from left to right before the evaluation of the function begins.
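Because a function call is itself an expression, calls can be nested. As an illustration (composing two functions described later in this article; the claim type names are placeholders):

```JSON
c:[type=="Set"] => add(type="NotOnly100", value=NegateBool(ContainsOnlyValue(c.value, 100)));
```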
-The following functions are implemented in the policy language:
+The following functions are implemented in the policy language.
## JmesPath function
-JmesPath is a query language for JSON. It specifies several operators and functions that can be used to search data in a JSON document. The search always returns a valid JSON as output. The JmesPath function is structured as:
+JmesPath is a query language for JSON. It specifies several operators and functions that you can use to search data in a JSON document. The search always returns a valid JSON as output. The JmesPath function is structured as:
```JSON
JmesPath(Expression, Expression)
The function requires exactly two arguments. The first argument must evaluate a
### Evaluation
-Evaluates the JmesPath expression represented by the second argument, against JSON data represented by the first argument and returns the result JSON as a string.
+The JmesPath expression represented by the second argument is evaluated against JSON data represented by the first argument, which returns the resultant JSON as a string.
### Effect of read-only arguments on result
-The result JSON string is read-only if either of the input arguments are retrieved from read-only claims.
+The resultant JSON string is read-only if either of the input arguments are retrieved from read-only claims.
### Usage example
-#### Literal arguments (Boolean, Integer, String)
+The following example illustrates the concept.
+
+#### Literal arguments (Boolean, integer, string)
Consider the following rule:
Consider the following rule:
=>add(type="JmesPathResult", value=JmesPath("{\"foo\": \"bar\"}", "foo")); ```
-During the evaluation of this rule, the JmesPath function is evaluated. The evaluation boils down to evaluating JmesPath query *foo* against the JSON *{ "foo" : "bar" }.* The result of this search operation is a JSON value *"bar".* So, the evaluation of the rule adds a new claim with type *"JmesPathResult"* and string value *"bar"* to the incoming claims set.
+During the evaluation of this rule, the JmesPath function is evaluated. The evaluation boils down to evaluating JmesPath query *foo* against the JSON *{ "foo" : "bar" }.* The result of this search operation is a JSON value *"bar".* So, the evaluation of the rule adds a new claim with type *"JmesPathResult"* and string value *"bar"* to the incoming claim set.
-Notice that the backslash character is used to escape the double-quote character within the literal string representing the JSON data.
+The backslash character is used to escape the double quotation-mark character within the literal string that represents the JSON data.
#### Arguments constructed from claims
-Assume the following claims are available in the incoming claims set:
+Assume the following claims are available in the incoming claim set:
```JSON
Claim1: {type="JsonData", value="{\"values\": [0,1,2,3,4]}"}
Claim2: {type="JmesPathQuery", value="values[2]"}
```
-A JmesPath query can be written as following:
+A JmesPath query can be written as:
```JSON
c1:[type=="JsonData"] && c2:[type=="JmesPathQuery"] => add(type="JmesPathResult", value=JmesPath(c1.value, c2.value));
```
-The evaluation of the JmesPath function boils down to evaluating the JmesPath query *values[2]* against the JSON *{"values":[0,1,2,3,4]}*. So, the evaluation of the rule adds a new claim with type *"JmesPathResult"* and string value *"2"* to the incoming claims set. So, the incoming set is updated as:
+The evaluation of the JmesPath function boils down to evaluating the JmesPath query *values[2]* against the JSON *{"values":[0,1,2,3,4]}*. So, the evaluation of the rule adds a new claim with type *"JmesPathResult"* and string value *"2"* to the incoming claim set. So, the incoming set is updated as:
```JSON Claim1: {type="JsonData", value="{\"values\": [0,1,2,3,4]}"}
The policy language only supports four types of claim values:
- String
- Set
-In the policy language, a JSON value is represented as a string claim whose value is equal to the string representation of the JSON value. The JsonToClaimValue function is used to convert JSON values to claim values that the policy language supports. The function is structured as:
+In the policy language, a JSON value is represented as a string claim whose value is equal to the string representation of the JSON value. The **JsonToClaimValue** function is used to convert JSON values to claim values that the policy language supports. The function is structured as:
```JSON
JsonToClaimValue(Expression)
The function requires exactly one argument, which must evaluate to a valid JSON
### Evaluation
-Here is how each type of the JSON values are converted to a claim value:
+Here's how each type of the six JSON values is converted to a claim value:
-- **Number**: If the number is an integer, the function returns a claim value with same integer value. If the number is a fraction, an error is generated.
+- **Number**: If the number is an integer, the function returns a claim value with the same integer value. If the number is a fraction, an error is generated.
- **Boolean**: The function returns a claim value with the same Boolean value.
- **String**: The function returns a claim value with the same string value.
-- **Object**: The function does not support JSON objects. If the argument is a JSON object, an error is generated.
-- **Array**: The function only supports arrays of primitive (number, Boolean, string, null) types. Such an array is converted to a set, containing claim with the same type but with values created by converting the JSON values from the array. If the argument to the function is an array of non-primitive (object, array) types, an error is generated.
-- **Null**: If the input is a JSON null, the function returns an empty claim value. If such a claim value is used to construct a claim, the claim is an empty claim. If a rule attempts to add or issue an empty claim, no claim is added to the incoming or the outgoing claims set. In other words, a rule attempting to add or issue an empty claim result in a no-op.
+- **Object**: The function doesn't support JSON objects. If the argument is a JSON object, an error is generated.
+- **Array**: The function only supports arrays of primitive (number, Boolean, string, and null) types. Such an array is converted to a set that contains a claim with the same type but with values created by converting the JSON values from the array. If the argument to the function is an array of non-primitive (object and array) types, an error is generated.
+- **Null**: If the input is a JSON null, the function returns an empty claim value. If such a claim value is used to construct a claim, the claim is an empty claim. If a rule attempts to add or issue an empty claim, no claim is added to the incoming or the outgoing claim set. In other words, a rule attempting to add or issue an empty claim results in a no-op.
### Effect of read-only arguments on result
-The result claim value/values is/are read-only if the input argument is read-only.
+The resultant claim values are read-only if the input argument is read-only.
### Usage example
-#### JSON Number/Boolean/String
+The following example illustrates the concept.
+
+#### JSON number/Boolean/string
Assume the following claims are available in the incoming claim set:
c:[type=="JsonBooleanData"] => add(type="BooleanResult", value=JsonToClaimValue(
c:[type=="JsonStringData"] => add(type="StringResult", value=JsonToClaimValue(c.value)); ```
-Updated Incoming claims set:
+Updated incoming claim set:
```JSON
Claim1: { type = "JsonIntegerData", value="100" }
Claim5: { type = "JsonStringData", value="abc" }
Claim6: { type = "StringResult", value="abc" }
```
-#### JSON Array
+#### JSON array
-Assume the following claims are available in the incoming claims set:
+Assume the following claims are available in the incoming claim set:
```JSON
Claim1: { type="JsonData", value="[0, \"abc\", true]" }
Evaluating rule:
c:[type=="JsonData"] => add(type="Result", value=JsonToClaimValue(c.value)); ```
-Updated incoming claims set:
+Updated incoming claim set:
```JSON
Claim1: { type="JsonData", value="[0, \"abc\", true]" }
Claim3: { type="Result", value="abc" }
Claim4: { type="Result", value=true} ```
-Note the type in the claims is the same and only the value differs. If Multiple entries exist with the same value in the array, multiple claims will be created.
+The type in the claims is the same and only the value differs. If multiple entries exist with the same value in the array, multiple claims are created.
-#### JSON Null
+#### JSON null
-Assume the following claims are available in the incoming claims set:
+Assume the following claims are available in the incoming claim set:
```JSON
Claim1: { type="JsonData", value="null" }
Evaluating rule:
c:[type=="JsonData"] => add(type="Result", value=JsonToClaimValue(c.value)); ```
-Updated incoming claims set:
+Updated incoming claim set:
```JSON
Claim1: { type="JsonData", value="null" }
```
-The rule attempts to add a claim with type *Result* and an empty value, since it's not allowed no claim is created, and the incoming claims set remains unchanged.
+The rule attempts to add a claim with the type *Result* and an empty value. Because it's not allowed, no claim is created, and the incoming claim set remains unchanged.
## IsSubsetOf function
-This function is used to check if a set of claims is subset of another set of claims. The function is structured as:
+This function is used to check if a set of claims is a subset of another set of claims. The function is structured as:
```JSON
IsSubsetOf(Expression, Expression)
IsSubsetOf(Expression, Expression)
### Argument constraints
-This function requires exactly two arguments. Both arguments can be sets of any cardinality. The sets in policy language are inherently heterogeneous, so there is no restriction on which type of values can be present in the argument sets.
+This function requires exactly two arguments. Both arguments can be sets of any cardinality. The sets in policy language are inherently heterogeneous, so there's no restriction on which type of values can be present in the argument sets.
### Evaluation
-The function checks if the set represented by the first argument is a subset of the set represented by the second argument. If so, it returns true, otherwise it returns false.
+The function checks if the set represented by the first argument is a subset of the set represented by the second argument. If so, it returns true. Otherwise, it returns false.
### Effect of read-only arguments on result
-Since the function simply creates and returns a Boolean value, the returned claim value is always non-read-only.
+Because the function simply creates and returns a Boolean value, the returned claim value is always non-read-only.
### Usage example
-Assume the following claims are available in the incoming claims set:
+
+Assume the following claims are available in the incoming claim set:
```JSON
Claim1: { type="Subset", value="abc" }
Evaluating rule:
c1:[type == "Subset"] && c2:[type=="Superset"] => add(type="IsSubset", value=IsSubsetOf(c1.value, c2.value)); ```
-Updated incoming claims set:
+Updated incoming claim set:
```JSON
Claim1: { type="Subset", value="abc" }
Claim6: { type="IsSubset", value=true }
```

## AppendString function
+
This function is used to append two string values. The function is structured as:

```JSON
This function requires exactly two arguments. Both the arguments must evaluate s
### Evaluation
-This function appends the string value of the second argument to the string value of the first argument and returns the result string value.
+This function appends the string value of the second argument to the string value of the first argument and returns the resultant string value.
-### Effect of read-only arguments on result
+### Effect of read-only arguments on the result
-The result string value is considered to be read-only if either of the arguments are retrieved from read-only claims.
+The resultant string value is considered to be read-only if either one of the arguments is retrieved from read-only claims.
### Usage example
-Assume the following claims are available in the incoming claims set:
+Assume the following claims are available in the incoming claim set:
```JSON
Claim1: { type="String1", value="abc" }
Evaluating rule:
c:[type=="String1"] && c2:[type=="String2"] => add(type="Result", value=AppendString(c1.value, c2.value)); ```
-Updated incoming claims set:
+Updated incoming claim set:
```JSON
Claim1: { type="String1", value="abc" }
Claim3: { type="Result", value="abcxyz" }
```

## NegateBool function
+
This function is used to negate a Boolean claim value. The function is structured as:

```JSON
The function requires exactly one argument, which must evaluate a Boolean value.
This function negates the Boolean value represented by the argument and returns the negated value.
-### Effect of read-only arguments on result
+### Effect of read-only arguments on the result
The resultant Boolean value is considered to be read-only if the argument is retrieved from a read-only claim. ### Usage example
-Assume the following claims are available in the incoming claims set:
+Assume the following claims are available in the incoming claim set:
```JSON
Claim1: { type="Input", value=true }
Evaluating rule:
c:[type=="Input"] => add(type="Result", value=NegateBol(c.value)); ```
-Updated incoming claims set:
+Updated incoming claim set:
```JSON
Claim1: { type="Input", value=true }
Claim2: { type="Result", value=false }
## ContainsOnlyValue function
-This function is used to check if a set of claims only contains a specific claim value. The function is structured as:
+This function is used to check if a set of claims contains only a specific claim value. The function is structured as:
```JSON
ContainsOnlyValue(Expression, Expression)
ContainsOnlyValue(Expression, Expression)
### Argument constraints
-This function requires exactly two arguments. The first argument can evaluate a heterogeneous set of any cardinality. The second argument must evaluate a single value of any type (Boolean, string, integer) supported by the policy language.
+This function requires exactly two arguments. The first argument can evaluate a heterogeneous set of any cardinality. The second argument must evaluate a single value of any type (Boolean, string, or integer) supported by the policy language.
### Evaluation
-The function returns true if the set represented by the first argument is not empty and only contains the value represented by the second argument. The function returns false, if the set represented by the first argument is empty or contains any value other than the value represented by the second argument.
+The function returns true if the set represented by the first argument isn't empty and only contains the value represented by the second argument. The function returns false if the set represented by the first argument is empty or contains any value other than the value represented by the second argument.
-### Effect of read-only arguments on result
+### Effect of read-only arguments on the result
-Since the function simply creates and returns a Boolean value, the returned claim value is always non-read-only.
+Because the function simply creates and returns a Boolean value, the returned claim value is always non-read-only.
### Usage example
-Assume the following claims are available in the incoming claims set:
+Assume the following claims are available in the incoming claim set:
```JSON
Claim1: {type="Set", value=100}
Evaluating rule:
c:[type=="Set"] => add(type="Result", value=ContainsOnlyValue(100)); ```
-Updated incoming claims set:
+Updated incoming claim set:
```JSON
Claim1: {type="Set", value=100}
Claim2: {type="Set", value=101}
Claim3: {type="Result", value=false} ```
-## Not Condition Operator
+## Not condition operator
-The rules in the policy language start with an optional list of conditions that act as filtering criteria on the incoming claims set. The conditions can be used to identify if a claim is present in the incoming claims set. But there was no way of checking if a claim was absent. So, a new operator (!) was introduced that could be applied to the individual conditions in the conditions list. This operator changes the evaluation behavior of the condition from checking the presence of a claim to checking the absence of a claim.
+The rules in the policy language start with an optional list of conditions that act as filtering criteria on the incoming claim set. The conditions can be used to identify if a claim is present in the incoming claim set. But there's no way of checking if a claim is absent. So, a new operator (!) is introduced that can be applied to the individual conditions in the conditions list. This operator changes the evaluation behavior of the condition from checking the presence of a claim to checking the absence of a claim.
### Usage example
-Assume the following claims are available in the incoming claims set:
+Assume the following claims are available in the incoming claim set:
```JSON
Claim1: {type="Claim1", value=100}
Evaluating rule:
![type=="Claim3"] => add(type="Claim3", value=300) ```
-This rule effectively translates to: *If a claim with type "Claim3" is not present in the incoming claims set, add a new claim with type "Claim3" and value 300 to the incoming claims set.*
+This rule effectively translates to: *If a claim with type "Claim3" is not present in the incoming claim set, add a new claim with type "Claim3" and value 300 to the incoming claim set.*
-Updated incoming claims set:
+Updated incoming claim set:
```JSON Claim1: {type="Claim1", value=100}
Claim2: {type="Claim2", value=200}
Claim3: {type="Claim2", value=300} ```
-### Sample Policy using policy version 1.2
+### Sample policy using policy version 1.2
-Windows [implements measured boot](/windows/security/information-protection/secure-the-windows-10-boot-process) and along with attestation the protections provided is greatly enhanced which can be securely and reliably used to detect and protect against vulnerable and malicious boot components. These measurements can now be used to build the attestation policy.
+Windows [implements Measured Boot](/windows/security/information-protection/secure-the-windows-10-boot-process) and, along with attestation, the protection that's provided is greatly enhanced. You can use it to detect and protect against vulnerable and malicious boot components securely and reliably. You can now use these measurements to build the attestation policy.
```
version=1.2;
attestation Policy Version 1 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/policy-version-1-0.md
Title: Azure Attestation policy version 1.0
-description: policy version 1.0.
+description: Learn about Azure Attestation policy version 1.0 to define what must be validated during the attestation flow.
-# Attestation Policy Version 1.0
+# Attestation policy version 1.0
-Instance owners can use the Attestation policy to define what needs to be validated during the attestation flow.
-This article introduces the workings of the attestation service and the policy engine. Each attestation type has its own attestation policy, however the supported grammar, and processing is broadly the same.
+Instance owners can use the Azure Attestation policy to define what must be validated during the attestation flow. This article introduces the workings of the attestation service and the policy engine. Each attestation type has its own attestation policy. The supported grammar and processing are broadly the same.
## Policy version 1.0

The minimum version of the policy supported by the service is version 1.0. The attestation service flow is as follows:
+
- The platform sends the attestation evidence in the attest call to the attestation service.
-- The attestation service parses the evidence and creates a list of claims that is then used in the attestation evaluation. These claims are logically categorized as incoming claims sets.
+- The attestation service parses the evidence and creates a list of claims that's used in the attestation evaluation. These claims are logically categorized as incoming claim sets.
- The uploaded attestation policy is used to evaluate the evidence over the rules authored in the attestation policy.
-For Policy version 1.0:
-
-The policy has three segments, as seen above:
+Policy version 1.0 has three segments:
-- **version**: The version is the version number of the grammar that is followed.
-- **authorizationrules**: A collection of claim rules that will be checked first, to determine if attestation should proceed to issuancerules. This section should be used to filter out calls that don't require the issuancerules to be applied. No claims can be issued from this section to the response token. These rules can be used to fail attestation.
-- **issuancerules**: A collection of claim rules that will be evaluated to add information to the attestation result as defined in the policy. The claim rules apply in the order they are defined and are also optional. These rules can be used to add to the outgoing claim set and the response token, these rules cannot be used to fail attestation.
+- **version**: The version is the version number of the grammar that's followed.
+- **authorizationrules**: A collection of claim rules that are checked first to determine if attestation should proceed to issuancerules. Use this section to filter out calls that don't require the issuance rules to be applied. No claims can be issued from this section to the response token. These rules can be used to fail attestation.
+- **issuancerules**: A collection of claim rules that are evaluated to add information to the attestation result as defined in the policy. The claim rules apply in the order in which they're defined. They're also optional. These rules can be used to add to the outgoing claim set and the response token. These rules can't be used to fail attestation.
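As an illustrative sketch of the three segments together (the claims used are placeholders drawn from the TPM claims listed later in this article):

```
version=1.0;

authorizationrules
{
    [type=="secureBootEnabled", value==true] => permit();
};

issuancerules
{
    c:[type=="tpmVersion"] => issue(type="tpm-version", value=c.value);
};
```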
-List of claims supported by policy version 1.0 as part of the incoming claims.
+The following claims are supported by policy version 1.0 as part of the incoming claims.
### TPM attestation
-Claims to be used by policy authors to define authorization rules in a TPM attestation policy:
+Use these claims to define authorization rules in a Trusted Platform Module (TPM) attestation policy:
-- **aikValidated**: Boolean value containing information if the Attestation Identity Key (AIK) cert has been validated or not-- **aikPubHash**: String containing the base64(SHA256(AIK public key in DER format))-- **tpmVersion**: Integer value containing the Trusted Platform Module (TPM) major version-- **secureBootEnabled**: Boolean value to indicate if secure boot is enabled-- **iommuEnabled**: Boolean value to indicate if Input-output memory management unit (Iommu) is enabled-- **bootDebuggingDisabled**: Boolean value to indicate if boot debugging is disabled-- **notSafeMode**: Boolean value to indicate if the Windows is not running on safe mode-- **notWinPE**: Boolean value indicating if Windows is not running in WinPE mode-- **vbsEnabled**: Boolean value indicating if VBS is enabled-- **vbsReportPresent**: Boolean value indicating if VBS enclave report is available
+- **aikValidated**: The Boolean value that contains information if the attestation identity key (AIK) certificate has been validated or not.
+- **aikPubHash**: The string that contains the base64 (SHA256) AIK public key in DER format.
+- **tpmVersion**: The integer value that contains the TPM major version.
+- **secureBootEnabled**: The Boolean value that indicates if secure boot is enabled.
+- **iommuEnabled**: The Boolean value that indicates if the input-output memory management unit is enabled.
+- **bootDebuggingDisabled**: The Boolean value that indicates if boot debugging is disabled.
+- **notSafeMode**: The Boolean value that indicates if Windows isn't running in safe mode.
+- **notWinPE**: The Boolean value that indicates if Windows isn't running in WinPE mode.
+- **vbsEnabled**: The Boolean value that indicates if virtualization-based security (VBS) is enabled.
+- **vbsReportPresent**: The Boolean value that indicates if a VBS enclave report is available.
### VBS attestation
-In addition to the TPM attestation policy claims, below claims can be used by policy authors to define authorization rules in a VBS attestation policy.
+Use the following claims to define authorization rules in a VBS attestation policy:
-- **enclaveAuthorId**: String value containing the Base64Url encoded value of the enclave author id - The author identifier of the primary module for the enclave
-- **enclaveImageId**: String value containing the Base64Url encoded value of the enclave Image id - The image identifier of the primary module for the enclave
-- **enclaveOwnerId**: String value containing the Base64Url encoded value of the enclave Owner id - The identifier of the owner for the enclave
-- **enclaveFamilyId**: String value containing the Base64Url encoded value of the enclave Family ID. The family identifier of the primary module for the enclave
-- **enclaveSvn**: Integer value containing the security version number of the primary module for the enclave
-- **enclavePlatformSvn**: Integer value containing the security version number of the platform that hosts the enclave
-- **enclaveFlags**: The enclaveFlags claim is an Integer value containing Flags that describe the runtime policy for the enclave
+- **enclaveAuthorId**: The string value that contains the Base64Url encoded value of the enclave author ID. It's the author identifier of the primary module for the enclave.
+- **enclaveImageId**: The string value that contains the Base64Url encoded value of the enclave image ID. It's the image identifier of the primary module for the enclave.
+- **enclaveOwnerId**: The string value that contains the Base64Url encoded value of the enclave owner ID. It's the identifier of the owner for the enclave.
+- **enclaveFamilyId**: The string value that contains the Base64Url encoded value of the enclave family ID. It's the family identifier of the primary module for the enclave.
+- **enclaveSvn**: The integer value that contains the security version number of the primary module for the enclave.
+- **enclavePlatformSvn**: The integer value that contains the security version number of the platform that hosts the enclave.
+- **enclaveFlags**: The enclaveFlags claim is an integer value that contains flags that describe the runtime policy for the enclave.
## Sample policies for various attestation types
attestation Policy Version 1 1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/policy-version-1-1.md
Title: Azure Attestation policy version 1.1
-description: policy version 1.1.
+description: Learn about Azure Attestation policy version 1.1 to define what must be validated during the attestation flow.
-# Attestation Policy Version 1.1
+# Attestation policy version 1.1
-Instance owners can use the Attestation policy to define what needs to be validated during the attestation flow.
-This article introduces the workings of the attestation service and the policy engine. Each attestation type has its own attestation policy, however the supported grammar, and processing is broadly the same.
+Instance owners can use the Azure Attestation policy to define what must be validated during the attestation flow. This article introduces the workings of the attestation service and the policy engine. Each attestation type has its own attestation policy. The supported grammar and processing are broadly the same.
## Policy version 1.1

The attestation flow is as follows:

- The platform sends the attestation evidence in the attest call to the attestation service.
-- The attestation service parses the evidence and creates a list of claims that is then used during rule evaluation. The claims are logically categorized as incoming claims sets.
+- The attestation service parses the evidence and creates a list of claims that's used during rule evaluation. The claims are logically categorized as incoming claim sets.
- The attestation policy uploaded by the owner of the attestation service instance is then used to evaluate and issue claims to the response.
- During the evaluation, configuration rules can be used to indicate to the policy evaluation engine how to handle certain claims.
-For Policy version 1.1:
-The policy has four segments, as seen above:
+Policy version 1.1 has four segments:
-- **version**: The version is the version number of the grammar that is followed.
-- **configurationrules**: During policy evaluation, sometimes it may be required to control the behavior of the policy engine itself. This is where configuration rules can be used to indicate to the policy evaluation engine how to handle some claims in the evaluation.
-- **authorizationrules**: A collection of claim rules that will be checked first, to determine if attestation should proceed to issuancerules. This section should be used to filter out calls that don't require the issuancerules to be applied. No claims can be issued from this section to the response token. These rules can be used to fail attestation.
-- **issuancerules**: A collection of claim rules that will be evaluated to add information to the attestation result as defined in the policy. The claim rules apply in the defined order and are also optional. These rules can also be used to add to the outgoing claim set and the response token, however these rules cannot be used to fail attestation.
+- **version**: The version is the version number of the grammar that's followed.
+- **configurationrules**: During policy evaluation, you might be required to control the behavior of the policy engine itself. You can use configuration rules to indicate to the policy evaluation engine how to handle some claims in the evaluation.
+- **authorizationrules**: A collection of claim rules that are checked first to determine if attestation should proceed to issuancerules. Use this section to filter out calls that don't require the issuance rules to be applied. No claims can be issued from this section to the response token. These rules can be used to fail attestation.
+- **issuancerules**: A collection of claim rules that are evaluated to add information to the attestation result as defined in the policy. The claim rules apply in the defined order. They're also optional. These rules can also be used to add to the outgoing claim set and the response token. These rules can't be used to fail attestation.
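As an illustration, a version 1.1 policy that exercises all four segments could be laid out as follows. This is a sketch, not a production policy: the `issueproperty` configuration syntax is an assumption based on the configuration rules table that follows, and the claim names come from the incoming claims listed later in this article.

```
version=1.1;

configurationrules
{
    // Assumed issueproperty syntax: relax the default AIK certificate requirement.
    => issueproperty(type="require_valid_aik_cert", value=false);
};

authorizationrules
{
    // Let every call proceed to issuancerules; add claim conditions here to fail attestation early.
    => permit();
};

issuancerules
{
    // Copy an incoming claim from the attestation service into the response token.
    c:[type=="secureBootEnabled", issuer=="AttestationService"] => issue(type="secureBootEnabled", value=c.value);
};
```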
-The following **configurationrules** are available to the policy author.
+The following configuration rules are available to the policy author.
-| Attestation Type | ConfigurationRule Property Name | Type | Default Value | Description |
+| Attestation type | ConfigurationRule property name | Type | Default value | Description |
| -- | -- | -- | -- | -- |
-| TPM, VBS | require_valid_aik_cert | Bool | true | Indicates whether a valid AIK certificate is required. Only applied when TPM data is present.|
+| Trusted Platform Module (TPM), virtualization-based security (VBS) | require_valid_aik_cert | Bool | true | Indicates whether a valid attestation identity key (AIK) certificate is required. It's only applied when TPM data is present.|
| TPM, VBS | required_pcr_mask | Int | 0xFFFFFF | The bitmask for PCR indices that must be included in the TPM quote. Bit 0 represents PCR 0, bit 1 represents PCR 1, and so on. |
-List of claims supported as part of the incoming claims.
+The following claims are supported as part of the incoming claims.
### TPM attestation
-Claims to be used by policy authors to define authorization rules in a TPM attestation policy:
+Use these claims to define authorization rules in a TPM attestation policy:
-- **aikValidated**: Boolean value containing information if the Attestation Identity Key (AIK) cert has been validated or not
-- **aikPubHash**: String containing the base64(SHA256(AIK public key in DER format))
-- **tpmVersion**: Integer value containing the Trusted Platform Module (TPM) major version
-- **secureBootEnabled**: Boolean value to indicate if secure boot is enabled
-- **iommuEnabled**: Boolean value to indicate if Input-output memory management unit (Iommu) is enabled
-- **bootDebuggingDisabled**: Boolean value to indicate if boot debugging is disabled
-- **notSafeMode**: Boolean value to indicate if the Windows is not running on safe mode
-- **notWinPE**: Boolean value indicating if Windows is not running in WinPE mode
-- **vbsEnabled**: Boolean value indicating if VBS is enabled
-- **vbsReportPresent**: Boolean value indicating if VBS enclave report is available
+- **aikValidated**: The Boolean value that indicates whether the AIK certificate has been validated.
+- **aikPubHash**: The string that contains base64(SHA256(AIK public key in DER format)).
+- **tpmVersion**: The integer value that contains the TPM major version.
+- **secureBootEnabled**: The Boolean value that indicates if secure boot is enabled.
+- **iommuEnabled**: The Boolean value that indicates if the input-output memory management unit is enabled.
+- **bootDebuggingDisabled**: The Boolean value that indicates if boot debugging is disabled.
+- **notSafeMode**: The Boolean value that indicates if Windows isn't running in safe mode.
+- **notWinPE**: The Boolean value that indicates if Windows isn't running in WinPE mode.
+- **vbsEnabled**: The Boolean value that indicates if VBS is enabled.
+- **vbsReportPresent**: The Boolean value that indicates if a VBS enclave report is available.
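For example, a policy author can combine several of these Boolean claims into a single summary claim. The following sketch uses the same rule syntax as the sample later in this article; `PlatformAttested` is an illustrative claim name defined by the policy, not a built-in claim.

```
issuancerules
{
    // Issue PlatformAttested=true only when both conditions hold.
    c1:[type=="secureBootEnabled", value==true] &&
    c2:[type=="bootDebuggingDisabled", value==true] => issue(type="PlatformAttested", value=true);
};
```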
### VBS attestation
-In addition to the TPM attestation policy claims, below claims can be used by policy authors to define authorization rules in a VBS attestation policy.
+In addition to the TPM attestation policy claims, you can use the following claims to define authorization rules in a VBS attestation policy:
-- **enclaveAuthorId**: String value containing the Base64Url encoded value of the enclave author id-The author identifier of the primary module for the enclave
-- **enclaveImageId**: String value containing the Base64Url encoded value of the enclave Image id-The image identifier of the primary module for the enclave
-- **enclaveOwnerId**: String value containing the Base64Url encoded value of the enclave Owner id-The identifier of the owner for the enclave
-- **enclaveFamilyId**: String value containing the Base64Url encoded value of the enclave Family ID. The family identifier of the primary module for the enclave
-- **enclaveSvn**: Integer value containing the security version number of the primary module for the enclave
-- **enclavePlatformSvn**: Integer value containing the security version number of the platform that hosts the enclave
-- **enclaveFlags**: The enclaveFlags claim is an Integer value containing Flags that describe the runtime policy for the enclave
+- **enclaveAuthorId**: The string value that contains the Base64Url encoded value of the enclave author ID. It's the author identifier of the primary module for the enclave.
+- **enclaveImageId**: The string value that contains the Base64Url encoded value of the enclave image ID. It's the image identifier of the primary module for the enclave.
+- **enclaveOwnerId**: The string value that contains the Base64Url encoded value of the enclave owner ID. It's the identifier of the owner for the enclave.
+- **enclaveFamilyId**: The string value that contains the Base64Url encoded value of the enclave family ID. It's the family identifier of the primary module for the enclave.
+- **enclaveSvn**: The integer value that contains the security version number of the primary module for the enclave.
+- **enclavePlatformSvn**: The integer value that contains the security version number of the platform that hosts the enclave.
+- **enclaveFlags**: The enclaveFlags claim is an integer value that contains flags that describe the runtime policy for the enclave.
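These claims can also gate attestation in `authorizationrules`, for example to reject reports from unexpected enclaves. A sketch, where the author ID is a placeholder to replace with a real Base64Url-encoded value:

```
authorizationrules
{
    // Placeholder: substitute the expected Base64Url-encoded enclave author ID.
    c:[type=="enclaveAuthorId", issuer=="AttestationService", value=="<expected-author-id>"] => permit();
};
```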
## Sample policies for various attestation types
issuancerules
[type=="notSafeMode", value==true] => issue(type="PlatformAttested", value=false); }; ```
-The required_pcr_mask restricts the evaluation of PCR matches to only PCR 0,1,2,3.
-The require_valid_aik_cert marked as false, indicates that the aik cert is not a requirement and is later verified in the issuancerules to determine the PlatformAttested state.
+
+The `required_pcr_mask` configuration rule restricts the evaluation of PCR matches to only PCR 0, 1, 2, and 3.
+The `require_valid_aik_cert` configuration rule set to `false` indicates that the AIK certificate isn't a requirement and is later verified in `issuancerules` to determine the `PlatformAttested` state.
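For reference, the configuration segment that this explanation assumes could look like the following sketch. The `issueproperty` syntax is an assumption; the mask value 0x0F is binary 1111, so only bits 0 through 3 (PCR 0, 1, 2, and 3) are required.

```
configurationrules
{
    // 0x0F = 0b1111: require only PCR 0, 1, 2, and 3 in the TPM quote.
    => issueproperty(type="required_pcr_mask", value=0x0F);
    // Don't fail on a missing AIK certificate; issuancerules verifies it instead.
    => issueproperty(type="require_valid_aik_cert", value=false);
};
```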
attestation Policy Version 1 2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/policy-version-1-2.md
Title: Azure Attestation policy version 1.2
-description: policy version 1.2
+description: Learn about Azure Attestation policy version 1.2 to define what must be validated during the attestation flow.
-# Attestation Policy Version 1.2
+# Attestation policy version 1.2
-Instance owners can use the Attestation policy to define what needs to be validated during the attestation flow.
-This article introduces the workings of the attestation service and the policy engine. Each attestation type has its own attestation policy, however the supported grammar, and processing is broadly the same.
+Instance owners can use the Azure Attestation policy to define what must be validated during the attestation flow. This article introduces the workings of the attestation service and the policy engine. Each attestation type has its own attestation policy. The supported grammar and processing are broadly the same.
-## Policy Version 1.2
+## Policy version 1.2
The attestation flow is as follows:

- The platform sends the attestation evidence in the attest call to the attestation service.
-- The attestation service parses the evidence and creates a list of claims that is then used in the attestation evaluation. The evidence is also parsed and maintained as a JSON format, which is used to provide a broader set of measurements to the policy writer. These claims are logically categorized as incoming claims sets.
-- The attestation policy uploaded by the owner of the attestation service instance is then used to evaluate and issue claims to the response. The policy writer can now use JmesPath based queries to search in the evidence to create their own claims and subsequent claim rules. During the evaluation, configuration rules can also be used to indicate to the policy evaluation engine how to handle certain claims.
+- The attestation service parses the evidence and creates a list of claims that are used in the attestation evaluation. The evidence is also parsed and maintained as a JSON format, which is used to provide a broader set of measurements to the policy writer. These claims are logically categorized as incoming claim sets.
+- The attestation policy uploaded by the owner of the attestation service instance is then used to evaluate and issue claims to the response. You can now use JmesPath-based queries to search in the evidence to create your own claims and subsequent claim rules. During the evaluation, configuration rules can also be used to indicate to the policy evaluation engine how to handle certain claims.
Policy version 1.2 has four segments:

- **version:** The version is the version number of the grammar.
-- **configurationrules:** During policy evaluation, sometimes it may be required to control the behavior of the policy engine itself. Configuration rules can be used to indicate to the policy evaluation engine how to handle some claims in the evaluation.
-- **authorizationrules:** A collection of claim rules that will be checked first, to determine if attestation should continue to issuancerules. This section should be used to filter out calls that don't require the issuancerules to be applied. No claims can be issued from this section to the response token. These rules can be used to fail attestation.
-**issuancerules:** A collection of claim rules that will be evaluated to add information to the attestation result as defined in the policy. The claim rules apply in the order they're defined and are also optional. A collection of claim rules that will be evaluated to add information to the attestation result as defined in the policy. The claim rules apply in the order they are defined and are also optional. These rules can be used to add to the outgoing claim set and the response token, these rules can't be used to fail attestation.
+- **configurationrules:** During policy evaluation, sometimes you might be required to control the behavior of the policy engine itself. You can use configuration rules to indicate to the policy evaluation engine how to handle some claims in the evaluation.
+- **authorizationrules:** A collection of claim rules that are checked first to determine if attestation should continue to issuancerules. Use this section to filter out calls that don't require issuance rules to be applied. No claims can be issued from this section to the response token. These rules can be used to fail attestation.
+- **issuancerules:** A collection of claim rules that are evaluated to add information to the attestation result as defined in the policy. The claim rules apply in the order in which they're defined. They're also optional. These rules can be used to add to the outgoing claim set and the response token. These rules can't be used to fail attestation.
-The following **configurationrules** are available to the policy author.
+The following configuration rules are available to the policy author.
-| Attestation Type | ConfigurationRule Property Name | Type | Default Value | Description |
+| Attestation type | ConfigurationRule property name | Type | Default value | Description |
| -- | -- | -- | -- | -- |
-| TPM, VBS | require_valid_aik_cert | Bool | true | Indicates whether a valid AIK certificate is required. Only applied when TPM data is present.|
+| Trusted Platform Module (TPM), virtualization-based security (VBS) | require_valid_aik_cert | Bool | true | Indicates whether a valid attestation identity key certificate is required. It's only applied when TPM data is present.|
| TPM, VBS | required_pcr_mask | Int | 0xFFFFFF | The bitmask for PCR indices that must be included in the TPM quote. Bit 0 represents PCR 0, bit 1 represents PCR 1, and so on. |

## List of claims supported as part of the incoming claims
-Policy Version 1.2 also introduces functions to the policy grammar. Read more about the functions [here](claim-rule-functions.md). With the introduction of JmesPath-based functions, incoming claims can be generated as needed by the attestation policy author.
+Policy version 1.2 also introduces functions to the policy grammar. Read more about the functions in [Claim rule functions](claim-rule-functions.md). With the introduction of JmesPath-based functions, incoming claims can be generated as needed by the attestation policy author.
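For example, the rules in the following table share a common pattern: pull a value out of the raw `events` evidence with `JmesPath`, convert it with `JsonToClaimValue`, and issue a summary claim. A simplified sketch of that pattern (the PCR index filters used in the table are dropped here for readability):

```
// Collect the trust-boundary measurements from the event log.
c:[type=="events", issuer=="AttestationService"] => add(type="boolProperties", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG'].ProcessedData.EVENT_TRUSTBOUNDARY"));
// Extract the boot-debugging flags from the collected measurements.
c:[type=="boolProperties", issuer=="AttestationPolicy"] => add(type="bootDebuggingEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_BOOTDEBUGGING")));
// Issue the claim only if every measurement reports boot debugging as off.
c:[type=="bootDebuggingEnabledSet", issuer=="AttestationPolicy"] => issue(type="bootDebuggingDisabled", value=ContainsOnlyValue(c.value, false));
```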
-Some of the key rules that can be used to generate claims are listed below.
+Some of the key rules you can use to generate claims are listed in the following table.
-|Feature |Brief Description |Policy Rule |
+|Feature |Description |Policy rule |
|--|-|--|
-| Secure Boot |Device boots using only software that is trusted by the (OEM): Msft | `c:[type == "events", issuer=="AttestationService"] => add(type = "efiConfigVariables", value = JmesPath(c.value, "Events[?EventTypeString == 'EV_EFI_VARIABLE_DRIVER_CONFIG' && ProcessedData.VariableGuid == '8BE4DF61-93CA-11D2-AA0D-00E098032B8C']")); => issue(type = "secureBootEnabled", value = JsonToClaimValue(JmesPath(c.value, "[?ProcessedData.UnicodeName == 'SecureBoot'] \| length(@) == `1` && @[0].ProcessedData.VariableData == 'AQ'"))); \![type=="secureBootEnabled", issuer=="AttestationPolicy"] => issue(type="secureBootEnabled", value=false);` |
-| Code Integrity |Code integrity is a feature that validates the integrity of a driver or system file each time it is loaded into memory| `// Retrieve bool propertiesc:[type=="events", issuer=="AttestationService"] => add(type="boolProperties", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `13` \|\| PcrIndex == `19` \|\| PcrIndex == `20`)].ProcessedData.EVENT_TRUSTBOUNDARY"));c:[type=="boolProperties", issuer=="AttestationPolicy"] => add(type="codeIntegrityEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_CODEINTEGRITY")));c:[type=="codeIntegrityEnabledSet", issuer=="AttestationPolicy"] => issue(type="codeIntegrityEnabled", value=ContainsOnlyValue(c.value, true));\![type=="codeIntegrityEnabled", issuer=="AttestationPolicy"] => issue(type="codeIntegrityEnabled", value=false);` |
+| Secure boot |Device boots using only software that's trusted by the OEM, which is Microsoft. | `c:[type == "events", issuer=="AttestationService"] => add(type = "efiConfigVariables", value = JmesPath(c.value, "Events[?EventTypeString == 'EV_EFI_VARIABLE_DRIVER_CONFIG' && ProcessedData.VariableGuid == '8BE4DF61-93CA-11D2-AA0D-00E098032B8C']")); => issue(type = "secureBootEnabled", value = JsonToClaimValue(JmesPath(c.value, "[?ProcessedData.UnicodeName == 'SecureBoot'] \| length(@) == `1` && @[0].ProcessedData.VariableData == 'AQ'"))); \![type=="secureBootEnabled", issuer=="AttestationPolicy"] => issue(type="secureBootEnabled", value=false);` |
+| Code integrity |Code integrity is a feature that validates the integrity of a driver or system file each time it's loaded into memory.| `// Retrieve bool propertiesc:[type=="events", issuer=="AttestationService"] => add(type="boolProperties", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `13` \|\| PcrIndex == `19` \|\| PcrIndex == `20`)].ProcessedData.EVENT_TRUSTBOUNDARY"));c:[type=="boolProperties", issuer=="AttestationPolicy"] => add(type="codeIntegrityEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_CODEINTEGRITY")));c:[type=="codeIntegrityEnabledSet", issuer=="AttestationPolicy"] => issue(type="codeIntegrityEnabled", value=ContainsOnlyValue(c.value, true));\![type=="codeIntegrityEnabled", issuer=="AttestationPolicy"] => issue(type="codeIntegrityEnabled", value=false);` |
|BitLocker [Boot state] |Used for encryption of device drives.| `// Bitlocker Boot Status, The first non zero measurement or zero.c:[type=="events", issuer=="AttestationService"] => add(type="srtmDrtmEventPcr", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `19`)].ProcessedData.EVENT_TRUSTBOUNDARY"));c:[type=="srtmDrtmEventPcr", issuer=="AttestationPolicy"] => issue(type="bitlockerStatus", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_BITLOCKER_UNLOCK \| @[? Value != `0`].Value \| @[0]")));\![type=="bitlockerStatus"] => issue(type="bitlockerStatus", value=0);Nonzero means enabled.` |
-| Early launch Antimalware | ELAM protects against loading unsigned/malicious drivers during boot. | `// Elam Driver (windows defender) Loaded.c:[type=="events", issuer=="AttestationService"] => add(type="boolProperties", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `13` \|\| PcrIndex == `19` \|\| PcrIndex == `20`)].ProcessedData.EVENT_TRUSTBOUNDARY"));c:[type=="boolProperties", issuer=="AttestationPolicy"] => issue(type="elamDriverLoaded", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_LOADEDMODULE_AGGREGATION[] \| [? EVENT_IMAGEVALIDATED == `true` && (equals_ignore_case(EVENT_FILEPATH, '\\windows\\system32\\drivers\\wdboot.sys') \|\| equals_ignore_case(EVENT_FILEPATH, '\\windows\\system32\\drivers\\wd\\wdboot.sys'))] \| @ != `null`")));![type=="elamDriverLoaded", issuer=="AttestationPolicy"] => issue(type="elamDriverLoaded", value=false);` |
-| Boot Debugging |Allows the user to connect to a boot debugger. Can be used to bypass Secure Boot and other boot protections. | `// Boot debuggingc:[type=="events", issuer=="AttestationService"] => add(type="boolProperties", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `13` \|\| PcrIndex == `19` \|\| PcrIndex == `20`)].ProcessedData.EVENT_TRUSTBOUNDARY"));c:[type=="boolProperties", issuer=="AttestationPolicy"] => add(type="bootDebuggingEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_BOOTDEBUGGING")));c:[type=="bootDebuggingEnabledSet", issuer=="AttestationPolicy"] => issue(type="bootDebuggingDisabled", value=ContainsOnlyValue(c.value, false));\![type=="bootDebuggingDisabled", issuer=="AttestationPolicy"] => issue(type="bootDebuggingDisabled", value=false);` |
-| Kernel Debugging | Allows the user to connect a kernel debugger. Grants access to all system resources (less VSM protected resources). | `// Kernel Debuggingc:[type=="events", issuer=="AttestationService"] => add(type="boolProperties", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `13` \|\| PcrIndex == `19` \|\| PcrIndex == `20`)].ProcessedData.EVENT_TRUSTBOUNDARY"));c:[type=="boolProperties", issuer=="AttestationPolicy"] => add(type="osKernelDebuggingEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_OSKERNELDEBUG")));c:[type=="osKernelDebuggingEnabledSet", issuer=="AttestationPolicy"] => issue(type="osKernelDebuggingDisabled", value=ContainsOnlyValue(c.value, false));\![type=="osKernelDebuggingDisabled", issuer=="AttestationPolicy"] => issue(type="osKernelDebuggingDisabled", value=false);` |
-|Data Execution Prevention Policy | Data Execution Prevention (DEP) Policy defines is a set of hardware and software technologies that perform additional checks on memory to help prevent malicious code from running on a system. | `// DEP Policyc:[type=="events", issuer=="AttestationService"] => add(type="boolProperties", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `13` \|\| PcrIndex == `19` \|\| PcrIndex == `20`)].ProcessedData.EVENT_TRUSTBOUNDARY"));c:[type=="boolProperties", issuer=="AttestationPolicy"] => issue(type="depPolicy", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_DATAEXECUTIONPREVENTION.Value \| @[-1]")));\![type=="depPolicy"] => issue(type="depPolicy", value=0);` |
-| Test and Flight Signing | Enables the user to run test signed code. | `// Test Signing< c:[type=="events", issuer=="AttestationService"] => add(type="boolProperties", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `13` \|\| PcrIndex == `19` \|\| PcrIndex == `20`)].ProcessedData.EVENT_TRUSTBOUNDARY")); c:[type=="boolProperties", issuer=="AttestationPolicy"] => add(type="testSigningEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_TESTSIGNING"))); c:[type=="testSigningEnabledSet", issuer=="AttestationPolicy"] => issue(type="testSigningDisabled", value=ContainsOnlyValue(c.value, false)); ![type=="testSigningDisabled", issuer=="AttestationPolicy"] => issue(type="testSigningDisabled", value=false);//Flight Signingc:[type=="boolProperties", issuer=="AttestationPolicy"] => add(type="flightSigningEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "\[\*\].EVENT_FLIGHTSIGNING")));c:[type=="flightSigningEnabledSet", issuer=="AttestationPolicy"] => issue(type="flightSigningNotEnabled", value=ContainsOnlyValue(c.value, false));![type=="flightSigningNotEnabled", issuer=="AttestationPolicy"] => issue(type="flightSigningNotEnabled", value=false);` |
-| Virtual Security Mode (VSM/VBS) | VBS uses the Windows hypervisor to create this virtual secure mode that is used to protect vital system and operating system resources, credentials, etc. | `// VSM enabled c:[type=="events", issuer=="AttestationService"] => add(type="srtmDrtmEventPcr", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `19`)].ProcessedData.EVENT_TRUSTBOUNDARY"));c:[type=="srtmDrtmEventPcr", issuer=="AttestationPolicy"] => add(type="vsmEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "\[\*\].EVENT_VBS_VSM_REQUIRED")));c:[type=="srtmDrtmEventPcr", issuer=="AttestationPolicy"] => add(type="vsmEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_VBS_MANDATORY_ENFORCEMENT")));c:[type=="vsmEnabledSet", issuer=="AttestationPolicy"] => issue(type="vsmEnabled", value=ContainsOnlyValue(c.value, true));![type=="vsmEnabled", issuer=="AttestationPolicy"] => issue(type="vsmEnabled", value=false);c:[type=="vsmEnabled", issuer=="AttestationPolicy"] => issue(type="vbsEnabled", value=c.value);` |
-| HVCI | Hyper Visor based Code integrity is a feature that validates the integrity of a system file each time it is loaded into memory.| `// HVCIc:[type=="events", issuer=="AttestationService"] => add(type="srtmDrtmEventPcr", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `19`)].ProcessedData.EVENT_TRUSTBOUNDARY"));c:[type=="srtmDrtmEventPcr", issuer=="AttestationPolicy"] => add(type="hvciEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_VBS_HVCI_POLICY \| @[?String == 'HypervisorEnforcedCodeIntegrityEnable'].Value")));c:[type=="hvciEnabledSet", issuer=="AttestationPolicy"] => issue(type="hvciEnabled", value=ContainsOnlyValue(c.value, 1));![type=="hvciEnabled", issuer=="AttestationPolicy"] => issue(type="hvciEnabled", value=false);` |
-| IOMMUInput Output Memory Management Unit | Input Output Memory Management Unit (IOMMU) translates virtual to physical memory addresses for Direct Memory Access (DMA) capable device peripherals. Protects sensitive memory regions. | `// IOMMUc:[type=="events", issuer=="AttestationService"] => add(type="boolProperties", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `13` \|\| PcrIndex == `19` \|\| PcrIndex == `20`)].ProcessedData.EVENT_TRUSTBOUNDARY"));c:[type=="boolProperties", issuer=="AttestationPolicy"] => add(type="iommuEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_VBS_IOMMU_REQUIRED")));c:[type=="iommuEnabledSet", issuer=="AttestationPolicy"] => issue(type="iommuEnabled", value=ContainsOnlyValue(c.value, true));![type=="iommuEnabled", issuer=="AttestationPolicy"] => issue(type="iommuEnabled", value=false);` |
-| PCR Value evaluation | PCRs contain measurement(s) of components that are made during the boot. These can be used to verify the components against golden/known measurements. | `//PCRS are only read-only and thus cannot be used with issue operation, but they can be used to validate expected/golden measurements.c:[type == "pcrs", issuer=="AttestationService"] && c1:[type=="pcrMatchesExpectedValue", value==JsonToClaimValue(JmesPath(c.value, "PCRs[? Index == `0`].Digests.SHA1 \| @[0] == `\"KCk6Ow\"`"))] => issue(claim = c1);` |
-| Boot Manager Version | The security version number of the Boot Manager that was loaded during initial boot on the attested device. | `// Find the first EVENT_APPLICATION_SVN. That value is the Boot Manager SVN// Find the first EV_SEPARATOR in PCR 12, 13, Or 14c:[type=="events", issuer=="AttestationService"] => add(type="evSeparatorSeq", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_SEPARATOR' && (PcrIndex == `12` \|\| PcrIndex == `13` \|\| PcrIndex == `14`)] \| @[0].EventSeq"));c:[type=="evSeparatorSeq", value != "null", issuer=="AttestationPolicy"] => add(type="beforeEvSepClause", value=AppendString(AppendString("Events[? EventSeq < `", c.value), "`"));[type=="evSeparatorSeq", value=="null", issuer=="AttestationPolicy"] => add(type="beforeEvSepClause", value="Events[? `true` ");// Find the first EVENT_APPLICATION_SVN. That value is the Boot Manager SVNc:[type=="beforeEvSepClause", issuer=="AttestationPolicy"] => add(type="bootMgrSvnSeqQuery", value=AppendString(c.value, " && EventTypeString == 'EV_EVENT_TAG' && PcrIndex == `12` && ProcessedData.EVENT_TRUSTBOUNDARY.EVENT_APPLICATION_SVN] \| @[0].EventSeq"));c1:[type=="bootMgrSvnSeqQuery", issuer=="AttestationPolicy"] && c2:[type=="events", issuer=="AttestationService"] => add(type="bootMgrSvnSeq", value=JmesPath(c2.value, c1.value));c:[type=="bootMgrSvnSeq", value!="null", issuer=="AttestationPolicy"] => add(type="bootMgrSvnQuery", value=AppendString(AppendString("Events[? EventSeq == `", c.value), "`].ProcessedData.EVENT_TRUSTBOUNDARY.EVENT_APPLICATION_SVN \| @[0]"));c1:[type=="bootMgrSvnQuery", issuer=="AttestationPolicy"] && c2:[type=="events", issuer=="AttestationService"] => issue(type="bootMgrSvn", value=JsonToClaimValue(JmesPath(c2.value, c1.value)));` |
-| Safe Mode | Safe mode is a troubleshooting option for Windows that starts your computer in a limited state. Only the basic files and drivers necessary to run Windows are started. | `// Safe modec:[type=="events", issuer=="AttestationService"] => add(type="boolProperties", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `13` \|\| PcrIndex == `19` \|\| PcrIndex == `20`)].ProcessedData.EVENT_TRUSTBOUNDARY"));c:[type=="boolProperties", issuer=="AttestationPolicy"] => add(type="safeModeEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_SAFEMODE")));c:[type=="safeModeEnabledSet", issuer=="AttestationPolicy"] => issue(type="notSafeMode", value=ContainsOnlyValue(c.value, false));![type=="notSafeMode", issuer=="AttestationPolicy"] => issue(type="notSafeMode", value=true);` |
-| Win PE boot | Windows pre-installation Environment (Windows PE) is a minimal operating system with limited services that is used to prepare a computer for Windows installation, to copy disk images from a network file server, and to initiate Windows Setup. | `// Win PEc:[type=="events", issuer=="AttestationService"] => add(type="boolProperties", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `13` \|\| PcrIndex == `19` \|\| PcrIndex == `20`)].ProcessedData.EVENT_TRUSTBOUNDARY"));c:[type=="boolProperties", issuer=="AttestationPolicy"] => add(type="winPEEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "\[\*\].EVENT_WINPE")));c:[type=="winPEEnabledSet", issuer=="AttestationPolicy"] => issue(type="notWinPE", value=ContainsOnlyValue(c.value, false));![type=="notWinPE", issuer=="AttestationPolicy"] => issue(type="notWinPE", value=true);` |
-| CI Policy | Hash of Code Integrity policy that is controlling the security of the boot environment | `// CI Policyc :[type=="events", issuer=="AttestationService"] => issue(type="codeIntegrityPolicy", value=JsonToClaimValue(JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && PcrIndex == `13`].ProcessedData.EVENT_TRUSTBOUNDARY.EVENT_SI_POLICY[].RawData")));`|
-| Secure Boot Configuration Policy Hash | SBCPHash is the fingerprint of the Custom Secure Boot Configuration Policy (SBCP) that was loaded during boot in Windows devices, except PCs. | `// Secure Boot Custom Policyc:[type=="events", issuer=="AttestationService"] => issue(type="secureBootCustomPolicy", value=JsonToClaimValue(JmesPath(c.value, "Events[? EventTypeString == 'EV_EFI_VARIABLE_DRIVER_CONFIG' && PcrIndex == `7` && ProcessedData.UnicodeName == 'CurrentPolicy' && ProcessedData.VariableGuid == '77FA9ABD-0359-4D32-BD60-28F4E78F784B'].ProcessedData.VariableData \| @[0]")));` |
-| Boot Application SVN | The version of the Boot Manager that is running on the device. | `// Find the first EV_SEPARATOR in PCR 12, 13, Or 14, the ordering of the events is critical to ensure correctness.c:[type=="events", issuer=="AttestationService"] => add(type="evSeparatorSeq", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_SEPARATOR' && (PcrIndex == `12` \|\| PcrIndex == `13` \|\| PcrIndex == `14`)] \| @[0].EventSeq"));c:[type=="evSeparatorSeq", value != "null", issuer=="AttestationPolicy"] => add(type="beforeEvSepClause", value=AppendString(AppendString("Events[? EventSeq < `", c.value), "`"));[type=="evSeparatorSeq", value=="null", issuer=="AttestationPolicy"] => add(type="beforeEvSepClause", value="Events[? `true` "); // No restriction of EV_SEPARATOR in case it is not present// Find the first EVENT_TRANSFER_CONTROL with value 1 or 2 in PCR 12 that is before the EV_SEPARATORc1:[type=="beforeEvSepClause", issuer=="AttestationPolicy"] && c2:[type=="bootMgrSvnSeq", value != "null", issuer=="AttestationPolicy"] => add(type="beforeEvSepAfterBootMgrSvnClause", value=AppendString(AppendString(AppendString(c1.value, "&& EventSeq >= `"), c2.value), "`"));c:[type=="beforeEvSepAfterBootMgrSvnClause", issuer=="AttestationPolicy"] => add(type="tranferControlQuery", value=AppendString(c.value, " && EventTypeString == 'EV_EVENT_TAG' && PcrIndex == `12`&& (ProcessedData.EVENT_TRUSTBOUNDARY.EVENT_TRANSFER_CONTROL.Value == `1` \|\| ProcessedData.EVENT_TRUSTBOUNDARY.EVENT_TRANSFER_CONTROL.Value == `2`)] \| @[0].EventSeq"));c1:[type=="tranferControlQuery", issuer=="AttestationPolicy"] && c2:[type=="events", issuer=="AttestationService"] => add(type="tranferControlSeq", value=JmesPath(c2.value, c1.value));// Find the first non-null EVENT_MODULE_SVN in PCR 13 after the transfer control.c:[type=="tranferControlSeq", value!="null", issuer=="AttestationPolicy"] => add(type="afterTransferCtrlClause", value=AppendString(AppendString(" && EventSeq > `", c.value), "`"));c1:[type=="beforeEvSepClause", issuer=="AttestationPolicy"] && c2:[type=="afterTransferCtrlClause", issuer=="AttestationPolicy"] => add(type="moduleQuery", value=AppendString(AppendString(c1.value, c2.value), " && EventTypeString == 'EV_EVENT_TAG' && PcrIndex == `13` && ((ProcessedData.EVENT_TRUSTBOUNDARY.EVENT_LOADEDMODULE_AGGREGATION[].EVENT_MODULE_SVN \| @[0]) \|\| (ProcessedData.EVENT_LOADEDMODULE_AGGREGATION[].EVENT_MODULE_SVN \| @[0]))].EventSeq \| @[0]"));c1:[type=="moduleQuery", issuer=="AttestationPolicy"] && c2:[type=="events", issuer=="AttestationService"] => add(type="moduleSeq", value=JmesPath(c2.value, c1.value));// Find the first EVENT_APPLICATION_SVN after EV_EVENT_TAG in PCR 12. That value is Boot App SVNc:[type=="moduleSeq", value!="null", issuer=="AttestationPolicy"] => add(type="applicationSvnAfterModuleClause", value=AppendString(AppendString(" && EventSeq > `", c.value), "`"));c1:[type=="beforeEvSepClause", issuer=="AttestationPolicy"] && c2:[type=="applicationSvnAfterModuleClause", issuer=="AttestationPolicy"] => add(type="bootAppSvnQuery", value=AppendString(AppendString(c1.value, c2.value), " && EventTypeString == 'EV_EVENT_TAG' && PcrIndex == `12`].ProcessedData.EVENT_TRUSTBOUNDARY.EVENT_APPLICATION_SVN \| @[0]"));c1:[type=="bootAppSvnQuery", issuer=="AttestationPolicy"] && c2:[type=="events", issuer=="AttestationService"] => issue(type="bootAppSvn", value=JsonToClaimValue(JmesPath(c2.value, c1.value)));` |
-| Boot Revision List | Boot Revision List used to Direct the device to an enterprise honeypot, to further monitor the device's activities. | `// Boot Rev List Info c:[type=="events", issuer=="AttestationService"] => issue(type="bootRevListInfo", value=JsonToClaimValue(JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && PcrIndex == `13`].ProcessedData.EVENT_TRUSTBOUNDARY.EVENT_BOOT_REVOCATION_LIST.RawData \| @[0]")));` |
+| Early Launch Antimalware (ELAM) | ELAM protects against loading unsigned or malicious drivers during boot. | `// Elam Driver (windows defender) Loaded.c:[type=="events", issuer=="AttestationService"] => add(type="boolProperties", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `13` \|\| PcrIndex == `19` \|\| PcrIndex == `20`)].ProcessedData.EVENT_TRUSTBOUNDARY"));c:[type=="boolProperties", issuer=="AttestationPolicy"] => issue(type="elamDriverLoaded", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_LOADEDMODULE_AGGREGATION[] \| [? EVENT_IMAGEVALIDATED == `true` && (equals_ignore_case(EVENT_FILEPATH, '\\windows\\system32\\drivers\\wdboot.sys') \|\| equals_ignore_case(EVENT_FILEPATH, '\\windows\\system32\\drivers\\wd\\wdboot.sys'))] \| @ != `null`")));![type=="elamDriverLoaded", issuer=="AttestationPolicy"] => issue(type="elamDriverLoaded", value=false);` |
+| Boot debugging |Allows the user to connect to a boot debugger. Can be used to bypass secure boot and other boot protections. | `// Boot debuggingc:[type=="events", issuer=="AttestationService"] => add(type="boolProperties", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `13` \|\| PcrIndex == `19` \|\| PcrIndex == `20`)].ProcessedData.EVENT_TRUSTBOUNDARY"));c:[type=="boolProperties", issuer=="AttestationPolicy"] => add(type="bootDebuggingEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_BOOTDEBUGGING")));c:[type=="bootDebuggingEnabledSet", issuer=="AttestationPolicy"] => issue(type="bootDebuggingDisabled", value=ContainsOnlyValue(c.value, false));\![type=="bootDebuggingDisabled", issuer=="AttestationPolicy"] => issue(type="bootDebuggingDisabled", value=false);` |
+| Kernel debugging | Allows the user to connect a kernel debugger. Grants access to all system resources (less virtualization-based security [VBS] protected resources). | `// Kernel Debuggingc:[type=="events", issuer=="AttestationService"] => add(type="boolProperties", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `13` \|\| PcrIndex == `19` \|\| PcrIndex == `20`)].ProcessedData.EVENT_TRUSTBOUNDARY"));c:[type=="boolProperties", issuer=="AttestationPolicy"] => add(type="osKernelDebuggingEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_OSKERNELDEBUG")));c:[type=="osKernelDebuggingEnabledSet", issuer=="AttestationPolicy"] => issue(type="osKernelDebuggingDisabled", value=ContainsOnlyValue(c.value, false));\![type=="osKernelDebuggingDisabled", issuer=="AttestationPolicy"] => issue(type="osKernelDebuggingDisabled", value=false);` |
+|Data Execution Prevention (DEP) policy | DEP policy is a set of hardware and software technologies that perform extra checks on memory to help prevent malicious code from running on a system. | `// DEP Policyc:[type=="events", issuer=="AttestationService"] => add(type="boolProperties", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `13` \|\| PcrIndex == `19` \|\| PcrIndex == `20`)].ProcessedData.EVENT_TRUSTBOUNDARY"));c:[type=="boolProperties", issuer=="AttestationPolicy"] => issue(type="depPolicy", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_DATAEXECUTIONPREVENTION.Value \| @[-1]")));\![type=="depPolicy"] => issue(type="depPolicy", value=0);` |
+| Test and flight signing | Enables the user to run test-signed code. | `// Test Signing< c:[type=="events", issuer=="AttestationService"] => add(type="boolProperties", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `13` \|\| PcrIndex == `19` \|\| PcrIndex == `20`)].ProcessedData.EVENT_TRUSTBOUNDARY")); c:[type=="boolProperties", issuer=="AttestationPolicy"] => add(type="testSigningEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_TESTSIGNING"))); c:[type=="testSigningEnabledSet", issuer=="AttestationPolicy"] => issue(type="testSigningDisabled", value=ContainsOnlyValue(c.value, false)); ![type=="testSigningDisabled", issuer=="AttestationPolicy"] => issue(type="testSigningDisabled", value=false);//Flight Signingc:[type=="boolProperties", issuer=="AttestationPolicy"] => add(type="flightSigningEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "\[\*\].EVENT_FLIGHTSIGNING")));c:[type=="flightSigningEnabledSet", issuer=="AttestationPolicy"] => issue(type="flightSigningNotEnabled", value=ContainsOnlyValue(c.value, false));![type=="flightSigningNotEnabled", issuer=="AttestationPolicy"] => issue(type="flightSigningNotEnabled", value=false);` |
+| Virtual Secure Mode/VBS | VBS uses the Windows hypervisor to create this virtual secure mode that's used to protect vital system and operating system resources and credentials. | `// VSM enabled c:[type=="events", issuer=="AttestationService"] => add(type="srtmDrtmEventPcr", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `19`)].ProcessedData.EVENT_TRUSTBOUNDARY"));c:[type=="srtmDrtmEventPcr", issuer=="AttestationPolicy"] => add(type="vsmEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "\[\*\].EVENT_VBS_VSM_REQUIRED")));c:[type=="srtmDrtmEventPcr", issuer=="AttestationPolicy"] => add(type="vsmEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_VBS_MANDATORY_ENFORCEMENT")));c:[type=="vsmEnabledSet", issuer=="AttestationPolicy"] => issue(type="vsmEnabled", value=ContainsOnlyValue(c.value, true));![type=="vsmEnabled", issuer=="AttestationPolicy"] => issue(type="vsmEnabled", value=false);c:[type=="vsmEnabled", issuer=="AttestationPolicy"] => issue(type="vbsEnabled", value=c.value);` |
+| Hypervisor-protected code integrity (HVCI) | HVCI is a feature that validates the integrity of a system file each time it's loaded into memory.| `// HVCIc:[type=="events", issuer=="AttestationService"] => add(type="srtmDrtmEventPcr", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `19`)].ProcessedData.EVENT_TRUSTBOUNDARY"));c:[type=="srtmDrtmEventPcr", issuer=="AttestationPolicy"] => add(type="hvciEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_VBS_HVCI_POLICY \| @[?String == 'HypervisorEnforcedCodeIntegrityEnable'].Value")));c:[type=="hvciEnabledSet", issuer=="AttestationPolicy"] => issue(type="hvciEnabled", value=ContainsOnlyValue(c.value, 1));![type=="hvciEnabled", issuer=="AttestationPolicy"] => issue(type="hvciEnabled", value=false);` |
+| Input-output memory management unit (IOMMU) | IOMMU translates virtual to physical memory addresses for Direct memory access-capable device peripherals. IOMMU protects sensitive memory regions. | `// IOMMUc:[type=="events", issuer=="AttestationService"] => add(type="boolProperties", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `13` \|\| PcrIndex == `19` \|\| PcrIndex == `20`)].ProcessedData.EVENT_TRUSTBOUNDARY"));c:[type=="boolProperties", issuer=="AttestationPolicy"] => add(type="iommuEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_VBS_IOMMU_REQUIRED")));c:[type=="iommuEnabledSet", issuer=="AttestationPolicy"] => issue(type="iommuEnabled", value=ContainsOnlyValue(c.value, true));![type=="iommuEnabled", issuer=="AttestationPolicy"] => issue(type="iommuEnabled", value=false);` |
+| PCR value evaluation | PCRs contain measurements of components that are made during the boot. These measurements can be used to verify the components against golden or known measurements. | `//PCRS are only read-only and thus cannot be used with issue operation, but they can be used to validate expected/golden measurements.c:[type == "pcrs", issuer=="AttestationService"] && c1:[type=="pcrMatchesExpectedValue", value==JsonToClaimValue(JmesPath(c.value, "PCRs[? Index == `0`].Digests.SHA1 \| @[0] == `\"KCk6Ow\"`"))] => issue(claim = c1);` |
+| Boot Manager version | The security version number of the Boot Manager that was loaded during initial boot on the attested device. | `// Find the first EVENT_APPLICATION_SVN. That value is the Boot Manager SVN// Find the first EV_SEPARATOR in PCR 12, 13, Or 14c:[type=="events", issuer=="AttestationService"] => add(type="evSeparatorSeq", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_SEPARATOR' && (PcrIndex == `12` \|\| PcrIndex == `13` \|\| PcrIndex == `14`)] \| @[0].EventSeq"));c:[type=="evSeparatorSeq", value != "null", issuer=="AttestationPolicy"] => add(type="beforeEvSepClause", value=AppendString(AppendString("Events[? EventSeq < `", c.value), "`"));[type=="evSeparatorSeq", value=="null", issuer=="AttestationPolicy"] => add(type="beforeEvSepClause", value="Events[? `true` ");// Find the first EVENT_APPLICATION_SVN. That value is the Boot Manager SVNc:[type=="beforeEvSepClause", issuer=="AttestationPolicy"] => add(type="bootMgrSvnSeqQuery", value=AppendString(c.value, " && EventTypeString == 'EV_EVENT_TAG' && PcrIndex == `12` && ProcessedData.EVENT_TRUSTBOUNDARY.EVENT_APPLICATION_SVN] \| @[0].EventSeq"));c1:[type=="bootMgrSvnSeqQuery", issuer=="AttestationPolicy"] && c2:[type=="events", issuer=="AttestationService"] => add(type="bootMgrSvnSeq", value=JmesPath(c2.value, c1.value));c:[type=="bootMgrSvnSeq", value!="null", issuer=="AttestationPolicy"] => add(type="bootMgrSvnQuery", value=AppendString(AppendString("Events[? EventSeq == `", c.value), "`].ProcessedData.EVENT_TRUSTBOUNDARY.EVENT_APPLICATION_SVN \| @[0]"));c1:[type=="bootMgrSvnQuery", issuer=="AttestationPolicy"] && c2:[type=="events", issuer=="AttestationService"] => issue(type="bootMgrSvn", value=JsonToClaimValue(JmesPath(c2.value, c1.value)));` |
+| Safe mode | Safe mode is a troubleshooting option for Windows that starts your computer in a limited state. Only the basic files and drivers necessary to run Windows are started. | `// Safe modec:[type=="events", issuer=="AttestationService"] => add(type="boolProperties", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `13` \|\| PcrIndex == `19` \|\| PcrIndex == `20`)].ProcessedData.EVENT_TRUSTBOUNDARY"));c:[type=="boolProperties", issuer=="AttestationPolicy"] => add(type="safeModeEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_SAFEMODE")));c:[type=="safeModeEnabledSet", issuer=="AttestationPolicy"] => issue(type="notSafeMode", value=ContainsOnlyValue(c.value, false));![type=="notSafeMode", issuer=="AttestationPolicy"] => issue(type="notSafeMode", value=true);` |
+| WinPE boot | Windows pre-installation Environment (Windows PE) is a minimal operating system with limited services that's used to prepare a computer for Windows installation, to copy disk images from a network file server, and to initiate Windows setup. | `// Win PEc:[type=="events", issuer=="AttestationService"] => add(type="boolProperties", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `13` \|\| PcrIndex == `19` \|\| PcrIndex == `20`)].ProcessedData.EVENT_TRUSTBOUNDARY"));c:[type=="boolProperties", issuer=="AttestationPolicy"] => add(type="winPEEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "\[\*\].EVENT_WINPE")));c:[type=="winPEEnabledSet", issuer=="AttestationPolicy"] => issue(type="notWinPE", value=ContainsOnlyValue(c.value, false));![type=="notWinPE", issuer=="AttestationPolicy"] => issue(type="notWinPE", value=true);` |
+| Code integrity (CI) policy | Hash of CI policy that's controlling the security of the boot environment. | `// CI Policyc :[type=="events", issuer=="AttestationService"] => issue(type="codeIntegrityPolicy", value=JsonToClaimValue(JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && PcrIndex == `13`].ProcessedData.EVENT_TRUSTBOUNDARY.EVENT_SI_POLICY[].RawData")));`|
+| Secure Boot Configuration Policy Hash (SBCPHash) | SBCPHash is the fingerprint of the Custom SBCP that was loaded during boot in Windows devices, except PCs. | `// Secure Boot Custom Policyc:[type=="events", issuer=="AttestationService"] => issue(type="secureBootCustomPolicy", value=JsonToClaimValue(JmesPath(c.value, "Events[? EventTypeString == 'EV_EFI_VARIABLE_DRIVER_CONFIG' && PcrIndex == `7` && ProcessedData.UnicodeName == 'CurrentPolicy' && ProcessedData.VariableGuid == '77FA9ABD-0359-4D32-BD60-28F4E78F784B'].ProcessedData.VariableData \| @[0]")));` |
+| Boot application SVN | The security version number (SVN) of the Boot Manager that's running on the device. | `// Find the first EV_SEPARATOR in PCR 12, 13, Or 14, the ordering of the events is critical to ensure correctness.c:[type=="events", issuer=="AttestationService"] => add(type="evSeparatorSeq", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_SEPARATOR' && (PcrIndex == `12` \|\| PcrIndex == `13` \|\| PcrIndex == `14`)] \| @[0].EventSeq"));c:[type=="evSeparatorSeq", value != "null", issuer=="AttestationPolicy"] => add(type="beforeEvSepClause", value=AppendString(AppendString("Events[? EventSeq < `", c.value), "`"));[type=="evSeparatorSeq", value=="null", issuer=="AttestationPolicy"] => add(type="beforeEvSepClause", value="Events[? `true` "); // No restriction of EV_SEPARATOR in case it is not present// Find the first EVENT_TRANSFER_CONTROL with value 1 or 2 in PCR 12 that is before the EV_SEPARATORc1:[type=="beforeEvSepClause", issuer=="AttestationPolicy"] && c2:[type=="bootMgrSvnSeq", value != "null", issuer=="AttestationPolicy"] => add(type="beforeEvSepAfterBootMgrSvnClause", value=AppendString(AppendString(AppendString(c1.value, "&& EventSeq >= `"), c2.value), "`"));c:[type=="beforeEvSepAfterBootMgrSvnClause", issuer=="AttestationPolicy"] => add(type="tranferControlQuery", value=AppendString(c.value, " && EventTypeString == 'EV_EVENT_TAG' && PcrIndex == `12`&& (ProcessedData.EVENT_TRUSTBOUNDARY.EVENT_TRANSFER_CONTROL.Value == `1` \|\| ProcessedData.EVENT_TRUSTBOUNDARY.EVENT_TRANSFER_CONTROL.Value == `2`)] \| @[0].EventSeq"));c1:[type=="tranferControlQuery", issuer=="AttestationPolicy"] && c2:[type=="events", issuer=="AttestationService"] => add(type="tranferControlSeq", value=JmesPath(c2.value, c1.value));// Find the first non-null EVENT_MODULE_SVN in PCR 13 after the transfer control.c:[type=="tranferControlSeq", value!="null", issuer=="AttestationPolicy"] => add(type="afterTransferCtrlClause", value=AppendString(AppendString(" && EventSeq > `", c.value), "`"));c1:[type=="beforeEvSepClause", issuer=="AttestationPolicy"] && c2:[type=="afterTransferCtrlClause", issuer=="AttestationPolicy"] => add(type="moduleQuery", value=AppendString(AppendString(c1.value, c2.value), " && EventTypeString == 'EV_EVENT_TAG' && PcrIndex == `13` && ((ProcessedData.EVENT_TRUSTBOUNDARY.EVENT_LOADEDMODULE_AGGREGATION[].EVENT_MODULE_SVN \| @[0]) \|\| (ProcessedData.EVENT_LOADEDMODULE_AGGREGATION[].EVENT_MODULE_SVN \| @[0]))].EventSeq \| @[0]"));c1:[type=="moduleQuery", issuer=="AttestationPolicy"] && c2:[type=="events", issuer=="AttestationService"] => add(type="moduleSeq", value=JmesPath(c2.value, c1.value));// Find the first EVENT_APPLICATION_SVN after EV_EVENT_TAG in PCR 12. That value is Boot App SVNc:[type=="moduleSeq", value!="null", issuer=="AttestationPolicy"] => add(type="applicationSvnAfterModuleClause", value=AppendString(AppendString(" && EventSeq > `", c.value), "`"));c1:[type=="beforeEvSepClause", issuer=="AttestationPolicy"] && c2:[type=="applicationSvnAfterModuleClause", issuer=="AttestationPolicy"] => add(type="bootAppSvnQuery", value=AppendString(AppendString(c1.value, c2.value), " && EventTypeString == 'EV_EVENT_TAG' && PcrIndex == `12`].ProcessedData.EVENT_TRUSTBOUNDARY.EVENT_APPLICATION_SVN \| @[0]"));c1:[type=="bootAppSvnQuery", issuer=="AttestationPolicy"] && c2:[type=="events", issuer=="AttestationService"] => issue(type="bootAppSvn", value=JsonToClaimValue(JmesPath(c2.value, c1.value)));` |
+| Boot revision list | Boot revision list used to direct the device to an enterprise honeypot to further monitor the device's activities. | `// Boot Rev List Info c:[type=="events", issuer=="AttestationService"] => issue(type="bootRevListInfo", value=JsonToClaimValue(JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && PcrIndex == `13`].ProcessedData.EVENT_TRUSTBOUNDARY.EVENT_BOOT_REVOCATION_LIST.RawData \| @[0]")));` |
## Sample policies for TPM attestation using version 1.2
authorizationrules {
issuancerules {
-// Verify if secureboot is enabled
+// Verify if secure boot is enabled.
c:[type == "events", issuer=="AttestationService"] => add(type = "efiConfigVariables", value = JmesPath(c.value, "Events[?EventTypeString == 'EV_EFI_VARIABLE_DRIVER_CONFIG' && ProcessedData.VariableGuid == '8BE4DF61-93CA-11D2-AA0D-00E098032B8C']")); c:[type=="efiConfigVariables", issuer=="AttestationPolicy"]=> add(type = "secureBootEnabled", value = JsonToClaimValue(JmesPath(c.value, "[?ProcessedData.UnicodeName == 'SecureBoot'] | length(@) == `1` && @[0].ProcessedData.VariableData == 'AQ'"))); ![type=="secureBootEnabled", issuer=="AttestationPolicy"] => add(type="secureBootEnabled", value=false);
-//Verfify in Defender ELAM is loaded.
+//Verify if Defender ELAM is loaded.
c:[type=="boolProperties", issuer=="AttestationPolicy"] => add(type="elamDriverLoaded", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_LOADEDMODULE_AGGREGATION[] | [? EVENT_IMAGEVALIDATED == `true` && (equals_ignore_case(EVENT_FILEPATH, '\\windows\\system32\\drivers\\wdboot.sys') || equals_ignore_case(EVENT_FILEPATH, '\\windows\\system32\\drivers\\wd\\wdboot.sys'))] | @ != `null`"))); [type=="elamDriverLoaded", issuer=="AttestationPolicy"] => add(type="WindowsDefenderElamDriverLoaded", value=true); ![type=="elamDriverLoaded", issuer=="AttestationPolicy"] => add(type="WindowsDefenderElamDriverLoaded", value=false);
attestation Tpm Attestation Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/tpm-attestation-concepts.md
Title: TPM attestation overview for Azure
-description: TPM Attestation overview
+description: This article provides an overview of Trusted Platform Module (TPM) attestation and capabilities supported by Azure Attestation.
-# Trusted Platform Module (TPM) Attestation
+# Trusted Platform Module attestation
-Devices with a TPM, can rely on attestation to prove that boot integrity isn't compromised along with using the measured boot to detect early boot feature states. A growing number of device types, bootloaders and boot stack attacks require an attestation solution to evolve accordingly. An attested state of a device is driven by the attestation policy used to verify the contents on the platform evidence. This document provides an overview of TPM attestation and capabilities supported by MAA.
+Devices with a Trusted Platform Module (TPM) can rely on attestation to prove that boot integrity isn't compromised along with using the Measured Boot process to detect early boot feature states.
+
+A growing number of device types, bootloaders, and boot stack attacks require an attestation solution to evolve accordingly. An attested state of a device is driven by the attestation policy used to verify the contents on the platform evidence.
+
+This article provides an overview of TPM attestation and capabilities supported by Azure Attestation.
## Overview

TPM attestation starts from validating the TPM itself all the way up to the point where a relying party can validate the boot flow.
-In general, TPM attestation is based on the following pillars:
+In general, TPM attestation is based on the following pillars.
### Validate TPM authenticity
-Validate the TPM authenticity by validating the TPM.
+Validate the TPM authenticity by validating the TPM:
-- Every TPM ships with a unique asymmetric key, called the Endorsement Key (EK), burned by the manufacturer. We refer to the public portion of this key as EKPub and the associated private key as EKPriv. Some TPM chips also have an EK certificate that is issued by the manufacturer for the EKPub. We refer to this cert as EKCert.
-- A CA establishes trust in the TPM either via EKPub or EKCert.
-- A device proves to the CA that the key for which the certificate is being requested is cryptographically bound to the EKPub and that the TPM owns the EKpriv.
-- The CA issues a certificate with a special issuance policy to denote that the key is now attested to be protected by a TPM.
+- Every TPM ships with a unique asymmetric key called the endorsement key (EK). This key is burned by the manufacturer. The public portion of this key is known as EKPub. The associated private key is known as EKPriv. Some TPM chips also have an EK certificate that's issued by the manufacturer for the EKPub. This certificate is known as EKCert.
+- A certification authority (CA) establishes trust in the TPM either via EKPub or EKCert.
+- A device proves to the CA that the key for which the certificate is being requested is cryptographically bound to the EKPub and that the TPM owns the EKPriv.
+- The CA issues a certificate with a special issuance policy to denote that the key is now attested as protected by a TPM.
### Validate the measurements made during the boot
-Validate the measurements made during the boot using the Azure Attestation service.
+Validate the measurements made during the boot by using Azure Attestation:
-- As part of Trusted and Measured boot, every step of the boot is validated and measured into the TPM. Different events are measured for different platforms. More information about the measured boot process in Windows can be found [here](/windows/security/information-protection/secure-the-windows-10-boot-process).-- At boot, an Attestation Identity Key is generated which is used to provide a cryptographic proof to the attestation service that the TPM in use has been issued a cert after EK validation was performed.-- Relying parties can perform an attestation against the Azure Attestation service, which can be used to validate measurements made during the boot process.
+- As part of Trusted Boot and Measured Boot, every step is validated and measured into the TPM. Different events are measured for different platforms. For more information about the Measured Boot process in Windows, see [Secure the Windows boot process](/windows/security/information-protection/secure-the-windows-10-boot-process).
+- At boot, an attestation identity key is generated. It's used to provide cryptographic proof to the attestation service that the TPM in use was issued a certificate after EK validation was performed.
+- Relying parties can perform an attestation against Azure Attestation, which can be used to validate measurements made during the boot process.
- A relying party can then rely on the attestation statement to gate access to resources or other actions.
-![Conceptual device attestation flow](./media/device-tpm-attestation-flow.png)
+![Diagram that shows the conceptual device attestation flow.](./media/device-tpm-attestation-flow.png)
-Conceptually, TPM attestation can be visualized as above, where the relying party applies Azure Attestation service to verify the platform(s) integrity and any violation of promises, providing the confidence to run workloads or provide access to resources.
+Conceptually, TPM attestation can be visualized as shown in the preceding diagram. The relying party applies Azure Attestation to verify the integrity of the platform and any violation of promises. The verification process gives you the confidence to run workloads or provide access to resources.
## Protection from malicious boot attacks
-Mature attacks techniques aim to infect the boot chain, as it can provide the attacker access to system resources while allowing it the capability of hiding from anti-malware software. Trusted boot acts as the first order of defense and extending the capability to be used by relying parties is trusted boot and attestation. Most attackers attempt to bypass secureboot or load an unwanted binary in the boot process.
+Mature attack techniques aim to infect the boot chain. A boot attack can provide the attacker with access to system resources and allow the attacker to hide from antimalware software. Trusted Boot acts as the first order of defense. Use of Trusted Boot and attestation extends the capability to relying parties. Most attackers attempt to bypass secure boot or load an unwanted binary in the boot process.
-Remote Attestation lets the relying parties verify the whole boot chain for any violation of promises. Consider the Secure Boot evaluation by the attestation service that validates the values of the secure variables measured by UEFI.
+Remote attestation allows the relying parties to verify the whole boot chain for any violation of promises. For example, the secure boot evaluation by the attestation service validates the values of the secure variables measured by UEFI.
-Measured boot instrumentation ensures the cryptographically bound measurements can't be changed once they are made and also only a trusted component can make the measurement. Hence, validating the secure variables is sufficient to ensure the enablement.
+Measured Boot instrumentation ensures cryptographically bound measurements can't be changed after they're made and that only a trusted component can make the measurement. For this reason, validating the secure variables is sufficient to confirm that secure boot is enabled.
-Azure Attestation additionally signs the report to ensure the integrity of the attestation is also maintained protecting against Man in the Middle type of attacks.
+Azure Attestation signs the report, ensuring that the integrity of the attestation is maintained and protecting against man-in-the-middle attacks.
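To make that concrete, here's a hedged C# sketch of one way a relying party might verify the signature on the report before trusting its claims. The discovery URL, issuer assumption, and helper name are illustrative only; in practice the Azure Attestation SDK or your own validated flow would replace this.

```csharp
// Minimal sketch (not the official SDK flow): verify the signature of the
// attestation report, which is issued as a signed JWT, before trusting it.
// Assumes the attestation instance exposes standard OpenID Connect discovery
// metadata and that the token issuer equals the instance URL.
using System.IdentityModel.Tokens.Jwt;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.IdentityModel.Protocols;
using Microsoft.IdentityModel.Protocols.OpenIdConnect;
using Microsoft.IdentityModel.Tokens;

public static class AttestationReportVerifier
{
    public static async Task<bool> IsReportSignatureValidAsync(string reportJwt, string instanceUrl)
    {
        // Fetch the service's signing keys from its discovery document.
        var configManager = new ConfigurationManager<OpenIdConnectConfiguration>(
            $"{instanceUrl}/.well-known/openid-configuration",
            new OpenIdConnectConfigurationRetriever());
        OpenIdConnectConfiguration config =
            await configManager.GetConfigurationAsync(CancellationToken.None);

        var validationParameters = new TokenValidationParameters
        {
            ValidateIssuer = true,
            ValidIssuer = instanceUrl,      // assumption: issuer matches the instance URL
            ValidateAudience = false,       // attestation reports aren't audience-scoped here
            IssuerSigningKeys = config.SigningKeys
        };

        try
        {
            // Throws if the signature, issuer, or lifetime checks fail.
            new JwtSecurityTokenHandler().ValidateToken(reportJwt, validationParameters, out _);
            return true;
        }
        catch (SecurityTokenException)
        {
            return false; // a tampered or mis-issued report is rejected
        }
    }
}
```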
-A simple policy can be used as below.
+A simple policy can be used:
``` version=1.0;
issuancerules
```
-Sometimes it's not sufficient to only verify one single component in the boot but verifying complimenting features like Code Integrity(or HVCI), System Guard Secure Launch, also add to the protection profile of a device. More so the ability the peer into the boot to evaluate any violations is also needed to ensure confidence can be gained on a platform.
+Sometimes it isn't sufficient to verify only one component in the boot. Verifying complementary features like code integrity or hypervisor-protected code integrity (HVCI) and System Guard Secure Launch adds to the protection profile of a device. You also need the ability to peer into the boot so that you can evaluate any violations and be confident about the platform.
-Consider one such policy that takes advantage of the policy version 1.2 to verify details about secureboot, HVCI, System Guard Secure Launch and also verifying that an unwanted(malicious.sys) driver isn't loaded during the boot.
+The following example takes advantage of policy version 1.2 to verify details about secure boot, HVCI, and System Guard Secure Launch. It also verifies that an unwanted (malicious.sys) driver isn't loaded during the boot:
``` version=1.2;
authorizationrules {
issuancerules {
-// Verify if secureboot is enabled
+// Verify if secure boot is enabled
c:[type == "events", issuer=="AttestationService"] => add(type = "efiConfigVariables", value = JmesPath(c.value, "Events[?EventTypeString == 'EV_EFI_VARIABLE_DRIVER_CONFIG' && ProcessedData.VariableGuid == '8BE4DF61-93CA-11D2-AA0D-00E098032B8C']")); c:[type=="efiConfigVariables", issuer="AttestationPolicy"]=> add(type = "secureBootEnabled", value = JsonToClaimValue(JmesPath(c.value, "[?ProcessedData.UnicodeName == 'SecureBoot'] | length(@) == `1` && @[0].ProcessedData.VariableData == 'AQ'"))); ![type=="secureBootEnabled", issuer=="AttestationPolicy"] => add(type="secureBootEnabled", value=false);
c:[type=="boolProperties", issuer=="AttestationPolicy"] => issue(type="Malicious
## Next steps - [Device Health Attestation on Windows and interacting with Azure Attestation](/windows/client-management/mdm/healthattestation-csp#windows-11-device-health-attestation)-- [Learn more about the Claim Rule Grammar](claim-rule-grammar.md)
+- [Learn more about claim rule grammar](claim-rule-grammar.md)
- [Attestation policy claim rule functions](claim-rule-functions.md)
azure-app-configuration Enable Dynamic Configuration Java Spring Push Refresh https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/enable-dynamic-configuration-java-spring-push-refresh.md
ms.devlang: java Previously updated : 05/02/2022 Last updated : 05/07/2022 #Customer intent: I want to use push refresh to dynamically update my app to use the latest configuration data in App Configuration.
In this tutorial, you learn how to:
1. Set up [Maven App Service Deployment](../app-service/quickstart-java.md?tabs=javase) so the application can be deployed to Azure App Service via Maven. ```console
- mvn com.microsoft.azure:azure-webapp-maven-plugin:1.12.0:config
+ mvn com.microsoft.azure:azure-webapp-maven-plugin:2.5.0:config
``` 1. Open bootstrap.properties and configure Azure App Configuration Push Refresh and Azure Service Bus
In this tutorial, you learn how to:
# Azure App Configuration Properties spring.cloud.azure.appconfiguration.stores[0].connection-string= ${AppConfigurationConnectionString} spring.cloud.azure.appconfiguration.stores[0].monitoring.enabled= true
- spring.cloud.azure.appconfiguration.stores[0].monitoring.cacheExpiration= 30d
+ spring.cloud.azure.appconfiguration.stores[0].monitoring.refresh-interval= 30d
spring.cloud.azure.appconfiguration.stores[0].monitoring.triggers[0].key= sentinel spring.cloud.azure.appconfiguration.stores[0].monitoring.push-notification.primary-token.name= myToken spring.cloud.azure.appconfiguration.stores[0].monitoring.push-notification.primary-token.secret= myTokenSecret
- management.endpoints.web.exposure.include= "appconfiguration-refresh"
+ management.endpoints.web.exposure.include= appconfiguration-refresh
``` A random delay is added before the cached value is marked as dirty to reduce potential throttling. The default maximum delay before the cached value is marked as dirty is 30 seconds.
A random delay is added before the cached value is marked as dirty to reduce pot
> [!NOTE] > The Primary token name should be stored in App Configuration as a key, and then the Primary token secret should be stored as an App Configuration Key Vault Reference for added security.
-## Build and run the app locally
+## Build and run the app in App Service
Event Grid Web Hooks require validation on creation. You can validate by following this [guide](../event-grid/webhook-event-delivery.md) or by starting your application with Azure App Configuration Spring Web Library already configured, which will register your application for you. To use an event subscription, follow the steps in the next two sections.
Event Grid Web Hooks require validation on creation. You can validate by followi
export AppConfigurationConnectionString = <connection-string-of-your-app-configuration-store> ```
+1. Update your `pom.xml`: under the `azure-webapp-maven-plugin`'s `configuration` section, add the following:
+
+```xml
+<appSettings>
+ <AppConfigurationConnectionString>${AppConfigurationConnectionString}</AppConfigurationConnectionString>
+</appSettings>
+```
+ 1. Run the following command to build the console app: ```shell
Event Grid Web Hooks require validation on creation. You can validate by followi
1. After the build successfully completes, run the following command to deploy the app to Azure App Service: ```shell
- mvn spring-boot:deploy
+ mvn azure-webapp:deploy
``` ## Set up an event subscription
Event Grid Web Hooks require validation on creation. You can validate by followi
1. After your application is running, use *curl* to test your application, for example: ```cmd
- curl -X GET http://localhost:8080
+ curl -X GET https://my-azure-webapp.azurewebsites.net
``` 1. Open the **Azure Portal** and navigate to your App Configuration resource associated with your application. Select **Configuration Explorer** under **Operations** and update the values of the following keys:
azure-arc Conceptual Inner Loop Gitops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-inner-loop-gitops.md
Luckily, there are many frameworks out there that support the listed capabilitie
Once you've evaluated and chosen an inner loop dev framework, build seamless inner loop to outer loop transition.
-As described in the [CI/CD workflow using GitOps](conceptual-gitops-ci-cd.md) article's example, an application developer works on application code within an application repository. This application repository also holds high-level deployment Helm and/or Kustomize templates. CI\CD pipelines:
+As described in the [CI/CD workflow using GitOps](conceptual-gitops-flux2-ci-cd.md) article's example, an application developer works on application code within an application repository. This application repository also holds high-level deployment Helm and/or Kustomize templates. CI/CD pipelines:
* Generate the low-level manifests from the high-level templates, adding environment-specific values
* Create a pull request that merges the low-level manifests with the GitOps repo that holds desired state for the specific environment.
Suppose Alice wants to update, run, and debug the application either in local or
## Next steps
-Learn more about creating connections between your cluster and a Git repository as a [configuration resource with Azure Arc-enabled Kubernetes](./conceptual-configurations.md)
+Learn more about creating connections between your cluster and a Git repository as a [configuration resource with Azure Arc-enabled Kubernetes](./conceptual-gitops-flux2.md)
azure-arc Manage Vm Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-vm-extensions.md
Arc-enabled servers support moving machines with one or more VM extensions insta
|Azure Key Vault Certificate Sync | Microsoft.Azure.Key.Vault |KeyVaultForWindows | [Key Vault virtual machine extension for Windows](../../virtual-machines/extensions/key-vault-windows.md) | |Azure Monitor Agent |Microsoft.Azure.Monitor |AzureMonitorWindowsAgent |[Install the Azure Monitor agent](../../azure-monitor/agents/azure-monitor-agent-manage.md) | |Azure Automation Hybrid Runbook Worker extension (preview) |Microsoft.Compute |HybridWorkerForWindows |[Deploy an extension-based User Hybrid Runbook Worker](../../automation/extension-based-hybrid-runbook-worker-install.md) to execute runbooks locally. |
+|Azure Extension for SQL Server |Microsoft.AzureData |WindowsAgent.SqlServer |[Install Azure extension for SQL Server](/sql/sql-server/azure-arc/connect#initiate-the-connection-from-azure) to initiate SQL Server connection to Azure. |
### Linux extensions
azure-arc Network Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/network-requirements.md
Title: Connected Machine agent network requirements description: Learn about the networking requirements for using the Connected Machine agent for Azure Arc-enabled servers. Previously updated : 05/24/2022 Last updated : 06/09/2022
The table below lists the URLs that must be available in order to install and us
|`*.guestconfiguration.azure.com`| Extension management and guest configuration services |Always| Private | |`guestnotificationservice.azure.com`, `*.guestnotificationservice.azure.com`|Notification service for extension and connectivity scenarios|Always| Private | |`azgn*.servicebus.windows.net`|Notification service for extension and connectivity scenarios|Always| Public |
-|`*servicebus.windows.net`|For Windows Admin Center and SSH scenarios|If using SSH or Windows Admin Center from Azure|Public|
+|`*.servicebus.windows.net`|For Windows Admin Center and SSH scenarios|If using SSH or Windows Admin Center from Azure|Public|
|`*.blob.core.windows.net`|Download source for Azure Arc-enabled servers extensions|Always, except when using private endpoints| Not used when private link is configured | |`dc.services.visualstudio.com`|Agent telemetry|Optional| Public |
azure-arc Onboard Group Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/onboard-group-policy.md
Before you can run the script to connect your machines, you'll need to do the fo
The group policy will project machines as Arc-enabled servers in the Azure subscription, resource group, and region specified in this configuration file.
-## Modify and save the onboarding script
+## Save the onboarding script to a remote share
-Before you can run the script to connect your machines, you'll need to modify and save the onboarding script:
+Before you can run the script to connect your machines, you'll need to save the onboarding script to the remote share. This will be referenced when creating the Group Policy Object.
-1. Edit the field for `remotePath` to reflect the distributed share location with the configuration file and Connected Machine Agent.
+<!--1. Edit the field for `remotePath` to reflect the distributed share location with the configuration file and Connected Machine Agent.
1. Edit the `localPath` with the local path where the logs generated from the onboarding to Azure Arc-enabled servers will be saved per machine.
-1. Save the modified onboarding script locally and note its location. This will be referenced when creating the Group Policy Object.
+1. Save the modified onboarding script locally and note its location. This will be referenced when creating the Group Policy Object.-->
```
-[string] $remotePath = "\\dc-01.contoso.lcl\Software\Arc"
-[string] $localPath = "$env:HOMEDRIVE\ArcDeployment"
-
-[string] $RegKey = "HKLM\SOFTWARE\Microsoft\Azure Connected Machine Agent"
-[string] $logFile = "installationlog.txt"
-[string] $InstallationFolder = "ArcDeployment"
-[string] $configFilename = "ArcConfig.json"
-
-if (!(Test-Path $localPath) ) {
- $BitsDirectory = new-item -path C:\ -Name $InstallationFolder -ItemType Directory
- $logpath = new-item -path $BitsDirectory -Name $logFile -ItemType File
-}
-else{
- $BitsDirectory = "C:\ArcDeployment"
- }
-
-function Deploy-Agent {
- [bool] $isDeployed = Test-Path $RegKey
- if ($isDeployed) {
- $logMessage = "Azure Arc Serverenabled agent is deployed , exit process"
- $logMessage >> $logpath
- exit
- }
- else {
- Copy-Item -Path "$remotePath\*" -Destination $BitsDirectory -Recurse -Verbose
- $exitCode = (Start-Process -FilePath msiexec.exe -ArgumentList @("/i", "$BitsDirectory\AzureConnectedMachineAgent.msi" , "/l*v", "$BitsDirectory\$logFile", "/qn") -Wait -Passthru).ExitCode
-
- if($exitCode -eq 0){
- Start-Sleep -Seconds 120
- $x= & "$env:ProgramW6432\AzureConnectedMachineAgent\azcmagent.exe" connect --config "$BitsDirectory\$configFilename"
- $msg >> $logpath
- }
- else {
- $message = (net helpmsg $exitCode)
- $message >> $logpath
- }
+# This script is used to install and configure the Azure Connected Machine Agent
+
+[CmdletBinding()]
+param(
+ [Parameter(Mandatory=$false)]
+ [string] $AltDownloadLocation,
+
+ [Parameter(Mandatory=$true)]
+ [string] $RemotePath,
+
+ [Parameter(Mandatory=$false)]
+ [string] $LogFile = "onboardinglog.txt",
+
+ [Parameter(Mandatory=$false)]
+ [string] $InstallationFolder = "$env:HOMEDRIVE\ArcDeployment",
+
+ [Parameter(Mandatory=$false)]
+ [string] $ConfigFilename = "ArcConfig.json"
+)
+
+$ErrorActionPreference="Stop"
+$ProgressPreference="SilentlyContinue"
+
+[string] $RegKey = "HKLM:\SOFTWARE\Microsoft\Azure Connected Machine Agent"
+
+# create local installation folder if it doesn't exist
+if (!(Test-Path $InstallationFolder) ) {
+ [void](New-Item -path $InstallationFolder -ItemType Directory )
+}
+
+# create log file and overwrite if it already exists
+$logpath = New-Item -path $InstallationFolder -Name $LogFile -ItemType File -Force
+
+@"
+Azure Arc-Enabled Servers Agent Deployment Group Policy Script
+Time: $(Get-Date)
+RemotePath: $RemotePath
+RegKey: $RegKey
+LogFile: $LogPath
+InstallationFolder: $InstallationFolder
+ConfigFileName: $ConfigFilename
+"@ >> $logPath
+
+try
+{
+ "Copying items to $InstallationFolder" >> $logPath
+ Copy-Item -Path "$RemotePath\*" -Destination $InstallationFolder -Recurse -Verbose
+
+ $agentData = Get-ItemProperty $RegKey -ErrorAction SilentlyContinue
+ if ($agentData) {
+ "Azure Connected Machine Agent version $($agentData.version) is already installed, proceeding to azcmagent connect" >> $logPath
+ } else {
+ # Download the installation package
+ "Downloading the installation script" >> $logPath
+ [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor [System.Net.SecurityProtocolType]::Tls12
+ Invoke-WebRequest -Uri "https://aka.ms/azcmagent-windows" -TimeoutSec 30 -OutFile "$InstallationFolder\install_windows_azcmagent.ps1"
+
+ # Install the hybrid agent
+ "Running the installation script" >> $logPath
+ & "$InstallationFolder\install_windows_azcmagent.ps1"
+ if ($LASTEXITCODE -ne 0) {
+ throw "Failed to install the hybrid agent: $LASTEXITCODE"
+ }
+
+ $agentData = Get-ItemProperty $RegKey -ErrorAction SilentlyContinue
+ if (! $agentData) {
+ throw "Could not read installation data from registry, a problem may have occurred during installation"
}
-}
+ "Installation Complete" >> $logpath
+ }
-Deploy-Agent
+ & "$env:ProgramW6432\AzureConnectedMachineAgent\azcmagent.exe" connect --config "$InstallationFolder\$ConfigFilename" >> $logpath
+ if ($LASTEXITCODE -ne 0) {
+ throw "Failed during azcmagent connect: $LASTEXITCODE"
+ }
+
+ "Connect Succeeded" >> $logpath
+ & "$env:ProgramW6432\AzureConnectedMachineAgent\azcmagent.exe" show >> $logpath
+
+} catch {
+ "An error occurred during installation: $_" >> $logpath
+}
``` ## Create a Group Policy Object
In the **Triggers** tab, select **New**, then enter the following parameters in
1. In the field **Begin the task**, select **On a schedule**.
-1. Under **Settings**, select **One time** and enter the date and time for the task to run.
+1. Under **Settings**, select **One time** and enter the date and time for the task to run. Select a date and time that is at least 2 hours after the current time to make sure that the Group Policy update will be applied.
1. Under **Advanced Settings**, check the box for **Enabled**.
In the **Actions** tab, select **New**, then enter the following parameters in the
1. For **Program/script**, enter `C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe`.
-1. For **Add arguments (optional)**, enter `-ExecutionPolicy Bypass -command <Path to Deployment Script>`.
-
- Note that you must enter the location of the deployment script, modified earlier with the `DeploymentPath` and `LocalPath`, instead of the placeholder "Path to Deployment Script".
+1. For **Add arguments (optional)**, enter `-ExecutionPolicy Bypass -command <INSERT UNC-path to PowerShell script> -remotePath <INSERT path to your Remote Share>`.
1. For **Start In (Optional)**, enter `C:\`.
azure-arc Scenario Migrate To Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/scenario-migrate-to-azure.md
To inventory the VM extensions installed on your Azure Arc-enabled server, you c
With Azure PowerShell, use the [Get-AzConnectedMachineExtension](/powershell/module/az.connectedmachine/get-azconnectedmachineextension) command with the `-MachineName` and `-ResourceGroupName` parameters.
-With the Azure CLI, use the [az connectedmachine extension list](/cli/azure/ext/connectedmachine/connectedmachine/extension#ext_connectedmachine_az_connectedmachine_extension_list) command with the `--machine-name` and `--resource-group` parameters. By default, the output of Azure CLI commands is in JSON (JavaScript Object Notation). To change the default output to a list or table, for example, use [az configure --output](/cli/azure/reference-index). You can also add `--output` to any command for a one time change in output format.
+With the Azure CLI, use the [az connectedmachine extension list](/cli/azure/connectedmachine/extension#az-connectedmachine-extension-list) command with the `--machine-name` and `--resource-group` parameters. By default, the output of Azure CLI commands is in JSON (JavaScript Object Notation). To change the default output to a list or table, for example, use [az configure --output](/cli/azure/reference-index). You can also add `--output` to any command for a one-time change in output format.
After identifying which VM extensions are deployed, you can remove them using the [Azure portal](manage-vm-extensions-portal.md), using the [Azure PowerShell](manage-vm-extensions-powershell.md), or using the [Azure CLI](manage-vm-extensions-cli.md). If the Log Analytics VM extension or Dependency agent VM extension was deployed using Azure Policy and the [VM insights initiative](../../azure-monitor/vm/vminsights-enable-policy.md), it is necessary to [create an exclusion](../../governance/policy/tutorials/create-and-manage.md#remove-a-non-compliant-or-denied-resource-from-the-scope-with-an-exclusion) to prevent re-evaluation and deployment of the extensions on the Azure Arc-enabled server before the migration is complete.
azure-cache-for-redis Cache How To Geo Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-geo-replication.md
Some features aren't supported with geo-replication:
- Clustering is supported if both caches have clustering enabled and have the same number of shards. - Caches in the same Virtual Network (VNet) are supported. - Caches in different VNets are supported with caveats. See [Can I use geo-replication with my caches in a VNet?](#can-i-use-geo-replication-with-my-caches-in-a-vnet) for more information.
+- Caches with more than one replica cannot be geo-replicated.
After geo-replication is configured, the following restrictions apply to your linked cache pair:
azure-functions Functions Bindings Error Pages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-error-pages.md
Title: Azure Functions error handling and retry guidance
-description: Learn to handle errors and retry events in Azure Functions with links to specific binding errors.
+description: Learn to handle errors and retry events in Azure Functions with links to specific binding errors, including information on retry policies.
Previously updated : 10/01/2020 Last updated : 06/09/2022
+zone_pivot_groups: programming-languages-set-functions-lang-workers
# Azure Functions error handling and retries
-Handling errors in Azure Functions is important to avoid lost data, missed events, and to monitor the health of your application.
+Handling errors in Azure Functions is important to avoid lost data and missed events and to monitor the health of your application. It's also important to understand the retry behaviors of event-based triggers.
-This article describes general strategies for error handling along with links to binding-specific errors.
+This article describes general strategies for error handling and the available retry strategies.
+
+> [!IMPORTANT]
+> The retry policy support in the runtime for triggers other than Timer and Event Hubs is being removed after this feature becomes generally available (GA). Preview retry policy support for all triggers other than Timer and Event Hubs will be removed in October 2022.
## Handling errors
+Errors raised in Azure Functions can come from any of the following origins:
+
+- Use of built-in Azure Functions [triggers and bindings](functions-triggers-bindings.md).
+- Calls to APIs of underlying Azure services.
+- Calls to REST endpoints.
+- Calls to client libraries, packages, or third-party APIs.
+
+Good error handling practices are important to avoid loss of data or missed messages. This section describes some recommended error handling practices with links to more information.
+
+### Enable Application Insights
+
+Azure Functions integrates with Application Insights to collect error data, performance data, and runtime logs. You should use Application Insights to discover and better understand errors occurring in your function executions. To learn more, see [Monitor Azure Functions](functions-monitoring.md).
+
+### Use structured error handling
+
+Capturing and logging errors is critical to monitoring the health of your application. The top-most level of any function code should include a try/catch block. In the catch block, you can capture and log errors. For information about what errors might be raised by bindings, see [Binding error codes](#binding-error-codes).
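As a hedged illustration of this pattern, here's a minimal in-process C# function with a top-level try/catch block. The queue name, connection setting, and helper method are placeholders, not part of the article:

```csharp
using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class ProcessOrder
{
    [FunctionName("ProcessOrder")]
    public static void Run(
        [QueueTrigger("orders", Connection = "StorageConnection")] string message,
        ILogger log)
    {
        try
        {
            // Business logic that might throw goes inside the try block.
            HandleOrder(message);
        }
        catch (Exception ex)
        {
            // Log the failure so it's captured by Application Insights, then
            // rethrow so the trigger's poison-message handling still applies.
            log.LogError(ex, "Failed to process queue message");
            throw;
        }
    }

    private static void HandleOrder(string message)
    {
        // Placeholder for real processing.
        if (string.IsNullOrWhiteSpace(message))
        {
            throw new ArgumentException("Empty message");
        }
    }
}
```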
+
+### Plan your retry strategy
+
+Several Functions bindings extensions provide built-in support for retries. In addition, the runtime lets you define retry policies for Timer and Event Hubs triggered functions. To learn more, see [Retries](#retries). For triggers that don't provide retry behaviors, you may want to implement your own retry scheme.
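If you do implement your own scheme, a small helper like the following sketch is one common shape. The retry count and delays shown are arbitrary assumptions, and a library such as Polly provides a more complete implementation:

```csharp
using System;
using System.Threading.Tasks;

public static class RetryHelper
{
    // Runs the operation, retrying with exponential backoff on failure.
    public static async Task ExecuteWithRetryAsync(
        Func<Task> operation, int maxRetries = 3, int baseDelayMs = 500)
    {
        for (int attempt = 0; ; attempt++)
        {
            try
            {
                await operation();
                return; // success
            }
            catch (Exception) when (attempt < maxRetries)
            {
                // Exponential backoff: 500 ms, 1 s, 2 s, ... until maxRetries is hit.
                await Task.Delay(baseDelayMs * (1 << attempt));
            }
        }
    }
}
```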
+
+### Design for idempotency
+
+The occurrence of errors when processing data can be a problem for your functions, especially when processing messages. You need to consider what happens when the error occurs and how to avoid duplicate processing. To learn more, see [Designing Azure Functions for identical input](functions-idempotent.md).
+
+## Retries
+
+There are two kinds of retries available for your functions: built-in retry behaviors of individual trigger extensions and retry policies. The following table indicates which triggers support retries and where the retry behavior is configured. It also links to more information about errors coming from the underlying services.
+
+| Trigger/binding | Retry source | Configuration |
+| - | - | -- |
+| Azure Cosmos DB | n/a | Not configurable |
+| Blob Storage | [Binding extension](functions-bindings-storage-blob-trigger.md#poison-blobs) | [host.json](functions-bindings-storage-queue.md#host-json) |
+| Event Grid | [Binding extension](../event-grid/delivery-and-retry.md) | Event subscription |
+| Event Hubs | [Retry policies](#retry-policies) | Function-level |
+| Queue Storage | [Binding extension](functions-bindings-storage-queue-trigger.md#poison-messages) | [host.json](functions-bindings-storage-queue.md#host-json) |
+| RabbitMQ | [Binding extension](functions-bindings-rabbitmq-trigger.md#dead-letter-queues) | [Dead letter queue](https://www.rabbitmq.com/dlx.html) |
+| Service Bus | [Binding extension](../service-bus-messaging/service-bus-dead-letter-queues.md) | [Dead letter queue](../service-bus-messaging/service-bus-dead-letter-queues.md#maximum-delivery-count) |
+|Timer | [Retry policies](#retry-policies) | Function-level |
+
+### Retry policies
+
+Starting with version 3.x of the Azure Functions runtime, you can define retry policies for Timer and Event Hubs triggers that are enforced by the Functions runtime. The retry policy tells the runtime to rerun a failed execution until either successful completion occurs or the maximum number of retries is reached.
+
+A retry policy is evaluated when a Timer or Event Hubs triggered function raises an uncaught exception. As a best practice, you should catch all exceptions in your code and rethrow any errors that you want to result in a retry. Event Hubs checkpoints won't be written until the retry policy for the execution has completed. Because of this behavior, progress on the specific partition is paused until the current batch has completed.
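Here's a hedged sketch of that catch-and-rethrow guidance. The exception types, namespaces, and helper are illustrative assumptions, not part of the article:

```csharp
using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;
using Microsoft.Azure.EventHubs; // EventData; the namespace varies by extension version

public static class EventHubProcessor
{
    [FunctionName("EventHubProcessor")]
    [FixedDelayRetry(5, "00:00:10")]
    public static void Run(
        [EventHubTrigger("myHub", Connection = "EventHubConnection")] EventData[] events,
        ILogger log)
    {
        foreach (var eventData in events)
        {
            try
            {
                Handle(eventData);
            }
            catch (TimeoutException ex)
            {
                // Transient: rethrow so the uncaught exception invokes the retry policy.
                log.LogWarning(ex, "Transient failure; rethrowing to trigger a retry.");
                throw;
            }
            catch (Exception ex)
            {
                // Permanent: log and swallow so the whole batch isn't retried needlessly.
                log.LogError(ex, "Permanent failure; not retrying this event.");
            }
        }
    }

    private static void Handle(EventData eventData)
    {
        // Placeholder for real event processing.
    }
}
```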
+
+#### Retry strategies
+
+There are two retry strategies supported by policy that you can configure:
+
+# [Fixed delay](#tab/fixed-delay)
+
+A specified amount of time is allowed to elapse between each retry.
+
+# [Exponential backoff](#tab/exponential-backoff)
+
+The first retry waits for the minimum delay. On subsequent retries, time is added exponentially to the initial duration for each retry, until the maximum delay is reached. Exponential backoff adds some small randomization to delays to stagger retries in high-throughput scenarios.
+++
+#### Max retry counts
+
+You can configure the maximum number of times function execution is retried before eventual failure. The current retry count is stored in memory of the instance. It's possible that an instance has a failure between retry attempts. When an instance fails during a retry policy, the retry count is lost. When there are instance failures, the Event Hubs trigger is able to resume processing and retry the batch on a new instance, with the retry count reset to zero. Timer trigger doesn't resume on a new instance. This behavior means that the max retry count is a best effort, and in some rare cases an execution could be retried more than the maximum. For Timer triggers, the retries can be less than the maximum requested.
+
+#### Retry examples
++
+# [In-process](#tab/in-process/fixed-delay)
+
+Retries require NuGet package [Microsoft.Azure.WebJobs](https://www.nuget.org/packages/Microsoft.Azure.WebJobs) >= 3.0.23
+
+```csharp
+[FunctionName("EventHubTrigger")]
+[FixedDelayRetry(5, "00:00:10")]
+public static async Task Run([EventHubTrigger("myHub", Connection = "EventHubConnection")] EventData[] events, ILogger log)
+{
+// ...
+}
+```
+
+|Property | Description |
+||-|
+|MaxRetryCount|Required. The maximum number of retries allowed per function execution. `-1` means to retry indefinitely.|
+|DelayInterval|The delay that is used between retries. Specify as a string with the format `HH:mm:ss`.|
+
+# [Isolated process](#tab/isolated-process/fixed-delay)
+
+Retry policies aren't yet supported when running in an isolated process.
+
+# [C# Script](#tab/csharp-script/fixed-delay)
+
+Here's the retry policy in the *function.json* file:
+
+```json
+{
+ "disabled": false,
+ "bindings": [
+ {
+ ....
+ }
+ ],
+ "retry": {
+ "strategy": "fixedDelay",
+ "maxRetryCount": 4,
+ "delayInterval": "00:00:10"
+ }
+}
+```
+
+|function.json property | Description |
+||-|
+|strategy|Use `fixedDelay`.|
+|maxRetryCount|Required. The maximum number of retries allowed per function execution. `-1` means to retry indefinitely.|
+|delayInterval|The delay that is used between retries. Specify as a string with the format `HH:mm:ss`.|
+
+# [In-process](#tab/in-process/exponential-backoff)
+
+Retries require NuGet package [Microsoft.Azure.WebJobs](https://www.nuget.org/packages/Microsoft.Azure.WebJobs) >= 3.0.23
+
+```csharp
+[FunctionName("EventHubTrigger")]
+[ExponentialBackoffRetry(5, "00:00:04", "00:15:00")]
+public static async Task Run([EventHubTrigger("myHub", Connection = "EventHubConnection")] EventData[] events, ILogger log)
+{
+// ...
+}
+```
+
+|Property | Description |
+||-|
+|MaxRetryCount|Required. The maximum number of retries allowed per function execution. `-1` means to retry indefinitely.|
+|MinimumInterval|The minimum retry delay. Specify as a string with the format `HH:mm:ss`.|
+|MaximumInterval|The maximum retry delay. Specify as a string with the format `HH:mm:ss`.|
+
+# [Isolated process](#tab/isolated-process/exponential-backoff)
+
+Retry policies aren't yet supported when running in an isolated process.
+
+# [C# Script](#tab/csharp-script/exponential-backoff)
+
+Here's the retry policy in the *function.json* file:
+
+```json
+{
+ "disabled": false,
+ "bindings": [
+ {
+ ....
+ }
+ ],
+ "retry": {
+ "strategy": "exponentialBackoff",
+ "maxRetryCount": 5,
+ "minimumInterval": "00:00:10",
+ "maximumInterval": "00:15:00"
+ }
+}
+```
+
+|function.json property | Description |
+||-|
+|strategy|Use `exponentialBackoff`.|
+|maxRetryCount|Required. The maximum number of retries allowed per function execution. `-1` means to retry indefinitely.|
+|minimumInterval|The minimum retry delay. Specify as a string with the format `HH:mm:ss`.|
+|maximumInterval|The maximum retry delay. Specify as a string with the format `HH:mm:ss`.|
+++
+Here's the retry policy in the *function.json* file:
+
+# [Fixed delay](#tab/fixed-delay)
+
+```json
+{
+ "disabled": false,
+ "bindings": [
+ {
+ ....
+ }
+ ],
+ "retry": {
+ "strategy": "fixedDelay",
+ "maxRetryCount": 4,
+ "delayInterval": "00:00:10"
+ }
+}
+```
+
+# [Exponential backoff](#tab/exponential-backoff)
+
+```json
+{
+ "disabled": false,
+ "bindings": [
+ {
+ ....
+ }
+ ],
+ "retry": {
+ "strategy": "exponentialBackoff",
+ "maxRetryCount": 5,
+ "minimumInterval": "00:00:10",
+ "maximumInterval": "00:15:00"
+ }
+}
+```
+++
+|function.json property | Description |
+||-|
+|strategy|Required. The retry strategy to use. Valid values are `fixedDelay` or `exponentialBackoff`.|
+|maxRetryCount|Required. The maximum number of retries allowed per function execution. `-1` means to retry indefinitely.|
+|delayInterval|The delay that is used between retries when using a `fixedDelay` strategy. Specify as a string with the format `HH:mm:ss`.|
+|minimumInterval|The minimum retry delay when using an `exponentialBackoff` strategy. Specify as a string with the format `HH:mm:ss`.|
+|maximumInterval|The maximum retry delay when using `exponentialBackoff` strategy. Specify as a string with the format `HH:mm:ss`.|
++
+Here's a Python sample that uses the retry context in a function:
+
+```Python
+import azure.functions
+import logging
++
+def main(mytimer: azure.functions.TimerRequest, context: azure.functions.Context) -> None:
+ logging.info(f'Current retry count: {context.retry_context.retry_count}')
+
+ if context.retry_context.retry_count == context.retry_context.max_retry_count:
+ logging.info(
+ f"Max retries of {context.retry_context.max_retry_count} for "
+ f"function {context.function_name} has been reached")
+
+```
++
+# [Fixed delay](#tab/fixed-delay)
+
+```java
+@FunctionName("TimerTriggerJava1")
+@FixedDelayRetry(maxRetryCount = 4, delayInterval = "00:00:10")
+public void run(
+ @TimerTrigger(name = "timerInfo", schedule = "0 */5 * * * *") String timerInfo,
+ final ExecutionContext context
+) {
+ context.getLogger().info("Java Timer trigger function executed at: " + LocalDateTime.now());
+}
+```
+
+# [Exponential backoff](#tab/exponential-backoff)
+
+```java
+@FunctionName("TimerTriggerJava1")
+@ExponentialBackoffRetry(maxRetryCount = 5 , maximumInterval = "00:15:00", minimumInterval = "00:00:10")
+public void run(
+ @TimerTrigger(name = "timerInfo", schedule = "0 */5 * * * *") String timerInfo,
+ final ExecutionContext context
+) {
+ context.getLogger().info("Java Timer trigger function executed at: " + LocalDateTime.now());
+}
+```
+
+|Element | Description |
+||-|
+|maxRetryCount|Required. The maximum number of retries allowed per function execution. `-1` means to retry indefinitely.|
+|delayInterval|The delay that is used between retries when using a `fixedDelay` strategy. Specify as a string with the format `HH:mm:ss`.|
+|minimumInterval|The minimum retry delay when using an `exponentialBackoff` strategy. Specify as a string with the format `HH:mm:ss`.|
+|maximumInterval|The maximum retry delay when using `exponentialBackoff` strategy. Specify as a string with the format `HH:mm:ss`.|
+++ ## Binding error codes When integrating with Azure services, errors may originate from the APIs of the underlying services. Information relating to binding-specific errors is available in the **Exceptions and return codes** section of the following articles:
-+ [Azure Cosmos DB](functions-bindings-cosmosdb.md#exceptions-and-return-codes)
-++ [Azure Cosmos DB](/rest/api/cosmos-db/http-status-codes-for-cosmosdb) + [Blob storage](functions-bindings-storage-blob-output.md#exceptions-and-return-codes)-++ [Event Grid](../event-grid/troubleshoot-errors.md) + [Event Hubs](functions-bindings-event-hubs-output.md#exceptions-and-return-codes)- + [IoT Hubs](functions-bindings-event-iot-output.md#exceptions-and-return-codes)- + [Notification Hubs](functions-bindings-notification-hubs.md#exceptions-and-return-codes)- + [Queue storage](functions-bindings-storage-queue-output.md#exceptions-and-return-codes)- + [Service Bus](functions-bindings-service-bus-output.md#exceptions-and-return-codes)- + [Table storage](functions-bindings-storage-table-output.md#exceptions-and-return-codes)+
+## Next steps
+++ [Azure Functions triggers and bindings concepts](functions-triggers-bindings.md)++ [Best practices for reliable Azure Functions](functions-best-practices.md)
azure-functions Functions Bindings Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-errors.md
- Title: Handle Azure Functions bindings errors
-description: Learn to handle Azure Functions binding errors
-- Previously updated : 10/01/2020--
-# Handle Azure Functions binding errors
--
-For information on errors returned by services supported by Functions, see the [Binding error codes](functions-bindings-error-pages.md#binding-error-codes) section of the [Azure Functions error handling](functions-bindings-error-pages.md) overview article.
azure-functions Functions Host Json https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-host-json.md
Configuration settings can be found in [Storage queue triggers and bindings](fun
## retry
-Controls the [retry policy](./functions-bindings-error-pages.md#retry-policies-preview) options for all executions in the app.
+Controls the [retry policy](./functions-bindings-error-pages.md#retry-policies) options for all executions in the app.
```json {
azure-functions Functions Idempotent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-idempotent.md
Title: Designing Azure Functions for identical input
description: Building Azure Functions to be idempotent Previously updated : 9/12/2019 Last updated : 06/09/2022
When it comes to building applications, consider the following scenarios:
There are many contexts where requests to a function may receive identical commands. Some situations include: -- Retry policies sending the same request many times-- Cached commands replayed to the application-- Application errors sending multiple identical requests
+- Retry policies sending the same request many times.
+- Cached commands replayed to the application.
+- Application errors sending multiple identical requests.
To protect data integrity and system health, an idempotent application contains logic that may contain the following behaviors: -- Verifying of the existence of data before trying to execute a delete-- Checking to see if data already exists before trying to execute a create action-- Reconciling logic that creates eventual consistency in data-- Concurrency controls-- Duplication detection-- Data freshness validation-- Guard logic to verify input data
+- Verifying the existence of data before trying to execute a delete.
+- Checking to see if data already exists before trying to execute a create action.
+- Reconciling logic that creates eventual consistency in data.
+- Concurrency controls.
+- Duplication detection.
+- Data freshness validation.
+- Guard logic to verify input data.
Ultimately idempotency is achieved by ensuring a given action is possible and is only executed once.
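As a minimal hedged sketch of the duplicate-detection behavior listed above: the in-memory store here is a stand-in, and a real function would use a durable store such as Azure Table storage so the record survives restarts.

```csharp
using System;
using System.Collections.Concurrent;

public static class IdempotentProcessor
{
    // In-memory stand-in for a durable record of processed message IDs.
    private static readonly ConcurrentDictionary<string, bool> ProcessedIds =
        new ConcurrentDictionary<string, bool>();

    public static void Process(string messageId, Action work)
    {
        // TryAdd returns false when the ID was already recorded, so the work
        // runs at most once per message ID even if the message arrives twice.
        if (ProcessedIds.TryAdd(messageId, true))
        {
            work();
        }
    }
}
```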
+
+## Next steps
+++ [Azure Functions reliable event processing](functions-reliable-event-processing.md) ++ [Concurrency in Azure Functions](functions-concurrency.md)++ [Azure Functions error handling and retries](functions-bindings-error-pages.md)
azure-functions Functions Reference Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-python.md
ID of the current function invocation.
Context for distributed tracing. For more information, see [`Trace Context`](https://www.w3.org/TR/trace-context/). `retry_context`
-Context for retries to the function. For more information, see [`retry-policies`](./functions-bindings-errors.md#retry-policies-preview).
+Context for retries to the function. For more information, see [`retry-policies`](./functions-bindings-error-pages.md#retry-policies).
## Global variables
azure-functions Functions Reliable Event Processing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reliable-event-processing.md
Azure Functions consumes Event Hub events while cycling through the following st
This behavior reveals a few important points: -- *Unhandled exceptions may cause you to lose messages.* Executions that result in an exception will continue to progress the pointer. Setting a [retry policy](./functions-bindings-error-pages.md#retry-policies-preview) will delay progressing the pointer until the entire retry policy has been evaluated.
+- *Unhandled exceptions may cause you to lose messages.* Executions that result in an exception will continue to progress the pointer. Setting a [retry policy](./functions-bindings-error-pages.md#retry-policies) will delay progressing the pointer until the entire retry policy has been evaluated.
- *Functions guarantees at-least-once delivery.* Your code and dependent systems may need to [account for the fact that the same message could be received twice](./functions-idempotent.md). ## Handling exceptions
As a general rule, every function should include a [try/catch block](./functions
### Retry mechanisms and policies
-Some exceptions are transient in nature and don't reappear when an operation is attempted again moments later. This is why the first step is always to retry the operation. You can leverage the function app [retry policies](./functions-bindings-error-pages.md#retry-policies-preview) or author retry logic within the function execution.
+Some exceptions are transient in nature and don't reappear when an operation is attempted again moments later. This is why the first step is always to retry the operation. You can leverage the function app [retry policies](./functions-bindings-error-pages.md#retry-policies) or author retry logic within the function execution.
Introducing fault-handling behaviors to your functions allows you to define both basic and advanced retry policies. For instance, you could implement a policy that follows a workflow illustrated by the following rules:
azure-monitor Azure Monitor Agent Troubleshoot Windows Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-troubleshoot-windows-vm.md
description: Guidance for troubleshooting issues on Windows virtual machines, sc
Previously updated : 5/10/2022 Last updated : 6/9/2022
Follow the steps below to troubleshoot the latest version of the Azure Monitor a
2. On your virtual machine, verify the existence of the file `C:\WindowsAzure\Resources\AMADataStore.<virtual-machine-name>\mcs\mcsconfig.latest.xml`. If this file doesn't exist: - The virtual machine may not be associated with a DCR. See step 3 - The virtual machine may not have Managed Identity enabled. [See here](../../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md#enable-system-assigned-managed-identity-during-creation-of-a-vm) on how to enable.
+ - IMDS service is not running/accessible from the virtual machine. [Check if you can access IMDS from the machine](/azure/virtual-machines/windows/instance-metadata-service?tabs=windows). If not, [file a ticket](#file-a-ticket) with **Summary** as 'IMDS service not running' and **Problem type** as 'I need help configuring data collection from a VM'.
+ - AMA cannot access IMDS. Check if you see IMDS errors in `C:\WindowsAzure\Resources\AMADataStore.<virtual-machine-name>\Tables\MAEventTable.tsf` file. If yes, [file a ticket](#file-a-ticket) with **Summary** as 'AMA cannot access IMDS' and **Problem type** as 'I need help configuring data collection from a VM'.
3. Open Azure portal > select your data collection rule > Open **Configuration** : **Resources** blade from left menu > You should see the virtual machine listed here 4. If not listed, click 'Add' and select your virtual machine from the resource picker. Repeat across all DCRs. 5. If none of the above helps, [file a ticket](#file-a-ticket) with **Summary** as 'DCR not found or associated' and **Problem type** as 'I need help configuring data collection from a VM'.
azure-monitor Container Insights Metric Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-metric-alerts.md
The following alert-based metrics have unique behavior characteristics compared
* *oomKilledContainerCount* metric is only sent when there are OOM killed containers.
-* *cpuExceededPercentage*, *memoryRssExceededPercentage*, and *memoryWorkingSetExceededPercentage* metrics are sent when the CPU, memory Rss, and Memory Working set values exceed the configured threshold (the default threshold is 95%). These thresholds are exclusive of the alert condition threshold specified for the corresponding alert rule. Meaning, if you want to collect these metrics and analyze them from [Metrics explorer](../essentials/metrics-getting-started.md), we recommend you configure the threshold to a value lower than your alerting threshold. The configuration related to the collection settings for their container resource utilization thresholds can be overridden in the ConfigMaps file under the section `[alertable_metrics_configuration_settings.container_resource_utilization_thresholds]`. See the section [Configure alertable metrics ConfigMaps](#configure-alertable-metrics-in-configmaps) for details related to configuring your ConfigMap configuration file.
+* *cpuExceededPercentage*, *memoryRssExceededPercentage*, and *memoryWorkingSetExceededPercentage* metrics are sent when the CPU, memory Rss, and Memory Working set values exceed the configured threshold (the default threshold is 95%). *cpuThresholdViolated*, *memoryRssThresholdViolated*, and *memoryWorkingSetThresholdViolated* metrics are equal to 0 if the usage percentage is below the threshold and are equal to 1 if the usage percentage is above the threshold. These thresholds are exclusive of the alert condition threshold specified for the corresponding alert rule. This means that if you want to collect these metrics and analyze them from [Metrics explorer](../essentials/metrics-getting-started.md), we recommend you configure the threshold to a value lower than your alerting threshold. The configuration related to the collection settings for their container resource utilization thresholds can be overridden in the ConfigMaps file under the section `[alertable_metrics_configuration_settings.container_resource_utilization_thresholds]`. See the section [Configure alertable metrics ConfigMaps](#configure-alertable-metrics-in-configmaps) for details related to configuring your ConfigMap configuration file.
-* *pvUsageExceededPercentage* metric is sent when the persistent volume usage percentage exceeds the configured threshold (the default threshold is 60%). This threshold is exclusive of the alert condition threshold specified for the corresponding alert rule. Meaning, if you want to collect these metrics and analyze them from [Metrics explorer](../essentials/metrics-getting-started.md), we recommend you configure the threshold to a value lower than your alerting threshold. The configuration related to the collection settings for persistent volume utilization thresholds can be overridden in the ConfigMaps file under the section `[alertable_metrics_configuration_settings.pv_utilization_thresholds]`. See the section [Configure alertable metrics ConfigMaps](#configure-alertable-metrics-in-configmaps) for details related to configuring your ConfigMap configuration file. Collection of persistent volume metrics with claims in the *kube-system* namespace are excluded by default. To enable collection in this namespace, use the section `[metric_collection_settings.collect_kube_system_pv_metrics]` in the ConfigMap file. See [Metric collection settings](./container-insights-agent-config.md#metric-collection-settings) for details.
+* *pvUsageExceededPercentage* metric is sent when the persistent volume usage percentage exceeds the configured threshold (the default threshold is 60%). *pvUsageThresholdViolated* metric is equal to 0 when the PV usage percentage is below the threshold and is equal to 1 if the usage is above the threshold. This threshold is exclusive of the alert condition threshold specified for the corresponding alert rule. This means that if you want to collect these metrics and analyze them from [Metrics explorer](../essentials/metrics-getting-started.md), we recommend you configure the threshold to a value lower than your alerting threshold. The configuration related to the collection settings for persistent volume utilization thresholds can be overridden in the ConfigMaps file under the section `[alertable_metrics_configuration_settings.pv_utilization_thresholds]`. See the section [Configure alertable metrics ConfigMaps](#configure-alertable-metrics-in-configmaps) for details related to configuring your ConfigMap configuration file. Collection of persistent volume metrics with claims in the *kube-system* namespace is excluded by default. To enable collection in this namespace, use the section `[metric_collection_settings.collect_kube_system_pv_metrics]` in the ConfigMap file. See [Metric collection settings](./container-insights-agent-config.md#metric-collection-settings) for details.
## Metrics collected
To view alerts created for the enabled rules, in the **Recommended alerts** pane
Perform the following steps to configure your ConfigMap configuration file to override the default utilization thresholds. These steps are applicable only for the following alertable metrics: * *cpuExceededPercentage*
+* *cpuThresholdViolated*
* *memoryRssExceededPercentage*
+* *memoryRssThresholdViolated*
* *memoryWorkingSetExceededPercentage*
+* *memoryWorkingSetThresholdViolated*
* *pvUsageExceededPercentage*
+* *pvUsageThresholdViolated*
1. Edit the ConfigMap YAML file under the section `[alertable_metrics_configuration_settings.container_resource_utilization_thresholds]` or `[alertable_metrics_configuration_settings.pv_utilization_thresholds]`.
azure-monitor Container Insights Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-troubleshoot.md
When you configure monitoring of your Azure Kubernetes Service (AKS) cluster with Container insights, you may encounter an issue preventing data collection or reporting status. This article details some common issues and troubleshooting steps.
+## Known error messages
+
+The table below summarizes known errors you may encounter while using Container insights.
+
+| Error messages | Action |
+| - | |
+| Error Message `No data for selected filters` | It may take some time to establish monitoring data flow for newly created clusters. Allow at least 10 to 15 minutes for data to appear for your cluster.<br><br>If data still doesn't show up, check whether the Log Analytics workspace is configured with *disableLocalAuth = true*. If it is, update it back to *disableLocalAuth = false*.<br><br>`az resource show --ids "/subscriptions/[Your subscription ID]/resourcegroups/[Your resource group]/providers/microsoft.operationalinsights/workspaces/[Your workspace name]"`<br><br>`az resource update --ids "/subscriptions/[Your subscription ID]/resourcegroups/[Your resource group]/providers/microsoft.operationalinsights/workspaces/[Your workspace name]" --api-version "2021-06-01" --set properties.features.disableLocalAuth=False` |
+| Error Message `Error retrieving data` | While Azure Kubernetes Service cluster is setting up for health and performance monitoring, a connection is established between the cluster and Azure Log Analytics workspace. A Log Analytics workspace is used to store all monitoring data for your cluster. This error may occur when your Log Analytics workspace has been deleted. Check if the workspace was deleted. If it was, you'll need to re-enable monitoring of your cluster with Container insights and either specify an existing workspace or create a new one. To re-enable, you'll need to [disable](container-insights-optout.md) monitoring for the cluster and [enable](container-insights-enable-new-cluster.md) Container insights again. |
+| `Error retrieving data` after adding Container insights through az aks cli | When enabling monitoring by using the `az aks` CLI, Container insights may not be properly deployed. Check whether the solution is deployed. To verify, go to your Log Analytics workspace and see if the solution is available by selecting **Solutions** from the pane on the left-hand side. To resolve this issue, you'll need to redeploy the solution by following the instructions in [how to deploy Container insights](container-insights-onboard.md). |
+
+To help diagnose the problem, we've provided a [troubleshooting script](https://github.com/microsoft/Docker-Provider/tree/ci_dev/scripts/troubleshoot).
+ ## Authorization error during onboarding or update operation While enabling Container insights or updating a cluster to support collecting metrics, you may receive an error resembling the following - *The client <userΓÇÖs Identity>' with object id '<userΓÇÖs objectId>' does not have authorization to perform action 'Microsoft.Authorization/roleAssignments/write' over scope*
Use the following steps to diagnose the problem if you can't view status inform
omsagent-win-6drwq 1/1 Running 0 1d ```
## Container insights agent ReplicaSet pods aren't scheduled on a non-Azure Kubernetes cluster
Container insights agent ReplicaSet pods have a dependency on the following node selectors on the worker (or agent) nodes for scheduling:
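To compare those selectors against the labels your worker nodes actually carry, a quick hedged check (only the standard `kubernetes.io/os` label is assumed here) is:

```bash
# Print every node with its full label set so you can compare it against the
# node selectors the Container insights agent ReplicaSet requires.
kubectl get nodes --show-labels

# Narrow the output to a single label column, for example the OS label.
kubectl get nodes -L kubernetes.io/os
```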
azure-monitor Azure Ad Authentication Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/azure-ad-authentication-logs.md
Disabling local authentication may limit some functionality available, specifica
- Existing Log Analytics agents will stop functioning; only Azure Monitor Agent (AMA) is supported. Azure Monitor Agent is missing some capabilities that are available through the Log Analytics agent (for example, custom log collection and IIS log collection).
- The Data Collector API (preview) doesn't support Azure AD authentication and won't be available to ingest data.
+- VM insights and Container insights will stop working. Local authentication is the only authentication method supported by these features.
You can disable local authentication by using Azure Policy, or programmatically through an Azure Resource Manager template, PowerShell, or the CLI.
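For example, a minimal Azure CLI sketch (the subscription, resource group, and workspace names are placeholders), mirroring the `az resource update` pattern shown in the Container insights troubleshooting table above:

```azurecli-interactive
# Turn off local (shared key) authentication on a Log Analytics workspace.
az resource update --ids "/subscriptions/<subscription-id>/resourcegroups/<resource-group>/providers/microsoft.operationalinsights/workspaces/<workspace-name>" --api-version "2021-06-01" --set properties.features.disableLocalAuth=True
```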
Below is an example of PowerShell commands that you can use to disable local aut
``` ## Next steps
-* [Azure AD authentication for Application Insights (Preview)](../app/azure-ad-authentication.md)
+* [Azure AD authentication for Application Insights (Preview)](../app/azure-ad-authentication.md)
azure-monitor Private Link Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/private-link-configure.md
$scope | Set-AzResource -Force
``` ### Set resource access flags
-To manage the workspace or component access flags, use the flags `[--ingestion-access {Disabled, Enabled}]` and `[--query-access {Disabled, Enabled}]`on [Log Analytics workspaces](/cli/azure/monitor/log-analytics/workspace) or [Application Insights components](/cli/azure/ext/application-insights/monitor/app-insights/component).
+To manage the workspace or component access flags, use the flags `[--ingestion-access {Disabled, Enabled}]` and `[--query-access {Disabled, Enabled}]` on [az monitor log-analytics workspace](/cli/azure/monitor/log-analytics/workspace) or [az monitor app-insights component](/cli/azure/monitor/app-insights/component).
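For example, a minimal sketch (the resource group and workspace names are placeholders) that blocks ingestion over the public endpoint while leaving queries enabled:

```azurecli-interactive
# Disable public ingestion on a Log Analytics workspace but keep query access.
az monitor log-analytics workspace update --resource-group <resource-group> --workspace-name <workspace-name> --ingestion-access Disabled --query-access Enabled
```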
## Review and validate your Private Link setup
azure-monitor Resource Manager Vminsights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/resource-manager-vminsights.md
Title: Resource Manager template samples for VM insights description: Sample Azure Resource Manager templates to deploy and configure VM insights. -- Previously updated : 05/18/2020 Last updated : 06/08/2022
The following sample adds an Azure virtual machine scale set to VM insights.
## Next steps * [Get other sample templates for Azure Monitor](../resource-manager-samples.md).
-* [Learn more about VM insights](vminsights-overview.md).
+* [Learn more about VM insights](vminsights-overview.md).
azure-monitor Vminsights Change Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-change-analysis.md
Title: Change analysis in VM insights description: VM insights integration with Application Change Analysis lets you view any changes made to a virtual machine that may have affected its performance. -- Previously updated : 09/23/2020 Last updated : 06/08/2022
azure-monitor Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/whats-new.md
Last updated 04/04/2022
This article lists significant changes to Azure Monitor documentation.
-## March, 2022
-### Agents
-
-**Updated articles**
--- [Azure Monitor agent overview](agents/azure-monitor-agent-overview.md)-- [Migrate to Azure Monitor agent from Log Analytics agent](agents/azure-monitor-agent-migration.md)-
-### Alerts
-
-**Updated articles**
--- [Create a classic metric alert rule with a Resource Manager template](alerts/alerts-enable-template.md)-- [Overview of alerts in Microsoft Azure](alerts/alerts-overview.md)-- [Alert processing rules](alerts/alerts-action-rules.md)-
-### Application Insights
-
-**Updated articles**
--- [Application Insights API for custom events and metrics](app/api-custom-events-metrics.md)-- [Application Insights for ASP.NET Core applications](app/asp-net-core.md)-- [Application Insights for web pages](app/javascript.md)-- [Application Map: Triage Distributed Applications](app/app-map.md)-- [Configure Application Insights for your ASP.NET website](app/asp-net.md)-- [Export telemetry from Application Insights](app/export-telemetry.md)-- [Migrate to workspace-based Application Insights resources](app/convert-classic-resource.md)-- [React plugin for Application Insights JavaScript SDK](app/javascript-react-plugin.md)-- [Sampling in Application Insights](app/sampling.md)-- [Telemetry processors (preview) - Azure Monitor Application Insights for Java](app/java-standalone-telemetry-processors.md)-- [Tips for updating your JVM args - Azure Monitor Application Insights for Java](app/java-standalone-arguments.md)-- [Unified cross-component transaction diagnostics](app/transaction-diagnostics.md)-- [Visualizations for Application Change Analysis (preview)](app/change-analysis-visualizations.md)-
-### Containers
-
-**Updated articles**
--- [How to create log alerts from Container insights](containers/container-insights-log-alerts.md)-
-### Essentials
-
-**New articles**
--- [Activity logs insights (Preview)](essentials/activity-log.md)-
-**Updated articles**
--- [Create diagnostic settings to send Azure Monitor platform logs and metrics to different destinations](essentials/diagnostic-settings.md)-- [Azure Monitoring REST API walkthrough](essentials/rest-api-walkthrough.md)--
-### Logs
-
-**New articles**
--- [Migrate from Data Collector API and custom fields-enabled tables to DCR-based custom logs](logs/custom-logs-migrate.md)-
-**Updated articles**
--- [Archive data from Log Analytics workspace to Azure storage using Logic App](logs/logs-export-logic-app.md)-- [Azure Monitor Logs Dedicated Clusters](logs/logs-dedicated-clusters.md)-- [Configure Basic Logs in Azure Monitor (Preview)](logs/basic-logs-configure.md)-- [Configure data retention and archive policies in Azure Monitor Logs (Preview)](logs/data-retention-archive.md)-- [Log Analytics Workspace Insights](logs/log-analytics-workspace-insights-overview.md)-- [Move a Log Analytics workspace to different subscription or resource group](logs/move-workspace.md)-- [Query Basic Logs in Azure Monitor (Preview)](logs/basic-logs-query.md)-- [Restore logs in Azure Monitor (preview)](logs/restore.md)-- [Search jobs in Azure Monitor (preview)](logs/search-jobs.md)-
-### Virtual Machines
-
-**Updated articles**
--- [Monitor virtual machines with Azure Monitor: Alerts](vm/monitor-virtual-machine-alerts.md)--
-## February, 2022
-
-### General
-
-**Updated articles**
--- [What is monitored by Azure Monitor?](monitor-reference.md)
-### Agents
-
-**New articles**
--- [Sample data collection rule - agent](agents/data-collection-rule-sample-agent.md)-- [Using data collection endpoints with Azure Monitor agent (preview)](agents/azure-monitor-agent-data-collection-endpoint.md)-
-**Updated articles**
--- [Azure Monitor agent overview](./agents/azure-monitor-agent-overview.md)-- [Manage the Azure Monitor agent](./agents/azure-monitor-agent-manage.md)-
-### Alerts
-
-**Updated articles**
--- [How to trigger complex actions with Azure Monitor alerts](./alerts/action-groups-logic-app.md)-
-### Application Insights
-
-**New articles**
--- [Migrate from Application Insights instrumentation keys to connection strings](app/migrate-from-instrumentation-keys-to-connection-strings.md)--
-**Updated articles**
--- [Application Monitoring for Azure App Service and Java](./app/azure-web-apps-java.md)-- [Application Monitoring for Azure App Service and Node.js](./app/azure-web-apps-nodejs.md)-- [Enable Snapshot Debugger for .NET apps in Azure App Service](./app/snapshot-debugger-appservice.md)-- [Profile live Azure App Service apps with Application Insights](./app/profiler.md)-- [Visualizations for Application Change Analysis (preview)](/azure/azure-monitor/app/change-analysis-visualizations)-
-### Autoscale
-
-**New articles**
--- [Use predictive autoscale to scale out before load demands in virtual machine scale sets (Preview)](autoscale/autoscale-predictive.md)-
-### Data collection
-
-**New articles**
--- [Data collection endpoints in Azure Monitor (preview)](essentials/data-collection-endpoint-overview.md)-- [Data collection rules in Azure Monitor](essentials/data-collection-rule-overview.md)-- [Data collection rule transformations](essentials/data-collection-rule-transformations.md)-- [Structure of a data collection rule in Azure Monitor (preview)](essentials/data-collection-rule-structure.md)
-### Essentials
-
-**Updated articles**
--- [Azure Activity log](./essentials/activity-log.md)-
-### Logs
-
-**Updated articles**
--- [Azure Monitor Logs overview](logs/data-platform-logs.md)-
-**New articles**
--- [Configure Basic Logs in Azure Monitor (Preview)](logs/basic-logs-configure.md)-- [Configure data retention and archive in Azure Monitor Logs (Preview)](logs/data-retention-archive.md)-- [Log Analytics workspace overview](logs/log-analytics-workspace-overview.md)-- [Overview of ingestion-time transformations in Azure Monitor Logs](logs/ingestion-time-transformations.md)-- [Query data from Basic Logs in Azure Monitor (Preview)](logs/basic-logs-query.md)-- [Restore logs in Azure Monitor (Preview)](logs/restore.md)-- [Sample data collection rule - custom logs](logs/data-collection-rule-sample-custom-logs.md)-- [Search jobs in Azure Monitor (Preview)](logs/search-jobs.md)-- [Send custom logs to Azure Monitor Logs with REST API](logs/custom-logs-overview.md)-- [Tables that support ingestion-time transformations in Azure Monitor Logs (preview)](logs/tables-feature-support.md)-- [Tutorial - Send custom logs to Azure Monitor Logs (preview)](logs/tutorial-custom-logs.md)-- [Tutorial - Send custom logs to Azure Monitor Logs using resource manager templates](logs/tutorial-custom-logs-api.md)-- [Tutorial - Add ingestion-time transformation to Azure Monitor Logs using Azure portal](logs/tutorial-ingestion-time-transformations.md)-- [Tutorial - Add ingestion-time transformation to Azure Monitor Logs using resource manager templates](logs/tutorial-ingestion-time-transformations-api.md)--
-## January, 2022
-
-### Agents
-
-**Updated articles**
--- [Manage the Azure Monitor agent](agents/azure-monitor-agent-manage.md)-
-### Alerts
-
-**New articles**
--- [Non-common alert schema definitions for Test Action Group (Preview)](alerts/alerts-non-common-schema-definitions.md)-
-**Updated articles**
--- [Create and manage action groups in the Azure portal](alerts/action-groups.md)-- [Upgrade legacy rules management to the current Log Alerts API from legacy Log Analytics Alert API](alerts/alerts-log-api-switch.md)-- [Log alerts in Azure Monitor](alerts/alerts-unified-log.md)-
-### Application Insights
-
-**Updated articles**
--- [Usage analysis with Application Insights](app/usage-overview.md)-- [Tips for updating your JVM args - Azure Monitor Application Insights for Java](app/java-standalone-arguments.md)-- [Configuration options - Azure Monitor Application Insights for Java](app/java-standalone-config.md)-- [Troubleshooting SDK load failure for JavaScript web apps](app/javascript-sdk-load-failure.md)-
-### Logs
-
-**Updated articles**
--- [Azure Monitor customer-managed key](logs/customer-managed-keys.md)-- [Log Analytics workspace data export in Azure Monitor (preview)](logs/logs-data-export.md)-
-## December, 2021
-
-### General
-
-**Updated articles**
--- [What is monitored by Azure Monitor?](monitor-reference.md)-
-### Agents
-
-**New articles**
--- [Sample data collection rule - agent](agents/data-collection-rule-sample-agent.md)--
-**Updated articles**
--- [Install Log Analytics agent on Windows computers](agents/agent-windows.md)-- [Log Analytics agent overview](agents/log-analytics-agent.md)-
-### Alerts
-
-**New articles**
--- [Manage alert rules created in previous versions](alerts/alerts-manage-alerts-previous-version.md)-
-**Updated articles**
--- [Create an action group with a Resource Manager template](alerts/action-groups-create-resource-manager-template.md)-- [Troubleshoot log alerts in Azure Monitor](alerts/alerts-troubleshoot-log.md)-- [Troubleshooting problems in Azure Monitor alerts](alerts/alerts-troubleshoot.md)-- [Create, view, and manage log alerts using Azure Monitor](alerts/alerts-log.md)-- [Create, view, and manage activity log alerts by using Azure Monitor](alerts/alerts-activity-log.md)-- [Create, view, and manage metric alerts using Azure Monitor](alerts/alerts-metric.md)-
-### Application Insights
-
-**New articles**
--- [Analyzing product usage with HEART](app/usage-heart.md)-- [Migrate from Application Insights instrumentation keys to connection strings](app/migrate-from-instrumentation-keys-to-connection-strings.md)--
-**Updated articles**
--- [Tips for updating your JVM args - Azure Monitor Application Insights for Java](app/java-standalone-arguments.md)-- [Troubleshooting guide: Azure Monitor Application Insights for Java](app/java-standalone-troubleshoot.md)-- [Set up Azure Monitor for your Python application](app/opencensus-python.md)-- [Click Analytics Auto-collection plugin for Application Insights JavaScript SDK](app/javascript-click-analytics-plugin.md)--
-### Logs
-
-**New articles**
--- [Access the Azure Monitor Log Analytics API](logs/api/access-api.md)-- [Set Up Authentication and Authorization for the Azure Monitor Log Analytics API](logs/api/authentication-authorization.md)-- [Querying logs for Azure resources](logs/api/azure-resource-queries.md)-- [Batch queries](logs/api/batch-queries.md)-- [Caching](logs/api/cache.md)-- [Cross workspace queries](logs/api/cross-workspace-queries.md)-- [Azure Monitor Log Analytics API Errors](logs/api/errors.md)-- [Azure Monitor Log Analytics API Overview](logs/api/overview.md)-- [Prefer options](logs/api/prefer-options.md)-- [Azure Monitor Log Analytics API request format](logs/api/request-format.md)-- [Azure Monitor Log Analytics API response format](logs/api/response-format.md)-- [Timeouts](logs/api/timeouts.md)-
-**Updated articles**
--- [Log Analytics workspace data export in Azure Monitor (preview)](logs/logs-data-export.md)-- [Resource Manager template samples for Log Analytics workspaces in Azure Monitor](logs/resource-manager-workspace.md)-
-### Virtual Machines
-
-**Updated articles**
--- [Enable VM insights overview](vm/vminsights-enable-overview.md)---
-## November, 2021
-
-### General
-
-**Updated articles**
--- [What is monitored by Azure Monitor?](monitor-reference.md)-
-### Agents
-
-**Updated articles**
--- [Azure Monitor agent overview](agents/azure-monitor-agent-overview.md)-
-### Alerts
-
-**Updated articles**
--- [Troubleshooting problems in Azure Monitor alerts](alerts/alerts-troubleshoot.md)-- [How to update alert rules or alert processing rules when their target resource moves to a different Azure region](alerts/alerts-resource-move.md)-- [Alert processing rules (preview)](alerts/alerts-action-rules.md)-
-### Application Insights
-
-**Updated articles**
--- [Troubleshooting no data - Application Insights for .NET/.NET Core](app/asp-net-troubleshoot-no-data.md)-- [Azure Monitor OpenTelemetry-based auto-instrumentation for Java applications](app/java-in-process-agent.md)-- [Enable Azure Monitor OpenTelemetry Exporter for .NET, Node.js, and Python applications (preview)](app/opentelemetry-enable.md)-- [Release notes for Azure Web App extension for Application Insights](app/web-app-extension-release-notes.md)-- [Tips for updating your JVM args - Azure Monitor Application Insights for Java](app/java-standalone-arguments.md)-- [Configuration options - Azure Monitor Application Insights for Java](app/java-standalone-config.md)-- [Supported languages](app/platforms.md)-- [What is Distributed Tracing?](app/distributed-tracing.md)-
-### Containers
-
-**New articles**
--- [Transition to using Container Insights on Azure Arc-enabled Kubernetes](containers/container-insights-transition-hybrid.md)-
-**Updated articles**
--- [Azure Monitor Container Insights for Azure Arc-enabled Kubernetes clusters](containers/container-insights-enable-arc-enabled-clusters.md)-
-### Essentials
-
-**Updated articles**
--- [Create diagnostic settings to send Azure Monitor platform logs and metrics to different destinations](essentials/diagnostic-settings.md)-
-### Insights
-
-**New articles**
--- [Azure Monitor - Service Bus insights](../service-bus-messaging/service-bus-insights.md)-
-**Updated articles**
--- [Enable SQL Insights (preview)](insights/sql-insights-enable.md)-- [Troubleshoot SQL Insights (preview)](insights/sql-insights-troubleshoot.md)-
-### Logs
-
-**Updated articles**
--- [Configure your Private Link](logs/private-link-configure.md)-- [Design your Private Link setup](logs/private-link-design.md)-- [Use Azure Private Link to connect networks to Azure Monitor](logs/private-link-security.md)-- [Log Analytics workspace data export in Azure Monitor (preview)](logs/logs-data-export.md)-- [Query data in Azure Monitor using Azure Data Explorer](logs/azure-data-explorer-monitor-proxy.md)-- [Log data ingestion time in Azure Monitor](logs/data-ingestion-time.md)-
-### Virtual Machines
-
-**Updated articles**
--- [VM insights guest health alerts (preview)](vm/vminsights-health-alerts.md)--
-## October, 2021
-### General
-
-**New articles**
--- [Deploying Azure Monitor - Alerts and automated actions](best-practices-alerts.md)-- [Azure Monitor best practices - Analyze and visualize data](best-practices-analysis.md)-- [Azure Monitor best practices - Configure data collection](best-practices-data-collection.md)-- [Azure Monitor best practices - Planning your monitoring strategy and configuration](best-practices-plan.md)-- [Azure Monitor best practices](best-practices.md)-
-**Updated articles**
--- [What is monitored by Azure Monitor?](monitor-reference.md)-- [Visualize data from Azure Monitor](visualizations.md)
-### Agents
-
-**Updated articles**
--- [How to troubleshoot issues with the Log Analytics agent for Linux](agents/agent-linux-troubleshoot.md)-- [Overview of Azure Monitor agents](agents/agents-overview.md)-- [Install the Azure Monitor agent](agents/azure-monitor-agent-manage.md)-
-### Alerts
-
-**Updated articles**
--- [Create, view, and manage activity log alerts by using Azure Monitor](alerts/alerts-activity-log.md)-- [Create a log alert with a Resource Manager template](alerts/alerts-log-create-templates.md)-- [Webhook actions for log alert rules](alerts/alerts-log-webhook.md)-- [Resource Manager template samples for log alert rules in Azure Monitor](alerts/resource-manager-alerts-log.md)-
-### Application Insights
-
-**New articles**
--- [Statsbeat in Azure Application Insights](app/statsbeat.md)-- [Enable Azure Monitor OpenTelemetry Exporter for .NET, Node.js, and Python applications (Preview)](app/opentelemetry-enable.md)-- [OpenTelemetry overview](app/opentelemetry-overview.md)-
-**Updated articles**
--- [Deploy Azure Monitor Application Insights Agent for on-premises servers](app/status-monitor-v2-overview.md)-- [Tips for updating your JVM args - Azure Monitor Application Insights for Java](app/java-standalone-arguments.md)-- [Set up Azure Monitor for your Python application](app/opencensus-python.md)-- [Java codeless application monitoring with Azure Monitor Application Insights](app/java-in-process-agent.md)-- [Configuration options - Azure Monitor Application Insights for Java](app/java-standalone-config.md)-
-### Containers
-
-**Updated articles**
--- [Troubleshooting Container insights](containers/container-insights-troubleshoot.md)-- [Recommended metric alerts (preview) from Container insights](containers/container-insights-metric-alerts.md)-
-### Essentials
-
-**Updated articles**
--- [Supported metrics with Azure Monitor](essentials/metrics-supported.md)-- [Supported categories for Azure Monitor resource logs](essentials/resource-logs-categories.md)-- [Azure Monitor Metrics overview](essentials/data-platform-metrics.md)-- [Custom metrics in Azure Monitor (preview)](essentials/metrics-custom-overview.md)-- [Common and service-specific schemas for Azure resource logs](essentials/resource-logs-schema.md)-- [Create diagnostic settings to send platform logs and metrics to different destinations](essentials/diagnostic-settings.md)-
-### Logs
-
-**Updated articles**
--- [Log Analytics workspace data export in Azure Monitor (preview)](logs/logs-data-export.md)-- [Azure Monitor customer-managed key](logs/customer-managed-keys.md)-- [Azure Monitor Logs Dedicated Clusters](logs/logs-dedicated-clusters.md)-
-### Virtual Machines
-
-**Updated articles**
--- [Enable VM insights by using Azure Policy](vm/vminsights-enable-policy.md)-
-## Visualizations
-
-**Updated articles**
--- [Monitor your Azure services in Grafana](visualize/grafana-plugin.md)
-## September, 2021
-### General
-
-**Updated articles**
--- [Deploy Azure Monitor at scale by using Azure Policy](./best-practices.md)-- [Azure Monitor partner integrations](partners.md)-- [Resource Manager template samples for Azure Monitor](resource-manager-samples.md)-- [Roles, permissions, and security in Azure Monitor](roles-permissions-security.md)-- [Monitor usage and estimated costs in Azure Monitor](usage-estimated-costs.md)-
-### Agents
-
-**Updated articles**
--- [Azure Monitor agent overview](agents/azure-monitor-agent-overview.md)-
-### Application Insights
-
-**New articles**
--- [Application Monitoring for Azure App Service and ASP.NET](app/azure-web-apps-net.md)-- [Application Monitoring for Azure App Service and Java](app/azure-web-apps-java.md)-- [Application Monitoring for Azure App Service and ASP.NET Core](app/azure-web-apps-net-core.md)-- [Application Monitoring for Azure App Service and Node.js](app/azure-web-apps-nodejs.md)-
-**Updated articles**
--- [Application Monitoring for Azure App Service and ASP.NET](app/azure-web-apps-net.md)-- [Filter and preprocess telemetry in the Application Insights SDK](app/api-filtering-sampling.md)-- [Release notes for Microsoft.ApplicationInsights.SnapshotCollector](app/snapshot-collector-release-notes.md)-- [What is auto-instrumentation for Azure Monitor application insights?](app/codeless-overview.md)-- [Application Monitoring for Azure App Service Overview](app/azure-web-apps.md)-
-### Containers
-
-**Updated articles**
--- [Enable Container insights](containers/container-insights-onboard.md)-
-### Essentials
-
-**Updated articles**
--- [Supported metrics with Azure Monitor](essentials/metrics-supported.md)-- [Supported categories for Azure Resource Logs](essentials/resource-logs-categories.md)-- [Azure Activity log](essentials/activity-log.md)-- [Azure Monitoring REST API walkthrough](essentials/rest-api-walkthrough.md)--
-### Insights
-
-**New articles**
--- [Manage Application Insights components by using Azure CLI](insights/azure-cli-application-insights-component.md)-
-**Updated articles**
--- [Azure Data Explorer Insights](/azure/data-explorer/data-explorer-insights)-- [Agent Health solution in Azure Monitor](insights/solution-agenthealth.md)-- [Monitoring solutions in Azure Monitor](insights/solutions.md)-- [Monitor your SQL deployments with SQL Insights (preview)](insights/sql-insights-overview.md)-- [Troubleshoot SQL Insights (preview)](insights/sql-insights-troubleshoot.md)-
-### Logs
-
-**New articles**
--- [Resource Manager template samples for Log Analytics clusters in Azure Monitor](logs/resource-manager-cluster.md)-
-**Updated articles**
--- [Configure your Private Link](logs/private-link-configure.md)-- [Azure Monitor customer-managed key](logs/customer-managed-keys.md)-- [Azure Monitor Logs Dedicated Clusters](logs/logs-dedicated-clusters.md)-- [Log Analytics workspace data export in Azure Monitor (preview)](logs/logs-data-export.md)-- [Move a Log Analytics workspace to another region by using the Azure portal](logs/move-workspace-region.md)-
-## August, 2021
-
-### Agents
-
-**Updated articles**
--- [Migrate from Log Analytics agents](agents/azure-monitor-agent-migration.md)-- [Azure Monitor agent overview](agents/azure-monitor-agent-overview.md)-
-### Alerts
-
-**Updated articles**
--- [Troubleshooting problems in Azure Monitor metric alerts](alerts/alerts-troubleshoot-metric.md)-- [Create metric alert monitors in Azure CLI](azure-cli-metrics-alert-sample.md)-- [Create, view, and manage activity log alerts by using Azure Monitor](alerts/alerts-activity-log.md)--
-### Application Insights
-
-**Updated articles**
--- [Monitoring Azure Functions with Azure Monitor Application Insights](app/monitor-functions.md)-- [Application Monitoring for Azure App Service](app/azure-web-apps.md)-- [Configure Application Insights for your ASP.NET website](app/asp-net.md)-- [Application Insights availability tests](app/availability-overview.md)-- [Application Insights logging with .NET](app/ilogger.md)-- [Geolocation and IP address handling](app/ip-collection.md)-- [Monitor availability with URL ping tests](app/monitor-web-app-availability.md)-
-### Essentials
-
-**Updated articles**
--- [Supported metrics with Azure Monitor](essentials/metrics-supported.md)-- [Supported categories for Azure Resource Logs](essentials/resource-logs-categories.md)-- [Collect custom metrics for a Linux VM with the InfluxData Telegraf agent](essentials/collect-custom-metrics-linux-telegraf.md)-
-### Insights
-
-**Updated articles**
--- [Azure Monitor Network Insights](insights/network-insights-overview.md)-
-### Logs
-
-**New articles**
--- [Azure AD authentication for Logs](logs/azure-ad-authentication-logs.md)-- [Move Log Analytics workspace to another region using the Azure portal](logs/move-workspace-region.md)-- [Availability zones in Azure Monitor](logs/availability-zones.md)-- [Managing Azure Monitor Logs in Azure CLI](logs/azure-cli-log-analytics-workspace-sample.md)-
-**Updated articles**
--- [Design your Private Link setup](logs/private-link-design.md)-- [Azure Monitor Logs Dedicated Clusters](logs/logs-dedicated-clusters.md)-- [Move Log Analytics workspace to another region using the Azure portal](logs/move-workspace-region.md)-- [Configure your Private Link](logs/private-link-configure.md)-- [Use Azure Private Link to connect networks to Azure Monitor](logs/private-link-security.md)-- [Standard columns in Azure Monitor Logs](logs/log-standard-columns.md)-- [Azure Monitor customer-managed key](logs/customer-managed-keys.md)-- [[Azure Monitor Logs data security](logs/data-security.md)-- [Send log data to Azure Monitor by using the HTTP Data Collector API (preview)](logs/data-collector-api.md)-- [Get started with log queries in Azure Monitor](logs/get-started-queries.md)-- [Azure Monitor Logs overview](logs/data-platform-logs.md)-- [Log Analytics tutorial](logs/log-analytics-tutorial.md)-
-### Virtual Machines
-
-**Updated articles**
--- [Monitor virtual machines with Azure Monitor: Alerts](vm/monitor-virtual-machine-alerts.md)-
-## July, 2021
+## May, 2022
### General
-**Updated articles**
--- [Azure Monitor Frequently Asked Questions](faq.yml)-- [Deploy Azure Monitor at scale using Azure Policy](./best-practices.md)-
-### Agents
-
-**New articles**
--- [Migrating from Log Analytics agent](agents/azure-monitor-agent-migration.md)-
-**Updated articles**
--- [Azure Monitor agent overview](agents/azure-monitor-agent-overview.md)-
-### Alerts
-
-**Updated articles**
--- [Common alert schema definitions](alerts/alerts-common-schema-definitions.md)-- [Create a log alert with a Resource Manager template](alerts/alerts-log-create-templates.md)-- [Resource Manager template samples for log alert rules in Azure Monitor](alerts/resource-manager-alerts-log.md)-
-### Application Insights
-
-**New articles**
--- [Standard test](app/availability-standard-tests.md)-
-**Updated articles**
--- [Use Azure Application Insights to understand how customers are using your application](app/tutorial-users.md)-- [Application Insights cohorts](app/usage-cohorts.md)-- [Discover how customers are using your application with Application Insights Funnels](app/usage-funnels.md)-- [Impact analysis with Application Insights](app/usage-impact.md)-- [Usage analysis with Application Insights](app/usage-overview.md)-- [User retention analysis for web applications with Application Insights](app/usage-retention.md)-- [Users, sessions, and events analysis in Application Insights](app/usage-segmentation.md)-- [Troubleshooting Application Insights Agent (formerly named Status Monitor v2)](app/status-monitor-v2-troubleshoot.md)-- [Monitor availability with URL ping tests](app/monitor-web-app-availability.md)-
-### Containers
-
-**New articles**
--- [How to query logs from Container insights](containers/container-insights-log-query.md)-- [Monitoring Azure Kubernetes Service (AKS) with Azure Monitor](../aks/monitor-aks.md)-
-**Updated articles**
--- [How to create log alerts from Container insights](containers/container-insights-log-alerts.md)-
-### Essentials
-
-**Updated articles**
--- [Supported metrics with Azure Monitor](essentials/metrics-supported.md)-- [Supported categories for Azure Resource Logs](essentials/resource-logs-categories.md)-
-### Insights
-
-**Updated articles**
--- [Monitor Surface Hubs with Azure Monitor to track their health](insights/surface-hubs.md)-
-### Logs
-
-**Updated articles**
--- [Azure Monitor Logs Dedicated Clusters](logs/logs-dedicated-clusters.md)-- [Log Analytics workspace data export in Azure Monitor (preview)](logs/logs-data-export.md)-
-### Virtual Machines
-
-**Updated articles**
--- [Monitor virtual machines with Azure Monitor: Configure monitoring](vm/monitor-virtual-machine-configure.md)-- [Monitor virtual machines with Azure Monitor: Security monitoring](vm/monitor-virtual-machine-security.md)-- [Monitor virtual machines with Azure Monitor: Workloads](vm/monitor-virtual-machine-workloads.md)-- [Monitor virtual machines with Azure Monitor](vm/monitor-virtual-machine.md)-- [Monitor virtual machines with Azure Monitor: Alerts](vm/monitor-virtual-machine-alerts.md)-- [Monitor virtual machines with Azure Monitor: Analyze monitoring data](vm/monitor-virtual-machine-analyze.md)-
-### Visualizations
-
-**Updated articles**
+- [Azure Monitor cost and usage](usage-estimated-costs.md) - Added standard web tests to table<br>Added explanation of billable GB calculation
+- [Azure Monitor overview](overview.md) - Updated overview diagram
-- [Visualizing data from Azure Monitor](best-practices-analysis.md)
-## June, 2021
### Agents
-**Updated articles**
--- [Azure Monitor agent overview](agents/azure-monitor-agent-overview.md)-- [Overview of Azure Monitor agents](agents/agents-overview.md)-- [Configure data collection for the Azure Monitor agent (preview)](agents/data-collection-rule-azure-monitor-agent.md)
+- [Azure Monitor agent extension versions](agents/azure-monitor-agent-extension-versions.md) - Update to latest extension version
+- [Azure Monitor agent overview](agents/azure-monitor-agent-overview.md) - Added supported resource types
+- [Collect text and IIS logs with Azure Monitor agent (preview)](agents/data-collection-text-log.md) - Corrected error in data collection rule
+- [Overview of the Azure monitoring agents](agents/agents-overview.md) - Added new OS supported for agent
+- [Resource Manager template samples for agents](agents/resource-manager-agent.md) - Added Bicep examples
+- [Resource Manager template samples for data collection rules](agents/resource-manager-data-collection-rules.md) - Fixed bug in sample parameter file
+- [Rsyslog data not uploaded due to Full Disk space issue on AMA Linux Agent](agents/azure-monitor-agent-troubleshoot-linux-vm-rsyslog.md) - New article
+- [Troubleshoot the Azure Monitor agent on Linux virtual machines and scale sets](agents/azure-monitor-agent-troubleshoot-linux-vm.md) - New article
+- [Troubleshoot the Azure Monitor agent on Windows Arc-enabled server](agents/azure-monitor-agent-troubleshoot-windows-arc.md) - New article
+- [Troubleshoot the Azure Monitor agent on Windows virtual machines and scale sets](agents/azure-monitor-agent-troubleshoot-windows-vm.md) - New article
### Alerts
-**New articles**
--- [Migrate Azure Monitor Application Insights smart detection to alerts (Preview)](alerts/alerts-smart-detections-migration.md)-
-**Updated articles**
--- [Create Metric Alerts for Logs in Azure Monitor](alerts/alerts-metric-logs.md)-- [Troubleshoot log alerts in Azure Monitor](alerts/alerts-troubleshoot-log.md)
+- [IT Service Management Connector - Secure Webhook in Azure Monitor - Azure Configurations](alerts/itsm-connector-secure-webhook-connections-azure-configuration.md) - Added the workflow for ITSM management and removed all references to SCSM.
+- [Overview of Azure Monitor Alerts](alerts/alerts-overview.md) - Complete rewrite
+- [Resource Manager template samples for log query alerts](alerts/resource-manager-alerts-log.md) - Bicep samples for alerting have been added to the Resource Manager template samples articles.
+- [Supported resources for metric alerts in Azure Monitor](alerts/alerts-metric-near-real-time.md) - Added a newly supported resource type
### Application Insights
-**New articles**
--- [Azure AD authentication for Application Insights (Preview)](app/azure-ad-authentication.md)-- [Quickstart: Monitor an ASP.NET Core app with Azure Monitor Application Insights](app/dotnet-quickstart.md)-
-**Updated articles**
--- [Work Item Integration](app/work-item-integration.md)-- [Azure AD authentication for Application Insights (Preview)](app/azure-ad-authentication.md)-- [Release annotations for Application Insights](app/annotations.md)-- [Connection strings](app/sdk-connection-string.md)-- [Telemetry processors (preview) - Azure Monitor Application Insights for Java](app/java-standalone-telemetry-processors.md)-- [IP addresses used by Azure Monitor](app/ip-addresses.md)-- [Java codeless application monitoring Azure Monitor Application Insights](app/java-in-process-agent.md)-- [Adding the JVM arg - Azure Monitor Application Insights for Java](app/java-standalone-arguments.md)-- [Application Insights for ASP.NET Core applications](app/asp-net-core.md)-- [Telemetry processor examples - Azure Monitor Application Insights for Java](app/java-standalone-telemetry-processors-examples.md)-- [Application security detection pack (preview)](app/proactive-application-security-detection-pack.md)-- [Smart detection in Application Insights](app/proactive-diagnostics.md)-- [Abnormal rise in exception volume (preview)](app/proactive-exception-volume.md)-- [Smart detection - Performance Anomalies](app/proactive-performance-diagnostics.md)-- [Memory leak detection (preview)](app/proactive-potential-memory-leak.md)-- [Degradation in trace severity ratio (preview)](app/proactive-trace-severity.md)-
+- [Application Map in Azure Application Insights](app/app-map.md) - Application Maps Intelligent View feature
+- [Azure Application Insights for ASP.NET Core applications](app/asp-net-core.md) - telemetry.Flush() guidance is now available.
+- [Diagnose with Live Metrics Stream - Azure Application Insights](app/live-stream.md) - Updated information on using an unsecured control channel.
+- [Migrate an Azure Monitor Application Insights classic resource to a workspace-based resource](app/convert-classic-resource.md) - Schema change documentation is now available here.
+- [Profile production apps in Azure with Application Insights Profiler](profiler/profiler-overview.md) - Profiler documentation now has a new home in the table of contents.
+- All references to unsupported versions of .NET and .NET Core have been scrubbed from Application Insights product documentation. See [.NET and .NET Core Support Policy](https://dotnet.microsoft.com/platform/support/policy/dotnet-core)
+### Change Analysis
+
+- [Navigate to a change using custom filters in Change Analysis](change/change-analysis-custom-filters.md) - New article
+- [Pin and share a Change Analysis query to the Azure dashboard](change/change-analysis-query.md) - New article
+- [Use Change Analysis in Azure Monitor to find web-app issues](change/change-analysis.md) - Added details on enabling in-guest changes for web apps
### Containers
-**Updated articles**
--- [How to query logs from Container insights](containers/container-insights-log-query.md)-
-### Essentials
-
-**Updated articles**
--- [Supported categories for Azure Resource Logs](essentials/resource-logs-categories.md)-- [Resource Manager template samples for diagnostic settings in Azure Monitor](essentials/resource-manager-diagnostic-settings.md)--
+- [Configure ContainerLogv2 schema (preview) for Container Insights](containers/container-insights-logging-v2.md) - New article describing new schema for container logs
+- [Enable Container insights](containers/container-insights-onboard.md) - General rewrite to improve clarity
+- [Resource Manager template samples for Container insights](containers/resource-manager-container-insights.md) - Added Bicep examples
### Insights
-**Updated articles**
--- [Enable SQL Insights (preview)](insights/sql-insights-enable.md)-
+- [Troubleshoot SQL Insights (preview)](insights/sql-insights-troubleshoot.md) - Added known issue for OS computer name.
### Logs
-**Updated articles**
--- [Log Analytics tutorial](logs/log-analytics-tutorial.md)-- [Use Azure Private Link to securely connect networks to Azure Monitor](logs/private-link-security.md)-- [Azure Monitor Logs Dedicated Clusters](logs/logs-dedicated-clusters.md)-- [Monitor health of Log Analytics workspace in Azure Monitor](logs/monitor-workspace.md)
+- [Azure Monitor customer-managed key](logs/customer-managed-keys.md) - Updated limitations and constraints.
+- [Design a Log Analytics workspace architecture](logs/workspace-design.md) - Complete rewrite to better describe decision criteria and include Sentinel considerations
+- [Manage access to Log Analytics workspaces](logs/manage-access.md) - Consolidated and rewrote all content on configuring workspace access
+- [Restore logs in Azure Monitor (Preview)](logs/restore.md) - Documented new Log Analytics table management configuration UI, which lets you configure a table's log plan and archive and retention policies.
### Virtual Machines
-**New articles**
--- [Monitoring virtual machines with Azure Monitor - Alerts](vm/monitor-virtual-machine-alerts.md)-- [Monitoring virtual machines with Azure Monitor - Analyze monitoring data](vm/monitor-virtual-machine-analyze.md)-- [Monitor virtual machines with Azure Monitor - Configure monitoring](vm/monitor-virtual-machine-configure.md)-- [Monitor virtual machines with Azure Monitor - Security monitoring](vm/monitor-virtual-machine-security.md)-- [Monitoring virtual machines with Azure Monitor - Workloads](vm/monitor-virtual-machine-workloads.md)-- [Monitoring virtual machines with Azure Monitor](vm/monitor-virtual-machine.md)-
-**Updated articles**
--- [Troubleshoot VM insights guest health (preview)](vm/vminsights-health-troubleshoot.md)-- [Create interactive reports VM insights with workbooks](vm/vminsights-workbooks.md)
+- [Migrate from VM insights guest health (preview) to Azure Monitor log alerts](vm/vminsights-health-migrate.md) - New article describing process to replace VM guest health with alert rules
+- [VM insights guest health (preview)](vm/vminsights-health-overview.md) - Added deprecation statement
azure-netapp-files Azure Netapp Files Solution Architectures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-solution-architectures.md
na Previously updated : 04/26/2022 Last updated : 06/10/2022 # Solution architectures using Azure NetApp Files
This section provides references to SAP on Azure solutions.
* [Implementing Azure NetApp Files with Kerberos for SAP HANA](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/implementing-azure-netapp-files-with-kerberos/ba-p/3142010) * [Azure Application Consistent Snapshot tool (AzAcSnap)](azacsnap-introduction.md) * [SAP HANA Disaster Recovery with Azure NetApp Files](https://docs.netapp.com/us-en/netapp-solutions-sap/pdfs/sidebar/SAP_HANA_Disaster_Recovery_with_Azure_NetApp_Files.pdf)
+* [SAP HANA backup and recovery on Azure NetApp Files with SnapCenter Service](https://docs.netapp.com/us-en/netapp-solutions-sap/pdfs/sidebar/SAP_HANA_backup_and_recovery_on_Azure_NetApp_Files_with_SnapCenter_Service.pdf)
### SAP AnyDB
This section provides references to SAP on Azure solutions.
## Azure VMware Solutions
-* [Azure NetApp Files with Azure VMware Solution - Guest OS Mounts](../azure-vmware/netapp-files-with-azure-vmware-solution.md)
+* [Attach Azure NetApp Files datastores to Azure VMware Solution hosts](../azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md)
+* [Attach Azure NetApp Files to Azure VMware Solution VMs - Guest OS Mounts](../azure-vmware/netapp-files-with-azure-vmware-solution.md)
## Virtual Desktop Infrastructure solutions
azure-netapp-files Faq Application Resilience https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/faq-application-resilience.md
Previously updated : 10/11/2021 Last updated : 06/09/2022 # Application resilience FAQs for Azure NetApp Files
This article answers frequently asked questions (FAQs) about Azure NetApp Files
## What do you recommend for handling potential application disruptions due to storage service maintenance events?
-Azure NetApp Files might undergo occasional planned maintenance (for example, platform updates, service or software upgrades). From a file protocol (NFS/SMB) perspective, the maintenance operations are non-disruptive, as long as the application can handle the IO pauses that might briefly occur during these events. The I/O pauses are typically short, ranging from a few seconds up to 30 seconds. The NFS protocol is especially robust, and client-server file operations continue normally. Some applications might require tuning to handle IO pauses for as long as 30-45 seconds. As such, ensure that you are aware of the applicationΓÇÖs resiliency settings to cope with the storage service maintenance events. For human interactive applications leveraging the SMB protocol, the standard protocol settings are usually sufficient.
+Azure NetApp Files might undergo occasional planned maintenance (for example, platform updates, service or software upgrades). From a file protocol (NFS/SMB) perspective, the maintenance operations are non-disruptive, as long as the application can handle the I/O pauses that might briefly occur during these events. The I/O pauses are typically short, ranging from a few seconds up to 30 seconds. The NFS protocol is especially robust, and client-server file operations continue normally. Some applications might require tuning to handle I/O pauses for as long as 30-45 seconds. As such, ensure that you're aware of the application's resiliency settings to cope with the storage service maintenance events. For human interactive applications leveraging the SMB protocol, the standard protocol settings are usually sufficient.
## Do I need to take special precautions for SMB-based applications?
-Yes, certain SMB-based applications require SMB Transparent Failover. SMB Transparent Failover enables maintenance operations on the Azure NetApp Files service without interrupting connectivity to server applications storing and accessing data on SMB volumes. To support SMB Transparent Failover for specific applications, Azure NetApp Files now supports the [SMB Continuous Availability shares option](azure-netapp-files-create-volumes-smb.md#continuous-availability).
+Yes, certain SMB-based applications require SMB Transparent Failover. SMB Transparent Failover enables maintenance operations on the Azure NetApp Files service without interrupting connectivity to server applications storing and accessing data on SMB volumes. To support SMB Transparent Failover for specific applications, Azure NetApp Files now supports the [SMB Continuous Availability shares option](azure-netapp-files-create-volumes-smb.md#continuous-availability). Using SMB Continuous Availability is only supported for workloads on:
+* Citrix App Layering
+* [FSLogix user profile containers](../virtual-desktop/create-fslogix-profile-container.md)
+* Microsoft SQL Server (not Linux SQL Server)
-## I am running IBM MQ on Azure NetApp Files. What precautions can I take to avoid disruptions due to storage service maintenance events despite using the NFS protocol?
+## I'm running IBM MQ on Azure NetApp Files. What precautions can I take to avoid disruptions due to storage service maintenance events despite using the NFS protocol?
-If you are running the [IBM MQ application in a shared files configuration](https://www.ibm.com/docs/en/ibm-mq/9.2?topic=multiplatforms-sharing-mq-files), where the IBM MQ data and logs are stored on an Azure NetApp Files volume, the following considerations are recommended to improve resilience during storage service maintenance events:
+If you're running the [IBM MQ application in a shared files configuration](https://www.ibm.com/docs/en/ibm-mq/9.2?topic=multiplatforms-sharing-mq-files), where the IBM MQ data and logs are stored on an Azure NetApp Files volume, the following considerations are recommended to improve resilience during storage service maintenance events:
* You must use NFS v4.1 protocol only.
* For High Availability, you should use an [IBM MQ multi-instance configuration using shared NFS v4.1 volumes](https://www.ibm.com/docs/en/ibm-mq/9.2?topic=manager-create-multi-instance-queue-linux).
If you are running the [IBM MQ application in a shared files configuration](http
The scale-out architecture would be comprised of multiple IBM MQ multi-instance pairs deployed behind an Azure Load Balancer. Applications configured to communicate with IBM MQ would then be configured to communicate with the IBM MQ instances via Azure Load Balancer. For support related to IBM MQ on shared NFS volumes, you should obtain vendor support at IBM.
-## I am running Apache ActiveMQ with LevelDB or KahaDB on Azure NetApp Files. What precautions can I take to avoid disruptions due to storage service maintenance events despite using the *NFS* protocol?
+## I'm running Apache ActiveMQ with LevelDB or KahaDB on Azure NetApp Files. What precautions can I take to avoid disruptions due to storage service maintenance events despite using the *NFS* protocol?
>[!NOTE] > This section contains references to the terms *slave* and *master*, terms that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
-If you are running the Apache ActiveMQ, it is recommended to deploy [ActiveMQ High Availability with Pluggable Storage Lockers](https://www.openlogic.com/blog/pluggable-storage-lockers-activemq).
+If you're running Apache ActiveMQ, we recommend deploying [ActiveMQ High Availability with Pluggable Storage Lockers](https://www.openlogic.com/blog/pluggable-storage-lockers-activemq).
ActiveMQ high availability (HA) models ensure that a broker instance is always online and able to process message traffic. The two most common ActiveMQ HA models involve sharing a filesystem over a network. The purpose is to provide either LevelDB or KahaDB to the active and passive broker instances. These HA models require that an OS-level lock be obtained and maintained on a file in the LevelDB or KahaDB directories, called "lock". There are some problems with this ActiveMQ HA model. They can lead to a "no-master" situation, where the "slave" isn't aware that it can lock the file. They can also lead to a "master-master" configuration that results in index or journal corruption and ultimately message loss. Most of these problems stem from factors outside of ActiveMQ's control. For instance, a poorly optimized NFS client can cause locking data to become stale under load, leading to "no-master" downtime during failover. Because most problems with this HA solution stem from inaccurate OS-level file locking, the ActiveMQ community [introduced the concept of a pluggable storage locker](https://www.openlogic.com/blog/pluggable-storage-lockers-activemq) in version 5.7 of the broker. This approach allows a user to take advantage of a different means of the shared lock, using a row-level JDBC database lock as opposed to an OS-level filesystem lock. For support or consultancy on ActiveMQ HA architectures and deployments, you should [contact OpenLogic by Perforce](https://www.openlogic.com/contact-us).
-## I am running Apache ActiveMQ with LevelDB or KahaDB on Azure NetApp Files. What precautions can I take to avoid disruptions due to storage service maintenance events despites using the *SMB* protocol?
+## I'm running Apache ActiveMQ with LevelDB or KahaDB on Azure NetApp Files. What precautions can I take to avoid disruptions due to storage service maintenance events despite using the *SMB* protocol?
-The general industry recommendation is to [not run your KahaDB shared storage on CIFS/SMB](https://www.openlogic.com/blog/activemq-community-deprecates-leveldb-what-you-need-know). If you are having trouble maintaining accurate lock state, check out the JDBC Pluggable Storage Locker, which can provide a more reliable locking mechanism. For support or consultancy on ActiveMQ HA architectures and deployments, you should [contact OpenLogic by Perforce](https://www.openlogic.com/contact-us).
+The general industry recommendation is to [not run your KahaDB shared storage on CIFS/SMB](https://www.openlogic.com/blog/activemq-community-deprecates-leveldb-what-you-need-know). If you're having trouble maintaining accurate lock state, check out the JDBC Pluggable Storage Locker, which can provide a more reliable locking mechanism. For support or consultancy on ActiveMQ HA architectures and deployments, you should [contact OpenLogic by Perforce](https://www.openlogic.com/contact-us).
## Next steps
azure-resource-manager Deploy Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deploy-github-actions.md
description: In this quickstart, you learn how to deploy Bicep files by using Gi
Previously updated : 11/16/2021 Last updated : 05/19/2022
az group create -n exampleRG -l westus
## Generate deployment credentials
-Your GitHub Actions runs under an identity. Use the [az ad sp create-for-rbac](/cli/azure/ad/sp#az-ad-sp-create-for-rbac) command to create a [service principal](../../active-directory/develop/app-objects-and-service-principals.md#service-principal-object) for the identity.
+# [Service principal](#tab/userlevel)
+
+Your GitHub Actions run under an identity. Use the [az ad sp create-for-rbac](/cli/azure/ad/sp#az-ad-sp-create-for-rbac) command to create a [service principal](../../active-directory/develop/app-objects-and-service-principals.md#service-principal-object) for the identity.
Replace the placeholder `myApp` with the name of your application. Replace `{subscription-id}` with your subscription ID.
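The command then takes roughly the following shape (a hedged sketch: `exampleRG` is the resource group created earlier in this quickstart, and the `--sdk-auth` flag, which emits the JSON object used for the `AZURE_CREDENTIALS` secret, is deprecated in newer CLI versions):

```azurecli-interactive
# Create a service principal with Contributor access scoped to the resource group.
# --sdk-auth outputs the JSON object expected by the azure/login action.
az ad sp create-for-rbac --name myApp --role contributor --scopes /subscriptions/{subscription-id}/resourceGroups/exampleRG --sdk-auth
```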
The output is a JSON object with the role assignment credentials that provide ac
(...) } ```
+# [Open ID Connect](#tab/openid)
++
+OpenID Connect is an authentication method that uses short-lived tokens. Setting up [OpenID Connect with GitHub Actions](https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/about-security-hardening-with-openid-connect) is a more complex process that offers hardened security.
+
+1. If you do not have an existing application, register a [new Active Directory application and service principal that can access resources](../../active-directory/develop/howto-create-service-principal-portal.md). Create the Active Directory application.
+
+ ```azurecli-interactive
+ az ad app create --display-name myApp
+ ```
+
+ This command will output JSON with an `appId` that is your `client-id`. Save the value to use as the `AZURE_CLIENT_ID` GitHub secret later.
+
+ You'll use the `objectId` value when creating federated credentials with Graph API and reference it as the `APPLICATION-OBJECT-ID`.
+
+1. Create a service principal. Replace `$appId` with the `appId` from your JSON output.
+
+    This command generates JSON output with a different `objectId` that will be used in the next step. The new `objectId` is the `assignee-object-id`.
+
+ Copy the `appOwnerTenantId` to use as a GitHub secret for `AZURE_TENANT_ID` later.
+
+ ```azurecli-interactive
+ az ad sp create --id $appId
+ ```
+
+1. Create a new role assignment by subscription and object. By default, the role assignment will be tied to your default subscription. Replace `$subscriptionId` with your subscription ID, `$resourceGroupName` with your resource group name, and `$assigneeObjectId` with the generated `assignee-object-id`. Learn [how to manage Azure subscriptions with the Azure CLI](/cli/azure/manage-azure-subscriptions-azure-cli).
+
+ ```azurecli-interactive
+ az role assignment create --role contributor --subscription $subscriptionId --assignee-object-id $assigneeObjectId --assignee-principal-type ServicePrincipal --scopes /subscriptions/$subscriptionId/resourceGroups/$resourceGroupName/providers/Microsoft.Web/sites/
+ ```
+1. Run the following command to [create a new federated identity credential](/graph/api/application-post-federatedidentitycredentials?view=graph-rest-beta&preserve-view=true) for your Active Directory application.
+
+ * Replace `APPLICATION-OBJECT-ID` with the **objectId (generated while creating app)** for your Active Directory application.
+ * Set a value for `CREDENTIAL-NAME` to reference later.
+ * Set the `subject`. The value of this is defined by GitHub depending on your workflow:
+ * Jobs in your GitHub Actions environment: `repo:< Organization/Repository >:environment:< Name >`
+ * For Jobs not tied to an environment, include the ref path for branch/tag based on the ref path used for triggering the workflow: `repo:< Organization/Repository >:ref:< ref path>`. For example, `repo:n-username/ node_express:ref:refs/heads/my-branch` or `repo:n-username/ node_express:ref:refs/tags/my-tag`.
+ * For workflows triggered by a pull request event: `repo:< Organization/Repository >:pull_request`.
+
+ ```azurecli
+ az rest --method POST --uri 'https://graph.microsoft.com/beta/applications/<APPLICATION-OBJECT-ID>/federatedIdentityCredentials' --body '{"name":"<CREDENTIAL-NAME>","issuer":"https://token.actions.githubusercontent.com","subject":"repo:organization/repository:ref:refs/heads/main","description":"Testing","audiences":["api://AzureADTokenExchange"]}'
+ ```
+
+    To learn how to create an Active Directory application, service principal, and federated credentials in the Azure portal, see [Connect GitHub and Azure](/azure/developer/github/connect-from-azure#use-the-azure-login-action-with-openid-connect).
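Tying the steps above together, a hedged shell sketch for capturing the IDs you'll need (the `--query` field names are assumptions; older CLI versions return `objectId` where newer Microsoft Graph-based versions return `id`):

```azurecli-interactive
# Create the app and capture its client ID (appId).
appId=$(az ad app create --display-name myApp --query appId --output tsv)

# Capture the application object ID used for the federated credential call.
appObjectId=$(az ad app show --id $appId --query id --output tsv)

# Create the service principal and capture its object ID for the role assignment.
assigneeObjectId=$(az ad sp create --id $appId --query id --output tsv)
```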
+
+ ## Configure the GitHub secrets
+# [Service principal](#tab/userlevel)
+ Create secrets for your Azure credentials, resource group, and subscriptions. 1. In [GitHub](https://github.com/), navigate to your repository.
Create secrets for your Azure credentials, resource group, and subscriptions.
1. Create another secret named `AZURE_SUBSCRIPTION`. Add your subscription ID to the secret's value field (example: `90fd3f9d-4c61-432d-99ba-1273f236afa2`).
+# [OpenID Connect](#tab/openid)
+
+You need to provide your application's **Client ID**, **Tenant ID**, and **Subscription ID** to the login action. These values can either be provided directly in the workflow or can be stored in GitHub secrets and referenced in your workflow. Saving the values as GitHub secrets is the more secure option.
+
+1. Open your GitHub repository and go to **Settings**.
+
+1. Select **Settings > Secrets > New secret**.
+
+1. Create secrets for `AZURE_CLIENT_ID`, `AZURE_TENANT_ID`, and `AZURE_SUBSCRIPTION_ID`. Use these values from your Active Directory application for your GitHub secrets:
+
+ |GitHub Secret | Active Directory Application |
+ |||
+ |AZURE_CLIENT_ID | Application (client) ID |
+ |AZURE_TENANT_ID | Directory (tenant) ID |
+ |AZURE_SUBSCRIPTION_ID | Subscription ID |
+
+1. Save each secret by selecting **Add secret**.
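+
+ Alternatively, the GitHub CLI can create the same secrets from the command line. This sketch assumes you've authenticated with `gh auth login` and still have the shell variables captured in the earlier steps; `$subscriptionId` is the ID of the subscription you deploy to:
+
+ ```bash
+ # Create the three GitHub secrets from the captured values.
+ gh secret set AZURE_CLIENT_ID --body "$appId"
+ gh secret set AZURE_TENANT_ID --body "$tenantId"
+ gh secret set AZURE_SUBSCRIPTION_ID --body "$subscriptionId"
+ ```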
+++

## Add a Bicep file

Add a Bicep file to your GitHub repository. The following Bicep file creates a storage account:
To create a workflow, take the following steps:
1. Rename the workflow file if you prefer a name other than **main.yml**. For example: **deployBicepFile.yml**.

1. Replace the content of the yml file with the following code:
+ # [Service principal](#tab/userlevel)
+ ```yml
+ on: [push]
+ name: Azure ARM
To create a workflow, take the following steps:
parameters: storagePrefix=mystore
failOnStdErr: false
```
+
Replace `mystore` with your own storage account name prefix.

> [!NOTE]
To create a workflow, take the following steps:
- **name**: The name of the workflow.
- **on**: The name of the GitHub event that triggers the workflow. The workflow is triggered when there's a push event on the main branch.
+
+ # [OpenID Connect](#tab/openid)
+
+ ```yml
+ on: [push]
+ name: Azure ARM
+ jobs:
+ build-and-deploy:
+ runs-on: ubuntu-latest
+ steps:
+
+ # Checkout code
+ - uses: actions/checkout@main
+
+ # Log into Azure
+ - uses: azure/login@v1
+ with:
+ client-id: ${{ secrets.AZURE_CLIENT_ID }}
+ tenant-id: ${{ secrets.AZURE_TENANT_ID }}
+ subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
+
+ # Deploy Bicep file
+ - name: deploy
+ uses: azure/arm-deploy@v1
+ with:
+ subscriptionId: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
+ resourceGroupName: ${{ secrets.AZURE_RG }}
+ template: ./main.bicep
+ parameters: storagePrefix=mystore
+ failOnStdErr: false
+ ```
+
+
++

1. Select **Start commit**.

1. Select **Commit directly to the main branch**.
azure-signalr Signalr Howto Diagnostic Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-howto-diagnostic-logs.md
Provide:
5. [Optional] Repro code

> [!NOTE]
-> If you open issue in GitHub, keep your sensitive information (For example, resource ID, server/client logs) private, only send to members in Microsoft organization privately.
+> If you open an issue in GitHub, keep your sensitive information (for example, resource ID, server/client logs) private. Send it only to members of the Microsoft organization privately.
azure-vmware Enable Public Ip Nsx Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/enable-public-ip-nsx-edge.md
With this capability, you have the following features:
- DDoS Security protection against network traffic in and out of the Internet.
- HCX Migration support over the Public Internet.
+>[!TIP]
+>To enable this feature for your subscription, register the `PIPOnNSXEnabled` flag and follow these steps to [set up the preview feature in your Azure subscription](https://docs.microsoft.com/azure/azure-resource-manager/management/preview-features?tabs=azure-portal). A sketch of the registration commands follows this tip.
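+
+As a sketch, the registration might look like the following with the Azure CLI. The `Microsoft.AVS` namespace is an assumption based on the Azure VMware Solution resource provider; confirm the exact namespace in the linked preview-features article:
+
+```azurecli-interactive
+# Register the preview feature flag (the namespace here is an assumption).
+az feature register --namespace Microsoft.AVS --name PIPOnNSXEnabled
+
+# Re-register the resource provider so the registration propagates.
+az provider register --namespace Microsoft.AVS
+```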
+
## Reference architecture

The architecture shows Internet access to and from your Azure VMware Solution private cloud using a Public IP directly to the NSX Edge.

:::image type="content" source="media/public-ip-nsx-edge/architecture-internet-access-avs-public-ip.png" alt-text="Diagram that shows architecture of Internet access to and from your Azure VMware Solution Private Cloud using a Public IP directly to the NSX Edge." border="false" lightbox="media/public-ip-nsx-edge/architecture-internet-access-avs-public-ip.png":::
If **Match Internal Address** was specified, the destination would be the intern
For more information on the NSX-T Gateway Firewall, see the [NSX-T Gateway Firewall Administration Guide](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.1/administration/GUID-A52E1A6F-F27D-41D9-9493-E3A75EC35481.html). The Distributed Firewall could be used to filter traffic to VMs. This feature is outside the scope of this document. For more information, see the [NSX-T Distributed Firewall Administration Guide](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.1/administration/GUID-6AB240DB-949C-4E95-A9A7-4AC6EF5E3036.html).
-To enable this feature for your subscription, register the ```PIPOnNSXEnabled``` flag and follow these steps to [set up the preview feature in your Azure subscription](https://docs.microsoft.com/azure/azure-resource-manager/management/preview-features?tabs=azure-portal).
--

## Next steps

[Internet connectivity design considerations (Preview)](concepts-design-public-internet-access.md)
backup Backup Azure Linux App Consistent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-linux-app-consistent.md
Pre-scripts invoke native application APIs, which quiesce the IOs, and flush in-
- **VMSnapshotScriptPluginConfig.json**: Permission "600." For example, only the "root" user should have "read" and "write" permissions to this file, and no user should have "execute" permissions.
- - **Pre-script file**: Permission "700." For example, only the "root" user should have "read", "write", and "execute" permissions to this file.
+ - **Pre-script file**: Permission "700." For example, only the "root" user should have "read", "write", and "execute" permissions to this file. The file is expected to be a shell script, but theoretically it can internally spawn or refer to other scripts, such as a Python script.
- - **Post-script**: Permission "700." For example, only the "root" user should have "read", "write", and "execute" permissions to this file.
+ - **Post-script**: Permission "700." For example, only the "root" user should have "read", "write", and "execute" permissions to this file. The file is expected to be a shell script, but theoretically it can internally spawn or refer to other scripts, such as a Python script.

> [!IMPORTANT]
> The framework gives users a lot of power. Secure the framework, and ensure only the "root" user has access to critical JSON and script files.
backup Backup Azure Sql Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-sql-automation.md
Disable-AzRecoveryServicesBackupProtection -Item $bkpItem -VaultId $testVault.ID
If autoprotection was configured on an SQLInstance, you can disable it using the [Disable-AzRecoveryServicesBackupAutoProtection](/powershell/module/az.recoveryservices/disable-azrecoveryservicesbackupautoprotection) PowerShell cmdlet.
+Find the instances where auto-protection is enabled using the following PowerShell command.
+
+```azurepowershell
+Get-AzRecoveryServicesBackupProtectableItem -WorkloadType MSSQL -VaultId $testVault.ID | Where-Object {$_.IsAutoProtected -eq $true}
+```
+
+Then pick the relevant protectable item name and server name from the output and disable auto-protection for those instances.
+ ```powershell
+ $SQLInstance = Get-AzRecoveryServicesBackupProtectableItem -WorkloadType MSSQL -ItemType SQLInstance -VaultId $testVault.ID -Name "<Protectable Item name>" -ServerName "<Server Name>"
+ Disable-AzRecoveryServicesBackupAutoProtection -InputItem $SQLInstance -BackupManagementType AzureWorkload -WorkloadType MSSQL -VaultId $testVault.ID
+ ```
backup Backup Rm Template Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-rm-template-samples.md
Title: Azure Resource Manager templates
-description: Azure Resource Manager templates for use with Recovery Services vaults and Azure Backup features
+ Title: Azure Resource Manager and Bicep templates
+description: Azure Resource Manager and Bicep templates for use with Recovery Services vaults and Azure Backup features
Previously updated : 01/31/2019 Last updated : 06/10/2022 +++
-# Azure Resource Manager templates for Azure Backup
+# Azure Resource Manager and Bicep templates for Azure Backup
-The following table includes links to Azure Resource Manager templates for use with Recovery Services vaults and Azure Backup features. To learn about the JSON syntax and properties, see [Microsoft.RecoveryServices resource types](/azure/templates/microsoft.recoveryservices/allversions).
+The following table includes a link to a repository of Azure Resource Manager and Bicep templates for use with Recovery Services vaults and Azure Backup features. To learn about the JSON or Bicep syntax and properties, see [Microsoft.RecoveryServices resource types](/azure/templates/microsoft.recoveryservices/allversions).
| Template | Description |
|---|---|
backup Backup Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-support-matrix.md
Title: Azure Backup support matrix description: Provides a summary of support settings and limitations for the Azure Backup service. Previously updated : 05/05/2022 Last updated : 06/08/2022
Here's what's supported if you want to back up on-premises machines:
**Machine** | **What's backed up** | **Location** | **Features**
--- | --- | --- | ---
-**Direct backup of Windows machine with MARS agent** | Files, folders, system state | Back up to Recovery Services vault. | Back up three times a day<br/><br/> No app-aware backup<br/><br/> Restore file, folder, volume
+**Direct backup of Windows machine with MARS agent** | - Files, folders <br><br> - System state | Back up to Recovery Services vault. | - Back up three times a day <br><br> - Back up once a day <br><br> - No app-aware backup <br><br> - Restore file, folder, volume
**Direct backup of Linux machine with MARS agent** | Backup not supported
**Back up to DPM** | Files, folders, volumes, system state, app data | Back up to local DPM storage. DPM then backs up to vault. | App-aware snapshots<br/><br/> Full granularity for backup and recovery<br/><br/> Linux supported for VMs (Hyper-V/VMware)<br/><br/> Oracle not supported
**Back up to MABS** | Files, folders, volumes, system state, app data | Back up to MABS local storage. MABS then backs up to the vault. | App-aware snapshots<br/><br/> Full granularity for backup and recovery<br/><br/> Linux supported for VMs (Hyper-V/VMware)<br/><br/> Oracle not supported
Here's what's supported if you want to back up Azure VMs:
**Machine** | **What's backed up** | **Location** | **Features**
--- | --- | --- | ---
**Azure VM backup by using VM extension** | Entire VM | Back up to vault. | Extension installed when you enable backup for a VM.<br/><br/> Back up once a day.<br/><br/> App-aware backup for Windows VMs; file-consistent backup for Linux VMs. You can configure app-consistency for Linux machines by using custom scripts.<br/><br/> Restore VM or disk.<br/><br/>[Backup and restore of Active Directory domain controllers](active-directory-backup-restore.md) is supported.<br><br> Can't back up an Azure VM to an on-premises location.
-**Azure VM backup by using MARS agent** | Files, folders, system state | Back up to vault. | Back up three times a day.<br/><br/> If you want to back up specific files or folders rather than the entire VM, the MARS agent can run alongside the VM extension.
+**Azure VM backup by using MARS agent** | - Files, folders <br><br> - System state | Back up to vault. | - Back up three times a day. <br><br> - Back up once a day. <br/><br/> If you want to back up specific files or folders rather than the entire VM, the MARS agent can run alongside the VM extension.
**Azure VM with DPM** | Files, folders, volumes, system state, app data | Back up to local storage of Azure VM that's running DPM. DPM then backs up to vault. | App-aware snapshots.<br/><br/> Full granularity for backup and recovery.<br/><br/> Linux supported for VMs (Hyper-V/VMware).<br/><br/> Oracle not supported. **Azure VM with MABS** | Files, folders, volumes, system state, app data | Back up to local storage of Azure VM that's running MABS. MABS then backs up to the vault. | App-aware snapshots.<br/><br/> Full granularity for backup and recovery.<br/><br/> Linux supported for VMs (Hyper-V/VMware).<br/><br/> Oracle not supported.
backup Restore Sql Database Azure Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/restore-sql-database-azure-vm.md
Before you restore a database, note the following:
- If you have multiple instances running on a server, all the instances should be up and running. Otherwise the server won't appear in the list of destination servers for you to restore the database to. For more information, refer to [the troubleshooting steps](backup-sql-server-azure-troubleshoot.md#faulty-instance-in-a-vm-with-multiple-sql-server-instances).
- To restore a TDE-encrypted database to another SQL Server, you need to first [restore the certificate to the destination server](/sql/relational-databases/security/encryption/move-a-tde-protected-database-to-another-sql-server).
- [CDC](/sql/relational-databases/track-changes/enable-and-disable-change-data-capture-sql-server) enabled databases should be restored using the [Restore as files](#restore-as-files) option.
-- Before you restore the "master" database, start the SQL Server instance in single-user mode by using the startup option **-m AzureWorkloadBackup**.
- - The value for **-m** is the name of the client.
- - Only the specified client name can open the connection.
-- For all system databases (model, master, msdb), stop the SQL Server Agent service before you trigger the restore.
+- We strongly recommend that you restore the "master" database by using the [Restore as files](#restore-as-files) option, and then restore it [using T-SQL commands](/sql/relational-databases/backup-restore/restore-the-master-database-transact-sql).
+- For all system databases (model, msdb), stop the SQL Server Agent service before you trigger the restore.
- Close any applications that might try to connect to any of these databases.

## Restore a database
cdn Cdn App Dev Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-app-dev-net.md
You need Visual Studio 2015 to complete this tutorial. [Visual Studio Community
## Create your project and add NuGet packages

Now that we've created a resource group for our CDN profiles and given our Azure AD application permission to manage CDN profiles and endpoints within that group, we can start creating our application.
+> [!IMPORTANT]
+> The [Microsoft.IdentityModel.Clients.ActiveDirectory](https://www.nuget.org/packages/Microsoft.IdentityModel.Clients.ActiveDirectory) NuGet package and Azure AD Authentication Library (ADAL) have been deprecated. No new features have been added since June 30, 2020. We strongly encourage you to upgrade; see the [migration guide](/azure/active-directory/develop/msal-migration) for more details.
+
From within Visual Studio 2015, click **File**, **New**, **Project...** to open the new project dialog. Expand **Visual C#**, then select **Windows** in the pane on the left. Click **Console Application** in the center pane. Name your project, then click **OK**.

![New Project](./media/cdn-app-dev-net/cdn-new-project.png)
certification How To Test Pnp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/certification/how-to-test-pnp.md
To meet the certification requirements, your device must:
## Test with the Azure IoT Extension CLI
-The [Azure IoT CLI extension](/cli/azure/ext/azure-iot/iot/product?view=azure-cli-latest&preserve-view=true) lets you validate that the device implementation matches the model before you submit the device for certification through the Azure Certified Device portal.
+The [Azure IoT CLI extension](/cli/azure/iot/product?view=azure-cli-latest&preserve-view=true) lets you validate that the device implementation matches the model before you submit the device for certification through the Azure Certified Device portal.
The following steps show you how to prepare for and run the certification tests using the CLI:
chaos-studio Chaos Studio Tutorial Agent Based Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-tutorial-agent-based-cli.md
sudo apt-get update && sudo apt-get -y install unzip && sudo apt-get -y install
Or: ```bash
-sudo dnf -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm && sudo yum -y install stress-ng
+sudo dnf -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm && sudo yum -y install stress-ng
``` ### Enable chaos target and capabilities
chaos-studio Chaos Studio Tutorial Agent Based Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-tutorial-agent-based-portal.md
sudo apt-get update && sudo apt-get -y install unzip && sudo apt-get -y install
or ```bash
-sudo dnf -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm && sudo yum -y install stress-ng
+sudo dnf -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm && sudo yum -y install stress-ng
``` ### Enable chaos target, capabilities, and agent
cognitive-services Reference Markdown Format https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/QnAMaker/reference-markdown-format.md
A new line between 2 sentences.|`\n\n`|`How can I create a bot with \n\n QnA Mak
|Italics |`*text*`|`How do I create a bot with *QnA Maker*?`|![format with italics](./media/qnamaker-concepts-datasources/format-italics.png)|
|Strong (bold)|`**text**`|`How do I create a bot with **QnA Maker**?`|![format with strong marking for bold](./media/qnamaker-concepts-datasources/format-strong.png)|
|URL for link|`[text](https://www.my.com)`|`How do I create a bot with [QnA Maker](https://www.qnamaker.ai)?`|![format for URL (hyperlink)](./media/qnamaker-concepts-datasources/format-url.png)|
-|*URL for public image|`![text](https://www.my.com/image.png)`|`How can I create a bot with ![QnAMaker](https://review.docs.microsoft.com/azure/cognitive-services/qnamaker/media/qnamaker-how-to-key-management/qnamaker-resource-list.png)`|![format for public image URL ](./media/qnamaker-concepts-datasources/format-image-url.png)|
+|*URL for public image|`![text](https://www.my.com/image.png)`|`How can I create a bot with ![QnAMaker](https://review.docs.microsoft.com/azure/cognitive-services/qnamaker/media/qnamaker-how-to-key-management/qnamaker-resource-list.png)`|![format for public image URL](./media/qnamaker-concepts-datasources/format-image-url.png)|
|Strikethrough|`~~text~~`|`some ~~questoins~~ questions need to be asked`|![format for strikethrough](./media/qnamaker-concepts-datasources/format-strikethrough.png)|
|Bold and italics|`***text***`|`How can I create a ***QnA Maker*** bot?`|![format for bold and italics](./media/qnamaker-concepts-datasources/format-bold-italics.png)|
|Bold URL for link|`[**text**](https://www.my.com)`|`How do I create a bot with [**QnA Maker**](https://www.qnamaker.ai)?`|![format for bold URL](./media/qnamaker-concepts-datasources/format-bold-url.png)|
Additionally, CR LF(\r\n) are converted to \n in the KB. LF(\n) is kept as is. I
## Next steps
-Review batch testing [file formats](reference-tsv-format-batch-testing.md).
+Review batch testing [file formats](reference-tsv-format-batch-testing.md).
cognitive-services Get Started Speech Translation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/get-started-speech-translation.md
Title: Speech translation quickstart - Speech service
-description: In this quickstart, you translate speech to text. Learn about object construction, supported audio input formats, and configuration options.
+description: In this quickstart, you translate speech from one language to text in another language.
Previously updated : 01/08/2022 Last updated : 06/07/2022 - zone_pivot_groups: programming-languages-speech-services keywords: speech translation
keywords: speech translation
## Next steps

> [!div class="nextstepaction"]
-> [Learn about language identification](language-identification.md)
+> [Learn more about speech translation](how-to-translate-speech.md)
+
cognitive-services How To Pronunciation Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-pronunciation-assessment.md
Title: How to use pronunciation assessment
+ Title: Use pronunciation assessment
-description: The Speech SDK supports pronunciation assessment, which assesses the pronunciation quality of speech input, with indicators of accuracy, fluency, completeness, etc.
+description: Learn about pronunciation assessment features that are currently publicly available.
-+ Previously updated : 01/23/2022--
-zone_pivot_groups: programming-languages-speech-services-nomore-variant
Last updated : 05/31/2022+
+zone_pivot_groups: programming-languages-speech-sdk
-# Pronunciation assessment
+# Use pronunciation assessment
-Pronunciation assessment evaluates speech pronunciation and gives speakers feedback on the accuracy and fluency of spoken audio. With pronunciation assessment, language learners can practice, get instant feedback, and improve their pronunciation so that they can speak and present with confidence. Educators can use the capability to evaluate pronunciation of multiple speakers in real time. Pronunciation Assessment is announced generally available in US English, while [other languages](language-support.md#pronunciation-assessment) are available in preview. 
+In this article, you'll learn how to use pronunciation assessment through the Speech SDK.
-In this article, you'll learn how to set up `PronunciationAssessmentConfig` and retrieve the `PronunciationAssessmentResult` using the speech SDK.
+> [!NOTE]
+> Pronunciation assessment is not available with the Speech SDK for Go.
+
+You can get pronunciation assessment scores for:
+
+- Full text
+- Words
+- Syllable groups
+- Phonemes in SAPI or IPA format
+
+> [!NOTE]
+> For information about availability of pronunciation assessment, see [supported languages](language-support.md#pronunciation-assessment) and [available regions](regions.md#speech-to-text-pronunciation-assessment-text-to-speech-and-translation).
-## Pronunciation assessment with the Speech SDK
+## Configuration parameters
+
+This table lists some of the key configuration parameters for pronunciation assessment.
+
+| Parameter | Description |
+|--|-|
+| `ReferenceText` | The text that the pronunciation will be evaluated against. |
+| `GradingSystem` | The point system for score calibration. The `FivePoint` system gives a 0-5 floating point score, and `HundredMark` gives a 0-100 floating point score. |
+| `Granularity` | The evaluation granularity. Accepted values are `Phoneme`, which shows the score on the full-text, word, and phoneme levels; `Word`, which shows the score on the full-text and word levels; and `FullText`, which shows the score on the full-text level only. |
+| `EnableMiscue` | Enables miscue calculation when the pronounced words are compared to the reference text. If this value is `True`, the `ErrorType` result value can be set to `Omission` or `Insertion` based on the comparison. Accepted values are `False` and `True`. Default: `False`. |
-The following snippet illustrates how to create a `PronunciationAssessmentConfig`, then apply it to a `SpeechRecognizer`.
+You must create a `PronunciationAssessmentConfig` object with the reference text, grading system, and granularity. Enabling miscue and setting other configuration options are optional.
::: zone pivot="programming-language-csharp"

```csharp
var pronunciationAssessmentConfig = new PronunciationAssessmentConfig(
- "reference text", GradingSystem.HundredMark, Granularity.Phoneme);
+ referenceText: "good morning",
+ gradingSystem: GradingSystem.HundredMark,
+ granularity: Granularity.Phoneme,
+ enableMiscue: true);
+```
+
++
+```cpp
+auto pronunciationAssessmentConfig = PronunciationAssessmentConfig::CreateFromJson("{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\"}");
+```
+++
+```Java
+PronunciationAssessmentConfig pronunciationAssessmentConfig = PronunciationAssessmentConfig.fromJson("{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\"}");
+```
+++
+```Python
+pronunciation_assessment_config = speechsdk.PronunciationAssessmentConfig(json_string="{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\"}")
+```
+++
+```JavaScript
+var pronunciationAssessmentConfig = SpeechSDK.PronunciationAssessmentConfig.fromJSON("{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\"}");
+```
+++
+```ObjectiveC
+SPXPronunciationAssessmentConfiguration *pronunciationAssessmentConfig =
+[[SPXPronunciationAssessmentConfiguration alloc] init:@"good morning"
+ gradingSystem:SPXPronunciationAssessmentGradingSystem_HundredMark
+ granularity:SPXPronunciationAssessmentGranularity_Phoneme
+ enableMiscue:true];
+```
++++
+```swift
+var pronunciationAssessmentConfig: SPXPronunciationAssessmentConfiguration?
+do {
+ try pronunciationAssessmentConfig = SPXPronunciationAssessmentConfiguration.init(referenceText, gradingSystem: SPXPronunciationAssessmentGradingSystem.hundredMark, granularity: SPXPronunciationAssessmentGranularity.phoneme, enableMiscue: true)
+} catch {
+ print("error \(error) happened")
+ pronunciationAssessmentConfig = nil
+ return
+}
+```
+++++
+## Syllable groups
+
+For the [supported languages](language-support.md#pronunciation-assessment) in public preview, pronunciation assessment can provide syllable-level assessment results along with phonemes. Grouping by syllables is more legible and better aligned with speaking habits, as a word is typically pronounced syllable by syllable rather than phoneme by phoneme.
+
+The following table compares example phonemes with the corresponding syllables.
+
+| Sample word | Phonemes | Syllables |
+|--|-|-|
+|technological|teknələdʒɪkl|tek·nə·lɑ·dʒɪkl|
+|hello|hɛloʊ|hɛ·loʊ|
+|luck|lʌk|lʌk|
+|photosynthesis|foʊtəsɪnθəsɪs|foʊ·tə·sɪn·θə·sɪs|
+
+To request syllable-level results along with phonemes, set the granularity [configuration parameter](#configuration-parameters) to `Phoneme`.
+
+## Phoneme alphabet format
+
+The phoneme name is provided together with the score, to help identify which phonemes were pronounced accurately or inaccurately. For the [supported languages](language-support.md#pronunciation-assessment) in public preview, you can get the phoneme name in [SAPI](/previous-versions/windows/desktop/ee431828(v=vs.85)#american-english-phoneme-table) or [IPA](https://en.wikipedia.org/wiki/IPA) format.
+
+The following table compares example SAPI phonemes with the corresponding IPA phonemes.
+
+| Sample word | SAPI Phonemes | IPA phonemes |
+|--|-|-|
+|hello|h eh l ow|h ɛ l oʊ|
+|luck|l ah k|l ʌ k|
+|photosynthesis|f ow t ax s ih n th ax s ih s|f oʊ t ə s ɪ n θ ə s ɪ s|
+
+To request IPA phonemes, set the phoneme alphabet to `"IPA"`. If you don't specify the alphabet, the phonemes will be in SAPI format by default.
++
+```csharp
+pronunciationAssessmentConfig.PhonemeAlphabet = "IPA";
+```
+
++
+```cpp
+auto pronunciationAssessmentConfig = PronunciationAssessmentConfig::CreateFromJson("{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\",\"phonemeAlphabet\":\"IPA\"}");
+```
+
++
+```Java
+PronunciationAssessmentConfig pronunciationAssessmentConfig = PronunciationAssessmentConfig.fromJson("{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\",\"phonemeAlphabet\":\"IPA\"}");
+```
+++
+```Python
+pronunciation_assessment_config = speechsdk.PronunciationAssessmentConfig(json_string="{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\",\"phonemeAlphabet\":\"IPA\"}")
+```
+++
+```JavaScript
+var pronunciationAssessmentConfig = SpeechSDK.PronunciationAssessmentConfig.fromJSON("{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\",\"phonemeAlphabet\":\"IPA\"}");
+```
++
+
+```ObjectiveC
+pronunciationAssessmentConfig.phonemeAlphabet = @"IPA";
+```
++++
+```swift
+pronunciationAssessmentConfig?.phonemeAlphabet = "IPA"
+```
+++++
+## Spoken phoneme
+
+> [!NOTE]
+> The spoken phoneme feature of pronunciation assessment is only generally available for the `en-US` locale.
+
+With spoken phonemes, you can get confidence scores indicating how likely it is that the spoken phonemes matched the expected phonemes.
+
+For example, when you speak the word "hello", the expected IPA phonemes are "h ɛ l oʊ". The actual spoken phonemes could be "h ə l oʊ". In the following assessment result, the most likely spoken phoneme was `"ə"` instead of the expected phoneme `"ɛ"`. The expected phoneme `"ɛ"` only received a confidence score of 47. Other potential matches received confidence scores of 52, 17, and 2.
+
+```json
+{
+ "Phoneme": "ɛ",
+ "PronunciationAssessment": {
+ "AccuracyScore": 47.0,
+ "NBestPhonemes": [
+ {
+ "Phoneme": "ə",
+ "Score": 100.0
+ },
+ {
+ "Phoneme": "l",
+ "Score": 52.0
+ },
+ {
+ "Phoneme": "ɛ",
+ "Score": 47.0
+ },
+ {
+ "Phoneme": "h",
+ "Score": 17.0
+ },
+ {
+ "Phoneme": "æ",
+ "Score": 2.0
+ }
+ ]
+ },
+ "Offset": 11100000,
+ "Duration": 500000
+},
+```
+
+To specify whether to get confidence scores for potential spoken phonemes, and how many, set the `NBestPhonemeCount` parameter to an integer value such as `5`.
+
+
+```csharp
+pronunciationAssessmentConfig.NBestPhonemeCount = 5;
+```
+
++
+```cpp
+auto pronunciationAssessmentConfig = PronunciationAssessmentConfig::CreateFromJson("{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\",\"phonemeAlphabet\":\"IPA\",\"nBestPhonemeCount\":5}");
+```
+
++
+```Java
+PronunciationAssessmentConfig pronunciationAssessmentConfig = PronunciationAssessmentConfig.fromJson("{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\",\"phonemeAlphabet\":\"IPA\",\"nBestPhonemeCount\":5}");
+```
+
++
+```Python
+pronunciation_assessment_config = speechsdk.PronunciationAssessmentConfig(json_string="{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\",\"phonemeAlphabet\":\"IPA\",\"nBestPhonemeCount\":5}")
+```
+++
+```JavaScript
+var pronunciationAssessmentConfig = SpeechSDK.PronunciationAssessmentConfig.fromJSON("{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\",\"phonemeAlphabet\":\"IPA\",\"nBestPhonemeCount\":5}");
+```
++
+
+
+```ObjectiveC
+pronunciationAssessmentConfig.nbestPhonemeCount = 5;
+```
+++
-using (var recognizer = new SpeechRecognizer(
+```swift
+pronunciationAssessmentConfig?.nbestPhonemeCount = 5
+```
++++
+## Get pronunciation assessment results
+
+When speech is recognized, you can request the pronunciation assessment results as SDK objects or a JSON string.
++
+```csharp
+using (var speechRecognizer = new SpeechRecognizer(
speechConfig, audioConfig)) {
- // apply the pronunciation assessment configuration to the speech recognizer
- pronunciationAssessmentConfig.ApplyTo(recognizer);
- var speechRecognitionResult = await recognizer.RecognizeOnceAsync();
+ pronunciationAssessmentConfig.ApplyTo(speechRecognizer);
+ var speechRecognitionResult = await speechRecognizer.RecognizeOnceAsync();
+
+ // The pronunciation assessment result as a Speech SDK object
var pronunciationAssessmentResult = PronunciationAssessmentResult.FromResult(speechRecognitionResult);
- var pronunciationScore = pronunciationAssessmentResult.PronunciationScore;
+
+ // The pronunciation assessment result as a JSON string
+ var pronunciationAssessmentResultJson = speechRecognitionResult.Properties.GetProperty(PropertyId.SpeechServiceResponse_JsonResult);
}
```
+
::: zone pivot="programming-language-cpp"
-```cpp
-auto pronunciationAssessmentConfig =
- PronunciationAssessmentConfig::Create("reference text",
- PronunciationAssessmentGradingSystem::HundredMark,
- PronunciationAssessmentGranularity::Phoneme);
+Word, syllable, and phoneme results aren't available via SDK objects with the Speech SDK for C++; they're only available in the JSON string.
-auto recognizer = SpeechRecognizer::FromConfig(
+```cpp
+auto speechRecognizer = SpeechRecognizer::FromConfig(
speechConfig, audioConfig);
-// apply the pronunciation assessment configuration to the speech recognizer
-pronunciationAssessmentConfig->ApplyTo(recognizer);
-speechRecognitionResult = recognizer->RecognizeOnceAsync().get();
+pronunciationAssessmentConfig->ApplyTo(speechRecognizer);
+speechRecognitionResult = speechRecognizer->RecognizeOnceAsync().get();
+
+// The pronunciation assessment result as a Speech SDK object
auto pronunciationAssessmentResult = PronunciationAssessmentResult::FromResult(speechRecognitionResult);
-auto pronunciationScore = pronunciationAssessmentResult->PronunciationScore;
-```
+// The pronunciation assessment result as a JSON string
+auto pronunciationAssessmentResultJson = speechRecognitionResult->Properties.GetProperty(PropertyId::SpeechServiceResponse_JsonResult);
+```
+
::: zone pivot="programming-language-java"
+For Android application development, the word, syllable, and phoneme results are available via SDK objects with the Speech SDK for Java, and also in the JSON string. For Java Runtime (JRE) application development, the word, syllable, and phoneme results are only available in the JSON string.
-```java
-PronunciationAssessmentConfig pronunciationAssessmentConfig =
- new PronunciationAssessmentConfig("reference text",
- PronunciationAssessmentGradingSystem.HundredMark,
- PronunciationAssessmentGranularity.Phoneme);
-
-SpeechRecognizer recognizer = new SpeechRecognizer(
+```Java
+SpeechRecognizer speechRecognizer = new SpeechRecognizer(
speechConfig, audioConfig);
-// apply the pronunciation assessment configuration to the speech recognizer
-pronunciationAssessmentConfig.applyTo(recognizer);
-Future<SpeechRecognitionResult> future = recognizer.recognizeOnceAsync();
-SpeechRecognitionResult result = future.get(30, TimeUnit.SECONDS);
+pronunciationAssessmentConfig.applyTo(speechRecognizer);
+Future<SpeechRecognitionResult> future = speechRecognizer.recognizeOnceAsync();
+SpeechRecognitionResult speechRecognitionResult = future.get(30, TimeUnit.SECONDS);
+
+// The pronunciation assessment result as a Speech SDK object
PronunciationAssessmentResult pronunciationAssessmentResult =
- PronunciationAssessmentResult.fromResult(result);
-Double pronunciationScore = pronunciationAssessmentResult.getPronunciationScore();
+ PronunciationAssessmentResult.fromResult(speechRecognitionResult);
+
+// The pronunciation assessment result as a JSON string
+String pronunciationAssessmentResultJson = speechRecognitionResult.getProperties().getProperty(PropertyId.SpeechServiceResponse_JsonResult);
speechRecognizer.close();
speechConfig.close();
audioConfig.close();
pronunciationAssessmentConfig.close();
-result.close();
+speechRecognitionResult.close();
``` +++
+```JavaScript
+var speechRecognizer = SpeechSDK.SpeechRecognizer.FromConfig(speechConfig, audioConfig);
+
+pronunciationAssessmentConfig.applyTo(speechRecognizer);
+
+speechRecognizer.recognizeOnceAsync((speechRecognitionResult) => {
+ // The pronunciation assessment result as a Speech SDK object
+ var pronunciationAssessmentResult = SpeechSDK.PronunciationAssessmentResult.fromResult(speechRecognitionResult);
+
+ // The pronunciation assessment result as a JSON string
+ var pronunciationAssessmentResultJson = speechRecognitionResult.properties.getProperty(SpeechSDK.PropertyId.SpeechServiceResponse_JsonResult);
+},
+{});
++
+```
+
::: zone pivot="programming-language-python"

```Python
-pronunciation_assessment_config = \
- speechsdk.PronunciationAssessmentConfig(reference_text='reference text',
- grading_system=speechsdk.PronunciationAssessmentGradingSystem.HundredMark,
- granularity=speechsdk.PronunciationAssessmentGranularity.Phoneme)
speech_recognizer = speechsdk.SpeechRecognizer(
    speech_config=speech_config,
    audio_config=audio_config)
-# apply the pronunciation assessment configuration to the speech recognizer
pronunciation_assessment_config.apply_to(speech_recognizer)
-result = speech_recognizer.recognize_once()
-pronunciation_assessment_result = speechsdk.PronunciationAssessmentResult(result)
-pronunciation_score = pronunciation_assessment_result.pronunciation_score
-```
-
+speech_recognition_result = speech_recognizer.recognize_once()
-
-```Javascript
-var pronunciationAssessmentConfig = new SpeechSDK.PronunciationAssessmentConfig("reference text",
- PronunciationAssessmentGradingSystem.HundredMark,
- PronunciationAssessmentGranularity.Word, true);
-var speechRecognizer = SpeechSDK.SpeechRecognizer.FromConfig(speechConfig, audioConfig);
-// apply the pronunciation assessment configuration to the speech recognizer
-pronunciationAssessmentConfig.applyTo(speechRecognizer);
+# The pronunciation assessment result as a Speech SDK object
+pronunciation_assessment_result = speechsdk.PronunciationAssessmentResult(speech_recognition_result)
-speechRecognizer.recognizeOnceAsync((result: SpeechSDK.SpeechRecognitionResult) => {
- var pronunciationAssessmentResult = SpeechSDK.PronunciationAssessmentResult.fromResult(result);
- var pronunciationScore = pronunciationAssessmentResult.pronunciationScore;
- var wordLevelResult = pronunciationAssessmentResult.detailResult.Words;
-},
-{});
+# The pronunciation assessment result as a JSON string
+pronunciation_assessment_result_json = speech_recognition_result.properties.get(speechsdk.PropertyId.SpeechServiceResponse_JsonResult)
``` -
-```Objective-C
-SPXPronunciationAssessmentConfiguration* pronunciationAssessmentConfig =
- [[SPXPronunciationAssessmentConfiguration alloc]init:@"reference text"
- gradingSystem:SPXPronunciationAssessmentGradingSystem_HundredMark
- granularity:SPXPronunciationAssessmentGranularity_Phoneme];
+
+```ObjectiveC
SPXSpeechRecognizer* speechRecognizer =
    [[SPXSpeechRecognizer alloc] initWithSpeechConfiguration:speechConfig audioConfiguration:audioConfig];
-// apply the pronunciation assessment configuration to the speech recognizer
[pronunciationAssessmentConfig applyToRecognizer:speechRecognizer];
-SPXSpeechRecognitionResult *result = [speechRecognizer recognizeOnce];
-SPXPronunciationAssessmentResult* pronunciationAssessmentResult = [[SPXPronunciationAssessmentResult alloc] init:result];
-double pronunciationScore = pronunciationAssessmentResult.pronunciationScore;
+SPXSpeechRecognitionResult *speechRecognitionResult = [speechRecognizer recognizeOnce];
+
+// The pronunciation assessment result as a Speech SDK object
+SPXPronunciationAssessmentResult* pronunciationAssessmentResult = [[SPXPronunciationAssessmentResult alloc] init:speechRecognitionResult];
+
+// The pronunciation assessment result as a JSON string
+NSString* pronunciationAssessmentResultJson = [speechRecognitionResult.properties getPropertyByName:SPXSpeechServiceResponseJsonResult];
```
-## Configuration parameters
+
+```swift
+let speechRecognizer = try! SPXSpeechRecognizer(speechConfiguration: speechConfig, audioConfiguration: audioConfig)
+
+try! pronunciationAssessmentConfig?.apply(to: speechRecognizer)
-This table lists the configuration parameters for pronunciation assessment.
+let speechRecognitionResult = try? speechRecognizer.recognizeOnce()
-| Parameter | Description | Required? |
-|--|-||
-| `ReferenceText` | The text that the pronunciation will be evaluated against. | Required |
-| `GradingSystem` | The point system for score calibration. The `FivePoint` system gives a 0-5 floating point score, and `HundredMark` gives a 0-100 floating point score. Default: `FivePoint`. | Optional |
-| `Granularity` | The evaluation granularity. Accepted values are `Phoneme`, which shows the score on the full text, word and phoneme level, `Syllable`, which shows the score on the full text, word and syllable level, `Word`, which shows the score on the full text and word level, `FullText`, which shows the score on the full text level only. Default: `Phoneme`. | Optional |
-| `EnableMiscue` | Enables miscue calculation when the pronounced words are compared to the reference text. If this value is `True`, the `ErrorType` result value can be set to `Omission` or `Insertion` based on the comparison. Accepted values are `False` and `True`. Default: `False`. | Optional |
-| `ScenarioId` | A GUID indicating a customized point system. | Optional |
+// The pronunciation assessment result as a Speech SDK object
+let pronunciationAssessmentResult = SPXPronunciationAssessmentResult(speechRecognitionResult!)
-## Result parameters
+// The pronunciation assessment result as a JSON string
+let pronunciationAssessmentResultJson = speechRecognitionResult!.properties?.getPropertyBy(SPXPropertyId.speechServiceResponseJsonResult)
+```
+
-This table lists the result parameters of pronunciation assessment.
++
+### Result parameters
+
+This table lists some of the key pronunciation assessment results.
| Parameter | Description |
|--|-|
This table lists the result parameters of pronunciation assessment.
| `PronScore` | Overall score indicating the pronunciation quality of the given speech. `PronScore` is aggregated from `AccuracyScore`, `FluencyScore`, and `CompletenessScore` with weighting. |
| `ErrorType` | This value indicates whether a word is omitted, inserted, or mispronounced, compared to the `ReferenceText`. Possible values are `None`, `Omission`, `Insertion`, and `Mispronunciation`. |
-### Sample responses
+### JSON result example
-A typical pronunciation assessment result in JSON:
+Pronunciation assessment results for the spoken word "hello" are shown as a JSON string in the following example. Here's what you should know:
+- The phoneme [alphabet](#phoneme-alphabet-format) is IPA.
+- The [syllables](#syllable-groups) are returned alongside phonemes for the same word.
+- You can use the `Offset` and `Duration` values to align syllables with their corresponding phonemes. For example, the starting offset (11700000) of the second syllable ("loʊ") aligns with the third phoneme ("l").
+- There are five `NBestPhonemes` corresponding to the number of [spoken phonemes](#spoken-phoneme) requested.
+- Within `Phonemes`, the most likely [spoken phoneme](#spoken-phoneme) was `"ə"` instead of the expected phoneme `"ɛ"`. The expected phoneme `"ɛ"` received a confidence score of only 47. Other potential matches received confidence scores of 52, 17, and 2.
```json
{
- "RecognitionStatus": "Success",
- "Offset": "400000",
- "Duration": "11000000",
- "NBest": [
- {
- "Confidence": "0.87",
- "Lexical": "good morning",
- "ITN" : "good morning",
- "MaskedITN" : "good morning",
- "Display" : "Good morning.",
- "PronunciationAssessment" : {
- "PronScore" : 84.4,
- "AccuracyScore" : 100.0,
- "FluencyScore" : 74.0,
- "CompletenessScore" : 100.0,
- },
- "Words": [
- {
- "Word" : "good",
- "Offset" : 500000,
- "Duration" : 2700000,
- "PronunciationAssessment": {
- "AccuracyScore" : 100.0,
- "ErrorType" : "None"
- },
- "Syllables" : [
- {
- "Syllable" : "ɡʊd",
- "Offset" : 500000,
- "Duration" : 2700000,
- "PronunciationAssessment" : {
- "AccuracyScore": 100.0
- }
- }],
- "Phonemes": [
- {
- "Phoneme" : "ɡ",
- "Offset" : 500000,
- "Duration": 1200000,
- "PronunciationAssessment": {
- "AccuracyScore": 100.0
- }
- },
- {
- "Phoneme" : "ʊ",
- "Offset" : 1800000,
- "Duration": 500000,
- "PronunciationAssessment": {
- "AccuracyScore": 100.0
- }
- },
- {
- "Phoneme" : "d",
- "Offset" : 2400000,
- "Duration": 800000,
- "PronunciationAssessment": {
- "AccuracyScore": 100.0
- }
- }]
- },
- {
- "Word" : "morning",
- "Offset" : 3300000,
- "Duration" : 5500000,
- "PronunciationAssessment": {
- "AccuracyScore" : 100.0,
- "ErrorType" : "None"
- },
- "Syllables": [
- {
- "Syllable" : "mɔr",
- "Offset" : 3300000,
- "Duration": 2300000,
- "PronunciationAssessment": {
- "AccuracyScore": 100.0
- }
- },
- {
- "Syllable" : "nɪŋ",
- "Offset" : 5700000,
- "Duration": 3100000,
- "PronunciationAssessment": {
- "AccuracyScore": 100.0
- }
- }],
- "Phonemes": [
- ... // omitted phonemes
- ]
- }]
- }]
+ "Id": "bbb42ea51bdb46d19a1d685e635fe173",
+ "RecognitionStatus": 0,
+ "Offset": 7500000,
+ "Duration": 13800000,
+ "DisplayText": "Hello.",
+ "SNR": 34.879055,
+ "NBest": [
+ {
+ "Confidence": 0.975003,
+ "Lexical": "hello",
+ "ITN": "hello",
+ "MaskedITN": "hello",
+ "Display": "Hello.",
+ "PronunciationAssessment": {
+ "AccuracyScore": 100,
+ "FluencyScore": 100,
+ "CompletenessScore": 100,
+ "PronScore": 100
+ },
+ "Words": [
+ {
+ "Word": "hello",
+ "Offset": 7500000,
+ "Duration": 13800000,
+ "PronunciationAssessment": {
+ "AccuracyScore": 99.0,
+ "ErrorType": "None"
+ },
+ "Syllables": [
+ {
+ "Syllable": "hɛ",
+ "PronunciationAssessment": {
+ "AccuracyScore": 91.0
+ },
+ "Offset": 7500000,
+ "Duration": 4100000
+ },
+ {
+ "Syllable": "loʊ",
+ "PronunciationAssessment": {
+ "AccuracyScore": 100.0
+ },
+ "Offset": 11700000,
+ "Duration": 9600000
+ }
+ ],
+ "Phonemes": [
+ {
+ "Phoneme": "h",
+ "PronunciationAssessment": {
+ "AccuracyScore": 98.0,
+ "NBestPhonemes": [
+ {
+ "Phoneme": "h",
+ "Score": 100.0
+ },
+ {
+ "Phoneme": "oʊ",
+ "Score": 52.0
+ },
+ {
+ "Phoneme": "ə",
+ "Score": 35.0
+ },
+ {
+ "Phoneme": "k",
+ "Score": 23.0
+ },
+ {
+ "Phoneme": "æ",
+ "Score": 20.0
+ }
+ ]
+ },
+ "Offset": 7500000,
+ "Duration": 3500000
+ },
+ {
+ "Phoneme": "ɛ",
+ "PronunciationAssessment": {
+ "AccuracyScore": 47.0,
+ "NBestPhonemes": [
+ {
+ "Phoneme": "ə",
+ "Score": 100.0
+ },
+ {
+ "Phoneme": "l",
+ "Score": 52.0
+ },
+ {
+ "Phoneme": "ɛ",
+ "Score": 47.0
+ },
+ {
+ "Phoneme": "h",
+ "Score": 17.0
+ },
+ {
+ "Phoneme": "æ",
+ "Score": 2.0
+ }
+ ]
+ },
+ "Offset": 11100000,
+ "Duration": 500000
+ },
+ {
+ "Phoneme": "l",
+ "PronunciationAssessment": {
+ "AccuracyScore": 100.0,
+ "NBestPhonemes": [
+ {
+ "Phoneme": "l",
+ "Score": 100.0
+ },
+ {
+ "Phoneme": "oʊ",
+ "Score": 46.0
+ },
+ {
+ "Phoneme": "ə",
+ "Score": 5.0
+ },
+ {
+ "Phoneme": "ɛ",
+ "Score": 3.0
+ },
+ {
+ "Phoneme": "u",
+ "Score": 1.0
+ }
+ ]
+ },
+ "Offset": 11700000,
+ "Duration": 1100000
+ },
+ {
+ "Phoneme": "oʊ",
+ "PronunciationAssessment": {
+ "AccuracyScore": 100.0,
+ "NBestPhonemes": [
+ {
+ "Phoneme": "oʊ",
+ "Score": 100.0
+ },
+ {
+ "Phoneme": "d",
+ "Score": 29.0
+ },
+ {
+ "Phoneme": "t",
+ "Score": 24.0
+ },
+ {
+ "Phoneme": "n",
+ "Score": 22.0
+ },
+ {
+ "Phoneme": "l",
+ "Score": 18.0
+ }
+ ]
+ },
+ "Offset": 12900000,
+ "Duration": 8400000
+ }
+ ]
+ }
+ ]
+ }
+ ]
}
```

## Next steps
-* Learn more about released [use cases](https://techcommunity.microsoft.com/t5/azure-ai-blog/speech-service-update-pronunciation-assessment-is-generally/ba-p/2505501)
-
-* Try out the [pronunciation assessment demo](https://github.com/Azure-Samples/Cognitive-Speech-TTS/tree/master/PronunciationAssessment/BrowserJS) and watch the [video tutorial](https://www.youtube.com/watch?v=zFlwm7N4Awc) of pronunciation assessment.
-
-* See the [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/csharp/sharedcontent/console/speech_recognition_samples.cs) on GitHub for pronunciation assessment.
-
-* See the [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/cpp/windows/console/samples/speech_recognition_samples.cpp) on GitHub for pronunciation assessment.
-
-* See the [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/java/jre/console/src/com/microsoft/cognitiveservices/speech/samples/console/SpeechRecognitionSamples.java) on GitHub for pronunciation assessment.
-
-* See the [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/python/console/speech_sample.py) on GitHub for pronunciation assessment.
-
-* See the [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/objective-c/ios/speech-samples/speech-samples/ViewController.m) on GitHub for pronunciation assessment.
-
+- Try out [pronunciation assessment in Speech Studio](pronunciation-assessment-tool.md)
+- Try out the [pronunciation assessment demo](https://github.com/Azure-Samples/Cognitive-Speech-TTS/tree/master/PronunciationAssessment/BrowserJS) and watch the [video tutorial](https://www.youtube.com/watch?v=zFlwm7N4Awc) of pronunciation assessment.
cognitive-services How To Translate Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-translate-speech.md
+
+ Title: "How to translate speech - Speech service"
+
+description: Learn how to translate speech from one language to text in another language, including object construction and supported audio input formats.
++++++ Last updated : 06/08/2022+
+zone_pivot_groups: programming-languages-speech-services
++
+# How to recognize and translate speech
+++++++++++
+## Next steps
+
+* [Try the speech to text quickstart](get-started-speech-to-text.md)
+* [Try the speech translation quickstart](get-started-speech-translation.md)
+* [Improve recognition accuracy with custom speech](custom-speech-overview.md)
+
cognitive-services Pronunciation Assessment Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/pronunciation-assessment-tool.md
+
+ Title: How to use pronunciation assessment in Speech Studio
+
+description: The pronunciation assessment tool in Speech Studio gives you feedback on the accuracy and fluency of your speech, no coding required.
++++++ Last updated : 06/08/2022+++
+# Pronunciation assessment in Speech Studio
+
+Pronunciation assessment provides subjective and objective feedback to language learners. Practicing pronunciation and getting timely feedback are essential for improving language skills. Assessments driven by experienced teachers can take a lot of time and effort, which makes high-quality assessment expensive for learners. Pronunciation assessment can help make language assessment more engaging and accessible to learners of all backgrounds.
+
+Pronunciation assessment provides various assessment results in different granularities, from individual phonemes to the entire text input.
+- At the full-text level, pronunciation assessment offers additional Fluency and Completeness scores: Fluency indicates how closely the speech matches a native speaker's use of silent breaks between words, and Completeness indicates how many of the words in the reference text are actually pronounced in the speech. An overall score aggregated from Accuracy, Fluency, and Completeness is then given to indicate the overall pronunciation quality of the given speech.
+- At the word level, pronunciation assessment can automatically detect miscues and provide accuracy scores simultaneously, which gives more detailed information about omissions, repetitions, insertions, and mispronunciations in the given speech.
+- At the phoneme level, pronunciation assessment provides accuracy scores for each phoneme, helping learners to better understand the pronunciation details of their speech.
+
+This article describes how to use the pronunciation assessment tool through the [Speech Studio](https://speech.microsoft.com). You can get immediate feedback on the accuracy and fluency of your speech without writing any code. For information about how to integrate pronunciation assessment in your speech applications, see [How to use pronunciation assessment](how-to-pronunciation-assessment.md).
+
+## Try out pronunciation assessment
+
+You can explore and try out pronunciation assessment even without signing in.
+
+> [!TIP]
+> To assess more than 5 seconds of speech with your own script, sign in with an Azure account and use your Speech or Cognitive Services resource.
+
+Follow these steps to assess your pronunciation of the reference text:
+
+1. Go to **Pronunciation Assessment** in the [Speech Studio](https://aka.ms/speechstudio/pronunciationassessment).
+
+1. Choose a supported [language](language-support.md#pronunciation-assessment) for which you want to evaluate pronunciation.
+
+1. Choose from the provisioned text samples, or under the **Enter your own script** label, enter your own reference text.
+
+ When reading the text, you should be close to the microphone to make sure the recorded voice isn't too low.
+
+ :::image type="content" source="media/pronunciation-assessment/pa-record.png" alt-text="Screenshot of where to record audio with a microphone.":::
+
+ Alternatively, you can upload recorded audio for pronunciation assessment. Once it's successfully uploaded, the audio is automatically evaluated by the system, as shown in the following screenshot.
+
+ :::image type="content" source="media/pronunciation-assessment/pa-upload.png" alt-text="Screenshot of uploading recorded audio to be assessed.":::
++
+## Pronunciation assessment results
+
+Once you've recorded the reference text or uploaded recorded audio, the **Assessment result** is output. The result includes your spoken audio and feedback on its accuracy and fluency, produced by comparing a machine-generated transcript of the input audio with the reference text. You can listen to your spoken audio and download it if necessary.
+
+You can also check the pronunciation assessment result in JSON. The word-level, syllable-level, and phoneme-level accuracy scores are included in the JSON file.
+
+### Overall scores
+
+Pronunciation assessment evaluates three aspects of pronunciation: accuracy, fluency, and completeness. At the bottom of **Assessment result**, you can see **Pronunciation score**, **Accuracy score**, **Fluency score**, and **Completeness score**. The **Pronunciation score** is an overall score indicating the pronunciation quality of the given speech. This overall score is aggregated from the **Accuracy score**, **Fluency score**, and **Completeness score** with weighting.
++
+### Scores within words
+
+### [Display](#tab/display)
+
+The complete transcription is shown in the **Display** window. If a word is omitted, inserted, or mispronounced compared to the reference text, the word will be highlighted according to the error type. While hovering over each word, you can see accuracy scores for the whole word or specific phonemes.
++
+### [JSON](#tab/json)
+
+The complete transcription is shown in the `text` attribute. You can see accuracy scores for the whole word, syllables, and specific phonemes. You can get the same results using the Speech SDK. For information, see [How to use Pronunciation Assessment](how-to-pronunciation-assessment.md).
+
+```json
+{
+ "text": "Today was a beautiful day. We had a great time taking a long long walk in the morning. The countryside was in full bloom, yet the air was crisp and cold towards end of the day clouds came in forecasting much needed rain.",
+ "duration": 156100000,
+ "offset": 800000,
+ "json": {
+ "Id": "f583d7588c89425d8fce76686c11ed12",
+ "RecognitionStatus": 0,
+ "Offset": 800000,
+ "Duration": 156100000,
+ "DisplayText": "Today was a beautiful day. We had a great time taking a long long walk in the morning. The countryside was in full bloom, yet the air was crisp and cold towards end of the day clouds came in forecasting much needed rain.",
+ "SNR": 40.47014,
+ "NBest": [
+ {
+ "Confidence": 0.97532314,
+ "Lexical": "today was a beautiful day we had a great time taking a long long walk in the morning the countryside was in full bloom yet the air was crisp and cold towards end of the day clouds came in forecasting much needed rain",
+ "ITN": "today was a beautiful day we had a great time taking a long long walk in the morning the countryside was in full bloom yet the air was crisp and cold towards end of the day clouds came in forecasting much needed rain",
+ "MaskedITN": "today was a beautiful day we had a great time taking a long long walk in the morning the countryside was in full bloom yet the air was crisp and cold towards end of the day clouds came in forecasting much needed rain",
+ "Display": "Today was a beautiful day. We had a great time taking a long long walk in the morning. The countryside was in full bloom, yet the air was crisp and cold towards end of the day clouds came in forecasting much needed rain.",
+ "PronunciationAssessment": {
+ "AccuracyScore": 92,
+ "FluencyScore": 81,
+ "CompletenessScore": 93,
+ "PronScore": 85.6
+ },
+ "Words": [
+ // Words preceding "countryside" are omitted for brevity...
+ {
+ "Word": "countryside",
+ "Offset": 66200000,
+ "Duration": 7900000,
+ "PronunciationAssessment": {
+ "AccuracyScore": 30,
+ "ErrorType": "Mispronunciation"
+ },
+ "Syllables": [
+ {
+ "Syllable": "kahn",
+ "PronunciationAssessment": {
+ "AccuracyScore": 3
+ },
+ "Offset": 66200000,
+ "Duration": 2700000
+ },
+ {
+ "Syllable": "triy",
+ "PronunciationAssessment": {
+ "AccuracyScore": 19
+ },
+ "Offset": 69000000,
+ "Duration": 1100000
+ },
+ {
+ "Syllable": "sayd",
+ "PronunciationAssessment": {
+ "AccuracyScore": 51
+ },
+ "Offset": 70200000,
+ "Duration": 3900000
+ }
+ ],
+ "Phonemes": [
+ {
+ "Phoneme": "k",
+ "PronunciationAssessment": {
+ "AccuracyScore": 0
+ },
+ "Offset": 66200000,
+ "Duration": 900000
+ },
+ {
+ "Phoneme": "ah",
+ "PronunciationAssessment": {
+ "AccuracyScore": 0
+ },
+ "Offset": 67200000,
+ "Duration": 1000000
+ },
+ {
+ "Phoneme": "n",
+ "PronunciationAssessment": {
+ "AccuracyScore": 11
+ },
+ "Offset": 68300000,
+ "Duration": 600000
+ },
+ {
+ "Phoneme": "t",
+ "PronunciationAssessment": {
+ "AccuracyScore": 16
+ },
+ "Offset": 69000000,
+ "Duration": 300000
+ },
+ {
+ "Phoneme": "r",
+ "PronunciationAssessment": {
+ "AccuracyScore": 27
+ },
+ "Offset": 69400000,
+ "Duration": 300000
+ },
+ {
+ "Phoneme": "iy",
+ "PronunciationAssessment": {
+ "AccuracyScore": 15
+ },
+ "Offset": 69800000,
+ "Duration": 300000
+ },
+ {
+ "Phoneme": "s",
+ "PronunciationAssessment": {
+ "AccuracyScore": 26
+ },
+ "Offset": 70200000,
+ "Duration": 1700000
+ },
+ {
+ "Phoneme": "ay",
+ "PronunciationAssessment": {
+ "AccuracyScore": 56
+ },
+ "Offset": 72000000,
+ "Duration": 1300000
+ },
+ {
+ "Phoneme": "d",
+ "PronunciationAssessment": {
+ "AccuracyScore": 100
+ },
+ "Offset": 73400000,
+ "Duration": 700000
+ }
+ ]
+ },
+ // Words following "countryside" are omitted for brevity...
+ ]
+ }
+ ]
+ }
+}
+```
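+
+As noted above, you can retrieve the same scores programmatically with the Speech SDK. The following is a minimal C# sketch; the key, region, audio file name, and reference text are placeholders, and the grading system and granularity shown are options of the SDK's pronunciation assessment configuration.
+
+```csharp
+using System;
+using System.Threading.Tasks;
+using Microsoft.CognitiveServices.Speech;
+using Microsoft.CognitiveServices.Speech.Audio;
+
+class Program
+{
+    static async Task Main()
+    {
+        // Placeholders: substitute your own Speech resource key, region, and audio file.
+        var speechConfig = SpeechConfig.FromSubscription("<your-key>", "<your-region>");
+        using var audioConfig = AudioConfig.FromWavFileInput("myRecording.wav");
+
+        var pronunciationConfig = new PronunciationAssessmentConfig(
+            referenceText: "Today was a beautiful day.",
+            gradingSystem: GradingSystem.HundredMark,
+            granularity: Granularity.Phoneme);
+
+        using var recognizer = new SpeechRecognizer(speechConfig, audioConfig);
+        pronunciationConfig.ApplyTo(recognizer);
+
+        var result = await recognizer.RecognizeOnceAsync();
+        var assessment = PronunciationAssessmentResult.FromResult(result);
+
+        // The same overall scores that appear in the JSON payload above.
+        Console.WriteLine($"Accuracy: {assessment.AccuracyScore}");
+        Console.WriteLine($"Fluency: {assessment.FluencyScore}");
+        Console.WriteLine($"Completeness: {assessment.CompletenessScore}");
+        Console.WriteLine($"Pronunciation: {assessment.PronunciationScore}");
+    }
+}
+```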
+++++
+## Next steps
+
+- Use [pronunciation assessment with the Speech SDK](how-to-pronunciation-assessment.md)
+- Read the blog about [use cases](https://techcommunity.microsoft.com/t5/azure-ai-blog/speech-service-update-pronunciation-assessment-is-generally/ba-p/2505501)
cognitive-services Speech Translation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-translation.md
keywords: speech translation
# What is speech translation?
-In this article, you learn about the benefits and capabilities of the speech translation service, which enables real-time, multi-language speech-to-speech and speech-to-text translation of audio streams. By using the Speech SDK, you can give your applications, tools, and devices access to source transcriptions and translation outputs for the provided audio. Interim transcription and translation results are returned as speech is detected, and the final results can be converted into synthesized speech.
+In this article, you learn about the benefits and capabilities of the speech translation service, which enables real-time, multi-language speech-to-speech and speech-to-text translation of audio streams.
-For a list of languages that the Speech Translation API supports, see the "Speech translation" section of [Language and voice support for the Speech service](language-support.md#speech-translation).
+By using the Speech SDK or Speech CLI, you can give your applications, tools, and devices access to source transcriptions and translation outputs for the provided audio. Interim transcription and translation results are returned as speech is detected, and the final results can be converted into synthesized speech.
+
+For a list of languages supported for speech translation, see [Language and voice support](language-support.md#speech-translation).
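+
+To make the flow concrete, here's a minimal C# sketch using the Speech SDK's translation recognizer; the key and region are placeholders, and microphone input is assumed.
+
+```csharp
+using System;
+using System.Threading.Tasks;
+using Microsoft.CognitiveServices.Speech;
+using Microsoft.CognitiveServices.Speech.Translation;
+
+class Program
+{
+    static async Task Main()
+    {
+        // Placeholders: substitute your own Speech resource key and region.
+        var config = SpeechTranslationConfig.FromSubscription("<your-key>", "<your-region>");
+        config.SpeechRecognitionLanguage = "en-US";
+        config.AddTargetLanguage("de");
+
+        // Uses the default microphone; file and stream input are also supported.
+        using var recognizer = new TranslationRecognizer(config);
+        var result = await recognizer.RecognizeOnceAsync();
+
+        if (result.Reason == ResultReason.TranslatedSpeech)
+        {
+            Console.WriteLine($"Recognized: {result.Text}");
+            Console.WriteLine($"Translated (de): {result.Translations["de"]}");
+        }
+    }
+}
+```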
## Core features
For a list of languages that the Speech Translation API supports, see the "Speec
* Support for translation to multiple target languages.
* Interim recognition and translation results.
-## Before you begin
-
-As your first step, see [Get started with speech translation](get-started-speech-translation.md). The speech translation service is available via the [Speech SDK](speech-sdk.md) and the [Speech CLI](spx-overview.md).
+## Get started
-## Sample code
+As your first step, try the [Speech translation quickstart](get-started-speech-translation.md). The speech translation service is available via the [Speech SDK](speech-sdk.md) and the [Speech CLI](spx-overview.md).
You'll find [Speech SDK speech-to-text and translation samples](https://github.com/Azure-Samples/cognitive-services-speech-sdk) on GitHub. These samples cover common scenarios, such as reading audio from a file or stream, continuous and single-shot recognition and translation, and working with custom models.
-## Migration guides
+## Migration guide
If your applications, tools, or products are using the [Translator Speech API](./how-to-migrate-from-translator-speech-api.md), see [Migrate from the Translator Speech API to Speech service](how-to-migrate-from-translator-speech-api.md).
-## Reference docs
-
-* [Speech SDK](./speech-sdk.md)
-* [REST API: Speech-to-text](rest-speech-to-text.md)
-* [REST API: Text-to-speech](rest-text-to-speech.md)
-* [REST API: Batch transcription and customization](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0)
- ## Next steps
-* Complete the [speech translation quickstart](get-started-speech-translation.md)
-* Get the [Speech SDK](speech-sdk.md)
+* Try the [speech translation quickstart](get-started-speech-translation.md)
+* Install the [Speech SDK](speech-sdk.md)
+* Install the [Speech CLI](spx-overview.md)
cognitive-services Data Filtering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/data-filtering.md
Last updated 08/17/2020 + #Customer intent: As a Custom Translator, I want to understand how data is filtered before training a model.
When you submit documents to be used for training a custom system, the documents
If your document isn't in XLIFF, TMX, or ALIGN format, Custom Translator aligns the sentences of your source and target documents to each other, sentence by sentence. Custom Translator doesn't perform document alignment; it follows your naming of the documents to find the matching document of the other language. Within the document, Custom Translator tries to find the corresponding sentence in the other language. It uses document markup like embedded HTML tags to help with the alignment.
-If you see a large discrepancy between the number of sentences in the source and target side documents, your document may not have been parallel in the first place, or for other reasons couldn't be aligned. The document pairs with a large difference (>10%) of sentences on each side warrant a second look to make sure they're indeed parallel. Custom Translator shows a warning next to the document if the sentence count differs suspiciously.
+If you see a large discrepancy between the number of sentences in the source and target documents, your documents may not be parallel. Document pairs with a large difference (more than 10 percent) in sentence count on each side warrant a second look to make sure they're indeed parallel. Custom Translator shows a warning next to the document if the sentence count differs suspiciously.
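+
+As a rough illustration of this kind of check, the sketch below flags a document pair whose sentence counts differ by more than 10 percent. The counts are placeholder inputs; the service's actual alignment heuristics are more involved than this.
+
+```csharp
+using System;
+
+// Flags a document pair for review when sentence counts differ by more than 10%.
+static bool WarrantsSecondLook(int sourceSentences, int targetSentences)
+{
+    int larger = Math.Max(sourceSentences, targetSentences);
+    int smaller = Math.Min(sourceSentences, targetSentences);
+    if (larger == 0) return false;
+    return (larger - smaller) / (double)larger > 0.10;
+}
+
+Console.WriteLine(WarrantsSecondLook(1000, 850)); // True: 15% discrepancy
+Console.WriteLine(WarrantsSecondLook(1000, 960)); // False: 4% discrepancy
+```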
## Deduplication
cognitive-services Document Formats Naming Convention https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/document-formats-naming-convention.md
Last updated 12/06/2021 + #Customer intent: As a Custom Translator user, I want to understand how to format and name my documents.
cognitive-services Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/faq.md
Last updated 08/17/2020 + #Customer intent: As a Custom Translator user, I want to review frequently asked questions.
There are restrictions and limits with respect to file size, model training, and
## When should I request deployment for a translation system that has been trained? It may take several trainings to create the optimal translation system for your project. You may want to try using more training data or more carefully filtered data, if the BLEU score and/ or the test results aren't satisfactory. You should
-be strict and careful in designing your tuning set and your test set, to be
-fully representative of the terminology and style of material you want to
+be strict and careful in designing your tuning set and your test set. Make certain your sets
+fully represent the terminology and style of material you want to
translate. You can be more liberal in composing your training data, and experiment with different options. Request a system deployment when you're
-satisfied with the translations in your system test results, have no more data to add to the training to
-improve your trained system, and you want to access the trained model via APIs.
+satisfied with the translations in your system test results and have no more data to add to
+improve your trained system.
## How many trained systems can be deployed in a project?
cognitive-services How To Create Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/how-to-create-project.md
Last updated 12/06/2021 + #Customer intent: As a Custom Translator user, I want to understand how to create project, so that I can build and manage a project.
cognitive-services How To Manage Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/how-to-manage-settings.md
Last updated 12/06/2021 + #Customer intent: As a Custom Translator user, I want to understand how to manage settings, so that I can create workspace, share workspace, and manage key in Custom Translator.
cognitive-services How To Search Edit Delete Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/how-to-search-edit-delete-projects.md
Last updated 12/06/2021 + #Customer intent: As a Custom Translator user, I want to understand how to search, edit, and delete projects, so that I can manage my projects efficiently. # Search, edit, and delete projects
cognitive-services How To Train Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/how-to-train-model.md
Last updated 12/06/2021 + #Customer intent: As a Custom Translator user, I want to understand how to train, so that I can start building my custom translation model.
cognitive-services How To Upload Document https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/how-to-upload-document.md
Last updated 12/06/2021 + #Customer intent: As a Custom Translator user, I want to know how to upload documents, so that I can start uploading my documents to train my model.
In upload history page you can view history of all document uploads details like
![Upload history tab](media/how-to/how-to-upload-history-1.png) 2. This page shows the status of all of your past uploads. It displays
- uploads from most recent to least recent. For each upload, it shows the document name, upload status, upload date, number of files uploaded, type of file uploaded, and language pair of the file.
+ uploads from most recent to least recent. For each upload, it shows the document name, upload status, upload date, number of files uploaded, type of file uploaded, and language pairs.
![Upload history page](media/how-to/how-to-document-history-2.png)
cognitive-services How To View Document Details https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/how-to-view-document-details.md
Last updated 12/06/2021 + #Customer intent: As a Custom Translator user, I want to understand how to view document details, so that I can review the list of extracted sentences in a document.
cognitive-services How To View Model Details https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/how-to-view-model-details.md
Last updated 12/06/2021 + #Customer intent: As a Custom Translator user, I want to understand how to view the model details, so that I can review details of each translation model.
-# View model details
+# View the model details
The Models tab under a project lists all the models trained for that project.
For each model in the project, these details are displayed.
>[!Note] >To compare consecutive trainings for the same systems, it is important to keep the tuning set and testing set constant.
-## View model training details
+## View the model training details
When your training is complete, you can review details about the training from the details page. Select a project, locate and select the models tab, and choose a model.
cognitive-services How To View System Test Results https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/how-to-view-system-test-results.md
Last updated 12/06/2021 + #Customer intent: As a Custom Translator user, I want to understand how to view system test results, so that I can review test results and analyze my training.
cognitive-services Quickstart Build Deploy Custom Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/quickstart-build-deploy-custom-model.md
Last updated 04/26/2022 -+ #Customer intent: As a user, I want to understand how to use Custom Translator so that I can build, deploy, and use a custom model for translation. # Quickstart: Build, deploy, and use a custom model for translation
cognitive-services Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/release-notes.md
Last updated 05/03/2021 + # Custom Translator release notes
cognitive-services Sentence Alignment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/sentence-alignment.md
Last updated 04/19/2021 + #Customer intent: As a Custom Translator user, I want to know how sentence alignment works, so that I can have better understanding of underlying process of sentence extraction, pairing, filtering, aligning.
cognitive-services Training And Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/training-and-model.md
Title: "Legacy: What are trainings and modeling? - Custom Translator"
+ Title: "Legacy: What are training and modeling? - Custom Translator"
description: A model is the system that provides translation for a specific language pair. The outcome of a successful training is a model. To train a model, three mutually exclusive data sets are required: a training dataset, a tuning dataset, and a testing dataset.
Last updated 12/06/2021 + #Customer intent: As a Custom Translator user, I want to understand the concept of a model and training, so that I can efficiently use training, tuning, and testing datasets that help me build a translation model.
cognitive-services Unsupported Language Deployments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/unsupported-language-deployments.md
Last updated 04/24/2019 + # Unsupported language deployments
cognitive-services What Is Bleu Score https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/what-is-bleu-score.md
Last updated 08/17/2020 + #Customer intent: As a Custom Translator user, I want to understand how BLEU score works so that I understand system test outcome better. # What is a BLEU score?
-[BLEU (Bilingual Evaluation Understudy)](https://en.wikipedia.org/wiki/BLEU) is a measurement of the differences between an automatic translation and
-one or more human-created reference translations of the same source sentence.
+[BLEU (Bilingual Evaluation Understudy)](https://en.wikipedia.org/wiki/BLEU) is a measurement of the difference between an automatic translation and human-created reference translations of the same source sentence.
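+
+To make the scoring idea concrete, here's a compact, unsmoothed sketch of the standard BLEU computation: modified n-gram precision (candidate counts clipped to the reference), a geometric mean over n-gram orders 1 through 4, and a brevity penalty. It's an illustration of the metric, not the exact implementation Custom Translator uses.
+
+```csharp
+using System;
+using System.Collections.Generic;
+using System.Linq;
+
+static class Bleu
+{
+    public static double Score(string candidate, string reference, int maxN = 4)
+    {
+        string[] cand = candidate.ToLowerInvariant().Split(' ');
+        string[] refTok = reference.ToLowerInvariant().Split(' ');
+
+        // Geometric mean of modified 1- to 4-gram precisions (uniform weights).
+        double logSum = 0;
+        for (int n = 1; n <= maxN; n++)
+        {
+            double p = ModifiedPrecision(cand, refTok, n);
+            if (p == 0) return 0; // unsmoothed: any zero precision zeroes the score
+            logSum += Math.Log(p) / maxN;
+        }
+
+        // Brevity penalty punishes candidates shorter than the reference.
+        double bp = cand.Length >= refTok.Length
+            ? 1.0
+            : Math.Exp(1.0 - (double)refTok.Length / cand.Length);
+
+        return bp * Math.Exp(logSum); // 0 (no overlap) to 1 (exact match)
+    }
+
+    // Candidate n-gram counts are clipped to their counts in the reference.
+    static double ModifiedPrecision(string[] cand, string[] reference, int n)
+    {
+        var candCounts = CountNgrams(cand, n);
+        var refCounts = CountNgrams(reference, n);
+        if (candCounts.Count == 0) return 0;
+
+        double clipped = candCounts.Sum(kv =>
+            Math.Min(kv.Value, refCounts.GetValueOrDefault(kv.Key)));
+        return clipped / candCounts.Sum(kv => kv.Value);
+    }
+
+    static Dictionary<string, int> CountNgrams(string[] tokens, int n)
+    {
+        var counts = new Dictionary<string, int>();
+        for (int i = 0; i + n <= tokens.Length; i++)
+        {
+            string gram = string.Join(" ", tokens.Skip(i).Take(n));
+            counts[gram] = counts.GetValueOrDefault(gram) + 1;
+        }
+        return counts;
+    }
+}
+```
+
+With a single short sentence pair, an unsmoothed score is often 0 because some 4-gram precision is 0; in practice BLEU is computed over a whole test corpus, and published scores are often scaled to 0 to 100.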
## Scoring process
cognitive-services What Is Dictionary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/what-is-dictionary.md
Last updated 12/06/2021 + #Customer intent: As a Custom Translator, I want to understand how to use a dictionary to build a custom translation model.
You can train a model using only dictionary data. To do so, select only the dict
- Dictionaries aren't a substitute for training a model using training data. We recommend letting the system learn from your training data for better results. However, when sentences or compound nouns must be rendered as-is, use a dictionary. - The phrase dictionary should be used sparingly. When a phrase within a sentence is replaced, the context within that sentence is lost or limited for translating the rest of the sentence. The result is that while the phrase or word within the sentence will translate according to the provided dictionary, the overall translation quality of the sentence will often suffer. - The phrase dictionary works well for compound nouns like product names ("Microsoft SQL Server"), proper names ("City of Hamburg"), or features of the product ("pivot table"). It doesn't work equally well for verbs or adjectives because those words are typically highly inflected in the source or in the target language. Best practice is to avoid phrase dictionary entries for anything but compound nouns.-- When using a phrase dictionary, capitalization and punctuation are important. Dictionary entries will only match words and phrases in the input sentence that use exactly the same capitalization and punctuation as specified in the source dictionary file. Also the translations will reflect the capitalization and punctuation provided in the target dictionary file. For example, if you trained an English to Spanish system that uses a phrase dictionary that specifies "US" in the source file, and "EE.UU." in the target file. When you request translation of a sentence that includes the word "us" (not capitalized), it will NOT return a match from the dictionary. However, if you request translation of a sentence that contains the word "US" (capitalized), it will match the dictionary and the translation will contain "EE.UU." The capitalization and punctuation in the translation may be different than specified in the dictionary target file, and may be different from the capitalization and punctuation in the source. It follows the rules of the target language.-- When using a sentence dictionary, the end of sentence punctuation is ignored. For example, if your source dictionary contains "this sentence ends with punctuation!", then any translation requests containing "this sentence ends with punctuation" would match.
+- If you're using a phrase dictionary, capitalization and punctuation are important. Dictionary entries will only match words and phrases in the input sentence that use exactly the same capitalization and punctuation as specified in the source dictionary file. Also, the translations will reflect the capitalization and punctuation provided in the target dictionary file. For example, suppose you trained an English-to-Spanish system using a phrase dictionary that specifies "US" in the source file and "EE.UU." in the target file. When you request translation of a sentence that includes the word "us" (not capitalized), the dictionary match will NOT be returned. However, if you request translation of a sentence that contains the word "US" (capitalized), it will match the dictionary and the translation will contain "EE.UU." The capitalization and punctuation in the translation may be different than specified in the dictionary target file, and may be different from the capitalization and punctuation in the source. It follows the rules of the target language.
+- If you're using a sentence dictionary, the end of sentence punctuation is ignored. For example, if your source dictionary contains "this sentence ends with punctuation!", then any translation requests containing "this sentence ends with punctuation" would match.
+- If a word appears more than once in a dictionary file, the system will always use the last entry provided. Thus, your dictionary shouldn't contain multiple translations of the same word.

## Next steps
cognitive-services Workspace And Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/workspace-and-project.md
Last updated 08/17/2020 + #Customer intent: As a Custom Translator user, I want to concept of a project, so that I can use it efficiently. # What is a Custom Translator workspace?
necessary.
The project label is used as part of the CategoryID. If the project label is left unset or is set identically across projects, then projects with the same category and *different* language pairs will share the same CategoryID. This approach is
-advantageous because it allows you or your customer to switch between
-languages when using the Text Translator API without worrying about a CategoryID that is unique to each project.
+advantageous because it allows you to switch between languages when using the Translator API without worrying about a CategoryID that is unique to each project.
For example, if I wanted to enable translations in the Technology domain from English to French and from French to English, I would create two
for both English and French translations without having to modify my CategoryID.
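
To see where the CategoryID is used at request time, here's a hedged C# sketch that passes it to the Translator v3 `translate` endpoint through the `category` query parameter; the key, region, and CategoryID values are placeholders.

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class Program
{
    static async Task Main()
    {
        // Placeholders: substitute your own key, resource region, and the CategoryID
        // shown for your project in Custom Translator.
        const string key = "<your-translator-key>";
        const string region = "<your-resource-region>";
        const string categoryId = "<your-category-id>";

        string route = $"/translate?api-version=3.0&from=en&to=fr&category={categoryId}";
        string body = "[{\"Text\": \"pivot table\"}]";

        using var client = new HttpClient();
        using var request = new HttpRequestMessage(HttpMethod.Post,
            "https://api.cognitive.microsofttranslator.com" + route);
        request.Headers.Add("Ocp-Apim-Subscription-Key", key);
        request.Headers.Add("Ocp-Apim-Subscription-Region", region);
        request.Content = new StringContent(body, Encoding.UTF8, "application/json");

        HttpResponseMessage response = await client.SendAsync(request);
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}
```

Because the same CategoryID is shared across the language pairs in this example, only the `from` and `to` parameters change when switching direction.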
If you're a language service provider and want to serve multiple customers with different models that retain the same category and
-language pair, then using a project label to differentiate between customers
-would be a wise decision.
+language pair, use a project label to differentiate between customers.
## Next steps
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/quickstart.md
Previously updated : 04/05/2022 Last updated : 06/07/2022 zone_pivot_groups: usage-custom-language-features
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/entity-linking/quickstart.md
Previously updated : 11/02/2021 Last updated : 06/07/2022 ms.devlang: csharp, java, javascript, python
If you want to clean up and remove a Cognitive Services subscription, you can de
* [Portal](../../cognitive-services-apis-create-account.md#clean-up-resources) * [Azure CLI](../../cognitive-services-apis-create-account-cli.md#clean-up-resources)
+> [!div class="nextstepaction"]
+> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=REST API&Pillar=Language&Product=Entity-linking&Page=quickstart&Section=Clean-up-resources" target="_target">I ran into an issue</a>
+ ## Next steps * [Entity Linking overview](overview.md)
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/key-phrase-extraction/quickstart.md
Previously updated : 11/02/2021 Last updated : 06/07/2022 ms.devlang: csharp, java, javascript, python
If you want to clean up and remove a Cognitive Services subscription, you can de
* [Portal](../../cognitive-services-apis-create-account.md#clean-up-resources) * [Azure CLI](../../cognitive-services-apis-create-account-cli.md#clean-up-resources)
+> [!div class="nextstepaction"]
+> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=REST API&Pillar=Language&Product=Key-phrase-extraction&Page=quickstart&Section=Clean-up-resources" target="_target">I ran into an issue</a>
+ ## Next steps * [Key phrase extraction overview](overview.md)
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/language-detection/quickstart.md
Previously updated : 11/02/2021 Last updated : 06/07/2022 ms.devlang: csharp, java, javascript, python
If you want to clean up and remove a Cognitive Services subscription, you can de
* [Portal](../../cognitive-services-apis-create-account.md#clean-up-resources) * [Azure CLI](../../cognitive-services-apis-create-account-cli.md#clean-up-resources)
+> [!div class="nextstepaction"]
+> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=REST API&Pillar=Language&Product=Language-detection&Page=quickstart&Section=Clean-up-resources" target="_target">I ran into an issue</a>
+ ## Next steps * [Language detection overview](overview.md)
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/named-entity-recognition/quickstart.md
Previously updated : 02/02/2022 Last updated : 06/06/2022 ms.devlang: csharp, java, javascript, python
If you want to clean up and remove a Cognitive Services subscription, you can de
* [Portal](../../cognitive-services-apis-create-account.md#clean-up-resources) * [Azure CLI](../../cognitive-services-apis-create-account-cli.md#clean-up-resources)
+> [!div class="nextstepaction"]
+> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=JAVA&Pillar=Language&Product=Named-entity-recognition&Page=quickstart&Section=Clean-up-resources" target="_target">I ran into an issue</a>
+ ## Next steps * [NER overview](overview.md)
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/orchestration-workflow/quickstart.md
Previously updated : 01/27/2022 Last updated : 06/06/2022 zone_pivot_groups: usage-custom-language-features
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/personally-identifiable-information/quickstart.md
Previously updated : 02/02/2022 Last updated : 06/06/2022 ms.devlang: csharp, java, javascript, python
If you want to clean up and remove a Cognitive Services subscription, you can de
* [Portal](../../cognitive-services-apis-create-account.md#clean-up-resources) * [Azure CLI](../../cognitive-services-apis-create-account-cli.md#clean-up-resources)
+> [!div class="nextstepaction"]
+> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=PYTHON&Pillar=Language&Product=Personally-identifying-info&Page=quickstart&Section=Clean-up-resources" target="_target">I ran into an issue</a>
+ ## Next steps * [Overview](overview.md)
cognitive-services Encrypt Data At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/how-to/encrypt-data-at-rest.md
Question answering automatically encrypts your data when it is persisted to the
By default, your subscription uses Microsoft-managed encryption keys. There is also the option to manage your subscription with your own keys called customer-managed keys (CMK). CMK offers greater flexibility to create, rotate, disable, and revoke access controls. You can also audit the encryption keys used to protect your data. If CMK is configured for your subscription, double encryption is provided, which offers a second layer of protection, while allowing you to control the encryption key through your Azure Key Vault.
-Question answering uses [CMK support from Azure search](../../../../search/search-security-manage-encryption-keys.md), and automatically associates the provided CMK to encrypt the data stored in Azure search index.
+Question answering uses CMK support from Azure Cognitive Search and associates the provided CMK to encrypt the data stored in the search index. Follow the steps in [this article](../../../../search/search-security-manage-encryption-keys.md) to configure Key Vault access for the search service.
> [!IMPORTANT] > Your Azure Search service resource must have been created after January 2019 and cannot be in the free (shared) tier. There is no support to configure customer-managed keys in the Azure portal.
cognitive-services Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/quickstart/sdk.md
description: This quickstart shows you how to create and manage your knowledge b
Previously updated : 11/29/2021 Last updated : 06/06/2022 recommendations: false
If you want to clean up and remove a Cognitive Services subscription, you can de
* [Portal](../../../cognitive-services-apis-create-account.md#clean-up-resources) * [Azure CLI](../../../cognitive-services-apis-create-account-cli.md#clean-up-resources)
+> [!div class="nextstepaction"]
+> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=PYTHON&Pillar=Language&Product=Question-answering&Page=quickstart&Section=Clean-up-resources" target="_target">I ran into an issue</a>
+ ## Explore the REST API To learn about automating your question answering pipeline consult the REST API documentation. Currently authoring functionality is only available via REST API:
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/sentiment-opinion-mining/quickstart.md
Previously updated : 04/12/2022 Last updated : 06/06/2022 ms.devlang: csharp, java, javascript, python
If you want to clean up and remove a Cognitive Services subscription, you can de
* [Portal](../../cognitive-services-apis-create-account.md#clean-up-resources) * [Azure CLI](../../cognitive-services-apis-create-account-cli.md#clean-up-resources)
+> [!div class="nextstepaction"]
+> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=PYTHON&Pillar=Language&Product=Sentiment-analysis&Page=quickstart&Section=Clean-up-resources" target="_target">I ran into an issue</a>
+ ## Next steps * [Sentiment Analysis overview](overview.md)
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/summarization/quickstart.md
Previously updated : 11/02/2021 Last updated : 06/06/2022 ms.devlang: csharp, java, javascript, python
If you want to clean up and remove a Cognitive Services subscription, you can de
* [Portal](../../cognitive-services-apis-create-account.md#clean-up-resources) * [Azure CLI](../../cognitive-services-apis-create-account-cli.md#clean-up-resources)
+> [!div class="nextstepaction"]
+> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=REST API&Pillar=Language&Product=Summarization&Page=quickstart&Section=Clean-up-resources" target="_target">I ran into an issue</a>
++ ## Next steps * [Summarization overview](overview.md)
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/text-analytics-for-health/quickstart.md
Previously updated : 04/20/2022 Last updated : 06/06/2022 ms.devlang: csharp, java, javascript, python
If you want to clean up and remove a Cognitive Services subscription, you can de
* [Portal](../../cognitive-services-apis-create-account.md#clean-up-resources) * [Azure CLI](../../cognitive-services-apis-create-account-cli.md#clean-up-resources)
+> [!div class="nextstepaction"]
+> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=CSHARP&Pillar=Language&Product=Text-analytics-for-health&Page=quickstart&Section=Clean-up-resources" target="_target">I ran into an issue</a>
+ ## Next steps * [Text Analytics for health overview](overview.md)
communication-services Custom Teams Endpoint Authentication Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/custom-teams-endpoint-authentication-overview.md
Title: Authentication of custom Teams endpoint-
-description: This article discusses authentication of a custom Teams endpoint.
+ Title: Authentication for customized Teams apps
+description: Explore single-tenant and multi-tenant authentication use cases for customized Teams applications. Also learn about authentication artifacts.
- Previously updated : 06/30/2021 Last updated : 06/08/2022 +
-# Authentication flow cases
+# Single-tenant and multi-tenant authentication for Teams
-Azure Communication Services provides developers the ability to build custom Teams calling experience with Communication Services calling software development kit (SDK). This article provides insights into the process of authentication and describes individual authentication artifacts. In the following use cases, we'll demonstrate authentication for single and multi-tenant Azure Active Directory (Azure AD) applications.
+ This article gives you insight into the authentication process for single-tenant and multi-tenant *Azure Active Directory* (Azure AD) applications. You can use authentication when you build customized Teams calling experiences with the *Calling software development kit* (SDK) that *Azure Communication Services* makes available. Use cases in this article also break down individual authentication artifacts.
-## Case 1: Single-tenant application
-The following scenario shows an example of the company Fabrikam, which has built custom Teams calling application for internal use within a company. All Teams users are managed by Azure Active Directory. The access to the Azure Communication Services is controlled via Azure role-based access control (Azure RBAC).
+## Case 1: Example of a single-tenant application
+The Fabrikam company has built a custom Teams calling application for internal company use. All Teams users are managed by Azure Active Directory. Access to Azure Communication Services is controlled by *Azure role-based access control (Azure RBAC)*.
-![Diagram of the process for authenticating Teams user for accessing Fabrikam client application and Fabrikam Azure Communication Services resource.](./media/custom-teams-endpoint/authentication-case-single-tenant-azure-rbac-overview.svg)
+![A diagram that outlines the authentication process for Fabrikam's customized Teams calling application and its Azure Communication Services resource.](./media/custom-teams-endpoint/authentication-case-single-tenant-azure-rbac-overview.svg)
-The following sequence diagram is showing detailed steps of the authentication:
+The following sequence diagram details single-tenant authentication.
-Prerequisites:
-- Alice or her Azure AD Administrator needs to provide consent to the Fabrikam's Azure Active Directory Application before first sign in. To learn more about [consent flow](../../../active-directory/develop/consent-framework.md).-- The admin of the Azure Communication Services resource must grant Alice permission to perform this action. You can learn about the [Azure RBAC role assignment](../../../role-based-access-control/role-assignments-portal.md).
+Before we begin:
+- Alice or her Azure AD administrator needs to give the custom Teams application consent prior to the first sign-in attempt. Learn more about [consent](../../../active-directory/develop/consent-framework.md).
+- The Azure Communication Services resource admin needs to grant Alice permission to perform her role. Learn more about [Azure RBAC role assignment](../../../role-based-access-control/role-assignments-portal.md).
Steps:
-1. Authentication of Alice from Fabrikam against Fabrikam's Azure Active Directory: This step is standard OAuth flow leveraging Microsoft Authentication Library (MSAL) to authenticate against Fabrikam's Azure Active Directory. Alice is authenticating for Fabrikam's Azure AD application. If the authentication of Alice is successful, Fabrikam's Client application receives Azure AD access token 'A'. Details of the token are captured below. Developer experience is captured in the [quickstart](../../quickstarts/manage-teams-identity.md).
-1. Get access token for Alice: This flow is initiated from the Fabrikam's Client application and performs control plane logic authorized by artifact 'A' to retrieve Fabrikam's Azure Communication Services access token 'D' for Alice. Details of the token are captured below. This access token can be used for data plane actions in Azure Communication Services such as calling. Developer experience is captured in the [quickstart](../../quickstarts/manage-teams-identity.md).
-1. Start a call to Bob from Fabrikam: Alice is using Azure Communication Services access token to make a call to Teams user Bob via Communication Services calling SDK. You can learn more about the [developer experience in the quickstart](../../quickstarts/voice-video-calling/get-started-with-voice-video-calling-custom-teams-client.md).
+1. Authenticate Alice using Azure Active Directory: Alice is authenticated using a standard OAuth flow with *Microsoft Authentication Library (MSAL)*. If authentication is successful, the client application receives an Azure AD access token, with a value of 'A'. Tokens are outlined later in this article. Authentication from the developer perspective is explored in this [quickstart](../../quickstarts/manage-teams-identity.md).
+1. Get an access token for Alice: The customized Teams application performs control plane logic, using artifact 'A'. This produces Azure Communication Services access token 'D' and gives Alice access. This access token can also be used for data plane actions in Azure Communication Services, like Calling.
+1. Call Bob: Alice makes a call to Teams user Bob, with Fabrikam's customized Teams app. The call takes place via the Calling SDK with an Azure Communication Services access token. Learn more about [developing custom Teams clients](../../quickstarts/voice-video-calling/get-started-with-voice-video-calling-custom-teams-client.md).
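+
+A minimal C# sketch of steps 1 and 2 follows, assuming the preview `GetTokenForTeamsUserAsync` overload that exchanges an Azure AD access token directly. The application ID, tenant ID, and connection string are placeholders for Fabrikam's values, and the scope follows the Teams identity quickstart linked above.
+
+```csharp
+using System;
+using System.Threading.Tasks;
+using Azure.Communication.Identity;
+using Microsoft.Identity.Client;
+
+class Program
+{
+    static async Task Main()
+    {
+        // Placeholders: Fabrikam's Azure AD app registration and Communication Services resource.
+        const string clientId = "<fabrikam-azure-ad-app-id>";
+        const string tenantId = "<fabrikam-tenant-id>";
+        const string connectionString = "<fabrikam-acs-connection-string>";
+
+        // Step 1: authenticate Alice with MSAL and receive Azure AD access token 'A'.
+        var msalClient = PublicClientApplicationBuilder.Create(clientId)
+            .WithAuthority(AzureCloudInstance.AzurePublic, tenantId)
+            .WithRedirectUri("http://localhost")
+            .Build();
+        string[] scopes = { "https://auth.msft.communication.azure.com/Teams.ManageCalls" };
+        AuthenticationResult aadResult =
+            await msalClient.AcquireTokenInteractive(scopes).ExecuteAsync();
+
+        // Step 2: exchange token 'A' for Communication Services access token 'D'.
+        var identityClient = new CommunicationIdentityClient(connectionString);
+        var acsToken = await identityClient.GetTokenForTeamsUserAsync(aadResult.AccessToken);
+
+        Console.WriteLine($"Communication Services token expires on: {acsToken.Value.ExpiresOn}");
+    }
+}
+```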
Artifacts: - Artifact A
Artifacts:
- Audience: _`Azure Communication Services`_ ΓÇö data plane - Azure Communication Services Resource ID: Fabrikam's _`Azure Communication Services Resource ID`_
-## Case 2: Multi-tenant application
-The following scenario shows an example of company Contoso, which has built custom Teams calling application for external customers, such as the company Fabrikam. Contoso infrastructure uses custom authentication within the Contoso infrastructure. Contoso infrastructure is using a connection string to retrieve the token for Fabrikam's Teams user.
+## Case 2: Example of a multi-tenant application
+The Contoso company has built a custom Teams calling application for external customers, such as the company Fabrikam. This application uses custom authentication within Contoso's own infrastructure. Contoso uses a connection string to retrieve tokens for Fabrikam's Teams users.
-![Diagram of the process for authenticating Fabrikam Teams user for accessing Contoso client application and Contoso Azure Communication Services resource.](./media/custom-teams-endpoint/authentication-case-multiple-tenants-hmac-overview.svg)
+![A sequence diagram that demonstrates how the Contoso application authenticates Fabrikam users with Contoso's own Azure Communication Services resource.](./media/custom-teams-endpoint/authentication-case-multiple-tenants-hmac-overview.svg)
-The following sequence diagram is showing detailed steps of the authentication:
+The following sequence diagram details multi-tenant authentication.
-Prerequisites:
-- Alice or her Azure AD Administrator needs to provide consent to the Contoso's Azure Active Directory Application before first sign in. To learn more about [consent flow](../../../active-directory/develop/consent-framework.md).
+Before we begin:
+- Alice or her Azure AD administrator needs to give Contoso's Azure Active Directory application consent before the first attempt to sign in. Learn more about [consent](../../../active-directory/develop/consent-framework.md).
Steps:
-1. Authentication of Alice from Fabrikam against Fabrikam's Azure Active Directory: This step is standard OAuth flow using Microsoft Authentication Library (MSAL) to authenticate against Fabrikam's Azure Active Directory. Alice is authenticating for Contoso's Azure AD application. If the authentication of Alice is successful, Contoso's Client application receives Azure AD access token 'A'. Details of the token are captured below. Developer experience is captured in the [quickstart](../../quickstarts/manage-teams-identity.md).
-1. Get access token for Alice: This flow is initiated from Contoso's client application and performs control plane logic authorized by artifact 'A' to retrieve Contoso's Azure Communication Services access token 'D' for Alice. Details of the token are captured below. This access token can be used for data plane actions in Azure Communication Services such as calling. Developer experience is captured in the [quickstart](../../quickstarts/manage-teams-identity.md). (https://docs.microsoft.com/azure/role-based-access-control/role-assignments-portal).
-1. Start a call to Bob from Fabrikam: Alice is using Azure Communication Services access token to make a call to Teams user Bob via Communication Services calling SDK. You can learn more about the [developer experience in the quickstart](../../quickstarts/voice-video-calling/get-started-with-voice-video-calling-custom-teams-client.md).
+1. Authenticate Alice using the Fabrikam application: Alice is authenticated through Fabrikam's customized Teams application. A standard OAuth flow with Microsoft Authentication Library (MSAL) is used. If authentication is successful, the client application, the Contoso app in this case, receives an Azure AD access token with a value of 'A'. Token details are outlined below. Authentication from the developer perspective is explored in this [quickstart](../../quickstarts/manage-teams-identity.md).
+1. Get an access token for Alice: The Contoso application performs control plane logic, using artifact 'A'. This generates Azure Communication Services access token 'D' for Alice within the Contoso application. This access token can be used for data plane actions in Azure Communication Services, like Calling.
+1. Call Bob: Alice makes a call to Teams user Bob with Contoso's customized Teams app. The call takes place via the Calling SDK with an Azure Communication Services access token. Learn more about developing custom Teams apps [in this quickstart](../../quickstarts/voice-video-calling/get-started-with-voice-video-calling-custom-teams-client.md).
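+
+Relative to the single-tenant sketch shown earlier, the main difference on the client side is the MSAL authority: a multi-tenant app signs in users from any Azure AD tenant. A hedged sketch, with a placeholder application ID:
+
+```csharp
+using Microsoft.Identity.Client;
+
+// Multi-tenant: Contoso's app authenticates users from any Azure AD tenant,
+// so the MSAL authority uses "organizations" instead of a single tenant ID.
+var msalClient = PublicClientApplicationBuilder.Create("<contoso-azure-ad-app-id>")
+    .WithAuthority("https://login.microsoftonline.com/organizations")
+    .WithRedirectUri("http://localhost")
+    .Build();
+```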
Artifacts:
Artifacts:
## Next steps
-The following articles might be of interest to you:
+The following articles may be of interest to you:
- Learn more about [authentication](../authentication.md).-- Try [quickstart for authentication of Teams users](../../quickstarts/manage-teams-identity.md).-- Try [quickstart for calling to a Teams user](../../quickstarts/voice-video-calling/get-started-with-voice-video-calling-custom-teams-client.md).
+- Try this [quickstart to authenticate Teams users](../../quickstarts/manage-teams-identity.md).
+- Try this [quickstart to call a Teams user](../../quickstarts/voice-video-calling/get-started-with-voice-video-calling-custom-teams-client.md).
communication-services Custom Teams Endpoint Firewall Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/custom-teams-endpoint-firewall-configuration.md
Title: Firewall configuration-
-description: This article describes firewall configuration requirements to enable a custom Teams endpoint.
+ Title: Firewall configuration and Teams customization
+description: Learn the firewall configuration requirements that enable customized Teams calling experiences.
Previously updated : 06/30/2021 Last updated : 06/06/2022 +
-# Firewall configuration
+# Firewall configuration for customized Teams calling experiences
-Azure Communication Services provides the ability to leverage Communication Services calling Software development kit (SDK) to build custom Teams calling experience. To enable this experience, Administrators need to configure the firewall according to Communication Services and Microsoft Teams guidance. Communication Services requirements allow control plane, and Teams requirements allow calling experience. If an independent software vendor (ISV) provides the authentication experience, then instead of Communication Services configuration, use configuration guidance of the vendor.
+Azure Communication Services allows you to build custom Teams calling experiences.
-The following articles might be of interest to you:
+You can use the Calling *software development kit* (SDK) to customize these experiences. Use your administrator account to configure your firewall based on Communication Services and Microsoft Teams guidance. The Communication Services requirements apply to the control plane; the Teams requirements apply to the calling experience.
+
+If you use an *independent software vendor* (ISV) for authentication, use instructions from that vendor and not from Communication Services.
+
+The following articles may be of interest to you:
- Learn more about [Azure Communication Services firewall configuration](../voice-video-calling/network-requirements.md). - Learn about [Microsoft Teams firewall configuration](/microsoft-365/enterprise/urls-and-ip-address-ranges?view=o365-worldwide&preserve-view=true#skype-for-business-online-and-microsoft-teams).
communication-services Logging And Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/logging-and-diagnostics.md
Communication Services offers the following types of logs that you can enable:
* **SMS operational logs** - provides basic information related to the SMS service * **Authentication operational logs** - provides basic information related to the Authentication service * **Network Traversal operational logs** - provides basic information related to the Network Traversal service
+* **Email Send Mail operational logs** - provides detailed information about Email service send mail requests
+* **Email Status Update operational logs** - provides message-level and recipient-level delivery status updates for Email service send mail requests
+* **Email User Engagement operational logs** - provides information related to 'open' and 'click' user engagement metrics for messages sent from the Email service
### Usage logs schema
Communication Services offers the following types of logs that you can enable:
| SdkType | The SDK type being used in the request. | | PlatformType | The platform type being used in the request. | | RouteType | The routing methodology to where the ICE server will be located from the client (e.g. Any or Nearest). |++
+### Email Send Mail operational logs
+
+| Property | Description |
+| -- | |
+| TimeGenerated | The timestamp (UTC) of when the log was generated. |
+| Location | The region where the operation was processed. |
+| OperationName | The operation associated with log record. |
+| OperationVersion | The `api-version` associated with the operation, if the `operationName` was performed using an API. If there is no API that corresponds to this operation, the version represents the version of that operation in case the properties associated with the operation change in the future. |
+| Category | The log category of the event. Category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the properties blob of an event are the same within a particular log category and resource type. |
+| CorrelationID | The ID for correlated events. Can be used to identify correlated events between multiple tables. For all Email operational logs, the CorrelationId is mapped to the MessageId which is returned from a successful SendMail request. |
+| Size | Represents the total size in megabytes of the email body, subject, headers and attachments. |
+| ToRecipientsCount | The total number of unique email addresses on the To line. |
+| CcRecipientsCount | The total number of unique email addresses on the Cc line. |
+| BccRecipientsCount | The total number of unique email addresses on the Bcc line. |
+| UniqueRecipientsCount | The deduplicated total recipient count across the To, Cc, and Bcc fields. |
+| AttachmentsCount | The total number of attachments. |
++
+### Email Status Update operational logs
+
+| Property | Description |
+| -- | |
+| TimeGenerated | The timestamp (UTC) of when the log was generated. |
+| Location | The region where the operation was processed. |
+| OperationName | The operation associated with log record. |
+| OperationVersion | The `api-version` associated with the operation, if the `operationName` was performed using an API. If there is no API that corresponds to this operation, the version represents the version of that operation in case the properties associated with the operation change in the future. |
+| Category | The log category of the event. Category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the properties blob of an event are the same within a particular log category and resource type. |
+| CorrelationID | The ID for correlated events. Can be used to identify correlated events between multiple tables. For all Email operational logs, the CorrelationId is mapped to the MessageId which is returned from a successful SendMail request. |
+| RecipientId | The email address for the targeted recipient. If this is a message-level event, the property will be empty. |
+| DeliveryStatus | The terminal status of the message. |
+
+### Email User Engagement operational logs
+
+| Property | Description |
+| -- | |
+| TimeGenerated | The timestamp (UTC) of when the log was generated. |
+| Location | The region where the operation was processed. |
+| OperationName | The operation associated with log record. |
+| OperationVersion | The `api-version` associated with the operation, if the `operationName` was performed using an API. If there is no API that corresponds to this operation, the version represents the version of that operation in case the properties associated with the operation change in the future. |
+| Category | The log category of the event. Category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the properties blob of an event are the same within a particular log category and resource type. |
+| CorrelationID | The ID for correlated events. Can be used to identify correlated events between multiple tables. For all Email operational logs, the CorrelationId is mapped to the MessageId which is returned from a successful SendMail request. |
+| RecipientId | The email address for the targeted recipient. If this is a message-level event, the property will be empty. |
+| EngagementType | The type of user engagement being tracked. |
+| EngagementContext | The context represents what the user interacted with. |
+| UserAgent | The user agent string from the client. |
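+
+Once diagnostic settings route these logs to a Log Analytics workspace, you can query them programmatically. Below is a hedged C# sketch using the `Azure.Monitor.Query` library; the workspace ID is a placeholder, and the table name `ACSEmailSendMailOperational` is an assumption based on the category naming above, so verify it in your workspace.
+
+```csharp
+using System;
+using Azure.Identity;
+using Azure.Monitor.Query;
+
+// Placeholder workspace ID; the table name is assumed from the log category
+// naming above and may differ in your workspace.
+var client = new LogsQueryClient(new DefaultAzureCredential());
+var response = await client.QueryWorkspaceAsync(
+    "<log-analytics-workspace-id>",
+    "ACSEmailSendMailOperational | summarize count() by OperationName",
+    new QueryTimeRange(TimeSpan.FromDays(1)));
+
+foreach (var row in response.Value.Table.Rows)
+{
+    Console.WriteLine($"{row["OperationName"]}: {row["count_"]}");
+}
+```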
communication-services Accept Decline Offer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/router-sdk/accept-decline-offer.md
Title: Accept or decline an offer in Job Router
+ Title: How to accept or decline offers in Job Router
-description: Use Azure Communication Services SDKs to accept or decline a job offer in Job Router.
+description: Learn how to use Azure Communication Services SDKs to accept or decline job offers in Job Router.
Previously updated : 01/18/2022- Last updated : 06/01/2022+
-#Customer intent: As a developer, I want to accept/decline job offers when coming in.
+#Customer intent: As a developer, I want to accept/decline job offers when they come in.
-# Accept or decline a Job Router offer
+# Discover how to accept or decline Job Router job offers
-This guide outlines the steps to observe a Job Router offer, and then to accept the job offer or delete the job offer.
+This guide lays out the steps you need to take to observe a Job Router offer. It also outlines how to accept or decline job offers.
[!INCLUDE [Private Preview Disclaimer](../../includes/private-preview-include-section.md)] ## Prerequisites -- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- An Azure account with an active subscription. [Create an Azure account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
- A deployed Azure Communication Services resource. [Create a Communication Services resource](../../quickstarts/create-communication-resource.md). - Optional: Complete the quickstart to [get started with Job Router](../../quickstarts/router/get-started-router.md).
-## Observe the offer issued event
+## Observe worker offer-issued events
-After you create a job, observe the [offer issued event](subscribe-events.md#microsoftcommunicationrouterworkerofferissued), which contains the worker ID and the job offer ID:
+After you create a job, observe the [worker offer-issued event](subscribe-events.md#microsoftcommunicationrouterworkerofferissued), which contains the worker ID and the job offer ID:
```csharp
var workerId = event.data.workerId;
var offerId = event.data.offerId;
Console.WriteLine($"Job Offer ID: {offerId} offered to worker {workerId} ");
```
-## Accept a job offer
+## Accept job offers
-Then, the worker can accept the job offer by using the SDK:
+The worker can accept job offers by using the SDK:
```csharp
var result = await client.AcceptJobOfferAsync(workerId, offerId);
```
-## Decline a job offer
+## Decline job offers
-Alternatively, the worker can decline the job offer by using the SDK if worker isn't willing to take the job:
+The worker can decline job offers by using the SDK:
```csharp
var result = await client.DeclineJobOfferAsync(workerId, offerId);
```
cosmos-db Create Sql Api Dotnet V4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/create-sql-api-dotnet-v4.md
- Title: Manage Azure Cosmos DB SQL API resources using .NET V4 SDK
-description: Use this quickstart to build a console app by using the .NET V4 SDK to manage Azure Cosmos DB SQL API account resources.
------ Previously updated : 08/26/2021--
-# Quickstart: Build a console app by using the .NET V4 SDK (preview) to manage Azure Cosmos DB SQL API account resources
-
-> [!div class="op_single_selector"]
-> * [.NET V3](create-sql-api-dotnet.md)
-> * [.NET V4](create-sql-api-dotnet-V4.md)
-> * [Java SDK v4](create-sql-api-java.md)
-> * [Spring Data v3](create-sql-api-spring-data.md)
-> * [Spark v3 connector](create-sql-api-spark.md)
-> * [Node.js](create-sql-api-nodejs.md)
-> * [Python](create-sql-api-python.md)
-> * [Go](create-sql-api-go.md)
-
-Get started with the Azure Cosmos DB SQL API client library for .NET. Follow the steps in this article to install the .NET V4 (Azure.Cosmos) package and build an app. Then, try out the example code for basic create, read, update, and delete (CRUD) operations on the data stored in Azure Cosmos DB.
-
-> [!IMPORTANT]
-> The .NET V4 SDK for Azure Cosmos DB is currently in public preview. This preview version is provided without a service-level agreement, and we don't recommend it for production workloads. Certain features might not be supported or might have constrained capabilities.
->
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
-Azure Cosmos DB is Microsoft's fast NoSQL database with open APIs for any scale. You can use Azure Cosmos DB to quickly create and query key/value, document, and graph databases. Use the Azure Cosmos DB SQL API client library for .NET to:
-
-* Create an Azure Cosmos database and a container.
-* Add sample data to the container.
-* Query the data.
-* Delete the database.
-
-[Library source code](https://github.com/Azure/azure-cosmos-dotnet-v3/tree/v4) | [Package (NuGet)](https://www.nuget.org/packages/Azure.Cosmos)
-
-## Prerequisites
-
-* Azure subscription. [Create one for free](https://azure.microsoft.com/free/). You can also [try Azure Cosmos DB](https://azure.microsoft.com/try/cosmosdb/) without an Azure subscription, free of charge and commitments.
-* [NET Core 3 SDK](https://dotnet.microsoft.com/download/dotnet-core). You can verify which version is available in your environment by running `dotnet --version`.
-
-## Set up
-
-This section walks you through creating an Azure Cosmos account and setting up a project that uses the Azure Cosmos DB SQL API client library for .NET to manage resources.
-
-The example code described in this article creates a `FamilyDatabase` database and family members within that database. Each family member is an item and has properties such as `Id`, `FamilyName`, `FirstName`, `LastName`, `Parents`, `Children`, and `Address`. The `LastName` property is used as the partition key for the container.
-
-### <a id="create-account"></a>Create an Azure Cosmos account
-
-If you use the [Try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/) option to create an Azure Cosmos account, you must create an Azure Cosmos account of type **SQL API**. An Azure Cosmos test account is already created for you. You don't have to create the account explicitly, so you can skip this section and move to the next section.
-
-If you have your own Azure subscription or created a subscription for free, you should create an Azure Cosmos account explicitly. The following code will create an Azure Cosmos account with session consistency. The account is replicated in `South Central US` and `North Central US`.
-
-You can use Azure Cloud Shell to create the Azure Cosmos account. Azure Cloud Shell is an interactive, authenticated, browser-accessible shell for managing Azure resources. It provides the flexibility of choosing the shell experience that best suits the way you work: either Bash or PowerShell.
-
-For this quickstart, use Bash. Azure Cloud Shell also requires a storage account. You can create one when prompted.
-
-1. Select the **Try It** button next to the following code, choose **Bash** mode, select **create a storage account**, and sign in to Cloud Shell.
-
-1. Copy and paste the following code to Azure Cloud Shell and run it. The Azure Cosmos account name must be globally unique, so be sure to update the `mysqlapicosmosdb` value before you run the command.
-
- ```azurecli-interactive
-
- # Set variables for the new SQL API account, database, and container
- resourceGroupName='myResourceGroup'
- location='southcentralus'
-
- # The Azure Cosmos account name must be globally unique, so be sure to update the `mysqlapicosmosdb` value before you run the command
- accountName='mysqlapicosmosdb'
-
- # Create a resource group
- az group create \
- --name $resourceGroupName \
- --location $location
-
- # Create a SQL API Cosmos DB account with session consistency and multi-region writes enabled
- az cosmosdb create \
- --resource-group $resourceGroupName \
- --name $accountName \
- --kind GlobalDocumentDB \
- --locations regionName="South Central US" failoverPriority=0 --locations regionName="North Central US" failoverPriority=1 \
- --default-consistency-level "Session" \
- --enable-multiple-write-locations true
-
- ```
-
-The creation of the Azure Cosmos account takes a while. After the operation is successful, you can see the confirmation output. Sign in to the [Azure portal](https://portal.azure.com/) and verify that the Azure Cosmos account with the specified name exists. You can close the Azure Cloud Shell window after the resource is created.
-
-### <a id="create-dotnet-core-app"></a>Create a .NET app
-
-Create a .NET application in your preferred editor or IDE. Open the Windows command prompt or a terminal window from your local computer. You'll run all the commands in the next sections from the command prompt or terminal.
-
-Run the following `dotnet new` command to create an app with the name `todo`. The `--langVersion` parameter sets the `LangVersion` property in the created project file.
-
- ```bash
- dotnet new console --langVersion:8 -n todo
- ```
-
-Use the following commands to change your directory to the newly created app folder and build the application:
-
- ```bash
- cd todo
- dotnet build
- ```
-
-The expected output from the build should look something like this:
-
-```bash
- Restore completed in 100.37 ms for C:\Users\user1\Downloads\CosmosDB_Samples\todo\todo.csproj.
- todo -> C:\Users\user1\Downloads\CosmosDB_Samples\todo\bin\Debug\netcoreapp3.0\todo.dll
-
-Build succeeded.
- 0 Warning(s)
- 0 Error(s)
-
-Time Elapsed 00:00:34.17
-```
-
-### <a id="install-package"></a>Install the Azure Cosmos DB package
-
-While you're still in the application directory, install the Azure Cosmos DB client library for .NET Core by using the `dotnet add package` command:
-
- ```bash
- dotnet add package Azure.Cosmos --version 4.0.0-preview3
- ```
-
-### Copy your Azure Cosmos account credentials from the Azure portal
-
-The sample application needs to authenticate to your Azure Cosmos account. To authenticate, pass the Azure Cosmos account credentials to the application. Get your Azure Cosmos account credentials by following these steps:
-
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-
-1. Go to your Azure Cosmos account.
-
-1. Open the **Keys** pane and copy the **URI** and **PRIMARY KEY** values for your account. You'll add the URI and key values to an environment variable in the next procedure.
-
-## <a id="object-model"></a>Learn the object model
-
-Before you continue building the application, let's look into the hierarchy of resources in Azure Cosmos DB and the object model that's used to create and access these resources. Azure Cosmos DB creates resources in the following order:
-
-* Azure Cosmos account
-* Databases
-* Containers
-* Items
-
-To learn more about the hierarchy of entities, see the [Azure Cosmos DB resource model](../account-databases-containers-items.md) article. You'll use the following .NET classes to interact with these resources:
-
-* `CosmosClient`. This class provides a client-side logical representation for the Azure Cosmos DB service. The client object is used to configure and execute requests against the service.
-* `CreateDatabaseIfNotExistsAsync`. This method creates (if it doesn't exist) or gets (if it already exists) a database resource as an asynchronous operation.
-* `CreateContainerIfNotExistsAsync`. This method creates (if it doesn't exist) or gets (if it already exists) a container as an asynchronous operation. You can check the status code from the response to determine whether the container was newly created (201) or an existing container was returned (200).
-* `CreateItemAsync`. This method creates an item within the container.
-* `UpsertItemAsync`. This method creates an item within the container if it doesn't already exist or replaces the item if it already exists.
-* `GetItemQueryIterator`. This method creates a query for items under a container in an Azure Cosmos database by using a SQL statement with parameterized values.
-* `DeleteAsync`. This method deletes the specified database from your Azure Cosmos account.
-
-## <a id="code-examples"></a>Configure code examples
-
-The sample code described in this article creates a family database in Azure Cosmos DB. The family database contains family details such as name, address, location, parents, children, and pets.
-
-Before you populate the data for your Azure Cosmos account, define the properties of a family item. Create a new class named `Family.cs` at the root level of your sample application and add the following code to it:
-
-[!code-csharp[Main](~/cosmos-dotnet-v4-getting-started/src/Family.cs)]
-
-### Add the using directives and define the client object
-
-From the project directory, open the *Program.cs* file in your editor and add the following `using` directives at the top of your application:
-
-[!code-csharp[Main](~/cosmos-dotnet-v4-getting-started/src/Program.cs?name=Usings)]
--
-Add the following global variables in your `Program` class. These variables will include the endpoint and authorization keys, the name of the database, and the container that you'll create. Be sure to replace the endpoint and authorization key values according to your environment.
-
-[!code-csharp[Main](~/cosmos-dotnet-v4-getting-started/src/Program.cs?name=Constants)]
-
-Finally, replace the `Main` method:
-
-[!code-csharp[Main](~/cosmos-dotnet-v4-getting-started/src/Program.cs?name=Main)]
-
-### Create a database
-
-Define the `CreateDatabaseAsync` method within the `Program` class. This method creates the `FamilyDatabase` database if it doesn't already exist.
-
-[!code-csharp[Main](~/cosmos-dotnet-v4-getting-started/src/Program.cs?name=CreateDatabaseAsync)]
-
-### Create a container
-
-Define the `CreateContainerAsync` method within the `Program` class. This method creates the `FamilyContainer` container if it doesn't already exist.
-
-[!code-csharp[Main](~/cosmos-dotnet-v4-getting-started/src/Program.cs?name=CreateContainerAsync)]
-
-### Create an item
-
-Create a family item by adding the `AddItemsToContainerAsync` method with the following code. You can use the `CreateItemAsync` or `UpsertItemAsync` method to create an item.
-
-[!code-csharp[Main](~/cosmos-dotnet-v4-getting-started/src/Program.cs?name=AddItemsToContainerAsync)]
-
-### Query the items
-
-After you insert an item, you can run a query to get the details of the Andersen family. The following code shows how to execute the query by using the SQL query directly. The SQL query to get the Andersen family details is `SELECT * FROM c WHERE c.LastName = 'Andersen'`. Define the `QueryItemsAsync` method within the `Program` class and add the following code to it:
-
-[!code-csharp[Main](~/cosmos-dotnet-v4-getting-started/src/Program.cs?name=QueryItemsAsync)]
-
-### Replace an item
-
-Read a family item and then update it by adding the `ReplaceFamilyItemAsync` method with the following code:
-
-[!code-csharp[Main](~/cosmos-dotnet-v4-getting-started/src/Program.cs?name=ReplaceFamilyItemAsync)]
-
-### Delete an item
-
-Delete a family item by adding the `DeleteFamilyItemAsync` method with the following code:
-
-[!code-csharp[Main](~/cosmos-dotnet-v4-getting-started/src/Program.cs?name=DeleteFamilyItemAsync)]
-
-### Delete the database
-
-You can delete the database by adding the `DeleteDatabaseAndCleanupAsync` method with the following code:
-
-[!code-csharp[Main](~/cosmos-dotnet-v4-getting-started/src/Program.cs?name=DeleteDatabaseAndCleanupAsync)]
-
-After you add all the required methods, save the *Program.cs* file.
-
-## Run the code
-
-Run the application to create the Azure Cosmos DB resources:
-
- ```bash
- dotnet run
- ```
-
-The following output is generated when you run the application:
-
- ```bash
- Created Database: FamilyDatabase
-
- Created Container: FamilyContainer
-
- Created item in database with id: Andersen.1
-
- Running query: SELECT * FROM c WHERE c.LastName = 'Andersen'
-
- Read {"id":"Andersen.1","LastName":"Andersen","Parents":[{"FamilyName":null,"FirstName":"Thomas"},{"FamilyName":null,"FirstName":"Mary Kay"}],"Children":[{"FamilyName":null,"FirstName":"Henriette Thaulow","Gender":"female","Grade":5,"Pets": [{"GivenName":"Fluffy"}]}],"Address":{"State":"WA","County":"King","City":"Seattle"},"IsRegistered":false}
-
- Updated Family [Wakefield,Wakefield.7].
- Body is now: {"id":"Wakefield.7","LastName":"Wakefield","Parents":[{"FamilyName":"Wakefield","FirstName":"Robin"},{"FamilyName":"Miller","FirstName":"Ben"}],"Children":[{"FamilyName":"Merriam","FirstName":"Jesse","Gender":"female","Grade":6,"Pets":[{"GivenName":"Goofy"},{"GivenName":"Shadow"}]},{"FamilyName":"Miller","FirstName":"Lisa","Gender":"female","Grade":1,"Pets":null}],"Address":{"State":"NY","County":"Manhattan","City":"NY"},"IsRegistered":true}
-
- Deleted Family [Wakefield,Wakefield.7]
-
- Deleted Database: FamilyDatabase
-
- End of demo, press any key to exit.
- ```
-
-You can validate that the data is created by signing in to the Azure portal and seeing the required items in your Azure Cosmos account.
-
-## Clean up resources
-
-When you no longer need the Azure Cosmos account and the corresponding resource group, you can use the Azure CLI or Azure PowerShell to remove them. The following command shows how to delete the resource group by using the Azure CLI:
-
-```azurecli
-az group delete -g "myResourceGroup"
-```
-
-## Next steps
-
-In this quickstart, you learned how to create an Azure Cosmos account, create a database, and create a container by using a .NET Core app. You can now import more data to your Azure Cosmos account by using the instructions in the following article:
-
-Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
-* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
-* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
-
-> [!div class="nextstepaction"]
-> [Import data into Azure Cosmos DB](../import-data.md)
cosmos-db Create Sql Api Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/create-sql-api-dotnet.md
- Title: Quickstart - Build a .NET console app to manage Azure Cosmos DB SQL API resources
-description: Learn how to build a .NET console app to manage Azure Cosmos DB SQL API account resources in this quickstart.
- Previously updated : 08/26/2021
-# Quickstart: Build a .NET console app to manage Azure Cosmos DB SQL API resources
-
-> [!div class="op_single_selector"]
-> * [.NET V3](create-sql-api-dotnet.md)
-> * [.NET V4](create-sql-api-dotnet-V4.md)
-> * [Java SDK v4](create-sql-api-java.md)
-> * [Spring Data v3](create-sql-api-spring-data.md)
-> * [Spark v3 connector](create-sql-api-spark.md)
-> * [Node.js](create-sql-api-nodejs.md)
-> * [Python](create-sql-api-python.md)
-> * [Go](create-sql-api-go.md)
-
-Get started with the Azure Cosmos DB SQL API client library for .NET. Follow the steps in this article to install the .NET package, build an app, and try out the example code for basic CRUD operations on the data stored in Azure Cosmos DB.
-
-Azure Cosmos DB is Microsoft's fast NoSQL database with open APIs for any scale. You can use Azure Cosmos DB to quickly create and query key/value, document, and graph databases. Use the Azure Cosmos DB SQL API client library for .NET to:
-
-* Create an Azure Cosmos database and a container
-* Add sample data to the container
-* Query the data
-* Delete the database
-
-[API reference documentation](/dotnet/api/microsoft.azure.cosmos) | [Library source code](https://github.com/Azure/azure-cosmos-dotnet-v3) | [Package (NuGet)](https://www.nuget.org/packages/Microsoft.Azure.Cosmos)
-
-## Prerequisites
-
-* Azure subscription. [Create one for free](https://azure.microsoft.com/free/). You can also [try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/) without an Azure subscription, free of charge and commitments.
-* The [.NET Core 2.1 SDK or later](https://dotnet.microsoft.com/download/dotnet-core/2.1).
-
-## Setting up
-
-This section walks you through creating an Azure Cosmos account and setting up a project that uses the Azure Cosmos DB SQL API client library for .NET to manage resources. The example code described in this article creates a `FamilyDatabase` database and family members (each family member is an item) within that database. Each family member has properties such as `Id`, `FamilyName`, `FirstName`, `LastName`, `Parents`, `Children`, and `Address`. The `LastName` property is used as the partition key for the container.
-
-### <a id="create-account"></a>Create an Azure Cosmos account
-
-If you use the [Try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/) option to create an Azure Cosmos account, you must create an Azure Cosmos DB account of type **SQL API**. An Azure Cosmos DB test account is already created for you. You don't have to create the account explicitly, so you can skip this section and move to the next section.
-
-If you have your own Azure subscription or created a subscription for free, you should create an Azure Cosmos account explicitly. The following code will create an Azure Cosmos account with session consistency. The account is replicated in `South Central US` and `North Central US`.
-
-You can use Azure Cloud Shell to create the Azure Cosmos account. Azure Cloud Shell is an interactive, authenticated, browser-accessible shell for managing Azure resources. It provides the flexibility of choosing the shell experience that best suits the way you work, either Bash or PowerShell. For this quickstart, choose **Bash** mode. Azure Cloud Shell also requires a storage account; you can create one when prompted.
-
-Select the **Try It** button next to the following code, choose **Bash** mode, select **create a storage account**, and sign in to Cloud Shell. Then copy and paste the following code into Azure Cloud Shell and run it. The Azure Cosmos account name must be globally unique, so be sure to update the `mysqlapicosmosdb` value before you run the command.
-
-```azurecli-interactive
-
-# Set variables for the new SQL API account, database, and container
-resourceGroupName='myResourceGroup'
-location='southcentralus'
-
-# The Azure Cosmos account name must be globally unique, so be sure to update the `mysqlapicosmosdb` value before you run the command
-accountName='mysqlapicosmosdb'
-
-# Create a resource group
-az group create \
- --name $resourceGroupName \
- --location $location
-
-# Create a SQL API Cosmos DB account with session consistency and multi-region writes enabled
-az cosmosdb create \
- --resource-group $resourceGroupName \
- --name $accountName \
- --kind GlobalDocumentDB \
- --locations regionName="South Central US" failoverPriority=0 --locations regionName="North Central US" failoverPriority=1 \
- --default-consistency-level "Session" \
- --enable-multiple-write-locations true
-
-```
-
-The creation of the Azure Cosmos account takes a while. After the operation succeeds, you can see the confirmation output. Sign in to the [Azure portal](https://portal.azure.com/) and verify that the Azure Cosmos account with the specified name exists. You can close the Azure Cloud Shell window after the resource is created.
-
-### <a id="create-dotnet-core-app"></a>Create a new .NET app
-
-Create a new .NET application in your preferred editor or IDE. Open the Windows command prompt or a terminal window from your local computer. You'll run all the commands in the next sections from the command prompt or terminal. Run the following `dotnet new` command to create a new app with the name `todo`. The `--langVersion` parameter sets the `LangVersion` property in the created project file.
-
-```console
-dotnet new console --langVersion 7.1 -n todo
-```
-
-Change your directory to the newly created app folder. You can build the application with:
-
-```console
-cd todo
-dotnet build
-```
-
-The expected output from the build should look something like this:
-
-```console
- Restore completed in 100.37 ms for C:\Users\user1\Downloads\CosmosDB_Samples\todo\todo.csproj.
- todo -> C:\Users\user1\Downloads\CosmosDB_Samples\todo\bin\Debug\netcoreapp2.2\todo.dll
- todo -> C:\Users\user1\Downloads\CosmosDB_Samples\todo\bin\Debug\netcoreapp2.2\todo.Views.dll
-
-Build succeeded.
- 0 Warning(s)
- 0 Error(s)
-
-Time Elapsed 00:00:34.17
-```
-
-### <a id="install-package"></a>Install the Azure Cosmos DB package
-
-While still in the application directory, install the Azure Cosmos DB client library for .NET Core by using the `dotnet add package` command.
-
-```console
-dotnet add package Microsoft.Azure.Cosmos
-```
-
-### Copy your Azure Cosmos account credentials from the Azure portal
-
-The sample application needs to authenticate to your Azure Cosmos account. To authenticate, you should pass the Azure Cosmos account credentials to the application. Get your Azure Cosmos account credentials by following these steps:
-
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-
-1. Navigate to your Azure Cosmos account.
-
-1. Open the **Keys** pane and copy the **URI** and **PRIMARY KEY** values for your account. You'll add the URI and key values to environment variables in the next step.
-
-### Set the environment variables
-
-After you have copied the **URI** and **PRIMARY KEY** values for your account, save them to new environment variables on the local machine running the application. To set the environment variables, open a console window and run the following command. Make sure to replace the `<Your_Azure_Cosmos_account_URI>` and `<Your_Azure_Cosmos_account_PRIMARY_KEY>` placeholder values.
-
-**Windows**
-
-```console
-setx EndpointUrl "<Your_Azure_Cosmos_account_URI>"
-setx PrimaryKey "<Your_Azure_Cosmos_account_PRIMARY_KEY>"
-```
-
-**Linux**
-
-```bash
-export EndpointUrl="<Your_Azure_Cosmos_account_URI>"
-export PrimaryKey="<Your_Azure_Cosmos_account_PRIMARY_KEY>"
-```
-
-**macOS**
-
-```bash
-export EndpointUrl="<Your_Azure_Cosmos_account_URI>"
-export PrimaryKey="<Your_Azure_Cosmos_account_PRIMARY_KEY>"
-```
-
-## <a id="object-model"></a>Object model
-
-Before you start building the application, let's look into the hierarchy of resources in Azure Cosmos DB and the object model used to create and access these resources. Azure Cosmos DB creates resources in the following order:
-
-* Azure Cosmos account
-* Databases
-* Containers
-* Items
-
-To learn more about the hierarchy of different entities, see the [working with databases, containers, and items in Azure Cosmos DB](../account-databases-containers-items.md) article. You will use the following .NET classes to interact with these resources:
-
-* [CosmosClient](/dotnet/api/microsoft.azure.cosmos.cosmosclient) - This class provides a client-side logical representation for the Azure Cosmos DB service. The client object is used to configure and execute requests against the service.
-
-* [CreateDatabaseIfNotExistsAsync](/dotnet/api/microsoft.azure.cosmos.cosmosclient.createdatabaseifnotexistsasync) - This method creates (if it doesn't exist) or gets (if it already exists) a database resource as an asynchronous operation.
-
-* [CreateContainerIfNotExistsAsync](/dotnet/api/microsoft.azure.cosmos.database.createcontainerifnotexistsasync) - This method creates (if it doesn't exist) or gets (if it already exists) a container as an asynchronous operation.
-* [CreateItemAsync](/dotnet/api/microsoft.azure.cosmos.container.createitemasync) - This method creates an item within the container.
-
-* [UpsertItemAsync](/dotnet/api/microsoft.azure.cosmos.container.upsertitemasync) - This method creates an item within the container if it doesn't already exist or replaces the item if it already exists.
-
-* [GetItemQueryIterator](/dotnet/api/microsoft.azure.cosmos.container.GetItemQueryIterator) - This method creates a query for items under a container in an Azure Cosmos database using a SQL statement with parameterized values.
-
-* [DeleteAsync](/dotnet/api/microsoft.azure.cosmos.database.deleteasync) - Deletes the specified database from your Azure Cosmos account. The `DeleteAsync` method only deletes the database; disposing of the `CosmosClient` instance happens separately (as it does in the `DeleteDatabaseAndCleanupAsync` method).
-
-## <a id="code-examples"></a>Code examples
-
-The sample code described in this article creates a family database in Azure Cosmos DB. The family database contains family details such as name, address, location, the associated parents, children, and pets. Before you populate the data in your Azure Cosmos account, define the properties of a family item. Create a new class named `Family.cs` at the root level of your sample application and add the following code to it:
-
-```csharp
-using Newtonsoft.Json;
-
-namespace todo
-{
- public class Family
- {
- [JsonProperty(PropertyName = "id")]
- public string Id { get; set; }
- public string LastName { get; set; }
- public Parent[] Parents { get; set; }
- public Child[] Children { get; set; }
- public Address Address { get; set; }
- public bool IsRegistered { get; set; }
-        // The ToString() method formats the output. It's used for demo purposes only and isn't required by Azure Cosmos DB.
- public override string ToString()
- {
- return JsonConvert.SerializeObject(this);
- }
- }
-
- public class Parent
- {
- public string FamilyName { get; set; }
- public string FirstName { get; set; }
- }
-
- public class Child
- {
- public string FamilyName { get; set; }
- public string FirstName { get; set; }
- public string Gender { get; set; }
- public int Grade { get; set; }
- public Pet[] Pets { get; set; }
- }
-
- public class Pet
- {
- public string GivenName { get; set; }
- }
-
- public class Address
- {
- public string State { get; set; }
- public string County { get; set; }
- public string City { get; set; }
- }
-}
-```
-
-### Add the using directives & define the client object
-
-From the project directory, open the `Program.cs` file in your editor and add the following using directives at the top of your application:
-
-```csharp
-
-using System;
-using System.Threading.Tasks;
-using System.Configuration;
-using System.Collections.Generic;
-using System.Net;
-using Microsoft.Azure.Cosmos;
-```
-
-To the **Program.cs** file, add code to read the environment variables that you set in the previous step. Define the `CosmosClient`, `Database`, and `Container` objects. Then add code to the `Main` method that calls the `GetStartedDemoAsync` method, where you manage Azure Cosmos account resources.
-
-```csharp
-namespace todo
-{
-public class Program
-{
-
- /// The Azure Cosmos DB endpoint for running this GetStarted sample.
- private string EndpointUrl = Environment.GetEnvironmentVariable("EndpointUrl");
-
-    /// The primary key for the Azure Cosmos account.
- private string PrimaryKey = Environment.GetEnvironmentVariable("PrimaryKey");
-
- // The Cosmos client instance
- private CosmosClient cosmosClient;
-
- // The database we will create
- private Database database;
-
- // The container we will create.
- private Container container;
-
- // The name of the database and container we will create
- private string databaseId = "FamilyDatabase";
- private string containerId = "FamilyContainer";
-
- public static async Task Main(string[] args)
- {
- try
- {
- Console.WriteLine("Beginning operations...\n");
- Program p = new Program();
- await p.GetStartedDemoAsync();
-
- }
- catch (CosmosException de)
- {
- Exception baseException = de.GetBaseException();
- Console.WriteLine("{0} error occurred: {1}", de.StatusCode, de);
- }
- catch (Exception e)
- {
- Console.WriteLine("Error: {0}", e);
- }
- finally
- {
- Console.WriteLine("End of demo, press any key to exit.");
- Console.ReadKey();
- }
- }
-}
-}
-```
-
-### Create a database
-
-Define the `CreateDatabaseAsync` method within the `Program` class. This method creates the `FamilyDatabase` database if it doesn't already exist.
-
-```csharp
-private async Task CreateDatabaseAsync()
-{
- // Create a new database
- this.database = await this.cosmosClient.CreateDatabaseIfNotExistsAsync(databaseId);
- Console.WriteLine("Created Database: {0}\n", this.database.Id);
-}
-```
-
-### Create a container
-
-Define the `CreateContainerAsync` method within the `Program` class. This method creates the `FamilyContainer` container if it doesn't already exist.
-
-```csharp
-/// Create the container if it does not exist.
-/// Specify "/LastName" as the partition key since we're storing family information, to ensure good distribution of requests and storage.
-private async Task CreateContainerAsync()
-{
- // Create a new container
- this.container = await this.database.CreateContainerIfNotExistsAsync(containerId, "/LastName");
- Console.WriteLine("Created Container: {0}\n", this.container.Id);
-}
-```
-
-### Create an item
-
-Create a family item by adding the `AddItemsToContainerAsync` method with the following code. You can use the `CreateItemAsync` or `UpsertItemAsync` methods to create an item:
-
-```csharp
-private async Task AddItemsToContainerAsync()
-{
- // Create a family object for the Andersen family
- Family andersenFamily = new Family
- {
- Id = "Andersen.1",
- LastName = "Andersen",
- Parents = new Parent[]
- {
- new Parent { FirstName = "Thomas" },
- new Parent { FirstName = "Mary Kay" }
- },
- Children = new Child[]
- {
- new Child
- {
- FirstName = "Henriette Thaulow",
- Gender = "female",
- Grade = 5,
- Pets = new Pet[]
- {
- new Pet { GivenName = "Fluffy" }
- }
- }
- },
- Address = new Address { State = "WA", County = "King", City = "Seattle" },
- IsRegistered = false
- };
-
- try
- {
- // Create an item in the container representing the Andersen family. Note we provide the value of the partition key for this item, which is "Andersen".
- ItemResponse<Family> andersenFamilyResponse = await this.container.CreateItemAsync<Family>(andersenFamily, new PartitionKey(andersenFamily.LastName));
- // Note that after creating the item, we can access the body of the item with the Resource property of the ItemResponse. We can also access the RequestCharge property to see the amount of RUs consumed on this request.
- Console.WriteLine("Created item in database with id: {0} Operation consumed {1} RUs.\n", andersenFamilyResponse.Resource.Id, andersenFamilyResponse.RequestCharge);
- }
- catch (CosmosException ex) when (ex.StatusCode == HttpStatusCode.Conflict)
- {
- Console.WriteLine("Item in database with id: {0} already exists\n", andersenFamily.Id);
- }
-}
-
-```
-
-### Query the items
-
-After inserting an item, you can run a query to get the details of the Andersen family. The following code shows how to execute the query by using the SQL query directly. The SQL query to get the Andersen family details is: `SELECT * FROM c WHERE c.LastName = 'Andersen'`. Define the `QueryItemsAsync` method within the `Program` class and add the following code to it:
--
-```csharp
-private async Task QueryItemsAsync()
-{
- var sqlQueryText = "SELECT * FROM c WHERE c.LastName = 'Andersen'";
-
- Console.WriteLine("Running query: {0}\n", sqlQueryText);
-
- QueryDefinition queryDefinition = new QueryDefinition(sqlQueryText);
- FeedIterator<Family> queryResultSetIterator = this.container.GetItemQueryIterator<Family>(queryDefinition);
-
- List<Family> families = new List<Family>();
-
- while (queryResultSetIterator.HasMoreResults)
- {
- FeedResponse<Family> currentResultSet = await queryResultSetIterator.ReadNextAsync();
- foreach (Family family in currentResultSet)
- {
- families.Add(family);
- Console.WriteLine("\tRead {0}\n", family);
- }
- }
-}
-
-```
-
-### Delete the database
-
-Finally, you can delete the database by adding the `DeleteDatabaseAndCleanupAsync` method with the following code:
-
-```csharp
-private async Task DeleteDatabaseAndCleanupAsync()
-{
- DatabaseResponse databaseResourceResponse = await this.database.DeleteAsync();
-    // Also valid: await this.cosmosClient.GetDatabase("FamilyDatabase").DeleteAsync();
-
- Console.WriteLine("Deleted Database: {0}\n", this.databaseId);
-
- //Dispose of CosmosClient
- this.cosmosClient.Dispose();
-}
-```
-
-### Execute the CRUD operations
-
-After you have defined all the required methods, execute them within the `GetStartedDemoAsync` method. The `DeleteDatabaseAndCleanupAsync` method is commented out in this code because you won't see any resources if that method is executed. You can uncomment it after validating that your Azure Cosmos DB resources were created in the Azure portal.
-
-```csharp
-public async Task GetStartedDemoAsync()
-{
- // Create a new instance of the Cosmos Client
- this.cosmosClient = new CosmosClient(EndpointUrl, PrimaryKey);
- await this.CreateDatabaseAsync();
- await this.CreateContainerAsync();
- await this.AddItemsToContainerAsync();
- await this.QueryItemsAsync();
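-    // Uncomment after validating the created resources in the Azure portal:
-    //await this.DeleteDatabaseAndCleanupAsync();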
-}
-```
-
-After you add all the required methods, save the `Program.cs` file.
-
-## Run the code
-
-Next, build and run the application to create the Azure Cosmos DB resources. Make sure to open a new command prompt window; don't use the same instance that you used to set the environment variables, because the variables aren't applied to that session. Open a new command prompt to pick up the updates.
-
-```console
-dotnet build
-```
-
-```console
-dotnet run
-```
-
-The following output is generated when you run the application. You can also sign into the Azure portal and validate that the resources are created:
-
-```console
-Created Database: FamilyDatabase
-
-Created Container: FamilyContainer
-
-Created item in database with id: Andersen.1 Operation consumed 11.62 RUs.
-
-Running query: SELECT * FROM c WHERE c.LastName = 'Andersen'
-
- Read {"id":"Andersen.1","LastName":"Andersen","Parents":[{"FamilyName":null,"FirstName":"Thomas"},{"FamilyName":null,"FirstName":"Mary Kay"}],"Children":[{"FamilyName":null,"FirstName":"Henriette Thaulow","Gender":"female","Grade":5,"Pets":[{"GivenName":"Fluffy"}]}],"Address":{"State":"WA","County":"King","City":"Seattle"},"IsRegistered":false}
-
-End of demo, press any key to exit.
-```
-
-You can validate that the data is created by signing in to the Azure portal and viewing the required items in your Azure Cosmos account.
-
-## Clean up resources
-
-When no longer needed, you can use the Azure CLI or Azure PowerShell to remove the Azure Cosmos account and the corresponding resource group. The following command shows how to delete the resource group by using the Azure CLI:
-
-```azurecli
-az group delete -g "myResourceGroup"
-```
-
-## Next steps
-
-In this quickstart, you learned how to create an Azure Cosmos account, create a database and a container using a .NET Core app. You can now import additional data to your Azure Cosmos account with the instructions in the following article.
-
-Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
-* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
-* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
-
-> [!div class="nextstepaction"]
-> [Import data into Azure Cosmos DB](../import-data.md)
cosmos-db How To Create Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-create-account.md
+
+ Title: Create an Azure Cosmos DB SQL API account
+description: Learn how to create a new Azure Cosmos DB SQL API account to store databases, containers, and items.
++++
+ms.devlang: csharp
+ Last updated : 06/08/2022
+# Create an Azure Cosmos DB SQL API account
+
+An Azure Cosmos DB SQL API account contains all of your Azure Cosmos DB resources: databases, containers, and items. The account provides a unique endpoint for various tools and SDKs to connect to Azure Cosmos DB and perform everyday operations. For more information about the resources in Azure Cosmos DB, see [Azure Cosmos DB resource model](../account-databases-containers-items.md).
+
+## Prerequisites
+
+* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
+
+## Create an account
+
+Create a single Azure Cosmos DB account using the SQL API.
+
+#### [Azure CLI](#tab/azure-cli)
+
+1. Create shell variables for *accountName*, *resourceGroupName*, and *location*.
+
+ ```azurecli-interactive
+ # Variable for resource group name
+ resourceGroupName="msdocs-cosmos"
+
+ # Variable for location
+ location="westus"
+
+    # Variable for account name with a randomly generated suffix
+ let suffix=$RANDOM*$RANDOM
+ accountName="msdocs-$suffix"
+ ```
+
+1. If you haven't already, sign in to the Azure CLI using the [``az login``](/cli/azure/reference-index#az-login) command.
+
+1. Use the [``az group create``](/cli/azure/group#az-group-create) command to create a new resource group in your subscription.
+
+ ```azurecli-interactive
+ az group create \
+ --name $resourceGroupName \
+ --location $location
+ ```
+
+1. Use the [``az cosmosdb create``](/cli/azure/cosmosdb#az-cosmosdb-create) command to create a new Azure Cosmos DB SQL API account with default settings.
+
+ ```azurecli-interactive
+ az cosmosdb create \
+ --resource-group $resourceGroupName \
+ --name $accountName \
+ --locations regionName=$location
+ ```
+
+#### [PowerShell](#tab/azure-powershell)
+
+1. Create shell variables for *ACCOUNT_NAME*, *RESOURCE_GROUP_NAME*, and *LOCATION*.
+
+ ```azurepowershell-interactive
+ # Variable for resource group name
+ $RESOURCE_GROUP_NAME = "msdocs-cosmos"
+
+ # Variable for location
+ $LOCATION = "West US"
+
+    # Variable for account name with a randomly generated suffix
+ $SUFFIX = Get-Random
+ $ACCOUNT_NAME = "msdocs-$SUFFIX"
+ ```
+
+1. If you haven't already, sign in to Azure PowerShell using the [``Connect-AzAccount``](/powershell/module/az.accounts/connect-azaccount) cmdlet.
+
+1. Use the [``New-AzResourceGroup``](/powershell/module/az.resources/new-azresourcegroup) cmdlet to create a new resource group in your subscription.
+
+ ```azurepowershell-interactive
+ $parameters = @{
+ Name = $RESOURCE_GROUP_NAME
+ Location = $LOCATION
+ }
+ New-AzResourceGroup @parameters
+ ```
+
+1. Use the [``New-AzCosmosDBAccount``](/powershell/module/az.cosmosdb/new-azcosmosdbaccount) cmdlet to create a new Azure Cosmos DB SQL API account with default settings.
+
+ ```azurepowershell-interactive
+ $parameters = @{
+ ResourceGroupName = $RESOURCE_GROUP_NAME
+ Name = $ACCOUNT_NAME
+ Location = $LOCATION
+ }
+ New-AzCosmosDBAccount @parameters
+ ```
+
+#### [Portal](#tab/azure-portal)
+
+> [!TIP]
+> For this guide, we recommend using the resource group name ``msdocs-cosmos``.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. From the Azure portal menu or the **Home page**, select **Create a resource**.
+
+1. On the **New** page, search for and select **Azure Cosmos DB**.
+
+1. On the **Select API option** page, select the **Create** option within the **Core (SQL) - Recommended** section. Azure Cosmos DB has five APIs: SQL, MongoDB, Gremlin, Table, and Cassandra. [Learn more about the SQL API](/azure/cosmos-db/sql/introduction.md).
+
+ :::image type="content" source="media/create-account-portal/cosmos-api-choices.png" lightbox="media/create-account-portal/cosmos-api-choices.png" alt-text="Screenshot of select A P I option page for Azure Cosmos D B.":::
+
+1. On the **Create Azure Cosmos DB Account** page, enter the following information:
+
+ | Setting | Value | Description |
+ | | | |
+ | Subscription | Subscription name | Select the Azure subscription that you wish to use for this Azure Cosmos account. |
+ | Resource Group | Resource group name | Select a resource group, or select **Create new**, then enter a unique name for the new resource group. |
+ | Account Name | A unique name | Enter a name to identify your Azure Cosmos account. The name will be used as part of a fully qualified domain name (FQDN) with a suffix of *documents.azure.com*, so the name must be globally unique. The name can only contain lowercase letters, numbers, and the hyphen (-) character. The name must also be between 3-44 characters in length. |
+ | Location | The region closest to your users | Select a geographic location to host your Azure Cosmos DB account. Use the location that is closest to your users to give them the fastest access to the data. |
+ | Capacity mode |Provisioned throughput or Serverless|Select **Provisioned throughput** to create an account in [provisioned throughput](../set-throughput.md) mode. Select **Serverless** to create an account in [serverless](../serverless.md) mode. |
+ | Apply Azure Cosmos DB free tier discount | **Apply** or **Do not apply** |With Azure Cosmos DB free tier, you'll get the first 1000 RU/s and 25 GB of storage for free in an account. Learn more about [free tier](https://azure.microsoft.com/pricing/details/cosmos-db/). |
+
+ > [!NOTE]
+ > You can have up to one free tier Azure Cosmos DB account per Azure subscription and must opt-in when creating the account. If you do not see the option to apply the free tier discount, this means another account in the subscription has already been enabled with free tier.
+
+ :::image type="content" source="media/create-account-portal/new-cosmos-account-page.png" lightbox="media/create-account-portal/new-cosmos-account-page.png" alt-text="Screenshot of new account page for Azure Cosmos D B SQL A P I.":::
+
+1. Select **Review + create**.
+
+1. Review the settings you provide, and then select **Create**. It takes a few minutes to create the account. Wait for the portal page to display **Your deployment is complete** before moving on.
+
+1. Select **Go to resource** to go to the Azure Cosmos DB account page.
+
+ :::image type="content" source="media/create-account-portal/cosmos-deployment-complete.png" lightbox="media/create-account-portal/cosmos-deployment-complete.png" alt-text="Screenshot of deployment page for Azure Cosmos D B SQL A P I resource.":::
+++
+## Next steps
+
+In this guide, you learned how to create an Azure Cosmos DB SQL API account. You can now import additional data to your Azure Cosmos DB account.
+
+> [!div class="nextstepaction"]
+> [Import data into Azure Cosmos DB SQL API](../import-data.md)
cosmos-db How To Dotnet Create Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-dotnet-create-container.md
+
+ Title: Create a container in Azure Cosmos DB SQL API using .NET
+description: Learn how to create a container in your Azure Cosmos DB SQL API database using the .NET SDK.
++++
+ms.devlang: csharp
+ Last updated : 06/08/2022
+# Create a container in Azure Cosmos DB SQL API using .NET
+
+Containers in Azure Cosmos DB store sets of items. Before you can create, query, or manage items, you must first create a container.
+
+## Name a container
+
+In Azure Cosmos DB, a container is analogous to a table in a relational database. When you create a container, the container name forms a segment of the URI used to access the container resource and any child items.
+
+Here are some quick rules when naming a container:
+
+* Keep container names between 3 and 63 characters long.
+* Container names can only contain lowercase letters, numbers, or the dash (-) character.
+* Container names must start with a lowercase letter or number.
+
+Once created, the URI for a container is in this format:
+
+``https://<cosmos-account-name>.documents.azure.com/dbs/<database-name>/colls/<container-name>``
+
+## Create a container
+
+To create a container, call one of the following methods:
+
+* [``CreateContainerAsync``](#create-a-container-asynchronously)
+* [``CreateContainerIfNotExistsAsync``](#create-a-container-asynchronously-if-it-doesnt-already-exist)
+
+### Create a container asynchronously
+
+The following example creates a container asynchronously:
++
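+For illustration, here's a minimal sketch of that call. It assumes an existing ``Database`` instance named ``database``; the container name ``products`` and the partition key path ``/category`` are placeholders:
+
+```csharp
+// Assumes 'database' is an existing Microsoft.Azure.Cosmos.Database instance.
+// The container name and partition key path below are illustrative placeholders.
+Container container = await database.CreateContainerAsync(
+    id: "products",
+    partitionKeyPath: "/category");
+```
+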
+The [``Database.CreateContainerAsync``](/dotnet/api/microsoft.azure.cosmos.database.createcontainerasync) method will throw an exception if a container with the same name already exists.
+
+### Create a container asynchronously if it doesn't already exist
+
+The following example creates a container asynchronously only if it doesn't already exist on the account:
++
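+A comparable sketch, under the same assumptions as the previous example:
+
+```csharp
+// Returns the existing container instead of throwing when it already exists.
+Container container = await database.CreateContainerIfNotExistsAsync(
+    id: "products",
+    partitionKeyPath: "/category");
+```
+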
+The [``Database.CreateContainerIfNotExistsAsync``](/dotnet/api/microsoft.azure.cosmos.database.createcontainerifnotexistsasync) method will only create a new container if it doesn't already exist. This method is useful for avoiding errors if you run the same code multiple times.
+
+## Parsing the response
+
+In all examples so far, the response from the asynchronous request was cast immediately to the [``Container``](/dotnet/api/microsoft.azure.cosmos.container) type. You may want to parse metadata about the response including headers and the HTTP status code. The true return type for the **Database.CreateContainerAsync** and **Database.CreateContainerIfNotExistsAsync** methods is [``ContainerResponse``](/dotnet/api/microsoft.azure.cosmos.containerresponse).
+
+The following example shows the **Database.CreateContainerIfNotExistsAsync** method returning a **ContainerResponse**. Once returned, you can parse response properties and then eventually get the underlying **Container** object:
++
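+A sketch of that pattern, again assuming an existing ``database`` object and placeholder names:
+
+```csharp
+ContainerResponse response = await database.CreateContainerIfNotExistsAsync(
+    id: "products",
+    partitionKeyPath: "/category");
+
+// Inspect response metadata such as the HTTP status code and request charge.
+Console.WriteLine($"Status code:\t{response.StatusCode}");
+Console.WriteLine($"Request charge:\t{response.RequestCharge} RUs");
+
+// Get the underlying Container object from the response.
+Container container = response.Container;
+```
+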
+## See also
+
+- [Get started with Azure Cosmos DB SQL API and .NET](how-to-dotnet-get-started.md)
+- [Create a database](how-to-dotnet-create-database.md)
cosmos-db How To Dotnet Create Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-dotnet-create-database.md
+
+ Title: Create a database in Azure Cosmos DB SQL API using .NET
+description: Learn how to create a database in your Azure Cosmos DB SQL API account using the .NET SDK.
++++
+ms.devlang: csharp
+ Last updated : 06/08/2022
+# Create a database in Azure Cosmos DB SQL API using .NET
+
+Databases in Azure Cosmos DB are units of management for one or more containers. Before you can create or manage containers, you must first create a database.
+
+## Name a database
+
+In Azure Cosmos DB, a database is analogous to a namespace. When you create a database, the database name forms a segment of the URI used to access the database resource and any child resources.
+
+Here are some quick rules when naming a database:
+
+* Keep database names between 3 and 63 characters long.
+* Database names can only contain lowercase letters, numbers, or the dash (-) character.
+* Database names must start with a lowercase letter or number.
+
+Once created, the URI for a database is in this format:
+
+``https://<cosmos-account-name>.documents.azure.com/dbs/<database-name>``
+
+## Create a database
+
+To create a database, call one of the following methods:
+
+* [``CreateDatabaseAsync``](#create-a-database-asynchronously)
+* [``CreateDatabaseIfNotExistsAsync``](#create-a-database-asynchronously-if-it-doesnt-already-exist)
+
+### Create a database asynchronously
+
+The following example creates a database asynchronously:
++
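+For illustration, here's a minimal sketch of that call. It assumes an existing ``CosmosClient`` instance named ``client``; the database name ``adventureworks`` is a placeholder:
+
+```csharp
+// Assumes 'client' is an existing Microsoft.Azure.Cosmos.CosmosClient instance.
+Database database = await client.CreateDatabaseAsync(id: "adventureworks");
+```
+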
+The [``CosmosClient.CreateDatabaseAsync``](/dotnet/api/microsoft.azure.cosmos.cosmosclient.createdatabaseasync) method will throw an exception if a database with the same name already exists.
+
+### Create a database asynchronously if it doesn't already exist
+
+The following example creates a database asynchronously only if it doesn't already exist on the account:
++
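+A comparable sketch, under the same assumptions as the previous example:
+
+```csharp
+// Returns the existing database instead of throwing when it already exists.
+Database database = await client.CreateDatabaseIfNotExistsAsync(id: "adventureworks");
+```
+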
+The [``CosmosClient.CreateDatabaseIfNotExistsAsync``](/dotnet/api/microsoft.azure.cosmos.cosmosclient.createdatabaseifnotexistsasync) method will only create a new database if it doesn't already exist. This method is useful for avoiding errors if you run the same code multiple times.
+
+## Parsing the response
+
+In all examples so far, the response from the asynchronous request was cast immediately to the [``Database``](/dotnet/api/microsoft.azure.cosmos.database) type. You may want to parse metadata about the response including headers and the HTTP status code. The true return type for the **CosmosClient.CreateDatabaseAsync** and **CosmosClient.CreateDatabaseIfNotExistsAsync** methods is [``DatabaseResponse``](/dotnet/api/microsoft.azure.cosmos.databaseresponse).
+
+The following example shows the **CosmosClient.CreateDatabaseIfNotExistsAsync** method returning a **DatabaseResponse**. Once returned, you can parse response properties and then eventually get the underlying **Database** object:
++
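+A sketch of that pattern, again assuming an existing ``client`` object and a placeholder database name:
+
+```csharp
+DatabaseResponse response = await client.CreateDatabaseIfNotExistsAsync(id: "adventureworks");
+
+// Inspect response metadata such as the HTTP status code and request charge.
+Console.WriteLine($"Status code:\t{response.StatusCode}");
+Console.WriteLine($"Request charge:\t{response.RequestCharge} RUs");
+
+// Get the underlying Database object from the response.
+Database database = response.Database;
+```
+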
+## See also
+
+- [Get started with Azure Cosmos DB SQL API and .NET](how-to-dotnet-get-started.md)
+- [Create a container](how-to-dotnet-create-container.md)
cosmos-db How To Dotnet Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-dotnet-get-started.md
+
+ Title: Get started with Azure Cosmos DB SQL API and .NET
+description: Get started developing a .NET application that works with Azure Cosmos DB SQL API. This article helps you learn how to set up a project and configure access to an Azure Cosmos DB SQL API endpoint.
++++
+ms.devlang: csharp
+ Last updated : 06/08/2022
+# Get started with Azure Cosmos DB SQL API and .NET
+
+This article shows you how to connect to Azure Cosmos DB SQL API using the .NET SDK. Once connected, you can perform operations on databases, containers, and items.
+
+[Package (NuGet)](https://www.nuget.org/packages/Microsoft.Azure.Cosmos) | [Samples](samples-dotnet.md) | [API reference](/dotnet/api/microsoft.azure.cosmos) | [Library source code](https://github.com/Azure/azure-cosmos-dotnet-v3) | [Give Feedback](https://github.com/Azure/azure-cosmos-dotnet-v3/issues)
+
+## Prerequisites
+
+* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
+* Azure Cosmos DB SQL API account. [Create a SQL API account](how-to-create-account.md).
+* [.NET 6.0 or later](https://dotnet.microsoft.com/download)
+* [Azure Command-Line Interface (CLI)](/cli/azure/) or [Azure PowerShell](/powershell/azure/)
+
+## Set up your project
+
+### Create the .NET console application
+
+Create a new .NET application by using the [``dotnet new``](/dotnet/core/tools/dotnet-new) command with the **console** template.
+
+```dotnetcli
+dotnet new console
+```
+
+Import the [Microsoft.Azure.Cosmos](https://www.nuget.org/packages/Microsoft.Azure.Cosmos) NuGet package using the [``dotnet add package``](/dotnet/core/tools/dotnet-add-package) command.
+
+```dotnetcli
+dotnet add package Microsoft.Azure.Cosmos
+```
+
+Build the project with the [``dotnet build``](/dotnet/core/tools/dotnet-build) command.
+
+```dotnetcli
+dotnet build
+```
+
+## Connect to Azure Cosmos DB SQL API
+
+To connect to the SQL API of Azure Cosmos DB, create an instance of the [``CosmosClient``](/dotnet/api/microsoft.azure.cosmos.cosmosclient) class. This class is the starting point to perform all operations against databases. There are three core ways to connect to a SQL API account using the **CosmosClient** class:
+
+* [Connect with a SQL API endpoint and read/write key](#connect-with-an-endpoint-and-key)
+* [Connect with a SQL API connection string](#connect-with-a-connection-string)
+* [Connect with Azure Active Directory](#connect-using-the-microsoft-identity-platform)
+
+### Connect with an endpoint and key
+
+The most common constructor for **CosmosClient** has two parameters:
+
+| Parameter | Example value | Description |
+| | | |
+| ``accountEndpoint`` | ``COSMOS_ENDPOINT`` environment variable | SQL API endpoint to use for all requests |
+| ``authKeyOrResourceToken`` | ``COSMOS_KEY`` environment variable | Account key or resource token to use when authenticating |
+
+#### Retrieve your account endpoint and key
+
+##### [Azure CLI](#tab/azure-cli)
+
+1. Create a shell variable for *resourceGroupName*.
+
+ ```azurecli-interactive
+ # Variable for resource group name
+ resourceGroupName="msdocs-cosmos-dotnet-howto-rg"
+ ```
+
+1. Use the [``az cosmosdb list``](/cli/azure/cosmosdb#az-cosmosdb-list) command to retrieve the name of the first Azure Cosmos DB account in your resource group and store it in the *accountName* shell variable.
+
+ ```azurecli-interactive
+ # Retrieve most recently created account name
+ accountName=$(
+ az cosmosdb list \
+ --resource-group $resourceGroupName \
+ --query "[0].name" \
+ --output tsv
+ )
+ ```
+
+1. Get the SQL API endpoint *URI* for the account using the [``az cosmosdb show``](/cli/azure/cosmosdb#az-cosmosdb-show) command.
+
+ ```azurecli-interactive
+ az cosmosdb show \
+ --resource-group $resourceGroupName \
+ --name $accountName \
+ --query "documentEndpoint"
+ ```
+
+1. Find the *PRIMARY KEY* from the list of keys for the account with the [``az cosmosdb keys list``](/cli/azure/cosmosdb/keys#az-cosmosdb-keys-list) command.
+
+ ```azurecli-interactive
+ az cosmosdb keys list \
+ --resource-group $resourceGroupName \
+ --name $accountName \
+ --type "keys" \
+ --query "primaryMasterKey"
+ ```
+
+1. Record the *URI* and *PRIMARY KEY* values. You'll use these credentials later.
+
+##### [PowerShell](#tab/azure-powershell)
+
+1. Create a shell variable for *RESOURCE_GROUP_NAME*.
+
+ ```azurepowershell-interactive
+ # Variable for resource group name
+ $RESOURCE_GROUP_NAME = "msdocs-cosmos-dotnet-howto-rg"
+ ```
+
+1. Use the [``Get-AzCosmosDBAccount``](/powershell/module/az.cosmosdb/get-azcosmosdbaccount) cmdlet to retrieve the name of the first Azure Cosmos DB account in your resource group and store it in the *ACCOUNT_NAME* shell variable.
+
+ ```azurepowershell-interactive
+ # Retrieve most recently created account name
+ $parameters = @{
+ ResourceGroupName = $RESOURCE_GROUP_NAME
+ }
+ $ACCOUNT_NAME = (
+ Get-AzCosmosDBAccount @parameters |
+ Select-Object -Property Name -First 1
+ ).Name
+ ```
+
+1. Get the SQL API endpoint *URI* for the account using the [``Get-AzCosmosDBAccount``](/powershell/module/az.cosmosdb/get-azcosmosdbaccount) cmdlet.
+
+ ```azurepowershell-interactive
+ $parameters = @{
+ ResourceGroupName = $RESOURCE_GROUP_NAME
+ Name = $ACCOUNT_NAME
+ }
+ Get-AzCosmosDBAccount @parameters |
+ Select-Object -Property "DocumentEndpoint"
+ ```
+
+1. Find the *PRIMARY KEY* from the list of keys for the account with the [``Get-AzCosmosDBAccountKey``](/powershell/module/az.cosmosdb/get-azcosmosdbaccountkey) cmdlet.
+
+ ```azurepowershell-interactive
+ $parameters = @{
+ ResourceGroupName = $RESOURCE_GROUP_NAME
+ Name = $ACCOUNT_NAME
+ Type = "Keys"
+ }
+ Get-AzCosmosDBAccountKey @parameters |
+ Select-Object -Property "PrimaryMasterKey"
+ ```
+
+1. Record the *URI* and *PRIMARY KEY* values. You'll use these credentials later.
+
+##### [Portal](#tab/azure-portal)
+
+> [!TIP]
+> For this guide, we recommend using the resource group name ``msdocs-cosmos-dotnet-howto-rg``.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Navigate to the existing Azure Cosmos DB SQL API account page.
+
+1. From the Azure Cosmos DB SQL API account page, select the **Keys** navigation menu option.
+
+ :::image type="content" source="media/get-credentials-portal/cosmos-keys-option.png" lightbox="media/get-credentials-portal/cosmos-keys-option.png" alt-text="Screenshot of an Azure Cosmos D B SQL A P I account page. The Keys option is highlighted in the navigation menu.":::
+
+1. Record the values from the **URI** and **PRIMARY KEY** fields. You'll use these values in a later step.
+
+ :::image type="content" source="media/get-credentials-portal/cosmos-endpoint-key-credentials.png" lightbox="media/get-credentials-portal/cosmos-endpoint-key-credentials.png" alt-text="Screenshot of Keys page with various credentials for an Azure Cosmos D B SQL A P I account.":::
+++
+To use the **URI** and **PRIMARY KEY** values within your .NET code, persist them to new environment variables on the local machine running the application.
+
+#### [Windows](#tab/windows)
+
+```powershell
+$env:COSMOS_ENDPOINT = "<cosmos-account-URI>"
+$env:COSMOS_KEY = "<cosmos-account-PRIMARY-KEY>"
+```
+
+#### [Linux / macOS](#tab/linux+macos)
+
+```bash
+export COSMOS_ENDPOINT="<cosmos-account-URI>"
+export COSMOS_KEY="<cosmos-account-PRIMARY-KEY>"
+```
+++
+#### Create CosmosClient with account endpoint and key
+
+Create a new instance of the **CosmosClient** class with the ``COSMOS_ENDPOINT`` and ``COSMOS_KEY`` environment variables as parameters.
++
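+For illustration, here's a minimal sketch of that constructor call, assuming the environment variables set in the previous step:
+
+```csharp
+using Microsoft.Azure.Cosmos;
+
+// Reads the endpoint and key from the environment variables set earlier.
+CosmosClient client = new(
+    accountEndpoint: Environment.GetEnvironmentVariable("COSMOS_ENDPOINT"),
+    authKeyOrResourceToken: Environment.GetEnvironmentVariable("COSMOS_KEY"));
+```
+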
+### Connect with a connection string
+
+Another constructor for **CosmosClient** contains only a single parameter:
+
+| Parameter | Example value | Description |
+| | | |
+| ``connectionString`` | ``COSMOS_CONNECTION_STRING`` environment variable | Connection string to the SQL API account |
+
+#### Retrieve your account connection string
+
+##### [Azure CLI](#tab/azure-cli)
+
+1. Use the [``az cosmosdb list``](/cli/azure/cosmosdb#az-cosmosdb-list) command to retrieve the name of the first Azure Cosmos DB account in your resource group and store it in the *accountName* shell variable.
+
+ ```azurecli-interactive
+ # Retrieve most recently created account name
+ accountName=$(
+ az cosmosdb list \
+ --resource-group $resourceGroupName \
+ --query "[0].name" \
+ --output tsv
+ )
+ ```
+
+1. Find the *PRIMARY CONNECTION STRING* from the list of connection strings for the account with the [``az cosmosdb keys list``](/cli/azure/cosmosdb/keys#az-cosmosdb-keys-list) command.
+
+ ```azurecli-interactive
+ az cosmosdb keys list \
+ --resource-group $resourceGroupName \
+ --name $accountName \
+ --type "connection-strings" \
+ --query "connectionStrings[?description == \`Primary SQL Connection String\`] | [0].connectionString"
+ ```
+
+##### [PowerShell](#tab/azure-powershell)
+
+1. Use the [``Get-AzCosmosDBAccount``](/powershell/module/az.cosmosdb/get-azcosmosdbaccount) cmdlet to retrieve the name of the first Azure Cosmos DB account in your resource group and store it in the *ACCOUNT_NAME* shell variable.
+
+ ```azurepowershell-interactive
+ # Retrieve most recently created account name
+ $parameters = @{
+ ResourceGroupName = $RESOURCE_GROUP_NAME
+ }
+ $ACCOUNT_NAME = (
+ Get-AzCosmosDBAccount @parameters |
+ Select-Object -Property Name -First 1
+ ).Name
+ ```
+
+1. Find the *PRIMARY CONNECTION STRING* from the list of connection strings for the account with the [``Get-AzCosmosDBAccountKey``](/powershell/module/az.cosmosdb/get-azcosmosdbaccountkey) cmdlet.
+
+ ```azurepowershell-interactive
+ $parameters = @{
+ ResourceGroupName = $RESOURCE_GROUP_NAME
+ Name = $ACCOUNT_NAME
+ Type = "ConnectionStrings"
+ }
+ Get-AzCosmosDBAccountKey @parameters |
+ Select-Object -Property "Primary SQL Connection String" -First 1
+ ```
+
+##### [Portal](#tab/azure-portal)
+
+> [!TIP]
+> For this guide, we recommend using the resource group name ``msdocs-cosmos-dotnet-howto-rg``.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Navigate to the existing Azure Cosmos DB SQL API account page.
+
+1. From the Azure Cosmos DB SQL API account page, select the **Keys** navigation menu option.
+
+1. Record the value from the **PRIMARY CONNECTION STRING** field.
++
+To use the **PRIMARY CONNECTION STRING** value within your .NET code, persist it to a new environment variable on the local machine running the application.
+
+#### [Windows](#tab/windows)
+
+```powershell
+$env:COSMOS_CONNECTION_STRING = "<cosmos-account-PRIMARY-CONNECTION-STRING>"
+```
+
+#### [Linux / macOS](#tab/linux+macos)
+
+```bash
+export COSMOS_CONNECTION_STRING="<cosmos-account-PRIMARY-CONNECTION-STRING>"
+```
+++
+#### Create CosmosClient with connection string
+
+Create a new instance of the **CosmosClient** class with the ``COSMOS_CONNECTION_STRING`` environment variable as the only parameter.
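+
+A minimal sketch, assuming the ``COSMOS_CONNECTION_STRING`` variable set above:
+
+```csharp
+using Microsoft.Azure.Cosmos;
+
+// Create a client for the account by using its connection string.
+CosmosClient client = new(
+    connectionString: Environment.GetEnvironmentVariable("COSMOS_CONNECTION_STRING"));
+```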
++
+### Connect using the Microsoft identity platform
+
+To connect to your SQL API account by using the Microsoft identity platform and Azure AD, use a security principal. The exact type of principal depends on where you host your application code. The following table serves as a quick reference guide.
+
+| Where the application runs | Security principal |
+|--|--|
+| Local machine (developing and testing) | User identity or service principal |
+| Azure | Managed identity |
+| Servers or clients outside of Azure | Service principal |
+
+#### Import Azure.Identity
+
+The **Azure.Identity** NuGet package contains core authentication functionality that is shared among all Azure SDK libraries.
+
+Import the [Azure.Identity](https://www.nuget.org/packages/Azure.Identity) NuGet package using the ``dotnet add package`` command.
+
+```dotnetcli
+dotnet add package Azure.Identity
+```
+
+Rebuild the project with the ``dotnet build`` command.
+
+```dotnetcli
+dotnet build
+```
+
+In your code editor, add using directives for the ``Azure.Core`` and ``Azure.Identity`` namespaces.
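+
+For example:
+
+```csharp
+using Azure.Core;
+using Azure.Identity;
+```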
++
+#### Create CosmosClient with default credential implementation
+
+If you're testing on a local machine, or your application will run on Azure services with direct support for managed identities, obtain an OAuth token by creating a [``DefaultAzureCredential``](/dotnet/api/azure.identity.defaultazurecredential) instance.
+
+For this example, we saved the instance in a variable of type [``TokenCredential``](/dotnet/api/azure.core.tokencredential) as that's a more generic type that's reusable across SDKs.
++
+Create a new instance of the **CosmosClient** class with the ``COSMOS_ENDPOINT`` environment variable and the **TokenCredential** object as parameters.
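+
+A sketch of both steps, assuming the ``COSMOS_ENDPOINT`` variable set earlier:
+
+```csharp
+using Azure.Core;
+using Azure.Identity;
+using Microsoft.Azure.Cosmos;
+
+// DefaultAzureCredential tries several credential sources in order
+// (environment, managed identity, Azure CLI, and more) and uses the
+// first one that succeeds.
+TokenCredential credential = new DefaultAzureCredential();
+
+// Create a client that authenticates with Azure AD instead of an account key.
+CosmosClient client = new(
+    accountEndpoint: Environment.GetEnvironmentVariable("COSMOS_ENDPOINT"),
+    tokenCredential: credential);
+```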
++
+#### Create CosmosClient with a custom credential implementation
+
+If you plan to deploy the application out of Azure, you can obtain an OAuth token by using other classes in the [Azure.Identity client library for .NET](/dotnet/api/overview/azure/identity-readme). These other classes also derive from the ``TokenCredential`` class.
+
+For this example, we create a [``ClientSecretCredential``](/dotnet/api/azure.identity.clientsecretcredential) instance by using client and tenant identifiers, along with a client secret.
++
+You can obtain the client ID, tenant ID, and client secret when you register an application in Azure Active Directory (Azure AD). For more information about registering Azure AD applications, see [Register an application with the Microsoft identity platform](/azure/active-directory/develop/quickstart-register-app).
+
+Create a new instance of the **CosmosClient** class with the ``COSMOS_ENDPOINT`` environment variable and the **TokenCredential** object as parameters.
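+
+A sketch, assuming the registration values are exposed through the ``AZURE_TENANT_ID``, ``AZURE_CLIENT_ID``, and ``AZURE_CLIENT_SECRET`` environment variables (these names are illustrative):
+
+```csharp
+using Azure.Core;
+using Azure.Identity;
+using Microsoft.Azure.Cosmos;
+
+// Illustrative variable names; read these values from whatever secret
+// store your application uses.
+TokenCredential credential = new ClientSecretCredential(
+    tenantId: Environment.GetEnvironmentVariable("AZURE_TENANT_ID"),
+    clientId: Environment.GetEnvironmentVariable("AZURE_CLIENT_ID"),
+    clientSecret: Environment.GetEnvironmentVariable("AZURE_CLIENT_SECRET"));
+
+CosmosClient client = new(
+    accountEndpoint: Environment.GetEnvironmentVariable("COSMOS_ENDPOINT"),
+    tokenCredential: credential);
+```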
+++
+## Build your application
+
+As you build your application, your code will primarily interact with four types of resources:
+
+- The SQL API account, which is the unique top-level namespace for your Azure Cosmos DB data.
+
+- Databases, which organize the containers in your account.
+
+- Containers, which contain a set of individual items in your database.
+
+- Items, which represent a JSON document in your container.
+
+The following diagram shows the relationship between these resources.
+
+*Diagram: An Azure Cosmos DB account contains one or more databases; each database contains one or more containers, and each container holds individual items.*
+
+Each type of resource is represented by one or more associated .NET classes. Here's a list of the most common classes:
+
+| Class | Description |
+|||
+| [``CosmosClient``](/dotnet/api/microsoft.azure.cosmos.cosmosclient) | This class provides a client-side logical representation for the Azure Cosmos DB service. The client object is used to configure and execute requests against the service. |
+| [``Database``](/dotnet/api/microsoft.azure.cosmos.database) | This class is a reference to a database that may, or may not, exist in the service yet. The database is validated server-side when you attempt to access it or perform an operation against it. |
+| [``Container``](/dotnet/api/microsoft.azure.cosmos.container) | This class is a reference to a container that also may not exist in the service yet. The container is validated server-side when you attempt to work with it. |
+
+The following guides show you how to use each of these classes to build your application.
+
+| Guide | Description |
+|--||
+| [Create a database](how-to-dotnet-create-database.md) | Create databases |
+| [Create a container](how-to-dotnet-create-container.md) | Create containers |
+
+## See also
+
+- [Package (NuGet)](https://www.nuget.org/packages/Microsoft.Azure.Cosmos)
+- [Samples](samples-dotnet.md)
+- [API reference](/dotnet/api/microsoft.azure.cosmos)
+- [Library source code](https://github.com/Azure/azure-cosmos-dotnet-v3)
+- [Give Feedback](https://github.com/Azure/azure-cosmos-dotnet-v3/issues)
+
+## Next steps
+
+Now that you've connected to a SQL API account, use the next guide to create and manage databases.
+
+> [!div class="nextstepaction"]
+> [Create a database in Azure Cosmos DB SQL API using .NET](how-to-dotnet-create-database.md)
cosmos-db Kafka Connector Sink https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/kafka-connector-sink.md
To delete the created Azure Cosmos DB service and its resource group using Azure
## <a id="sink-configuration-properties"></a>Sink configuration properties
-The following settings are used to configure an Azure Cosmos DB Kafka sink connector. These configuration values determine which Kafka topics data is consumed, which Azure Cosmos DB container's data is written into, and formats to serialize the data. For an example configuration file with the default values, refer to [this config]( https://github.com/microsoft/kafka-connect-cosmosdb/blob/dev/src/docker/resources/sink.example.json).
+The following settings are used to configure an Azure Cosmos DB Kafka sink connector. These configuration values determine which Kafka topics to consume data from, which Azure Cosmos DB containers to write data into, and the formats used to serialize that data. For an example configuration file with the default values, refer to [this config](https://github.com/microsoft/kafka-connect-cosmosdb/blob/dev/src/docker/resources/sink.example.json).
| Name | Type | Description | Required/Optional |
| : | : | : | : |
To be clear, the only JSON structure that is valid for `schemas.enable=true` has
You can learn more about change feed in Azure Cosmos DB with the following docs:
* [Introduction to the change feed](https://azurecosmosdb.github.io/labs/dotnet/labs/08-change_feed_with_azure_functions.html)
-* [Reading from change feed](read-change-feed.md)
+* [Reading from change feed](read-change-feed.md)
cosmos-db Quickstart Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/quickstart-dotnet.md
+
+Title: Quickstart - Azure Cosmos DB SQL API client library for .NET
+description: Learn how to build a .NET app to manage Azure Cosmos DB SQL API account resources in this quickstart.
+ms.devlang: csharp
+Last updated : 06/08/2022
+# Quickstart: Azure Cosmos DB SQL API client library for .NET
+
+> [!div class="op_single_selector"]
+> * [.NET](quickstart-dotnet.md)
+
+Get started with the Azure Cosmos DB client library for .NET to create databases, containers, and items within your account. Follow these steps to install the package and try out example code for basic tasks.
+
+> [!NOTE]
+> The [example code snippets](https://github.com/Azure-Samples/azure-cosmos-db-dotnet-quickstart) are available on GitHub as a .NET project.
+
+[API reference documentation](/dotnet/api/microsoft.azure.cosmos) | [Library source code](https://github.com/Azure/azure-cosmos-dotnet-v3) | [Package (NuGet)](https://www.nuget.org/packages/Microsoft.Azure.Cosmos)
+
+## Prerequisites
+
+* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
+* [.NET 6.0 or later](https://dotnet.microsoft.com/download)
+* [Azure Command-Line Interface (CLI)](/cli/azure/) or [Azure PowerShell](/powershell/azure/)
+
+### Prerequisite check
+
+* In a terminal or command window, run ``dotnet --version`` to check that the .NET SDK is version 6.0 or later.
+* Run ``az --version`` (Azure CLI) or ``Get-Module -ListAvailable Az`` (Azure PowerShell) to check that you have the appropriate Azure command-line tools installed.
+
+## Setting up
+
+This section walks you through creating an Azure Cosmos account and setting up a project that uses the Azure Cosmos DB SQL API client library for .NET to manage resources.
+
+### Create an Azure Cosmos DB account
+
+This quickstart will create a single Azure Cosmos DB account using the SQL API.
+
+#### [Azure CLI](#tab/azure-cli)
+
+1. Create shell variables for *accountName*, *resourceGroupName*, and *location*.
+
+ ```azurecli-interactive
+ # Variable for resource group name
+ resourceGroupName="msdocs-cosmos-dotnet-quickstart-rg"
+ location="westus"
+
+ # Variable for account name with a randomly generated suffix
+ let suffix=$RANDOM*$RANDOM
+ accountName="msdocs-dotnet-$suffix"
+ ```
+
+1. If you haven't already, sign in to the Azure CLI using the [``az login``](/cli/azure/reference-index#az-login) command.
+
+1. Use the [``az group create``](/cli/azure/group#az-group-create) command to create a new resource group in your subscription.
+
+ ```azurecli-interactive
+ az group create \
+ --name $resourceGroupName \
+ --location $location
+ ```
+
+1. Use the [``az cosmosdb create``](/cli/azure/cosmosdb#az-cosmosdb-create) command to create a new Azure Cosmos DB SQL API account with default settings.
+
+ ```azurecli-interactive
+ az cosmosdb create \
+ --resource-group $resourceGroupName \
+ --name $accountName \
+ --locations regionName=$location
+ ```
+
+1. Get the SQL API endpoint *URI* for the account using the [``az cosmosdb show``](/cli/azure/cosmosdb#az-cosmosdb-show) command.
+
+ ```azurecli-interactive
+ az cosmosdb show \
+ --resource-group $resourceGroupName \
+ --name $accountName \
+ --query "documentEndpoint"
+ ```
+
+1. Find the *PRIMARY KEY* from the list of keys for the account with the [``az cosmosdb keys list``](/cli/azure/cosmosdb/keys#az-cosmosdb-keys-list) command.
+
+ ```azurecli-interactive
+ az cosmosdb keys list \
+ --resource-group $resourceGroupName \
+ --name $accountName \
+ --type "keys" \
+ --query "primaryMasterKey"
+ ```
+
+1. Record the *URI* and *PRIMARY KEY* values. You'll use these credentials later.
+
+#### [PowerShell](#tab/azure-powershell)
+
+1. Create shell variables for *ACCOUNT_NAME*, *RESOURCE_GROUP_NAME*, and *LOCATION*.
+
+ ```azurepowershell-interactive
+ # Variable for resource group name
+ $RESOURCE_GROUP_NAME = "msdocs-cosmos-dotnet-quickstart-rg"
+ $LOCATION = "West US"
+
+ # Variable for account name with a randomly generated suffix
+ $SUFFIX = Get-Random
+ $ACCOUNT_NAME = "msdocs-dotnet-$SUFFIX"
+ ```
+
+1. If you haven't already, sign in to Azure PowerShell using the [``Connect-AzAccount``](/powershell/module/az.accounts/connect-azaccount) cmdlet.
+
+1. Use the [``New-AzResourceGroup``](/powershell/module/az.resources/new-azresourcegroup) cmdlet to create a new resource group in your subscription.
+
+ ```azurepowershell-interactive
+ $parameters = @{
+ Name = $RESOURCE_GROUP_NAME
+ Location = $LOCATION
+ }
+ New-AzResourceGroup @parameters
+ ```
+
+1. Use the [``New-AzCosmosDBAccount``](/powershell/module/az.cosmosdb/new-azcosmosdbaccount) cmdlet to create a new Azure Cosmos DB SQL API account with default settings.
+
+ ```azurepowershell-interactive
+ $parameters = @{
+ ResourceGroupName = $RESOURCE_GROUP_NAME
+ Name = $ACCOUNT_NAME
+ Location = $LOCATION
+ }
+ New-AzCosmosDBAccount @parameters
+ ```
+
+1. Get the SQL API endpoint *URI* for the account using the [``Get-AzCosmosDBAccount``](/powershell/module/az.cosmosdb/get-azcosmosdbaccount) cmdlet.
+
+ ```azurepowershell-interactive
+ $parameters = @{
+ ResourceGroupName = $RESOURCE_GROUP_NAME
+ Name = $ACCOUNT_NAME
+ }
+ Get-AzCosmosDBAccount @parameters |
+ Select-Object -Property "DocumentEndpoint"
+ ```
+
+1. Find the *PRIMARY KEY* from the list of keys for the account with the [``Get-AzCosmosDBAccountKey``](/powershell/module/az.cosmosdb/get-azcosmosdbaccountkey) cmdlet.
+
+ ```azurepowershell-interactive
+ $parameters = @{
+ ResourceGroupName = $RESOURCE_GROUP_NAME
+ Name = $ACCOUNT_NAME
+ Type = "Keys"
+ }
+ Get-AzCosmosDBAccountKey @parameters |
+ Select-Object -Property "PrimaryMasterKey"
+ ```
+
+1. Record the *URI* and *PRIMARY KEY* values. You'll use these credentials later.
+
+#### [Portal](#tab/azure-portal)
+
+> [!TIP]
+> For this quickstart, we recommend using the resource group name ``msdocs-cosmos-dotnet-quickstart-rg``.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. From the Azure portal menu or the **Home page**, select **Create a resource**.
+
+1. On the **New** page, search for and select **Azure Cosmos DB**.
+
+1. On the **Select API option** page, select the **Create** option within the **Core (SQL) - Recommended** section. Azure Cosmos DB has five APIs: SQL, MongoDB, Gremlin, Table, and Cassandra. [Learn more about the SQL API](introduction.md).
+
+ :::image type="content" source="media/create-account-portal/cosmos-api-choices.png" lightbox="media/create-account-portal/cosmos-api-choices.png" alt-text="Screenshot of select A P I option page for Azure Cosmos D B.":::
+
+1. On the **Create Azure Cosmos DB Account** page, enter the following information:
+
+ | Setting | Value | Description |
+ | | | |
+ | Subscription | Subscription name | Select the Azure subscription that you wish to use for this Azure Cosmos account. |
+ | Resource Group | Resource group name | Select a resource group, or select **Create new**, then enter a unique name for the new resource group. |
+ | Account Name | A unique name | Enter a name to identify your Azure Cosmos account. The name will be used as part of a fully qualified domain name (FQDN) with a suffix of *documents.azure.com*, so the name must be globally unique. The name can only contain lowercase letters, numbers, and the hyphen (-) character, and it must be between 3 and 44 characters in length. |
+ | Location | The region closest to your users | Select a geographic location to host your Azure Cosmos DB account. Use the location that is closest to your users to give them the fastest access to the data. |
+ | Capacity mode |Provisioned throughput or Serverless|Select **Provisioned throughput** to create an account in [provisioned throughput](../set-throughput.md) mode. Select **Serverless** to create an account in [serverless](../serverless.md) mode. |
+ | Apply Azure Cosmos DB free tier discount | **Apply** or **Do not apply** |With Azure Cosmos DB free tier, you'll get the first 1000 RU/s and 25 GB of storage for free in an account. Learn more about [free tier](https://azure.microsoft.com/pricing/details/cosmos-db/). |
+
+ > [!NOTE]
+ > You can have up to one free tier Azure Cosmos DB account per Azure subscription, and you must opt in when creating the account. If you don't see the option to apply the free tier discount, another account in the subscription has already been enabled with free tier.
+
+ :::image type="content" source="media/create-account-portal/new-cosmos-account-page.png" lightbox="media/create-account-portal/new-cosmos-account-page.png" alt-text="Screenshot of new account page for Azure Cosmos D B SQL A P I.":::
+
+1. Select **Review + create**.
+
+1. Review the settings you provide, and then select **Create**. It takes a few minutes to create the account. Wait for the portal page to display **Your deployment is complete** before moving on.
+
+1. Select **Go to resource** to go to the Azure Cosmos DB account page.
+
+ :::image type="content" source="media/create-account-portal/cosmos-deployment-complete.png" lightbox="media/create-account-portal/cosmos-deployment-complete.png" alt-text="Screenshot of deployment page for Azure Cosmos D B SQL A P I resource.":::
+
+1. From the Azure Cosmos DB SQL API account page, select the **Keys** navigation menu option.
+
+ :::image type="content" source="media/get-credentials-portal/cosmos-keys-option.png" lightbox="media/get-credentials-portal/cosmos-keys-option.png" alt-text="Screenshot of an Azure Cosmos D B SQL A P I account page. The Keys option is highlighted in the navigation menu.":::
+
+1. Record the values from the **URI** and **PRIMARY KEY** fields. You'll use these values in a later step.
+
+ :::image type="content" source="media/get-credentials-portal/cosmos-endpoint-key-credentials.png" lightbox="media/get-credentials-portal/cosmos-endpoint-key-credentials.png" alt-text="Screenshot of Keys page with various credentials for an Azure Cosmos D B SQL A P I account.":::
+++
+### Create a new .NET app
+
+Create a new .NET application in an empty folder using your preferred terminal. Use the [``dotnet new``](/dotnet/core/tools/dotnet-new) command specifying the **console** template.
+
+```dotnetcli
+dotnet new console
+```
+
+### Install the package
+
+Add the [Microsoft.Azure.Cosmos](https://www.nuget.org/packages/Microsoft.Azure.Cosmos) NuGet package to the .NET project. Use the [``dotnet add package``](/dotnet/core/tools/dotnet-add-package) command specifying the name of the NuGet package.
+
+```dotnetcli
+dotnet add package Microsoft.Azure.Cosmos
+```
+
+Build the project with the [``dotnet build``](/dotnet/core/tools/dotnet-build) command.
+
+```dotnetcli
+dotnet build
+```
+
+Make sure that the build was successful with no errors. The expected output from the build should look something like this:
+
+```output
+ Determining projects to restore...
+ All projects are up-to-date for restore.
+ dslkajfjlksd -> C:\Users\sidandrews\Demos\dslkajfjlksd\bin\Debug\net6.0\dslkajfjlksd.dll
+
+Build succeeded.
+ 0 Warning(s)
+ 0 Error(s)
+```
+
+### Configure environment variables
+
+To use the **URI** and **PRIMARY KEY** values within your .NET code, persist them to new environment variables on the local machine running the application. To set the environment variable, use your preferred terminal to run the following commands:
+
+#### [Windows](#tab/windows)
+
+```powershell
+$env:COSMOS_ENDPOINT = "<cosmos-account-URI>"
+$env:COSMOS_KEY = "<cosmos-account-PRIMARY-KEY>"
+```
+
+#### [Linux / macOS](#tab/linux+macos)
+
+```bash
+export COSMOS_ENDPOINT="<cosmos-account-URI>"
+export COSMOS_KEY="<cosmos-account-PRIMARY-KEY>"
+```
+++
+## Object model
+
+Before you start building the application, let's look into the hierarchy of resources in Azure Cosmos DB. Azure Cosmos DB has a specific object model used to create and access resources. Azure Cosmos DB creates resources in a hierarchy that consists of accounts, databases, containers, and items.
+
+*Diagram: An Azure Cosmos DB account contains one or more databases; each database contains one or more containers, and each container holds individual items.*
+
+For more information about the hierarchy of different resources, see [working with databases, containers, and items in Azure Cosmos DB](../account-databases-containers-items.md).
+
+You'll use the following .NET classes to interact with these resources:
+
+- [``CosmosClient``](/dotnet/api/microsoft.azure.cosmos.cosmosclient) - This class provides a client-side logical representation for the Azure Cosmos DB service. The client object is used to configure and execute requests against the service.
+- [``Database``](/dotnet/api/microsoft.azure.cosmos.database) - This class is a reference to a database that may, or may not, exist in the service yet. The database is validated server-side when you attempt to access it or perform an operation against it.
+- [``Container``](/dotnet/api/microsoft.azure.cosmos.container) - This class is a reference to a container that also may not exist in the service yet. The container is validated server-side when you attempt to work with it.
+- [``QueryDefinition``](/dotnet/api/microsoft.azure.cosmos.querydefinition) - This class represents a SQL query and any query parameters.
+- [``FeedIterator<>``](/dotnet/api/microsoft.azure.cosmos.feediterator-1) - This class represents an iterator that can track the current page of results and get a new page of results.
+- [``FeedResponse<>``](/dotnet/api/microsoft.azure.cosmos.feedresponse-1) - This class represents a single page of responses from the iterator. This type can be iterated over using a ``foreach`` loop.
+
+## Code examples
+
+- [Authenticate the client](#authenticate-the-client)
+- [Create a database](#create-a-database)
+- [Create a container](#create-a-container)
+- [Create an item](#create-an-item)
+- [Get an item](#get-an-item)
+- [Query items](#query-items)
+
+The sample code described in this article creates a database named ``adventureworks`` with a container named ``products``. The ``products`` container holds product details such as name, category, quantity, and a sale indicator. Each product also contains a unique identifier.
+
+For this sample code, the container will use the category as a logical partition key.
+
+### Authenticate the client
+
+From the project directory, open the *Program.cs* file. In your editor, add a using directive for ``Microsoft.Azure.Cosmos``.
++
+Define a new instance of the ``CosmosClient`` class using the constructor, and [``Environment.GetEnvironmentVariable``](/dotnet/api/system.environment.getenvironmentvariable) to read the two environment variables you created earlier.
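+
+A minimal sketch of that definition:
+
+```csharp
+using Microsoft.Azure.Cosmos;
+
+// Create a new client by using the credentials stored in the
+// COSMOS_ENDPOINT and COSMOS_KEY environment variables.
+CosmosClient client = new(
+    accountEndpoint: Environment.GetEnvironmentVariable("COSMOS_ENDPOINT"),
+    authKeyOrResourceToken: Environment.GetEnvironmentVariable("COSMOS_KEY"));
+```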
++
+For more information on different ways to create a ``CosmosClient`` instance, see [Get started with Azure Cosmos DB SQL API and .NET](how-to-dotnet-get-started.md#connect-to-azure-cosmos-db-sql-api).
+
+### Create a database
+
+Use the [``CosmosClient.CreateDatabaseIfNotExistsAsync``](/dotnet/api/microsoft.azure.cosmos.cosmosclient.createdatabaseifnotexistsasync) method to create a new database if it doesn't already exist. This method will return a reference to the existing or newly created database.
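+
+For example, a sketch using the ``adventureworks`` database name from this quickstart:
+
+```csharp
+// Creates the database if it doesn't exist and returns a reference to it.
+Database database = await client.CreateDatabaseIfNotExistsAsync("adventureworks");
+Console.WriteLine($"New database: {database.Id}");
+```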
++
+For more information on creating a database, see [Create a database in Azure Cosmos DB SQL API using .NET](how-to-dotnet-create-database.md).
+
+### Create a container
+
+The [``Database.CreateContainerIfNotExistsAsync``](/dotnet/api/microsoft.azure.cosmos.database.createcontainerifnotexistsasync) will create a new container if it doesn't already exist. This method will also return a reference to the container.
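+
+A sketch that uses the ``products`` container name and the category-based partition key described earlier:
+
+```csharp
+// Creates the container if it doesn't exist, partitioned on /category.
+Container container = await database.CreateContainerIfNotExistsAsync(
+    id: "products",
+    partitionKeyPath: "/category");
+Console.WriteLine($"New container: {container.Id}");
+```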
++
+For more information on creating a container, see [Create a container in Azure Cosmos DB SQL API using .NET](how-to-dotnet-create-container.md).
+
+### Create an item
+
+The easiest way to create a new item in a container is to first build a C# [class](/dotnet/csharp/language-reference/keywords/class) or [record](/dotnet/csharp/language-reference/builtin-types/record) type with all of the members you want to serialize into JSON. In this example, the C# record has a unique identifier, a *category* field for the partition key, and extra *name*, *quantity*, and *sale* fields.
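+
+A hypothetical record matching that description (the exact shape in the sample project may differ):
+
+```csharp
+// "id" is lowercase because Azure Cosmos DB requires an "id" property on every item.
+public record Product(
+    string id,
+    string category,
+    string name,
+    int quantity,
+    bool sale);
+```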
++
+Create an item in the container by calling [``Container.UpsertItemAsync``](/dotnet/api/microsoft.azure.cosmos.container.upsertitemasync). In this example, we chose to *upsert* instead of *create* a new item in case you run this sample code more than once.
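+
+A sketch, with illustrative values chosen to match the sample output shown later in this article:
+
+```csharp
+Product newItem = new(
+    id: "68719518391",
+    category: "gear-surf-surfboards", // Partition key value
+    name: "Sample surfboard",         // Illustrative product name
+    quantity: 12,
+    sale: false);
+
+// Upsert creates the item, or replaces it if an item with the same id
+// already exists in the same logical partition.
+ItemResponse<Product> response = await container.UpsertItemAsync(
+    item: newItem,
+    partitionKey: new PartitionKey("gear-surf-surfboards"));
+Console.WriteLine($"Created item: {response.Resource.id} [{response.Resource.category}]");
+```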
++
+### Get an item
+
+In Azure Cosmos DB, you can perform a point read operation by using both the unique identifier (``id``) and partition key fields. In the SDK, call [``Container.ReadItemAsync<>``](/dotnet/api/microsoft.azure.cosmos.container.readitemasync) passing in both values to return a deserialized instance of your C# type.
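+
+A sketch of that point read, reusing the identifier and partition key values from the sketch above:
+
+```csharp
+// Point reads are the fastest and cheapest way to retrieve a single item.
+ItemResponse<Product> readResponse = await container.ReadItemAsync<Product>(
+    id: "68719518391",
+    partitionKey: new PartitionKey("gear-surf-surfboards"));
+Product item = readResponse.Resource;
+```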
++
+### Query items
+
+After you insert an item, you can run a query to get all items that match a specific filter. This example runs the SQL query ``SELECT * FROM products p WHERE p.category = 'gear-surf-surfboards'``. This example uses the **QueryDefinition** type and a parameterized query expression for the partition key filter. Once the query is defined, call [``Container.GetItemQueryIterator<>``](/dotnet/api/microsoft.azure.cosmos.container.getitemqueryiterator) to get a result iterator that will manage the pages of results. Then, use a combination of ``while`` and ``foreach`` loops to retrieve pages of results and iterate over the individual items.
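+
+A sketch of that query loop:
+
+```csharp
+// Parameterized query that filters on the partition key.
+QueryDefinition query = new QueryDefinition(
+    query: "SELECT * FROM products p WHERE p.category = @category")
+    .WithParameter("@category", "gear-surf-surfboards");
+
+// The iterator retrieves matching items one page at a time.
+FeedIterator<Product> feed = container.GetItemQueryIterator<Product>(query);
+while (feed.HasMoreResults)
+{
+    FeedResponse<Product> page = await feed.ReadNextAsync();
+    foreach (Product result in page)
+    {
+        Console.WriteLine($"Found item: {result.name}");
+    }
+}
+```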
++
+## Run the code
+
+This app creates an Azure Cosmos DB SQL API database and container. The example then creates an item and reads the exact same item back. Finally, the example issues a query that should return only that single item. With each step, the example outputs metadata to the console about the steps it has performed.
+
+To run the app, use a terminal to navigate to the application directory and run the application.
+
+```dotnetcli
+dotnet run
+```
+
+The output of the app should be similar to this example:
+
+```output
+New database: adventureworks
+New container: products
+Created item: 68719518391 [gear-surf-surfboards]
+```
+
+## Clean up resources
+
+When you no longer need the Azure Cosmos DB SQL API account, you can delete the corresponding resource group.
+
+#### [Azure CLI](#tab/azure-cli)
+
+Use the [``az group delete``](/cli/azure/group#az-group-delete) command to delete the resource group.
+
+```azurecli-interactive
+az group delete --name $resourceGroupName
+```
+
+#### [PowerShell](#tab/azure-powershell)
+
+Use the [``Remove-AzResourceGroup``](/powershell/module/az.resources/remove-azresourcegroup) cmdlet to delete the resource group.
+
+```azurepowershell-interactive
+$parameters = @{
+ Name = $RESOURCE_GROUP_NAME
+}
+Remove-AzResourceGroup @parameters
+```
+
+#### [Portal](#tab/azure-portal)
+
+1. Navigate to the resource group you previously created in the Azure portal.
+
+ > [!TIP]
+ > In this quickstart, we recommended the name ``msdocs-cosmos-dotnet-quickstart-rg``.
+1. Select **Delete resource group**.
+
+ :::image type="content" source="media/delete-account-portal/delete-resource-group-option.png" lightbox="media/delete-account-portal/delete-resource-group-option.png" alt-text="Screenshot of the Delete resource group option in the navigation bar for a resource group.":::
+
+1. On the **Are you sure you want to delete** dialog, enter the name of the resource group, and then select **Delete**.
+
+ :::image type="content" source="media/delete-account-portal/delete-confirmation.png" lightbox="media/delete-account-portal/delete-confirmation.png" alt-text="Screenshot of the delete confirmation page for a resource group.":::
+++
+## Next steps
+
+In this quickstart, you learned how to create an Azure Cosmos DB SQL API account, create a database, and create a container using the .NET SDK. You can now dive deeper into the SDK to import more data, perform complex queries, and manage your Azure Cosmos DB SQL API resources.
+
+> [!div class="nextstepaction"]
+> [Get started with Azure Cosmos DB SQL API and .NET](how-to-dotnet-get-started.md)
cosmos-db Samples Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/samples-dotnet.md
+
+Title: Examples for Azure Cosmos DB SQL API SDK for .NET
+description: Find .NET SDK examples on GitHub for common tasks using the Azure Cosmos DB SQL API.
+ms.devlang: csharp
+Last updated : 06/08/2022
+# Examples for Azure Cosmos DB SQL API SDK for .NET
+
+> [!div class="op_single_selector"]
+> * [.NET](quickstart-dotnet.md)
+
+The [cosmos-db-sql-api-dotnet-samples](https://github.com/Azure-Samples/cosmos-db-sql-api-dotnet-samples) GitHub repository includes multiple sample projects. These projects illustrate how to perform common operations on Azure Cosmos DB SQL API resources.
+
+## Prerequisites
+
+* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
+* Azure Cosmos DB SQL API account. [Create a SQL API account](how-to-create-account.md).
+* [.NET 6.0 or later](https://dotnet.microsoft.com/download)
+* [Azure Command-Line Interface (CLI)](/cli/azure/) or [Azure PowerShell](/powershell/azure/)
+
+## Samples
+
+The sample projects are all self-contained and are designed to be run individually without any dependencies between projects.
+
+### Client
+
+| Task | API reference |
+| : | : |
+| [Create a client with endpoint and key](https://github.com/Azure-Samples/cosmos-db-sql-api-dotnet-samples/blob/v3/101-client-endpoint-key/Program.cs#L11-L14) |[``CosmosClient(string, string)``](/dotnet/api/microsoft.azure.cosmos.cosmosclient.-ctor#microsoft-azure-cosmos-cosmosclient-ctor(system-string-system-string-microsoft-azure-cosmos-cosmosclientoptions)) |
+| [Create a client with connection string](https://github.com/Azure-Samples/cosmos-db-sql-api-dotnet-samples/blob/v3/102-client-connection-string/Program.cs#L11-L13) |[``CosmosClient(string)``](/dotnet/api/microsoft.azure.cosmos.cosmosclient.-ctor#microsoft-azure-cosmos-cosmosclient-ctor(system-string-microsoft-azure-cosmos-cosmosclientoptions)) |
+| [Create a client with ``DefaultAzureCredential``](https://github.com/Azure-Samples/cosmos-db-sql-api-dotnet-samples/blob/v3/103-client-default-credential/Program.cs#L20-L23) |[``CosmosClient(string, TokenCredential)``](/dotnet/api/microsoft.azure.cosmos.cosmosclient.-ctor#microsoft-azure-cosmos-cosmosclient-ctor(system-string-azure-core-tokencredential-microsoft-azure-cosmos-cosmosclientoptions)) |
+| [Create a client with custom ``TokenCredential``](https://github.com/Azure-Samples/cosmos-db-sql-api-dotnet-samples/blob/v3/104-client-secret-credential/Program.cs#L25-L28) |[``CosmosClient(string, TokenCredential)``](/dotnet/api/microsoft.azure.cosmos.cosmosclient.-ctor#microsoft-azure-cosmos-cosmosclient-ctor(system-string-azure-core-tokencredential-microsoft-azure-cosmos-cosmosclientoptions)) |
+
+### Databases
+
+| Task | API reference |
+| : | : |
+| [Create a database](https://github.com/Azure-Samples/cosmos-db-sql-api-dotnet-samples/blob/v3/200-create-database/Program.cs#L19-L21) |[``CosmosClient.CreateDatabaseIfNotExistsAsync``](/dotnet/api/microsoft.azure.cosmos.cosmosclient.createdatabaseifnotexistsasync) |
+
+### Containers
+
+| Task | API reference |
+| : | : |
+| [Create a container](https://github.com/Azure-Samples/cosmos-db-sql-api-dotnet-samples/blob/v3/225-create-container/Program.cs#L26-L30) |[``Database.CreateContainerIfNotExistsAsync``](/dotnet/api/microsoft.azure.cosmos.database.createcontainerifnotexistsasync) |
+
+### Items
+
+| Task | API reference |
+| : | : |
+| [Create an item](https://github.com/Azure-Samples/cosmos-db-sql-api-dotnet-samples/blob/v3/250-create-item/Program.cs#L35-L46) |[``Container.CreateItemAsync<>``](/dotnet/api/microsoft.azure.cosmos.container.createitemasync) |
+| [Point read an item](https://github.com/Azure-Samples/cosmos-db-sql-api-dotnet-samples/blob/v3/275-read-item/Program.cs#L51-L54) |[``Container.ReadItemAsync<>``](/dotnet/api/microsoft.azure.cosmos.container.readitemasync) |
+| [Query multiple items](https://github.com/Azure-Samples/cosmos-db-sql-api-dotnet-samples/blob/v3/300-query-items/Program.cs#L64-L80) |[``Container.GetItemQueryIterator<>``](/dotnet/api/microsoft.azure.cosmos.container.getitemqueryiterator) |
+
+## Next steps
+
+Dive deeper into the SDK to import more data, perform complex queries, and manage your Azure Cosmos DB SQL API resources.
+
+> [!div class="nextstepaction"]
+> [Get started with Azure Cosmos DB SQL API and .NET >](how-to-dotnet-get-started.md)
cosmos-db Sql Api Dotnet V2sdk Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-dotnet-v2sdk-samples.md
- Title: 'Azure Cosmos DB: .NET examples for the SQL API'
-description: Find C# .NET examples on GitHub for common tasks using the Azure Cosmos DB SQL API, including CRUD operations.
-Previously updated : 05/02/2020
-# Azure Cosmos DB: .NET examples for the SQL API (Legacy)
-
-> [!div class="op_single_selector"]
-> * [.NET V3 SDK Examples](sql-api-dotnet-v3sdk-samples.md)
-> * [Java V4 SDK Examples](sql-api-java-sdk-samples.md)
-> * [Spring Data V3 SDK Examples](sql-api-spring-data-sdk-samples.md)
-> * [Node.js Examples](sql-api-nodejs-samples.md)
-> * [Python Examples](sql-api-python-samples.md)
-> * [.NET V2 SDK Examples (Legacy)](sql-api-dotnet-v2sdk-samples.md)
-> * [Azure Code Sample Gallery](https://azure.microsoft.com/resources/samples/?sort=0&service=cosmos-db)
->
->
-
-The [azure-cosmos-dotnet-v2](https://github.com/Azure/azure-cosmos-dotnet-v2/tree/master/samples/code-samples) GitHub repository includes the latest .NET sample solutions to perform CRUD and other common operations on Azure Cosmos DB resources. This article provides:
-
-* Links to the tasks in each of the example C# project files.
-* Links to the related API reference content.
-
-For .NET SDK Version 3.0 (Preview) code samples, see the latest samples in the [azure-cosmos-dotnet-v3](https://github.com/Azure/azure-cosmos-dotnet-v3) GitHub repository.
-
-## Prerequisites
-
-Latest [!INCLUDE [cosmos-db-visual-studio](../includes/cosmos-db-visual-studio.md)]
-
-The [Microsoft.Azure.DocumentDB NuGet package](https://www.nuget.org/packages/Microsoft.Azure.DocumentDB/)
-
-An Azure subscription or free Cosmos DB trial account
-- [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
-
-- You can [activate Visual Studio subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio): Your Visual Studio subscription gives you credits every month, which you can use for paid Azure services.
-- [!INCLUDE [cosmos-db-emulator-docdb-api](../includes/cosmos-db-emulator-docdb-api.md)]
-> [!NOTE]
-> The samples are self-contained, and set up and clean up after themselves with multiple calls to [CreateDocumentCollectionAsync()](/dotnet/api/microsoft.azure.documents.client.documentclient.createdocumentcollectionasync). Each occurrence bills your subscription for one hour of usage in your collection's performance tier.
->
-
-## Database examples
-The [RunDatabaseDemo](https://github.com/Azure/azure-cosmos-dotnet-v2/tree/master/samples/code-samples/DatabaseManagement/Program.cs#L75-L91) method of the sample *DatabaseManagement* project shows how to do the following tasks. To learn about Azure Cosmos databases before you run the following samples, see [Work with databases, containers, and items](../account-databases-containers-items.md).
-
-| Task | API reference |
-| | |
-| [Create a database](https://github.com/Azure/azure-cosmos-dotnet-v2/tree/master/samples/code-samples/DatabaseManagement/Program.cs#L77) |[DocumentClient.CreateDatabaseIfNotExistsAsync](/dotnet/api/microsoft.azure.documents.client.documentclient.createdatabaseifnotexistsasync) |
-| [Read a database by ID](https://github.com/Azure/azure-cosmos-dotnet-v2/tree/master/samples/code-samples/DatabaseManagement/Program.cs#L79) |[DocumentClient.ReadDatabaseAsync](/dotnet/api/microsoft.azure.documents.client.documentclient.readdatabaseasync) |
-| [Read all the databases](https://github.com/Azure/azure-cosmos-dotnet-v2/tree/master/samples/code-samples/DatabaseManagement/Program.cs#L83) |[DocumentClient.ReadDatabaseFeedAsync](/dotnet/api/microsoft.azure.documents.client.documentclient.readdatabasefeedasync) |
-| [Delete a database](https://github.com/Azure/azure-cosmos-dotnet-v2/tree/master/samples/code-samples/DatabaseManagement/Program.cs#L89) |[DocumentClient.DeleteDatabaseAsync](/dotnet/api/microsoft.azure.documents.client.documentclient.deletedatabaseasync) |
-
-## Collection examples
-The [RunCollectionDemo](https://github.com/Azure/azure-cosmos-dotnet-v2/tree/master/samples/code-samples/CollectionManagement/Program.cs#L86-L104) method of the sample *CollectionManagement* project shows how to do the following tasks. To learn about Azure Cosmos collections before you run the following samples, see [Work with databases, containers, and items](../account-databases-containers-items.md).
-
-| Task | API reference |
-| | |
-| [Create a collection](https://github.com/Azure/azure-cosmos-dotnet-v2/tree/master/samples/code-samples/CollectionManagement/Program.cs#L109) |[DocumentClient.CreateDocumentCollectionIfNotExistsAsync](/dotnet/api/microsoft.azure.documents.client.documentclient.createdocumentcollectionifnotexistsasync)
-| [Change configured performance of a collection](https://github.com/Azure/azure-cosmos-dotnet-v2/tree/master/samples/code-samples/CollectionManagement/Program.cs#L190) |[DocumentClient.ReplaceOfferAsync](/dotnet/api/microsoft.azure.documents.client.documentclient.replaceofferasync) |
-| [Get a collection by ID](https://github.com/Azure/azure-cosmos-dotnet-v2/tree/master/samples/code-samples/CollectionManagement/Program.cs#L202) |[DocumentClient.ReadDocumentCollectionAsync](/dotnet/api/microsoft.azure.documents.client.documentclient.readdocumentcollectionasync) |
-| [Read all the collections in a database](https://github.com/Azure/azure-cosmos-dotnet-v2/tree/master/samples/code-samples/CollectionManagement/Program.cs#L215) |[DocumentClient.ReadDocumentCollectionFeedAsync](/dotnet/api/microsoft.azure.documents.client.documentclient.readdocumentcollectionfeedasync) |
-| [Delete a collection](https://github.com/Azure/azure-cosmos-dotnet-v2/tree/master/samples/code-samples/CollectionManagement/Program.cs#L228) |[DocumentClient.DeleteDocumentCollectionAsync](/dotnet/api/microsoft.azure.documents.client.documentclient.deletedocumentcollectionasync) |
-
-## Document examples
-The [RunDocumentsDemo](https://github.com/Azure/azure-cosmos-dotnet-v2/tree/master/samples/code-samples/DocumentManagement/Program.cs#L109-L118) method of the sample *DocumentManagement* project shows how to do the following tasks. To learn about Azure Cosmos documents before you run the following samples, see [Work with databases, containers, and items](../account-databases-containers-items.md).
-
-| Task | API reference |
-| | |
-| [Create a document](https://github.com/Azure/azure-cosmos-dotnet-v2/tree/master/samples/code-samples/DocumentManagement/Program.cs#L154) |[DocumentClient.CreateDocumentAsync](/dotnet/api/microsoft.azure.documents.client.documentclient.createdocumentasync) |
-| [Read a document by ID](https://github.com/Azure/azure-cosmos-dotnet-v2/tree/master/samples/code-samples/DocumentManagement/Program.cs#L168) |[DocumentClient.ReadDocumentAsync](/dotnet/api/microsoft.azure.documents.client.documentclient.readdocumentasync) |
-| [Read all the documents in a collection](https://github.com/Azure/azure-cosmos-dotnet-v2/tree/master/samples/code-samples/DocumentManagement/Program.cs#L190) |[DocumentClient.ReadDocumentFeedAsync](/dotnet/api/microsoft.azure.documents.client.documentclient.readdocumentfeedasync) |
-| [Query for documents](https://github.com/Azure/azure-cosmos-dotnet-v2/tree/master/samples/code-samples/DocumentManagement/Program.cs#L18-L222) |[DocumentClient.CreateDocumentQuery](/previous-versions/azure/dn850285(v=azure.100)) |
-| [Replace a document](https://github.com/Azure/azure-cosmos-dotnet-v2/tree/master/samples/code-samples/DocumentManagement/Program.cs#L240-L242) |[DocumentClient.ReplaceDocumentAsync](/dotnet/api/microsoft.azure.documents.client.documentclient.replacedocumentasync) |
-| [Upsert a document](https://github.com/Azure/azure-cosmos-dotnet-v2/tree/master/samples/code-samples/DocumentManagement/Program.cs#L254) |[DocumentClient.UpsertDocumentAsync](/dotnet/api/microsoft.azure.documents.client.documentclient.upsertdocumentasync) |
-| [Delete a document](https://github.com/Azure/azure-cosmos-dotnet-v2/tree/master/samples/code-samples/DocumentManagement/Program.cs#L275-L277) |[DocumentClient.DeleteDocumentAsync](/dotnet/api/microsoft.azure.documents.client.documentclient.deletedocumentasync) |
-| [Working with .NET dynamic objects](https://github.com/Azure/azure-cosmos-dotnet-v2/tree/master/samples/code-samples/DocumentManagement/Program.cs#L349-L397) |[DocumentClient.CreateDocumentAsync](/dotnet/api/microsoft.azure.documents.client.documentclient.createdocumentasync)<br>[DocumentClient.ReadDocumentAsync](/dotnet/api/microsoft.azure.documents.client.documentclient.readdocumentasync)<br>[DocumentClient.ReplaceDocumentAsync](/dotnet/api/microsoft.azure.documents.client.documentclient.replacedocumentasync) |
-| [Replace document with conditional ETag check](https://github.com/Azure/azure-cosmos-dotnet-v2/tree/master/samples/code-samples/DocumentManagement/Program.cs#L405-L501) |[DocumentClient.AccessCondition](/dotnet/api/microsoft.azure.documents.client.accesscondition)<br>[Documents.Client.AccessConditionType](/dotnet/api/microsoft.azure.documents.client.accessconditiontype) |
-| [Read document only if document has changed](https://github.com/Azure/azure-cosmos-dotnet-v2/tree/master/samples/code-samples/DocumentManagement/Program.cs#L454-L500) |[DocumentClient.AccessCondition](/dotnet/api/microsoft.azure.documents.client.accesscondition)<br>[Documents.Client.AccessConditionType](/dotnet/api/microsoft.azure.documents.client.accessconditiontype) |
-
-## Indexing examples
-The [RunIndexDemo](https://github.com/Azure/azure-cosmos-dotnet-v2/tree/master/samples/code-samples/IndexManagement/Program.cs#L93-L115) method of the sample *IndexManagement* project shows how to do the following tasks. To learn about indexing in Azure Cosmos DB before you run the following samples, see [index policies](../index-policy.md), [index types](../index-overview.md#index-types), and [index paths](../index-policy.md#include-exclude-paths).
-
-| Task | API reference |
-| | |
-| [Exclude a document from the index](https://github.com/Azure/azure-cosmos-dotnet-v2/tree/master/samples/code-samples/IndexManagement/Program.cs#L123-L162) |[IndexingDirective.Exclude](/dotnet/api/microsoft.azure.documents.indexingdirective) |
-| [Use Lazy indexing](https://github.com/Azure/azure-cosmos-dotnet-v2/tree/master/samples/code-samples/IndexManagement/Program.cs#L174-L192) |[IndexingPolicy.IndexingMode](/dotnet/api/microsoft.azure.documents.indexingpolicy.indexingmode) |
-| [Exclude specified document paths from the index](https://github.com/Azure/azure-cosmos-dotnet-v2/tree/master/samples/code-samples/IndexManagement/Program.cs#L202-L263) |[IndexingPolicy.ExcludedPaths](/dotnet/api/microsoft.azure.documents.indexingpolicy.excludedpaths) |
-| [Force a range scan operation on a hash indexed path](https://github.com/Azure/azure-cosmos-dotnet-v2/blob/2e9a48b6a446b47dd6182606c8608d439b88b683/samples/code-samples/IndexManagement/Program.cs#L305-L340) |[FeedOptions.EnableScanInQuery](/dotnet/api/microsoft.azure.documents.client.feedoptions.enablescaninquery) |
-| [Use range indexes on strings](https://github.com/Azure/azure-cosmos-dotnet-v2/tree/master/samples/code-samples/IndexManagement/Program.cs#L265-L316) |[IndexingPolicy.IncludedPaths](/dotnet/api/microsoft.azure.documents.indexingpolicy.includedpaths)<br>[RangeIndex](/dotnet/api/microsoft.azure.documents.rangeindex) |
-| [Perform an index transform](https://github.com/Azure/azure-cosmos-dotnet-v2/tree/master/samples/code-samples/IndexManagement/Program.cs#L318-L370) |[ReplaceDocumentCollectionAsync](/dotnet/api/microsoft.azure.documents.client.documentclient.replacedocumentcollectionasync) |
-
-## Geospatial examples
-The [RunDemoAsync](https://github.com/Azure/azure-cosmos-dotnet-v2/tree/master/samples/code-samples/Geospatial/Program.cs#L94-L139) method of the sample *Geospatial* project shows how to do the following tasks. To learn about GeoJSON and geospatial data before you run the following samples, see [Use geospatial and GeoJSON location data](./sql-query-geospatial-intro.md).
-
-| Task | API reference |
-| | |
-| [Enable geospatial indexing on a new collection](https://github.com/Azure/azure-cosmos-dotnet-v2/tree/master/samples/code-samples/Geospatial/Program.cs#L48) |[IndexingPolicy](/dotnet/api/microsoft.azure.documents.indexingpolicy) <br> [IndexKind.Spatial](/dotnet/api/microsoft.azure.documents.indexkind) <br>[DataType.Point](/dotnet/api/microsoft.azure.documents.datatype) |
-| [Insert documents with GeoJSON points](https://github.com/Azure/azure-cosmos-dotnet-v2/tree/master/samples/code-samples/Geospatial/Program.cs#L104-L114) |[DocumentClient.CreateDocumentAsync](/dotnet/api/microsoft.azure.documents.client.documentclient.createdocumentasync) </br> [DataType.Point](/dotnet/api/microsoft.azure.documents.datatype) |
-| [Find points within a specified distance](https://github.com/Azure/azure-cosmos-dotnet-v2/tree/master/samples/code-samples/Geospatial/Program.cs#L158-L199) |[ST_DISTANCE](sql-query-st-distance.md) </br> [GeometryOperationExtensions.Distance](/dotnet/api/microsoft.azure.documents.spatial.geometryoperationextensions.distance) |
-| [Find points within a polygon](https://github.com/Azure/azure-cosmos-dotnet-v2/tree/master/samples/code-samples/Geospatial/Program.cs#L204-L227) |[ST_WITHIN](sql-query-st-within.md) </br> [GeometryOperationExtensions.Within](/dotnet/api/microsoft.azure.documents.spatial.geometryoperationextensions.distance) </br>[Polygon](/dotnet/api/microsoft.azure.documents.spatial.polygon) |
-| [Enable geospatial indexing on an existing collection](https://github.com/Azure/azure-cosmos-dotnet-v2/tree/master/samples/code-samples/Geospatial/Program.cs#L385-L391) |[DocumentClient.ReplaceDocumentCollectionAsync](/dotnet/api/microsoft.azure.documents.client.documentclient.replacedocumentcollectionasync)<br>[DocumentCollection.IndexingPolicy](/dotnet/api/microsoft.azure.documents.documentcollection.indexingpolicy) |
-| [Validate point and polygon data](https://github.com/Azure/azure-cosmos-dotnet-v2/tree/master/samples/code-samples/Geospatial/Program.cs#L290-L326) |[ST_ISVALID](sql-query-st-isvalid.md)<br>[ST_ISVALIDDETAILED](sql-query-st-isvaliddetailed.md)<br>[GeometryOperationExtensions.IsValid](/dotnet/api/microsoft.azure.documents.spatial.geometryoperationextensions.isvalid)<br>[GeometryOperationExtensions.IsValidDetailed](/dotnet/api/microsoft.azure.documents.spatial.geometryoperationextensions.isvaliddetailed) |
-
-## Query examples
-The [RunDemoAsync](https://github.com/Azure/azure-cosmos-dotnet-v2/tree/master/samples/code-samples/Queries/Program.cs#L64-L129) method of the sample *Queries* project shows how to do the following tasks using the SQL query grammar, the LINQ provider with query, and Lambda. To learn about the SQL query reference in Azure Cosmos DB before you run the following samples, see [SQL query examples for Azure Cosmos DB](./sql-query-getting-started.md).
-
-| Task | API reference |
-| | |
-| [Query for all documents](https://github.com/Azure/azure-cosmos-dotnet-v2/tree/master/samples/code-samples/Queries/Program.cs#L131-L147) |[DocumentQueryable.CreateDocumentQuery](/previous-versions/azure/dn850285(v=azure.100)) |
-| [Query for equality using ==](https://github.com/Azure/azure-cosmos-dotnet-v2/tree/master/samples/code-samples/Queries/Program.cs#L186-L198) |[DocumentQueryable.CreateDocumentQuery](/previous-versions/azure/dn850285(v=azure.100)) |
-| [Query for inequality using != and NOT](https://github.com/Azure/azure-cosmos-dotnet-v2/tree/master/samples/code-samples/Queries/Program.cs#L288-L332) |[DocumentQueryable.CreateDocumentQuery](/previous-versions/azure/dn850285(v=azure.100)) |
-| [Query using range operators like >, <, >=, <=](https://github.com/Azure/azure-cosmos-dotnet-v2/tree/master/samples/code-samples/Queries/Program.cs#L334-L355) |[DocumentQueryable.CreateDocumentQuery](/previous-versions/azure/dn850285(v=azure.100)) |
-| [Query using range operators against strings](https://github.com/Azure/azure-cosmos-dotnet-v2/tree/master/samples/code-samples/Queries/Program.cs#L355-L370) |[DocumentQueryable.CreateDocumentQuery](/previous-versions/azure/dn850285(v=azure.100)) |
-| [Query with ORDER BY](https://github.com/Azure/azure-cosmos-dotnet-v2/tree/master/samples/code-samples/Queries/Program.cs#L422-L446) |[DocumentQueryable.CreateDocumentQuery](/previous-versions/azure/dn850285(v=azure.100)) |
-| [Query with aggregate functions](https://github.com/Azure/azure-cosmos-dotnet-v2/blob/master/samples/code-samples/Queries/Program.cs#L448-L496) |[DocumentQueryable.CreateDocumentQuery](/previous-versions/azure/dn850285(v=azure.100)) |
-| [Work with subdocuments](https://github.com/Azure/azure-cosmos-dotnet-v2/tree/master/samples/code-samples/Queries/Program.cs#L498-L524) |[DocumentQueryable.CreateDocumentQuery](/previous-versions/azure/dn850285(v=azure.100)) |
-| [Query with intra-document Joins](https://github.com/Azure/azure-cosmos-dotnet-v2/tree/master/samples/code-samples/Queries/Program.cs#L526-L540) |[DocumentQueryable.CreateDocumentQuery](/previous-versions/azure/dn850285(v=azure.100)) |
-| [Query with string, math, and array operators](https://github.com/Azure/azure-cosmos-dotnet-v2/tree/master/samples/code-samples/Queries/Program.cs#L636-L673) |[DocumentQueryable.CreateDocumentQuery](/previous-versions/azure/dn850285(v=azure.100)) |
-| [Query with parameterized SQL using SqlQuerySpec](https://github.com/Azure/azure-cosmos-dotnet-v2/tree/master/samples/code-samples/Queries/Program.cs#L149-L184) |[DocumentQueryable.CreateDocumentQuery](/previous-versions/azure/dn850285(v=azure.100))<br>[SqlQuerySpec](/dotnet/api/microsoft.azure.documents.sqlqueryspec) |
-| [Query with explicit paging](https://github.com/Azure/azure-cosmos-dotnet-v2/tree/master/samples/code-samples/Queries/Program.cs#L675-L734) |[DocumentQueryable.CreateDocumentQuery](/previous-versions/azure/dn850285(v=azure.100)) |
-| [Query partitioned collections in parallel](https://github.com/Azure/azure-cosmos-dotnet-v2/tree/master/samples/code-samples/Queries/Program.cs#L736-L807) |[DocumentQueryable.CreateDocumentQuery](/previous-versions/azure/dn850285(v=azure.100)) |
-| [Query with ORDER BY for partitioned collections](https://github.com/Azure/azure-cosmos-dotnet-v2/tree/master/samples/code-samples/Queries/Program.cs#L809-L882) |[DocumentQueryable.CreateDocumentQuery](/previous-versions/azure/dn850285(v=azure.100)) |
-
-## Change feed examples
-The [RunDemoAsync](https://github.com/Azure/azure-cosmos-dotnet-v2/tree/master/samples/code-samples/ChangeFeed/Program.cs#L54-L97) method of the sample *ChangeFeed* project shows how to do the following tasks. To learn about change feed in Azure Cosmos DB before you run the following samples, see [Read Azure Cosmos DB change feed](read-change-feed.md) and [Change feed processor](change-feed-processor.md).
-
-| Task | API reference |
-| | |
-| [Read change feed](https://github.com/Azure/azure-cosmos-dotnet-v2/tree/master/samples/code-samples/ChangeFeed/Program.cs#L132) |[DocumentClient.CreateDocumentChangeFeedQuery](/dotnet/api/microsoft.azure.documents.client.documentclient.createdocumentchangefeedquery) |
-| [Read partition key ranges](https://github.com/Azure/azure-cosmos-dotnet-v2/tree/master/samples/code-samples/ChangeFeed/Program.cs#L118) |[DocumentClient.ReadPartitionKeyRangeFeedAsync](/dotnet/api/microsoft.azure.documents.client.documentclient.readpartitionkeyrangefeedasync) |
-
-The change feed processor sample, [ChangeFeedMigrationTool](https://github.com/Azure/azure-cosmos-dotnet-v2/tree/master/samples/ChangeFeedMigrationTool), shows how to use the change feed processor library to replicate data to another Cosmos container.
-
-## Server-side programming examples
-The [RunDemoAsync](https://github.com/Azure/azure-cosmos-dotnet-v2/tree/master/samples/code-samples/ServerSideScripts/Program.cs#L58-L91) method of the sample *ServerSideScripts* project shows how to do the following tasks. To learn about server-side programming in Azure Cosmos DB before you run the following samples, see [Stored procedures, triggers, and user-defined functions](stored-procedures-triggers-udfs.md).
-
-| Task | API reference |
-| | |
-| [Create a stored procedure](https://github.com/Azure/azure-documentdb-net/blob/master/samples/code-samples/ServerSideScripts/Program.cs#L110) |[DocumentClient.CreateStoredProcedureAsync](/dotnet/api/microsoft.azure.documents.client.documentclient.createstoredprocedureasync) |
-| [Execute a stored procedure](https://github.com/Azure/azure-documentdb-net/blob/master/samples/code-samples/ServerSideScripts/Program.cs#L125) |[DocumentClient.ExecuteStoredProcedureAsync](/dotnet/api/microsoft.azure.documents.client.documentclient.executestoredprocedureasync) |
-| [Read a document feed for a stored procedure](https://github.com/Azure/azure-documentdb-net/blob/master/samples/code-samples/ServerSideScripts/Program.cs#L198) |[DocumentClient.ReadDocumentFeedAsync](/dotnet/api/microsoft.azure.documents.client.documentclient.readdocumentfeedasync) |
-| [Create a stored procedure with ORDER BY](https://github.com/Azure/azure-documentdb-net/blob/master/samples/code-samples/ServerSideScripts/Program.cs#L223) |[DocumentClient.CreateStoredProcedureAsync](/dotnet/api/microsoft.azure.documents.client.documentclient.createstoredprocedureasync) |
-| [Create a pre-trigger](https://github.com/Azure/azure-documentdb-net/blob/master/samples/code-samples/ServerSideScripts/Program.cs#L273) |[DocumentClient.CreateTriggerAsync](/dotnet/api/microsoft.azure.documents.client.documentclient.createtriggerasync) |
-| [Create a post-trigger](https://github.com/Azure/azure-documentdb-net/blob/master/samples/code-samples/ServerSideScripts/Program.cs#L341) |[DocumentClient.CreateTriggerAsync](/dotnet/api/microsoft.azure.documents.client.documentclient.createtriggerasync) |
-| [Create a user-defined function (UDF)](https://github.com/Azure/azure-documentdb-net/blob/master/samples/code-samples/ServerSideScripts/Program.cs#L421) |[DocumentClient.CreateUserDefinedFunctionAsync](/dotnet/api/microsoft.azure.documents.client.documentclient.createuserdefinedfunctionasync) |
-
-## User management examples
-The [RunDemoAsync](https://github.com/Azure/azure-cosmos-dotnet-v2/tree/master/samples/code-samples/UserManagement/Program.cs#L55-L129) method of the sample *UserManagement* project shows how to do the following tasks:
-
-| Task | API reference |
-| | |
-| [Create a user](https://github.com/Azure/azure-documentdb-net/blob/master/samples/code-samples/UserManagement/Program.cs#L93) |[DocumentClient.CreateUserAsync](/dotnet/api/microsoft.azure.documents.client.documentclient.createuserasync) |
-| [Set permissions on a collection or document](https://github.com/Azure/azure-documentdb-net/blob/master/samples/code-samples/UserManagement/Program.cs#L97) |[DocumentClient.CreatePermissionAsync](/dotnet/api/microsoft.azure.documents.client.documentclient.createpermissionasync) |
-| [Get a list of a user's permissions](https://github.com/Azure/azure-documentdb-net/blob/master/samples/code-samples/UserManagement/Program.cs#L241) |[DocumentClient.ReadUserAsync](/dotnet/api/microsoft.azure.documents.client.documentclient.readuserasync)<br>[DocumentClient.ReadPermissionFeedAsync](/dotnet/api/microsoft.azure.documents.client.documentclient.readpermissionfeedasync) |
-
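As a rough sketch of the user and permission flow (again hedged, with placeholder names rather than the sample's own values), the pattern with the v2 `DocumentClient` looks like this:

```csharp
using System;
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;

// Placeholder endpoint and key; replace with your account values.
DocumentClient client = new DocumentClient(
    new Uri("https://<account>.documents.azure.com:443/"), "<key>");

Uri databaseUri = UriFactory.CreateDatabaseUri("SampleDb");

// Create a user under the database.
await client.CreateUserAsync(databaseUri, new User { Id = "appUser1" });

// Grant the user read-only access to one collection; the resulting permission
// carries a resource token that can be handed to a client application.
Uri userUri = UriFactory.CreateUserUri("SampleDb", "appUser1");
await client.CreatePermissionAsync(userUri, new Permission
{
    Id = "readSampleColl",
    PermissionMode = PermissionMode.Read,
    ResourceLink = UriFactory.CreateDocumentCollectionUri("SampleDb", "SampleColl").ToString()
});

// Enumerate the user's permissions.
FeedResponse<Permission> permissions = await client.ReadPermissionFeedAsync(userUri);
foreach (Permission permission in permissions)
{
    Console.WriteLine($"{permission.Id}: {permission.ResourceLink}");
}
```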
-## Next steps
-
-Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
-* If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md).
-* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md).
cosmos-db Sql Api Dotnet V3sdk Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-dotnet-v3sdk-samples.md
- Title: 'Azure Cosmos DB: .NET (Microsoft.Azure.Cosmos) examples for the SQL API'
-description: Find the C# .NET v3 SDK examples on GitHub for common tasks by using the Azure Cosmos DB SQL API.
-Previously updated : 05/02/2020
-# Azure Cosmos DB .NET v3 SDK (Microsoft.Azure.Cosmos) examples for the SQL API
--
-> [!div class="op_single_selector"]
-> * [.NET v3 SDK Examples](sql-api-dotnet-v3sdk-samples.md)
-> * [Java v4 SDK Examples](sql-api-java-sdk-samples.md)
-> * [Spring Data v3 SDK Examples](sql-api-spring-data-sdk-samples.md)
-> * [Node.js Examples](sql-api-nodejs-samples.md)
-> * [Python Examples](sql-api-python-samples.md)
-> * [.NET v2 SDK Examples (Legacy)](sql-api-dotnet-v2sdk-samples.md)
-> * [Azure Code Sample Gallery](https://azure.microsoft.com/resources/samples/?sort=0&service=cosmos-db)
->
->
-
-The [azure-cosmos-dotnet-v3](https://github.com/Azure/azure-cosmos-dotnet-v3/tree/master/Microsoft.Azure.Cosmos.Samples/Usage) GitHub repository includes the latest .NET sample solutions. You use these solutions to perform CRUD (create, read, update, and delete) and other common operations on Azure Cosmos DB resources.
-
-If you're familiar with the previous version of the .NET SDK, you might be used to the terms collection and document. Because Azure Cosmos DB supports multiple API models, version 3.0 of the .NET SDK uses the generic terms *container* and *item*. A container can be a collection, graph, or table. An item can be a document, edge/vertex, or row, and is the content inside a container. This article provides:
-
-* Links to the tasks in each of the example C# project files.
-* Links to the related API reference content.
-
-## Prerequisites
-- Latest [!INCLUDE [cosmos-db-visual-studio](../includes/cosmos-db-visual-studio.md)]
-- The [Microsoft.Azure.Cosmos NuGet package](https://www.nuget.org/packages/Microsoft.Azure.cosmos/).
-- An Azure subscription or free Azure Cosmos DB trial account.
-
- - [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
-
-- You can [activate Visual Studio subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). Your Visual Studio subscription gives you credits every month, which you can use for paid Azure services.
-- [!INCLUDE [cosmos-db-emulator-docdb-api](../includes/cosmos-db-emulator-docdb-api.md)]
-
-> [!NOTE]
-> The samples are self-contained; they set up and clean up after themselves. Each run bills your subscription for one hour of usage in your container's performance tier.
->
-
-## Database examples
-
-The [RunDatabaseDemo](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/DatabaseManagement/Program.cs#L65-L91) method of the sample *DatabaseManagement* project shows how to do the following tasks. To learn about Azure Cosmos DB databases before you run the following samples, see [Work with databases, containers, and items](../account-databases-containers-items.md).
-
-| Task | API reference |
-| | |
-| [Create a database](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/DatabaseManagement/Program.cs#L68) |[CosmosClient.CreateDatabaseIfNotExistsAsync](/dotnet/api/microsoft.azure.cosmos.cosmosclient.createdatabaseifnotexistsasync) |
-| [Read a database by ID](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/DatabaseManagement/Program.cs#L80) |[Database.ReadAsync](/dotnet/api/microsoft.azure.cosmos.database.readasync) |
-| [Read all the databases for an account](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/DatabaseManagement/Program.cs#L96) |[CosmosClient.GetDatabaseQueryIterator](/dotnet/api/microsoft.azure.cosmos.cosmosclient.getdatabasequeryiterator) |
-| [Delete a database](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/DatabaseManagement/Program.cs#L106) |[Database.DeleteAsync](/dotnet/api/microsoft.azure.cosmos.database.deleteasync) |
-
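For orientation, the database tasks above boil down to a few calls on `CosmosClient` and `Database`. Here's a hedged C# sketch (placeholder endpoint, key, and database name; error handling omitted):

```csharp
using System;
using Microsoft.Azure.Cosmos;

CosmosClient client = new CosmosClient(
    "https://<account>.documents.azure.com:443/", "<key>");

// Create the database only if it doesn't already exist.
Database database = await client.CreateDatabaseIfNotExistsAsync("SampleDb");

// Read the database by ID.
DatabaseResponse readResponse = await database.ReadAsync();

// List all databases in the account.
FeedIterator<DatabaseProperties> iterator = client.GetDatabaseQueryIterator<DatabaseProperties>();
while (iterator.HasMoreResults)
{
    foreach (DatabaseProperties db in await iterator.ReadNextAsync())
    {
        Console.WriteLine(db.Id);
    }
}

// Delete the database.
await database.DeleteAsync();
```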
-## Container examples
-
-The [RunContainerDemo](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/ContainerManagement/Program.cs#L69-L89) method of the sample *ContainerManagement* project shows how to do the following tasks. To learn about Azure Cosmos DB containers before you run the following samples, see [Work with databases, containers, and items](../account-databases-containers-items.md).
-
-| Task | API reference |
-| | |
-| [Create a container](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/ContainerManagement/Program.cs#L97-L107) |[Database.CreateContainerIfNotExistsAsync](/dotnet/api/microsoft.azure.cosmos.database.createcontainerifnotexistsasync) |
-| [Create a container with custom index policy](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/ContainerManagement/Program.cs#L161-L178) |[Database.CreateContainerIfNotExistsAsync](/dotnet/api/microsoft.azure.cosmos.database.createcontainerifnotexistsasync) |
-| [Change configured performance of a container](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/ContainerManagement/Program.cs#L198-L223) |[Container.ReplaceThroughputAsync](/dotnet/api/microsoft.azure.cosmos.container.replacethroughputasync) |
-| [Get a container by ID](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/ContainerManagement/Program.cs#L225-L236) |[Container.ReadContainerAsync](/dotnet/api/microsoft.azure.cosmos.container.readcontainerasync) |
-| [Read all the containers in a database](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/ContainerManagement/Program.cs#L242-L258) |[Database.GetContainerQueryIterator](/dotnet/api/microsoft.azure.cosmos.database.getcontainerqueryiterator) |
-| [Delete a container](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/ContainerManagement/Program.cs#L264-L270) |[Container.DeleteContainerAsync](/dotnet/api/microsoft.azure.cosmos.container.deletecontainerasync) |
-
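The container tasks follow the same pattern on the `Database` and `Container` types. A minimal sketch, again with placeholder names:

```csharp
using System;
using Microsoft.Azure.Cosmos;

CosmosClient client = new CosmosClient(
    "https://<account>.documents.azure.com:443/", "<key>");
Database database = client.GetDatabase("SampleDb");

// Create a container with a partition key path and 400 RU/s of throughput.
Container container = await database.CreateContainerIfNotExistsAsync(
    id: "SampleContainer", partitionKeyPath: "/partitionKey", throughput: 400);

// Change the provisioned throughput.
await container.ReplaceThroughputAsync(1000);

// Read the container's properties by ID.
ContainerResponse properties = await container.ReadContainerAsync();

// List all containers in the database.
FeedIterator<ContainerProperties> iterator = database.GetContainerQueryIterator<ContainerProperties>();
while (iterator.HasMoreResults)
{
    foreach (ContainerProperties c in await iterator.ReadNextAsync())
    {
        Console.WriteLine(c.Id);
    }
}

// Delete the container.
await container.DeleteContainerAsync();
```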
-## Item examples
-
-The [RunItemsDemo](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/ItemManagement/Program.cs#L119-L130) method of the sample *ItemManagement* project shows how to do the following tasks. To learn about Azure Cosmos DB items before you run the following samples, see [Work with databases, containers, and items](../account-databases-containers-items.md).
-
-| Task | API reference |
-| | |
-| [Create an item](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/ItemManagement/Program.cs#L172) |[Container.CreateItemAsync](/dotnet/api/microsoft.azure.cosmos.container.createitemasync) |
-| [Read an item by ID](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/ItemManagement/Program.cs#L227) |[container.ReadItemAsync](/dotnet/api/microsoft.azure.cosmos.container.readitemasync) |
-| [Query for items](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/ItemManagement/Program.cs#L344) |[container.GetItemQueryIterator](/dotnet/api/microsoft.azure.cosmos.container.getitemqueryiterator) |
-| [Replace an item](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/ItemManagement/Program.cs#L477) |[container.ReplaceItemAsync](/dotnet/api/microsoft.azure.cosmos.container.replaceitemasync) |
-| [Upsert an item](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/ItemManagement/Program.cs#L574) |[container.UpsertItemAsync](/dotnet/api/microsoft.azure.cosmos.container.upsertitemasync) |
-| [Delete an item](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/ItemManagement/Program.cs#L627) |[container.DeleteItemAsync](/dotnet/api/microsoft.azure.cosmos.container.deleteitemasync) |
-| [Replace an item with conditional ETag check](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/ItemManagement/Program.cs#L798) |[RequestOptions.IfMatchEtag](/dotnet/api/microsoft.azure.cosmos.requestoptions.ifmatchetag) |
-| [Partially update (patch) an item](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/ItemManagement/Program.cs#L520) |[container.PatchItemAsync](/dotnet/api/microsoft.azure.cosmos.container.patchitemasync) |
--
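Each item operation addresses the item by ID plus partition key. A hedged sketch using a hypothetical `SalesOrder` type (all names and values are placeholders):

```csharp
using Microsoft.Azure.Cosmos;

CosmosClient client = new CosmosClient(
    "https://<account>.documents.azure.com:443/", "<key>");
// Assumes the container is partitioned on /AccountNumber.
Container container = client.GetContainer("SampleDb", "SampleContainer");

SalesOrder order = new SalesOrder { id = "order-1", AccountNumber = "Account1", Total = 19.99 };

await container.CreateItemAsync(order, new PartitionKey(order.AccountNumber));

ItemResponse<SalesOrder> readResponse =
    await container.ReadItemAsync<SalesOrder>("order-1", new PartitionKey("Account1"));

SalesOrder updated = readResponse.Resource;
updated.Total = 29.99;

// Optimistic concurrency: the replace succeeds only if the ETag is unchanged.
await container.ReplaceItemAsync(updated, updated.id, new PartitionKey(updated.AccountNumber),
    new ItemRequestOptions { IfMatchEtag = readResponse.ETag });

await container.UpsertItemAsync(updated, new PartitionKey(updated.AccountNumber));
await container.DeleteItemAsync<SalesOrder>("order-1", new PartitionKey("Account1"));

// A hypothetical item type; the property is lowercase "id" because Cosmos DB requires it.
public class SalesOrder
{
    public string id { get; set; }
    public string AccountNumber { get; set; }
    public double Total { get; set; }
}
```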
-## Indexing examples
-
-The [RunIndexDemo](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/IndexManagement/Program.cs#L108-L122) method of the sample *IndexManagement* project shows how to do the following tasks. To learn about indexing in Azure Cosmos DB before you run the following samples, see [index policies](../index-policy.md), [index types](../index-overview.md#index-types), and [index paths](../index-policy.md#include-exclude-paths).
-
-| Task | API reference |
-| | |
-| [Exclude an item from the index](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/IndexManagement/Program.cs#L130-L186) |[IndexingDirective.Exclude](/dotnet/api/microsoft.azure.cosmos.indexingdirective) |
-| [Use Lazy indexing](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/IndexManagement/Program.cs#L198-L220) |[IndexingPolicy.IndexingMode](/dotnet/api/microsoft.azure.cosmos.indexingpolicy.indexingmode) |
-| [Exclude specified item paths from the index](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/IndexManagement/Program.cs#L230-L297) |[IndexingPolicy.ExcludedPaths](/dotnet/api/microsoft.azure.cosmos.indexingpolicy.excludedpaths) |
-
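As a sketch of how an indexing policy is expressed in code (the container name and paths are placeholders, not the sample's own values):

```csharp
using Microsoft.Azure.Cosmos;

CosmosClient client = new CosmosClient(
    "https://<account>.documents.azure.com:443/", "<key>");
Database database = client.GetDatabase("SampleDb");

ContainerProperties containerProperties = new ContainerProperties("IndexedContainer", "/partitionKey");

// Index everything by default...
containerProperties.IndexingPolicy.IndexingMode = IndexingMode.Consistent;
containerProperties.IndexingPolicy.IncludedPaths.Add(new IncludedPath { Path = "/*" });

// ...but exclude a subtree that is never queried, to save request units on writes.
containerProperties.IndexingPolicy.ExcludedPaths.Add(new ExcludedPath { Path = "/neverQueried/*" });

Container container = await database.CreateContainerIfNotExistsAsync(containerProperties);

// An indexing directive can also opt a single write out of indexing.
await container.CreateItemAsync(
    new { id = "doc-1", partitionKey = "pk1" },
    new PartitionKey("pk1"),
    new ItemRequestOptions { IndexingDirective = IndexingDirective.Exclude });
```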
-## Query examples
-
-The [RunDemoAsync](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/Queries/Program.cs#L76-L96) method of the sample *Queries* project shows how to do the following tasks by using the SQL query grammar, the LINQ provider, and lambda expressions. To learn about the SQL query reference in Azure Cosmos DB before you run the following samples, see [SQL query examples for Azure Cosmos DB](./sql-query-getting-started.md).
-
-| Task | API reference |
-| | |
-| [Query items from single partition](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/Queries/Program.cs#L154-L186) |[container.GetItemQueryIterator](/dotnet/api/microsoft.azure.cosmos.container.getitemqueryiterator) |
-| [Query items from multiple partitions](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/Queries/Program.cs#L215-L275) |[container.GetItemQueryIterator](/dotnet/api/microsoft.azure.cosmos.container.getitemqueryiterator) |
-| [Query using a SQL statement](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/Queries/Program.cs#L189-L212) |[container.GetItemQueryIterator](/dotnet/api/microsoft.azure.cosmos.container.getitemqueryiterator) |
-
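A condensed sketch of a parameterized SQL query, reusing the hypothetical `SalesOrder` type from the item sketch above (endpoint, key, and values are placeholders):

```csharp
using System;
using Microsoft.Azure.Cosmos;

CosmosClient client = new CosmosClient(
    "https://<account>.documents.azure.com:443/", "<key>");
Container container = client.GetContainer("SampleDb", "SampleContainer");

QueryDefinition query = new QueryDefinition(
        "SELECT * FROM c WHERE c.AccountNumber = @account")
    .WithParameter("@account", "Account1");

// Supplying a partition key scopes the query to one partition;
// omit the request options to fan out across all partitions.
FeedIterator<SalesOrder> iterator = container.GetItemQueryIterator<SalesOrder>(
    query,
    requestOptions: new QueryRequestOptions { PartitionKey = new PartitionKey("Account1") });

while (iterator.HasMoreResults)
{
    foreach (SalesOrder result in await iterator.ReadNextAsync())
    {
        Console.WriteLine($"{result.id}: {result.Total}");
    }
}
```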
-## Change feed examples
-
-The [RunBasicChangeFeed](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/ChangeFeed/Program.cs#L91-L119) method of the sample *ChangeFeed* project shows how to do the following tasks. To learn about change feed in Azure Cosmos DB before you run the following samples, see [Read Azure Cosmos DB change feed](read-change-feed.md) and [Change feed processor](change-feed-processor.md).
-
-| Task | API reference |
-| | |
-| [Basic change feed functionality](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/ChangeFeed/Program.cs#L91-L119) |[Container.GetChangeFeedProcessorBuilder](/dotnet/api/microsoft.azure.cosmos.container.getchangefeedprocessorbuilder) |
-| [Read change feed from a specific time](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/ChangeFeed/Program.cs#L127-L162) |[Container.GetChangeFeedProcessorBuilder](/dotnet/api/microsoft.azure.cosmos.container.getchangefeedprocessorbuilder) |
-| [Read change feed from the beginning](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/ChangeFeed/Program.cs#L170-L198) |[ChangeFeedProcessorBuilder.WithStartTime(DateTime)](/dotnet/api/microsoft.azure.cosmos.changefeedprocessorbuilder.withstarttime) |
-| [Migrate from change feed processor to change feed in v3 SDK](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/ChangeFeed/Program.cs#L256-L333) |[Container.GetChangeFeedProcessorBuilder](/dotnet/api/microsoft.azure.cosmos.container.getchangefeedprocessorbuilder) |
-
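The change feed samples center on the change feed processor. A hedged sketch, assuming a monitored container, a separate lease container, and the hypothetical `SalesOrder` type from the item sketch:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

CosmosClient client = new CosmosClient(
    "https://<account>.documents.azure.com:443/", "<key>");
Container monitoredContainer = client.GetContainer("SampleDb", "Orders");
Container leaseContainer = client.GetContainer("SampleDb", "leases");

ChangeFeedProcessor processor = monitoredContainer
    .GetChangeFeedProcessorBuilder<SalesOrder>(
        processorName: "ordersProcessor",
        onChangesDelegate: (changes, cancellationToken) =>
        {
            foreach (SalesOrder changed in changes)
            {
                Console.WriteLine($"Detected change for item {changed.id}");
            }
            return Task.CompletedTask;
        })
    .WithInstanceName("consoleHost")    // unique name per compute instance
    .WithLeaseContainer(leaseContainer) // stores checkpoints and enables load balancing
    .Build();                           // add .WithStartTime(...) to read from a specific time

await processor.StartAsync();
// ... observe changes for as long as the host runs ...
await processor.StopAsync();
```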
-## Server-side programming examples
-
-The [RunDemoAsync](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/ServerSideScripts/Program.cs#L72-L102) method of the sample *ServerSideScripts* project shows how to do the following tasks. To learn about server-side programming in Azure Cosmos DB before you run the following samples, see [Stored procedures, triggers, and user-defined functions](stored-procedures-triggers-udfs.md).
-
-| Task | API reference |
-| | |
-| [Create a stored procedure](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/ServerSideScripts/Program.cs#L116) |[Scripts.CreateStoredProcedureAsync](/dotnet/api/microsoft.azure.cosmos.scripts.scripts.createstoredprocedureasync) |
-| [Execute a stored procedure](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/ServerSideScripts/Program.cs#L135) |[Scripts.ExecuteStoredProcedureAsync](/dotnet/api/microsoft.azure.cosmos.scripts.scripts.executestoredprocedureasync) |
-| [Delete a stored procedure](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/ServerSideScripts/Program.cs#L351) |[Scripts.DeleteStoredProcedureAsync](/dotnet/api/microsoft.azure.cosmos.scripts.scripts.deletestoredprocedureasync) |
-
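In the v3 SDK, server-side artifacts hang off `Container.Scripts`. A minimal sketch with a trivial placeholder procedure body:

```csharp
using System;
using Microsoft.Azure.Cosmos;
using Microsoft.Azure.Cosmos.Scripts;

CosmosClient client = new CosmosClient(
    "https://<account>.documents.azure.com:443/", "<key>");
Scripts scripts = client.GetContainer("SampleDb", "SampleContainer").Scripts;

await scripts.CreateStoredProcedureAsync(new StoredProcedureProperties(
    id: "helloWorld",
    body: "function () { getContext().getResponse().setBody('Hello, world'); }"));

// Stored procedures execute within a single partition key value.
StoredProcedureExecuteResponse<string> result =
    await scripts.ExecuteStoredProcedureAsync<string>(
        "helloWorld", new PartitionKey("Account1"), parameters: null);
Console.WriteLine(result.Resource);

await scripts.DeleteStoredProcedureAsync("helloWorld");
```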
-## Custom serialization
-
-The [SystemTextJson](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/SystemTextJson/Program.cs) sample project shows how to use a custom serializer when you're initializing a new `CosmosClient` object. The sample also includes [a custom `CosmosSerializer` class](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/SystemTextJson/CosmosSystemTextJsonSerializer.cs), which uses `System.Text.Json` for serialization and deserialization.
-
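For a sense of what plugging in a custom serializer involves, here's a bare-bones sketch of a `System.Text.Json`-based `CosmosSerializer` (the sample's own class handles more edge cases; the endpoint and key are placeholders):

```csharp
using System.IO;
using System.Text.Json;
using Microsoft.Azure.Cosmos;

// Plug the custom serializer in through CosmosClientOptions.
CosmosClient client = new CosmosClient(
    "https://<account>.documents.azure.com:443/", "<key>",
    new CosmosClientOptions { Serializer = new SystemTextJsonCosmosSerializer() });

public class SystemTextJsonCosmosSerializer : CosmosSerializer
{
    private static readonly JsonSerializerOptions Options = new JsonSerializerOptions();

    public override T FromStream<T>(Stream stream)
    {
        // The SDK hands ownership of the stream to the serializer, so dispose it here.
        using (stream)
        using (MemoryStream memory = new MemoryStream())
        {
            stream.CopyTo(memory);
            return JsonSerializer.Deserialize<T>(memory.ToArray(), Options);
        }
    }

    public override Stream ToStream<T>(T input)
    {
        MemoryStream stream = new MemoryStream();
        using (Utf8JsonWriter writer = new Utf8JsonWriter(stream))
        {
            JsonSerializer.Serialize(writer, input, Options);
        }
        stream.Position = 0;
        return stream;
    }
}
```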
-## Next steps
-
-Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
-
-* If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units by using vCores or vCPUs](../convert-vcore-to-request-unit.md).
-
-* If you know typical request rates for your current database workload, read about [estimating request units by using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md).
cost-management-billing Analyze Unexpected Charges https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/analyze-unexpected-charges.md
The anomaly detection model is a univariate time-series, unsupervised prediction
Anomaly detection is available to every subscription monitored using the cost analysis preview. To enable anomaly detection for your subscriptions, open the cost analysis preview and select your subscription from the scope selector at the top of the page. You'll see a notification informing you that your subscription is onboarded and you'll start to see your anomaly detection status within 24 hours.
+## Create an anomaly alert
+
+You can create an anomaly alert to automatically get notified when an anomaly is detected. All email recipients get notified when a subscription cost anomaly is detected.
+
+An anomaly alert email includes a summary of changes in resource group count and cost. It also includes the top resource group changes for the day compared to the previous 60 days. And, it has a direct link to the Azure portal so that you can review the cost and investigate further.
+
+1. Start on a subscription scope.
+1. In the left menu, select **Cost alerts**.
+1. On the Cost alerts page, select **+ Add** > **Add anomaly alert**.
+1. On the Subscribe to emails page, enter required information and then select **Save**.
+ :::image type="content" source="./media/analyze-unexpected-charges/subscribe-emails.png" alt-text="Screenshot showing the Subscribe to emails page where you enter notification information for an alert." lightbox="./media/analyze-unexpected-charges/subscribe-emails.png" :::
+
+Here's an example email generated for an anomaly alert.
++
## Manually find unexpected cost changes

Let's look at a more detailed example of finding a change in cost. When you navigate to Cost analysis and then select a subscription scope, you'll start with the **Accumulated costs** view. The following screenshot shows an example of what you might see.
If you have an existing policy of [tagging resources](../costs/cost-mgt-best-pra
If you've used the preceding strategies and you still don't understand why you received a charge or if you need other help with billing issues, [create a support request](https://go.microsoft.com/fwlink/?linkid=2083458).
-## Create an anomaly alert
-
-You can create an anomaly alert to automatically get notified when an anomaly is detected. All email recipients get notified when a subscription cost anomaly is detected.
-
-An anomaly alert email includes a summary of changes in resource group count and cost. It also includes the top resource group changes for the day compared to the previous 60 days. And, it has a direct link to the Azure portal so that you can review the cost and investigate further.
-
-1. Start on a subscription scope.
-1. In the left menu, select **Cost alerts**.
-1. On the Cost alerts page, select **+ Add** > **Add anomaly alert**.
-1. On the Subscribe to emails page, enter required information and then select **Save**.
- :::image type="content" source="./media/analyze-unexpected-charges/subscribe-emails.png" alt-text="Screenshot showing the Subscribe to emails page where you enter notification information for an alert." lightbox="./media/analyze-unexpected-charges/subscribe-emails.png" :::
-
-Here's an example email generated for an anomaly alert.
--
## Next steps

- Learn how to [Optimize your cloud investment with Cost Management](../costs/cost-mgt-best-practices.md).
data-factory Connector Dynamics Crm Office 365 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-dynamics-crm-office-365.md
Previously updated : 04/24/2022 Last updated : 06/01/2022 # Copy and transform data in Dynamics 365 (Microsoft Dataverse) or Dynamics CRM using Azure Data Factory or Azure Synapse Analytics
To use this connector with Azure AD service-principal authentication, you must s
[!INCLUDE [data-factory-v2-connector-get-started](includes/data-factory-v2-connector-get-started.md)]
-## Create a linked service to Dynamics 365 using UI
+## Create a linked service to Dynamics 365 (Microsoft Dataverse) or Dynamics CRM using UI
Use the following steps to create a linked service to Dynamics 365 in the Azure portal UI.
Use the following steps to create a linked service to Dynamics 365 in the Azure
:::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse UI.":::
-2. Search for Dynamics and select the Dynamics 365 connector.
+2. Search for Dynamics or Dataverse and select the Dynamics 365 (Microsoft Dataverse) or Dynamics CRM connector.
:::image type="content" source="media/connector-dynamics-crm-office-365/dynamics-crm-office-365-connector.png" alt-text="Screenshot of the Dynamics 365 connector.":::
+ :::image type="content" source="media/connector-dynamics-crm-office-365/dataverse-connector.png" alt-text="Screenshot of the Dataverse connector.":::
+
1. Configure the service details, test the connection, and create the new linked service.

   :::image type="content" source="media/connector-dynamics-crm-office-365/configure-dynamics-crm-office-365-linked-service.png" alt-text="Screenshot of linked service configuration for Dynamics 365.":::
data-factory Data Flow Assert https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-assert.md
Previously updated : 12/22/2021 Last updated : 06/09/2022 # Assert transformation in mapping data flow
Enter an expression for evaluation for each of your assertions. You can have mul
By default, the assert transformation will include NULLs in row assertion evaluation. You can choose to ignore NULLs with this property.
+## Direct assert row failures
+
+When an assertion fails, you can optionally direct those error rows to a file in Azure by using the "Errors" tab on the sink transformation.
+
## Examples

```
data-factory Data Flow Join https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-join.md
Previously updated : 09/09/2021 Last updated : 06/09/2022 # Join transformation in mapping data flow
If you would like to explicitly produce a full cartesian product, use the Derive
> [!NOTE]
> Make sure to include at least one column from each side of your left and right relationship in a custom cross join. Executing cross joins with static values instead of columns from each side results in full scans of the entire dataset, causing your data flow to perform poorly.
+## Fuzzy join
+
+You can choose to join based on fuzzy join logic instead of exact column value matching by turning on the "Use fuzzy matching" checkbox option.
+
+* Combine text parts: Use this option to find matches by removing spaces between words. For example, with this option enabled, Data Factory is matched with DataFactory.
+* Similarity score column: You can optionally store the matching score for each row by entering a new column name here.
+* Similarity threshold: Choose a value between 60 and 100 as a percentage match between values in the columns you've selected.
++
## Configuration

1. Choose which data stream you're joining with in the **Right stream** dropdown.
1. Select your **Join type**.
-1. Choose which key columns you want to match on for you join condition. By default, data flow looks for equality between one column in each stream. To compare via a computed value, hover over the column dropdown and select **Computed column**.
+1. Choose which key columns you want to match on for your join condition. By default, data flow looks for equality between one column in each stream. To compare via a computed value, hover over the column dropdown and select **Computed column**.
### Non-equi joins
data-factory Data Flow Sink https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-sink.md
By default, data is written to multiple sinks in a nondeterministic order. The e
You can group sinks together by applying the same order number for a series of sinks. The service will treat those sinks as groups that can execute in parallel. Options for parallel execution will surface in the pipeline data flow activity.
-## Error row handling
+## Errors
+
+On the sink **Errors** tab, you can configure error row handling to capture and redirect output for database driver errors and failed assertions.
When writing to databases, certain rows of data may fail due to constraints set by the destination. By default, a data flow run will fail on the first error it gets. In certain connectors, you can choose **Continue on error**, which allows your data flow to complete even if individual rows have errors. Currently, this capability is only available in Azure SQL Database and Azure Synapse. For more information, see [error row handling in Azure SQL DB](connector-azure-sql-database.md#error-row-handling).
Below is a video tutorial on how to use database error row handling automaticall
> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE4IWne]
+For assert failure rows, you can use the Assert transformation upstream in your data flow and then redirect failed assertions to an output file here in the sink errors tab.
++
## Data preview in sink

When fetching a data preview in debug mode, no data will be written to your sink. A snapshot of what the data looks like will be returned, but nothing will be written to your destination. To test writing data into your sink, run a pipeline debug from the pipeline canvas.
data-factory Monitor Visually https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/monitor-visually.md
Previously updated : 07/30/2021 Last updated : 06/09/2022 # Visually monitor Azure Data Factory
By default, all data factory runs are displayed in the browser's local time zone
The default monitoring view is a list of triggered pipeline runs in the selected time period. You can change the time range and filter by status, pipeline name, or annotation. Hover over the specific pipeline run to get run-specific actions such as rerun and the consumption report.

The pipeline run grid contains the following columns:
You need to manually select the **Refresh** button to refresh the list of pipeline and activity runs. Autorefresh is currently not supported. To view the results of a debug run, select the **Debug** tab.

## Monitor activity runs

To get a detailed view of the individual activity runs of a specific pipeline run, click on the pipeline name. The list view shows activity runs that correspond to each pipeline run. Hover over the specific activity run to get run-specific information such as the JSON input, JSON output, and detailed activity-specific monitoring experiences.

| **Column name** | **Description** |
| --- | --- |
If an activity failed, you can see the detailed error message by clicking on the icon in the error column.

### Promote user properties to monitor
If an activity failed, you can see the detailed error message by clicking on the icon in the error column. ### Promote user properties to monitor
Promote any pipeline activity property as a user property so that it becomes an
> [!NOTE] > You can only promote up to five pipeline activity properties as user properties. After you create the user properties, you can monitor them in the monitoring list views. If the source for the copy activity is a table name, you can monitor the source table name as a column in the list view for activity runs. ## Rerun pipelines and activities
After you create the user properties, you can monitor them in the monitoring lis
To rerun a pipeline that previously ran from the start, hover over the specific pipeline run and select **Rerun**. If you select multiple pipelines, you can use the **Rerun** button to run them all. If you wish to rerun starting at a specific point, you can do so from the activity runs view. Select the activity you wish to start from and select **Rerun from activity**.
+
+You can also rerun a pipeline and change the parameters. Select the **New parameters** button to change the parameters.
++
+> [!NOTE]
+> Rerunning a pipeline with new parameters is considered a new pipeline run, so it won't show under the rerun groupings for a pipeline run.
### Rerun from failed activity

If an activity fails, times out, or is canceled, you can rerun the pipeline from that failed activity by selecting **Rerun from failed activity**.

### View rerun history

You can view the rerun history for all the pipeline runs in the list view. You can also view rerun history for a particular pipeline run.

## Monitor consumption
You can see the resources consumed by a pipeline run by clicking the consumption
Clicking the icon opens a consumption report of resources used by that pipeline run. You can plug these values into the [Azure pricing calculator](https://azure.microsoft.com/pricing/details/data-factory/) to estimate the cost of the pipeline run. For more information on Azure Data Factory pricing, see [Understanding pricing](pricing-concepts.md).
You can plug these values into the [Azure pricing calculator](https://azure.micr
A Gantt chart is a view that allows you to see the run history over a time range. By switching to a Gantt view, you'll see all pipeline runs grouped by name displayed as bars relative to how long the run took. You can also group by annotations/tags that you've created on your pipeline. The Gantt view is also available at the activity run level. The length of the bar indicates the duration of the pipeline. You can also select the bar to see more details.

## Alerts

You can raise alerts on supported metrics in Data Factory. Select **Monitor** > **Alerts & metrics** on the Data Factory monitoring page to get started. For a seven-minute introduction and demonstration of this feature, watch the following video:
For a seven-minute introduction and demonstration of this feature, watch the fol
1. Select **New alert rule** to create a new alert.
- :::image type="content" source="media/monitor-visually/new-alerts.png" alt-text="New Alert Rule button":::
+ :::image type="content" source="media/monitor-visually/new-alerts.png" alt-text="Screenshot of New Alert Rule button.":::
1. Specify the rule name and select the alert severity.
- :::image type="content" source="media/monitor-visually/name-and-severity.png" alt-text="Boxes for rule name and severity":::
+ :::image type="content" source="media/monitor-visually/name-and-severity.png" alt-text="Screenshot of boxes for rule name and severity.":::
1. Select the alert criteria.
- :::image type="content" source="media/monitor-visually/add-criteria-1.png" alt-text="Box for target criteria":::
+ :::image type="content" source="media/monitor-visually/add-criteria-1.png" alt-text="Screenshot of box for target criteria.":::
:::image type="content" source="media/monitor-visually/add-criteria-2.png" alt-text="Screenshot that shows where you select one metric to set up the alert condition.":::
- :::image type="content" source="media/monitor-visually/add-criteria-3.png" alt-text="List of criteria":::
+ :::image type="content" source="media/monitor-visually/add-criteria-3.png" alt-text="Screenshot of list of criteria.":::
You can create alerts on various metrics, including those for ADF entity count/size, activity/pipeline/trigger runs, Integration Runtime (IR) CPU utilization/memory/node count/queue, as well as for SSIS package executions and SSIS IR start/stop operations.

1. Configure the alert logic. You can create an alert for the selected metric for all pipelines and corresponding activities. You can also select a particular activity type, activity name, pipeline name, or failure type.
- :::image type="content" source="media/monitor-visually/alert-logic.png" alt-text="Options for configuring alert logic":::
+ :::image type="content" source="media/monitor-visually/alert-logic.png" alt-text="Screenshot of options for configuring alert logic.":::
1. Configure email, SMS, push, and voice notifications for the alert. Create an action group, or choose an existing one, for the alert notifications.
- :::image type="content" source="media/monitor-visually/configure-notification-1.png" alt-text="Options for configuring notifications":::
+ :::image type="content" source="media/monitor-visually/configure-notification-1.png" alt-text="Screenshot of options for configuring notifications.":::
- :::image type="content" source="media/monitor-visually/configure-notification-2.png" alt-text="Options for adding a notification":::
+ :::image type="content" source="media/monitor-visually/configure-notification-2.png" alt-text="Screenshot of options for adding a notification.":::
1. Create the alert rule.
- :::image type="content" source="media/monitor-visually/create-alert-rule.png" alt-text="Options for creating an alert rule":::
+ :::image type="content" source="media/monitor-visually/create-alert-rule.png" alt-text="Screenshot of options for creating an alert rule.":::
## Next steps
data-factory Parameterize Linked Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/parameterize-linked-services.md
Previously updated : 03/03/2022 Last updated : 06/09/2022
All the linked service types are supported for parameterization.
- Azure Blob Storage - Azure Cosmos DB (SQL API) - Azure Data Explorer
+- Azure Data Lake Storage Gen1
- Azure Data Lake Storage Gen2 - Azure Database for MySQL
+- Azure Database for PostgreSQL
- Azure Databricks - Azure File Storage - Azure Function
All the linked service types are supported for parameterization.
- OData - Oracle - Oracle Cloud Storage
+- PostgreSQL
- Salesforce - Salesforce Service Cloud - SFTP
data-factory Wrangling Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/wrangling-functions.md
The following M functions add or transform columns: [Table.AddColumn](/powerquer
step, but the user must ensure that there are no duplicate column names among the joined tables
* Supported Join Kinds:
- [Inner](/powerquery-m/joinkind-inner),
- [LeftOuter](/powerquery-m/joinkind-leftouter),
- [RightOuter](/powerquery-m/joinkind-rightouter),
- [FullOuter](/powerquery-m/joinkind-fullouter)
+ Inner,
+ LeftOuter,
+ RightOuter,
+ FullOuter
* Both [Value.Equals](/powerquery-m/value-equals) and
databox-online Azure Stack Edge Gpu Deploy Activate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-activate.md
Previously updated : 07/14/2021 Last updated : 05/31/2022 # Customer intent: As an IT admin, I need to understand how to activate Azure Stack Edge Pro so I can use it to transfer data to Azure.
Before you configure and set up your Azure Stack Edge Pro device with GPU, make
- You've installed the physical device as detailed in [Install Azure Stack Edge Pro](azure-stack-edge-gpu-deploy-install.md). - You've configured the network and compute network settings as detailed in [Configure network, compute network, web proxy](azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md)
- - You have uploaded your own or generated the device certificates on your device if you changed the device name or the DNS domain via the **Device** page. If you haven't done this step, you will see an error during the device activation and the activation will be blocked. For more information, go to [Configure certificates](azure-stack-edge-gpu-deploy-configure-certificates.md).
+ - You've uploaded your own or generated the device certificates on your device if you changed the device name or the DNS domain via the **Device** page. If you haven't done this step, you'll see an error during the device activation and the activation will be blocked. For more information, go to [Configure certificates](azure-stack-edge-gpu-deploy-configure-certificates.md).
* You have the activation key from the Azure Stack Edge service that you created to manage the Azure Stack Edge Pro device. For more information, go to [Prepare to deploy Azure Stack Edge Pro](azure-stack-edge-gpu-deploy-prep.md).
Before you configure and set up your Azure Stack Edge Pro device with GPU, make
1. In the local web UI of the device, go to the **Get started** page.
2. On the **Activation** tile, select **Activate**.
- ![Local web UI "Cloud details" page](./media/azure-stack-edge-gpu-deploy-activate/activate-1.png)
+ ![Screenshot that shows the local web U I "Cloud details" page.](./media/azure-stack-edge-gpu-deploy-activate/activate-1.png)
3. In the **Activate** pane, enter the **Activation key** that you got in [Get the activation key for Azure Stack Edge Pro](azure-stack-edge-gpu-deploy-prep.md#get-the-activation-key). 4. Select **Apply**.
- ![Local web UI "Cloud details" page 2](./media/azure-stack-edge-gpu-deploy-activate/activate-2.png)
+ ![Screenshot that shows the local web U I "Cloud details" page 2.](./media/azure-stack-edge-gpu-deploy-activate/activate-2.png)
-5. First the device is activated. You are then prompted to download the key file.
+5. First the device is activated. You're then prompted to download the key file.
- ![Local web UI "Cloud details" page 3](./media/azure-stack-edge-gpu-deploy-activate/activate-3.png)
+ ![Screenshot that shows the local web U I "Cloud details" page 3.](./media/azure-stack-edge-gpu-deploy-activate/activate-3.png)
Select **Download and continue** and save the *device-serial-no.json* file in a safe location outside of the device. **This key file contains the recovery keys for the OS disk and data disks on your device**. These keys may be needed to facilitate a future system recovery.
Before you configure and set up your Azure Stack Edge Pro device with GPU, make
|Field |Description |
|---|---|
|`Id` | This is the ID for the device. |
- |`DataVolumeBitLockerExternalKeys`|These are the BitLockers keys for the data disks and are used to recover the local data on your device.|
+ |`DataVolumeBitLockerExternalKeys`|These are the BitLocker keys for the data disks and are used to recover the local data on your device.|
|`SystemVolumeBitLockerRecoveryKey`| This is the BitLocker key for the system volume. This key helps with the recovery of the system configuration and system data for your device. |
- |`ServiceEncryptionKey`| This key protects the data flowing through the Azure service. This key ensures that a compromise of the Azure service will not result in a compromise of stored information. |
+ |`ServiceEncryptionKey`| This key protects the data flowing through the Azure service. This key ensures that a compromise of the Azure service won't result in a compromise of stored information. |
6. Go to the **Overview** page. The device state should show as **Activated**.
- ![Local web UI "Cloud details" page 4](./media/azure-stack-edge-gpu-deploy-activate/activate-4.png)
+ ![Screenshot that shows the local web U I "Cloud details" page 4.](./media/azure-stack-edge-gpu-deploy-activate/activate-4.png)
The device activation is complete. You can now add shares on your device.
If you encounter any issues during activation, go to [Troubleshoot activation an
## Deploy workloads
-After you have activated the device, the next step is to deploy workloads.
+After you've activated the device, the next step is to deploy workloads.
- To deploy VM workloads, see [What are VMs on Azure Stack Edge?](azure-stack-edge-gpu-virtual-machine-overview.md) and the associated VM deployment documentation. - To deploy network functions as managed applications:
databox-online Azure Stack Edge Gpu Deploy Configure Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-configure-certificates.md
Previously updated : 02/15/2022 Last updated : 05/31/2022 zone_pivot_groups: azure-stack-edge-device-deployment # Customer intent: As an IT admin, I need to understand how to configure certificates for Azure Stack Edge Pro GPU so I can use it to transfer data to Azure.
databox-online Azure Stack Edge Gpu Deploy Configure Compute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-configure-compute.md
Previously updated : 04/05/2022 Last updated : 05/31/2022 # Customer intent: As an IT admin, I need to understand how to configure compute on Azure Stack Edge Pro so I can use it to transform the data before sending it to Azure.
Before you set up a compute role on your Azure Stack Edge Pro device:
## Get Kubernetes endpoints
-To configure a client to access Kubernetes cluster, you will need the Kubernetes endpoint. Follow these steps to get Kubernetes API endpoint from the local UI of your Azure Stack Edge Pro device.
+To configure a client to access Kubernetes cluster, you'll need the Kubernetes endpoint. Follow these steps to get Kubernetes API endpoint from the local UI of your Azure Stack Edge Pro device.
1. In the local web UI of your device, go to **Device** page. 2. Under the **Device endpoints**, copy the **Kubernetes API** endpoint. This endpoint is a string in the following format: `https://compute.<device-name>.<DNS-domain>[Kubernetes-cluster-IP-address]`.
- ![Device page in local UI](./media/azure-stack-edge-gpu-create-kubernetes-cluster/device-kubernetes-endpoint-1.png)
+ ![Screenshot that shows the Device page in local U I.](./media/azure-stack-edge-gpu-create-kubernetes-cluster/device-kubernetes-endpoint-1.png)
-3. Save the endpoint string. You will use this endpoint string later when configuring a client to access the Kubernetes cluster via kubectl.
+3. Save the endpoint string. You'll use this endpoint string later when configuring a client to access the Kubernetes cluster via kubectl.
4. While you are in the local web UI, you can:
- - If you have been provided a key from Microsoft (select users may have a key), go to Kubernetes API, select **Advanced config**, and download an advanced configuration file for Kubernetes.
+ - If you've been provided a key from Microsoft (select users may have a key), go to Kubernetes API, select **Advanced config**, and download an advanced configuration file for Kubernetes.
- ![Device page in local UI 1](./media/azure-stack-edge-gpu-deploy-configure-compute/download-advanced-config-1.png)
+ ![Screenshot that shows the Device page in local U I 1.](./media/azure-stack-edge-gpu-deploy-configure-compute/download-advanced-config-1.png)
- ![Device page in local UI 2](./media/azure-stack-edge-gpu-deploy-configure-compute/download-advanced-config-2.png)
+ ![Screenshot that shows the Device page in local U I 2.](./media/azure-stack-edge-gpu-deploy-configure-compute/download-advanced-config-2.png)
- You can also go to **Kubernetes dashboard** endpoint and download an `aseuser` config file.
- ![Device page in local UI 3](./media/azure-stack-edge-gpu-deploy-configure-compute/download-aseuser-config-1.png)
+ ![Screenshot that shows the Device page in local U I 3.](./media/azure-stack-edge-gpu-deploy-configure-compute/download-aseuser-config-1.png)
You can use this config file to sign into the Kubernetes dashboard or debug any issues in your Kubernetes cluster. For more information, see [Access Kubernetes dashboard](azure-stack-edge-gpu-monitor-kubernetes-dashboard.md#access-dashboard).
databox-online Azure Stack Edge Gpu Deploy Set Up Device Update Time https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-set-up-device-update-time.md
Previously updated : 05/24/2022 Last updated : 05/31/2022 # Customer intent: As an IT admin, I need to understand how to connect and activate Azure Stack Edge Pro so I can use it to transfer data to Azure.
Follow these steps to configure device related settings:
1. To validate and apply the configured device settings, select **Apply**.
- ![Local web UI "Device" page 1](./media/azure-stack-edge-gpu-deploy-set-up-device-update-time/device-2.png)
+ ![Screenshot of local web U I "Device" page 1.](./media/azure-stack-edge-gpu-deploy-set-up-device-update-time/device-2.png)
When the device name and the DNS domain are changed, the SMB endpoint is created.
- If you have changed the device name and the DNS domain, the automatically generated self-signed certificates on the device will not work. You'll need to regenerate device certificates or bring your own certificates.
+ If you've changed the device name and the DNS domain, the automatically generated self-signed certificates on the device won't work. You'll need to regenerate device certificates or bring your own certificates.
- ![Local web UI "Device" page 2](./media/azure-stack-edge-gpu-deploy-set-up-device-update-time/device-3.png)
+ ![Screenshot of local web U I "Device" page 2.](./media/azure-stack-edge-gpu-deploy-set-up-device-update-time/device-3.png)
1. After the settings are applied, select **Next: Update server**.
- ![Local web UI "Device" page 3](./media/azure-stack-edge-gpu-deploy-set-up-device-update-time/device-4.png)
+ ![Screenshot of local web U I "Device" page 3.](./media/azure-stack-edge-gpu-deploy-set-up-device-update-time/device-4.png)
## Configure update
Follow these steps to configure device related settings:
- You can get the updates directly from the **Microsoft Update server**.
- ![Local web UI "Update Server" page](./media/azure-stack-edge-gpu-deploy-set-up-device-update-time/update-2.png)
+ ![Screenshot of local web U I "Update Server" page.](./media/azure-stack-edge-gpu-deploy-set-up-device-update-time/update-2.png)
You can also choose to deploy updates from the **Windows Server Update services** (WSUS). Provide the path to the WSUS server.
- ![Local web UI "Update Server" page 2](./media/azure-stack-edge-gpu-deploy-set-up-device-update-time/update-3.png)
+ ![Screenshot of local web U I "Update Server" page 2.](./media/azure-stack-edge-gpu-deploy-set-up-device-update-time/update-3.png)
> [!NOTE]
> If a separate Windows Update server is configured and if you choose to connect over *https* (instead of *http*), then the signing chain certificates required to connect to the update server are needed. For information on how to create and upload certificates, go to [Manage certificates](azure-stack-edge-gpu-manage-certificates.md).
digital-twins Concepts 3D Scenes Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-3d-scenes-studio.md
Azure Digital Twins [3D Scenes Studio (preview)](https://explorer.digitaltwins.azure.net/3dscenes) is an immersive 3D environment, where end users can monitor, diagnose, and investigate operational data with the visual context of 3D assets. 3D Scenes Studio empowers organizations to enrich existing 3D models with visualizations powered by Azure Digital Twins data, without the need for 3D expertise. The visualizations can be easily consumed from web browsers.
-With a digital twin graph and curated 3D model, subject matter experts can leverage the studio's low-code builder to map the 3D elements to the digital twin, and define UI interactivity and business logic for a 3D visualization of a business environment. The 3D scenes can then be consumed in the hosted [Azure Digital Twins Explorer 3D Scenes Studio](concepts-azure-digital-twins-explorer.md), or in a custom application that leverages the embeddable 3D viewer component.
+With a digital twin graph and curated 3D model, subject matter experts can leverage the studio's low-code builder to map the 3D elements to digital twins, and define UI interactivity and business logic for a 3D visualization of a business environment. The 3D scenes can then be consumed in the hosted [3D Scenes Studio](concepts-azure-digital-twins-explorer.md), or in a custom application that leverages the embeddable 3D viewer component.
This article gives an overview of 3D Scenes Studio and its key features. For comprehensive, step-by-step instructions on how to use each feature, see [Use 3D Scenes Studio (preview)](how-to-use-3d-scenes-studio.md).
digital-twins Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/overview.md
Take advantage of your domain expertise on top of Azure Digital Twins to build c
* Use a robust event system to build dynamic business logic and data processing * Integrate with Azure data, analytics, and AI services to help you track the past and then predict the future
-## Azure Digital Twins capabilities
-
-Here's a summary of the features provided by Azure Digital Twins.
-
-### Open modeling language
+## Open modeling language
In Azure Digital Twins, you define the digital entities that represent the people, places, and things in your physical environment using custom twin types called [models](concepts-models.md).
You can think of these model definitions as a specialized vocabulary to describe
DTDL is also used for data models throughout other Azure IoT services, including [IoT Plug and Play](../iot-develop/overview-iot-plug-and-play.md) and [Time Series Insights](../time-series-insights/overview-what-is-tsi.md). This compatibility helps you connect your Azure Digital Twins solution with other parts of the Azure ecosystem.
-### Live execution environment
+## Live execution environment
Digital models in Azure Digital Twins are live, up-to-date representations of the real world. Using the relationships in your custom DTDL models, you'll connect twins into a live graph representing your environment.
Azure Digital Twins provides a rich event system to keep that graph current with
You can also extract insights from the live execution environment, using Azure Digital Twins' powerful *query API*. The API lets you query with extensive search conditions, including property values, relationships, relationship properties, model information, and more. You can also combine queries, gathering a broad range of insights about your environment and answering custom questions that are important to you.
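For illustration, here's a minimal sketch of issuing such a query with the .NET SDK (`Azure.DigitalTwins.Core`). The instance URL and the `Temperature` property are assumptions for the example, not values from this article:

```csharp
using System;
using Azure;
using Azure.DigitalTwins.Core;
using Azure.Identity;

// Placeholder instance URL; authenticate with Azure AD credentials.
var client = new DigitalTwinsClient(
    new Uri("https://<your-instance>.api.<region>.digitaltwins.azure.net"),
    new DefaultAzureCredential());

// Query the twin graph with a SQL-like condition on a property value.
AsyncPageable<BasicDigitalTwin> twins = client.QueryAsync<BasicDigitalTwin>(
    "SELECT * FROM DIGITALTWINS T WHERE T.Temperature > 50");

await foreach (BasicDigitalTwin twin in twins)
{
    Console.WriteLine(twin.Id);
}
```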
-### Input from IoT and business systems
+## Input from IoT and business systems
To keep the live execution environment of Azure Digital Twins up to date with the real world, you can use [IoT Hub](../iot-hub/about-iot-hub.md) to connect your solution to IoT and IoT Edge devices. These hub-managed devices are represented as part of your twin graph, and provide the data that drives your model.
You can create a new IoT Hub for this purpose with Azure Digital Twins, or [conn
You can also drive Azure Digital Twins from other data sources, using REST APIs or connectors to other services like [Logic Apps](../logic-apps/logic-apps-overview.md).
-### Output data for storage and analytics
+## Output data for storage and analytics
The data in your Azure Digital Twins model can be routed to downstream Azure services for more analytics or storage.
The following diagram shows where Azure Digital Twins lies in the context of a l
:::image type="content" source="media/overview/solution-context.png" alt-text="Diagram showing input sources, output services, and two-way communication with both client apps and external compute resources." border="false" lightbox="media/overview/solution-context.png":::
-## Service limits
+## Resources
+
+Here are some resources that may be useful while working with Azure Digital Twins. You can view more resources under the **Resources** header in the table of contents for this documentation set.
+
+### Service limits
You can read about the service limits of Azure Digital Twins in the [Azure Digital Twins service limits article](reference-service-limits.md). This resource can be useful while working with the service to understand the service's functional and rate limitations, as well as which limits can be adjusted if necessary.
-## Terminology
+### Terminology
You can view a list of common IoT terms and their uses across the Azure IoT services, including Azure Digital Twins, in the [Azure IoT Glossary](../iot-fundamentals/iot-glossary.md?toc=/azure/digital-twins/toc.json&bc=/azure/digital-twins/breadcrumb/toc.json). This resource may be a useful reference while you get started with Azure Digital Twins and building an IoT solution.
dns Tutorial Alias Tm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/tutorial-alias-tm.md
Title: 'Tutorial: Create an alias record to support domain apex names - Traffic Manager'
+ Title: 'Tutorial: Create an alias record to support domain name apex with Traffic Manager'
-description: This tutorial shows you how to configure an Azure DNS alias record to support using your domain apex name with Traffic Manager.
+description: In this tutorial, you learn how to create and configure an Azure DNS alias record to support using your domain name apex with Traffic Manager.
Previously updated : 04/19/2021 Last updated : 06/10/2022
-#Customer intent: As an experienced network administrator, I want to configure Azure DNS alias records to use my domain apex name with Traffic Manager.
+
+#Customer intent: As an experienced network administrator, I want to configure Azure DNS alias records to use my domain name apex with Traffic Manager.
-# Tutorial: Configure an alias record to support apex domain names with Traffic Manager
+# Tutorial: Create an alias record to support domain name apex with Traffic Manager
-You can create an alias record for your domain name apex to reference an Azure Traffic Manager profile. An example is contoso.com. Instead of using a redirecting service, you configure Azure DNS to reference a Traffic Manager profile directly from your zone.
+You can create an alias record for your domain name apex to reference an Azure Traffic Manager profile. Instead of using a redirecting service, you configure Azure DNS to reference a Traffic Manager profile directly from your zone.
In this tutorial, you learn how to: > [!div class="checklist"]
-> * Create a host VM and network infrastructure.
+> * Create a virtual network and a subnet.
+> * Create a web server virtual machine with a public IP.
+> * Add a DNS label to a public IP.
> * Create a Traffic Manager profile. > * Create an alias record. > * Test the alias record.
In this tutorial, you learn how to:
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.

## Prerequisites
-You must have a domain name available that you can host in Azure DNS to test with. You must have full control of this domain. Full control includes the ability to set the name server (NS) records for the domain.
-For instructions on how to host your domain in Azure DNS, see [Tutorial: Host your domain in Azure DNS](dns-delegate-domain-azure-dns.md).
+* An Azure account with an active subscription.
+* A domain name hosted in Azure DNS. If you don't have an Azure DNS zone, you can [create a DNS zone](./dns-delegate-domain-azure-dns.md#create-a-dns-zone), then [delegate your domain](dns-delegate-domain-azure-dns.md#delegate-the-domain) to Azure DNS.
-The example domain used for this tutorial is contoso.com, but use your own domain name.
+> [!NOTE]
+> In this tutorial, `contoso.com` is used as an example domain name. Replace `contoso.com` with your own domain name.
+
+## Sign in to Azure
+
+Sign in to the Azure portal at https://portal.azure.com.
## Create the network infrastructure
-First, create a virtual network and a subnet to place your web servers in.
+Create a virtual network and a subnet to place your web servers in.
+
+1. In the Azure portal, enter *virtual network* in the search box at the top of the portal, and then select **Virtual networks** from the search results.
+1. In **Virtual networks**, select **+ Create**.
+1. In **Create virtual network**, enter or select the following information in the **Basics** tab:
+
+ | **Setting** | **Value** |
+ |-|-|
+ | **Project Details** | |
+ | Subscription | Select your Azure subscription |
+ | Resource Group | Select **Create new** <br> In **Name**, enter *TMResourceGroup* <br> Select **OK** |
+ | **Instance details** | |
+ | Name | Enter *myTMVNet* |
+ | Region | Select your region |
+
+1. Select the **IP Addresses** tab or select the **Next: IP Addresses** button at the bottom of the page.
+1. In the **IP Addresses** tab, enter the following information:
+
+ | Setting | Value |
+ |-|-|
+ | IPv4 address space | Enter *10.10.0.0/16* |
+
+1. Select **+ Add subnet**, and enter the following information in the **Add subnet** pane:
+
+ | Setting | Value |
+ |-|-|
+ | Subnet name | Enter *WebSubnet* |
+ | Subnet address range | Enter *10.10.0.0/24* |
+
+1. Select **Add**.
+1. Select the **Review + create** tab or select the **Review + create** button.
+1. Select **Create**.
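
If you prefer scripting, the portal steps above map to a short Azure PowerShell sketch. This is a minimal example, not part of the original tutorial: it assumes the Az module is installed and you're signed in, and *eastus* stands in for your region.

```azurepowershell-interactive
# Resource group that holds everything in this tutorial
New-AzResourceGroup -Name 'TMResourceGroup' -Location 'eastus'

# Subnet configuration, then the virtual network that contains it
$subnet = New-AzVirtualNetworkSubnetConfig -Name 'WebSubnet' -AddressPrefix '10.10.0.0/24'
New-AzVirtualNetwork -Name 'myTMVNet' -ResourceGroupName 'TMResourceGroup' `
  -Location 'eastus' -AddressPrefix '10.10.0.0/16' -Subnet $subnet
```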
+
+## Create web server virtual machines
+
+Create two Windows Server virtual machines, install the IIS web server on them, and then add DNS labels to their public IPs.
+
+### Create the virtual machines
+
+Create two Windows Server 2019 virtual machines.
+
+1. In the Azure portal, enter *virtual machine* in the search box at the top of the portal, and then select **Virtual machines** from the search results.
+1. In **Virtual machines**, select **+ Create** and then select **Azure virtual machine**.
+1. In **Create a virtual machine**, enter or select the following information in the **Basics** tab:
+
+ | **Setting** | **Value** |
+ |||
+ | **Project Details** | |
+ | Subscription | Select your Azure subscription |
+ | Resource Group | Select **TMResourceGroup** |
+ | **Instance details** | |
+ | Virtual machine name | Enter *Web-01* |
+ | Region | Select **(US) East US** |
+ | Availability options | Select **No infrastructure redundancy required** |
 | Security type | Select **Standard** |
+ | Image | Select **Windows Server 2019 Datacenter - Gen2** |
+ | Size | Select your VM size |
+ | **Administrator account** | |
+ | Username | Enter a username |
+ | Password | Enter a password |
+ | Confirm password | Reenter password |
+ | **Inbound port rules** | |
+ | Public inbound ports | Select **None** |
++
+1. Select the **Networking** tab, or select **Next: Disks**, then **Next: Networking**.
-1. Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
-2. In the upper left in the portal, select **Create a resource**. Enter *resource group* in the search box, and create a resource group named **RG-DNS-Alias-TM**.
-3. Select **Create a resource** > **Networking** > **Virtual network**.
-4. Create a virtual network named **VNet-Servers**. Place it in the **RG-DNS-Alias-TM** resource group, and name the subnet **SN-Web**.
+1. In the **Networking** tab, enter or select the following information:
-## Create two web server virtual machines
+ | Setting | Value |
+ ||-|
+ | **Network interface** | |
+ | Virtual network | Select **myTMVNet** |
+ | Subnet | Select **WebSubnet** |
+ | Public IP | Select **Create new**, and then enter *Web-01-ip* in **Name** </br> Select **Basic** for the **SKU**, and **Static** for the **Assignment** |
 | NIC network security group | Select **Basic** |
+ | Public inbound ports | Select **Allow selected ports** |
+ | Select inbound ports | Select **HTTP (80)**, **HTTPS (443)** and **RDP (3389)** |
-1. Select **Create a resource** > **Windows Server 2016 VM**.
-2. Enter **Web-01** for the name, and place the VM in the **RG-DNS-Alias-TM** resource group. Enter a username and a password, and select **OK**.
-3. For **Size**, select an SKU with 8-GB RAM.
-4. For **Settings**, select the **VNet-Servers** virtual network and the **SN-Web** subnet.
-5. Select **Public IP address**. Under **Assignment**, select **Static**, and then select **OK**.
-6. For public inbound ports, select **HTTP (80)** > **HTTPS (443)** > **RDP (3389)**, and then select **OK**.
-7. On the **Summary** page, select **Create**. This procedure takes a few minutes to finish.
+1. Select **Review + create**.
+1. Review the settings, and then select **Create**.
+1. Repeat the previous steps to create the second virtual machine. Enter *Web-02* in the **Virtual machine name** and *Web-02-ip* in the **Name** of **Public IP**. For the other settings, use the same information from the previous steps used with the first virtual machine.
-Repeat this procedure to create another virtual machine named **Web-02**.
+Each virtual machine deployment may take a few minutes to complete.
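
As a rough Azure PowerShell equivalent of the portal steps, the simplified `New-AzVM` syntax can create each web server. This is a hedged sketch: it assumes the Az module and the *Win2019Datacenter* image alias, and you'd repeat it with *Web-02* and *Web-02-ip* for the second server.

```azurepowershell-interactive
# Administrator username and password for the VM
$cred = Get-Credential

New-AzVM -ResourceGroupName 'TMResourceGroup' -Name 'Web-01' -Location 'eastus' `
  -VirtualNetworkName 'myTMVNet' -SubnetName 'WebSubnet' -PublicIpAddressName 'Web-01-ip' `
  -ImageName 'Win2019Datacenter' -Credential $cred -OpenPorts 80,443,3389
```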
+
+### Install IIS web server
+
+Install IIS on both **Web-01** and **Web-02** virtual machines.
+
+1. In the **Connect** page of the **Web-01** virtual machine, select **RDP**, and then select **Download RDP File**.
+1. Open the *Web-01.rdp* file, and select **Connect**.
+1. Enter the username and password that you entered during virtual machine creation.
+1. On the **Server Manager** dashboard, select **Manage** then **Add Roles and Features**.
+1. Select **Server Roles** or select **Next** three times. On the **Server Roles** page, select **Web Server (IIS)**.
+1. Select **Add Features**, and then select **Next**.
+
+ :::image type="content" source="./media/tutorial-alias-tm/iis-web-server-installation.png" alt-text="Screenshot of Add Roles and Features Wizard in Windows Server 2019 showing how to add the I I S Web Server.":::
+
+1. Select **Confirmation** or select **Next** three times, and then select **Install**. The installation process takes a few minutes to finish.
+1. After the installation finishes, select **Close**.
+1. Go to *C:\inetpub\wwwroot* and open *iisstart.htm* with Notepad or any editor of your choice.
+1. Replace `IIS Windows Server` in the title with the virtual machine name `Web-01` and save the file.
+1. Open a web browser. Browse to **localhost** to verify that the default IIS welcome page appears.
+
+ :::image type="content" source="./media/tutorial-alias-tm/iis-on-web-01-vm-in-web-browser.png" alt-text="Screenshot of Internet Explorer showing the I I S Web Server Welcome page.":::
+
+1. Repeat the previous steps to install the IIS web server on the **Web-02** virtual machine. Use `Web-02` in the title of *iisstart.htm*.
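
If you prefer to script the role installation, the same result can come from an elevated PowerShell session inside each VM. This is a hedged sketch: it assumes the stock *iisstart.htm* home page, and uses `$env:COMPUTERNAME` as a stand-in for the `Web-01`/`Web-02` title edit.

```powershell
# Install the Web Server (IIS) role
Install-WindowsFeature -Name Web-Server -IncludeManagementTools

# Put the server name into the default page title so the two servers are distinguishable
$file = 'C:\inetpub\wwwroot\iisstart.htm'
(Get-Content $file) -replace 'IIS Windows Server', $env:COMPUTERNAME | Set-Content $file
```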
### Add a DNS label The public IP addresses need a DNS label to work with Traffic Manager.
-1. In the **RG-DNS-Alias-TM** resource group, select the **Web-01-ip** public IP address.
-2. Under **Settings**, select **Configuration**.
-3. In the DNS name label text box, enter **web01pip**.
-4. Select **Save**.
-Repeat this procedure for the **Web-02-ip** public IP address by using **web02pip** for the DNS name label.
+1. In the Azure portal, enter *TMResourceGroup* in the search box at the top of the portal, and then select **TMResourceGroup** from the search results.
+1. In the **TMResourceGroup** resource group, select the **Web-01-ip** public IP address.
+1. Under **Settings**, select **Configuration**.
+1. Enter *web01pip* in the **DNS name label**.
+1. Select **Save**.
+
+ :::image type="content" source="./media/tutorial-alias-tm/ip-dns-name-label-inline.png" alt-text="Screenshot of the Configuration page of Azure Public IP Address showing D N S name label." lightbox="./media/tutorial-alias-tm/ip-dns-name-label-expanded.png":::
-### Install IIS
+1. Repeat the previous steps for the **Web-02-ip** public IP address and enter *web02pip* in the **DNS name label**.
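
The DNS label can also be set with Azure PowerShell. A hedged sketch, assuming the Az.Network module; repeat it with *Web-02-ip* and *web02pip* for the second public IP.

```azurepowershell-interactive
$pip = Get-AzPublicIpAddress -ResourceGroupName 'TMResourceGroup' -Name 'Web-01-ip'

# DnsSettings is empty on a fresh public IP, so create the settings object first
$pip.DnsSettings = New-Object -TypeName Microsoft.Azure.Commands.Network.Models.PSPublicIpAddressDnsSettings
$pip.DnsSettings.DomainNameLabel = 'web01pip'
Set-AzPublicIpAddress -PublicIpAddress $pip
```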
-Install IIS on both **Web-01** and **Web-02**.
+## Create a Traffic Manager profile
-1. Connect to **Web-01**, and sign in.
-2. On the **Server Manager** dashboard, select **Add roles and features**.
-3. Select **Next** three times. On the **Server Roles** page, select **Web Server (IIS)**.
-4. Select **Add Features**, and select **Next**.
-5. Select **Next** four times. Then select **Install**. This procedure takes a few minutes to finish.
-6. When the installation finishes, select **Close**.
-7. Open a web browser. Browse to **localhost** to verify that the default IIS web page appears.
+1. In the **Overview** page of **Web-01-ip** public IP address, note the IP address for later use. Repeat this step for the **Web-02-ip** public IP address.
+1. In the Azure portal, enter *Traffic Manager profile* in the search box at the top of the portal, and then select **Traffic Manager profiles**.
+1. Select **+ Create**.
+1. In the **Create Traffic Manager profile** page, enter or select the following information:
-Repeat this procedure to install IIS on **Web-02**.
+ | Setting | Value |
+ |--||
+ | Name | Enter *TM-alias-test* |
+ | Routing method | Select **Priority** |
+ | Subscription | Select your Azure subscription |
+ | Resource group | Select **TMResourceGroup** |
+ :::image type="content" source="./media/tutorial-alias-tm/create-traffic-manager-profile.png" alt-text="Screenshot of the Create Traffic Manager profile page showing the selected settings.":::
-## Create a Traffic Manager profile
+1. Select **Create**.
+
+1. After **TM-alias-test** deployment finishes, select **Go to resource**.
+1. In the **Endpoints** page of **TM-alias-test** Traffic Manager profile, select **+ Add** and enter or select the following information:
+
+ | Setting | Value |
+ |--||
+ | Type | Select **External endpoint** |
+ | Name | Enter *EP-Web01* |
+ | Fully qualified domain name (FQDN) or IP | Enter the IP address for **Web-01-ip** that you noted previously |
+ | Priority | Enter *1* |
-1. Open the **RG-DNS-Alias-TM** resource group, and select the **Web-01-ip** Public IP address. Note the IP address for later use. Repeat this step for the **Web-02-ip** public IP address.
-1. Select **Create a resource** > **Networking** > **Traffic Manager profile**.
-2. For the name, enter **TM-alias-test**. Place it in the **RG-DNS-Alias-TM** resource group.
-3. Select **Create**.
-4. After deployment finishes, select **Go to resource**.
-5. On the Traffic Manager profile page, under **Settings**, select **Endpoints**.
-6. Select **Add**.
-7. For **Type**, select **External endpoint**, and for **Name**, enter **EP-Web01**.
-8. In the **Fully qualified domain name (FQDN) or IP** text box, enter the IP address for **Web-01-ip** that you noted previously.
-9. Select the same **Location** as your other resources, and then select **OK**.
+ :::image type="content" source="./media/tutorial-alias-tm/add-endpoint-tm-inline.png" alt-text="Screenshot of the Endpoints page in Traffic Manager profile showing selected settings for adding an endpoint." lightbox="./media/tutorial-alias-tm/add-endpoint-tm-expanded.png":::
-Repeat this procedure to add the **Web-02** endpoint by using the IP address you noted previously for **Web-02-ip**.
+1. Select **Add**.
+
+1. Repeat the last two steps to create the second endpoint. Enter or select the following information:
+
+ | Setting | Value |
+ |--||
+ | Type | Select **External endpoint** |
+ | Name | Enter *EP-Web02* |
+ | Fully qualified domain name (FQDN) or IP | Enter the IP address for **Web-02-ip** that you noted previously |
+ | Priority | Enter *2* |
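
The profile and both external endpoints can likewise be scripted. A hedged sketch: `-RelativeDnsName` must be globally unique, so *tm-alias-test-example* is a placeholder, and the `-Target` values stand in for the public IP addresses you noted earlier.

```azurepowershell-interactive
New-AzTrafficManagerProfile -Name 'TM-alias-test' -ResourceGroupName 'TMResourceGroup' `
  -TrafficRoutingMethod Priority -RelativeDnsName 'tm-alias-test-example' -Ttl 30 `
  -MonitorProtocol HTTP -MonitorPort 80 -MonitorPath '/'

# One external endpoint per web server; the lower priority value wins while healthy
New-AzTrafficManagerEndpoint -Name 'EP-Web01' -ProfileName 'TM-alias-test' `
  -ResourceGroupName 'TMResourceGroup' -Type ExternalEndpoints `
  -Target '<Web-01-ip address>' -EndpointStatus Enabled -Priority 1
New-AzTrafficManagerEndpoint -Name 'EP-Web02' -ProfileName 'TM-alias-test' `
  -ResourceGroupName 'TMResourceGroup' -Type ExternalEndpoints `
  -Target '<Web-02-ip address>' -EndpointStatus Enabled -Priority 2
```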
## Create an alias record Create an alias record that points to the Traffic Manager profile.
-1. Select your Azure DNS zone to open the zone.
-2. Select **Record set**.
-3. Leave the **Name** text box empty to represent the domain name apex. An example is contoso.com.
-4. Leave the **Type** as an **A** record.
-5. Select the **Alias Record Set** check box.
-6. Select **Choose Azure service**, and select the **TM-alias-test** Traffic Manager profile.
+1. In the Azure portal, enter *contoso.com* in the search box at the top of the portal, and then select **contoso.com** DNS zone from the search results.
+1. In the **Overview** page of **contoso.com** DNS zone, select the **+ Record set** button.
+1. In **Add record set**, leave the **Name** box empty to represent the domain name apex. An example is `contoso.com`.
+1. Select **A** for the **Type**.
+1. Select **Yes** for the **Alias record set**, and then select the **Azure Resource** for the **Alias type**.
+1. Select the **TM-alias-test** Traffic Manager profile for the **Azure resource**.
+1. Select **OK**.
+
+ :::image type="content" source="./media/tutorial-alias-tm/add-record-set-tm-inline.png" alt-text="Screenshot of adding an alias record to refer to the Traffic Manager profile." lightbox="./media/tutorial-alias-tm/add-record-set-tm-expanded.png":::
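
In Azure PowerShell, the apex alias record is an `A` record set whose `-TargetResourceId` points at the Traffic Manager profile. A hedged sketch; replace the zone name and the zone's resource group with your own.

```azurepowershell-interactive
$tm = Get-AzTrafficManagerProfile -Name 'TM-alias-test' -ResourceGroupName 'TMResourceGroup'

# '@' is the record set name for the zone apex
New-AzDnsRecordSet -Name '@' -RecordType A -ZoneName 'contoso.com' `
  -ResourceGroupName '<DNS-zone-resource-group>' -Ttl 3600 -TargetResourceId $tm.Id
```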
## Test the alias record
-1. From a web browser, browse to your domain name apex. An example is contoso.com. You see the IIS default web page. Close the web browser.
-2. Shut down the **Web-01** virtual machine. Wait a few minutes for it to completely shut down.
-3. Open a new web browser, and browse to your domain name apex again.
-4. You see the IIS default web page again, because Traffic Manager handled the situation and directed traffic to **Web-02**.
+1. From a web browser, browse to `contoso.com` or your domain name apex. You see the IIS default welcome page with `Web-01` in the title of the browser page. Traffic Manager directed traffic to the **Web-01** IIS web server because it has the highest priority. Close the web browser and shut down the **Web-01** virtual machine. Wait a few minutes for it to completely shut down.
+1. Open a new web browser, and browse again to `contoso.com` or your domain name apex.
+1. You see the IIS default welcome page again, but with `Web-02` in the title of the browser page. Traffic Manager detected that the first server, which has the highest priority, was shut down and directed traffic to the second IIS server.
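
You can drive the same failover test from a local Windows PowerShell session. A hedged sketch: `Resolve-DnsName` comes from the Windows DnsClient module, and `contoso.com` stands in for your own apex name.

```powershell
# Stop the priority-1 server and wait for Traffic Manager's health probe to notice
Stop-AzVM -ResourceGroupName 'TMResourceGroup' -Name 'Web-01' -Force

# The apex name should now resolve to the Web-02 public IP
Resolve-DnsName -Name 'contoso.com' -Type A
```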
## Clean up resources
-When you no longer need the resources created for this tutorial, delete the **RG-DNS-Alias-TM** resource group.
+When no longer needed, you can delete all resources created in this tutorial by following these steps:
+
+1. On the Azure portal menu, select **Resource groups**.
+1. Select the **TMResourceGroup** resource group.
+1. Select **Delete resource group**.
+1. Enter *TMResourceGroup* and select **Delete**.
+1. On the Azure portal menu, select **All resources**.
+1. Select **contoso.com** DNS zone.
+1. Select the **@** record created in this tutorial.
+1. Select **Delete** and then **Yes**.
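
A hedged PowerShell equivalent of the cleanup; the second command removes the apex alias record from your zone (swap in your zone name and its resource group).

```azurepowershell-interactive
Remove-AzResourceGroup -Name 'TMResourceGroup' -Force

Remove-AzDnsRecordSet -Name '@' -RecordType A -ZoneName 'contoso.com' `
  -ResourceGroupName '<DNS-zone-resource-group>'
```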
## Next steps
-In this tutorial, you created an alias record to use your apex domain name to reference a Traffic Manager profile. To learn about Azure DNS and web apps, continue with the tutorial for web apps.
+In this tutorial, you created an alias record to use your apex domain name to reference a Traffic Manager profile.
-> [!div class="nextstepaction"]
-> [Host load-balanced web apps at the zone apex](./dns-alias-appservice.md)
+- Learn more about [alias records](dns-alias.md).
+- Learn more about [zones and records](dns-zones-records.md).
+- Learn more about [Traffic Manager routing methods](../traffic-manager/traffic-manager-routing-methods.md).
event-grid Monitor Virtual Machine Changes Event Grid Logic App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/monitor-virtual-machine-changes-event-grid-logic-app.md
Title: Monitor virtual machines changes - Azure Event Grid & Logic Apps
-description: Check for changes in virtual machines (VMs) by using Azure Event Grid and Logic Apps
+ Title: Monitor virtual machine changes with Azure Event Grid
+description: Check for changes in virtual machines (VMs) by using Azure Event Grid and Azure Logic Apps.
ms.suite: integration
Previously updated : 01/01/2022 Last updated : 06/10/2022
-# Tutorial: Monitor virtual machine changes by using Azure Event Grid and Logic Apps
+# Tutorial: Monitor virtual machine changes by using Azure Event Grid and Azure Logic Apps
-To monitor and respond to specific events that happen in Azure resources or third-party resources, you can create an automated [logic app workflow](../logic-apps/logic-apps-overview.md) with minimal code using Azure Logic Apps. You can have these resources publish events to an [Azure event grid](../event-grid/overview.md). In turn, the event grid pushes those events to subscribers that have queues, webhooks, or [event hubs](../event-hubs/event-hubs-about.md) as endpoints. As a subscriber, your workflow waits for these events to arrive in the event grid before running the steps to process the events.
-For example, here are some events that publishers can send to subscribers through the Azure Event Grid service:
+You can monitor and respond to specific events that happen in Azure resources or external resources by using Azure Event Grid and Azure Logic Apps. You can create an automated [Consumption logic app workflow](../logic-apps/logic-apps-overview.md) with minimal code. You can have these resources publish events to [Azure Event Grid](../event-grid/overview.md). In turn, Azure Event Grid pushes those events to subscribers that have queues, webhooks, or [event hubs](../event-hubs/event-hubs-about.md) as endpoints. As a subscriber, your workflow waits for these events to arrive in Azure Event Grid before running the steps to process the events.
+
+For example, here are some events that publishers can send to subscribers through Azure Event Grid:
* Create, read, update, or delete a resource. For example, you can monitor changes that might incur charges on your Azure subscription and affect your bill.
* A new message appears in a queue.
-This tutorial creates a Consumption logic app resource that runs in [*multi-tenant* Azure Logic Apps](../logic-apps/logic-apps-overview.md) and is based on the [Consumption pricing model](../logic-apps/logic-apps-pricing.md#consumption-pricing). Using this logic app resource, you create a workflow that monitors changes to a virtual machine, and sends emails about those changes. When you create a workflow that has an event subscription to an Azure resource, events flow from that resource through an event grid to the workflow.
+This tutorial creates a Consumption logic app resource that runs in [*multi-tenant* Azure Logic Apps](../logic-apps/logic-apps-overview.md) and is based on the [Consumption pricing model](../logic-apps/logic-apps-pricing.md#consumption-pricing). Using this logic app resource, you create a workflow that monitors changes to a virtual machine, and sends emails about those changes. When you create a workflow that has an event subscription to an Azure resource, events flow from that resource through Azure Event Grid to the workflow.
![Screenshot showing the workflow designer with a workflow that monitors a virtual machine using Azure Event Grid.](./media/monitor-virtual-machine-changes-event-grid-logic-app/monitor-virtual-machine-event-grid-logic-app-overview.png) In this tutorial, you learn how to: > [!div class="checklist"]
-> * Create a logic app resource and workflow that monitors events from an event grid.
+> * Create a logic app resource and workflow that monitors events from Azure Event Grid.
> * Add a condition that specifically checks for virtual machine changes. > * Send email when your virtual machine changes.
In this tutorial, you learn how to:
* A [virtual machine](https://azure.microsoft.com/services/virtual-machines) that's alone in its own Azure resource group. If you haven't already done so, create a virtual machine through the [Create a VM tutorial](../virtual-machines/windows/quick-create-portal.md). To make the virtual machine publish events, you [don't need to do anything else](../event-grid/overview.md).
-* If you have a firewall that limits traffic to specific IP addresses, set up your firewall to allow access for both the [inbound](../logic-apps/logic-apps-limits-and-config.md#inbound) and [outbound](../logic-apps/logic-apps-limits-and-config.md#outbound) IP addresses used by Azure Logic Apps in the Azure region where you create your logic app workflow.
+* If you have a firewall that limits traffic to specific IP addresses, you have to set up your firewall to allow Azure Logic Apps to communicate through it. You need to allow access for both the [inbound](../logic-apps/logic-apps-limits-and-config.md#inbound) and [outbound](../logic-apps/logic-apps-limits-and-config.md#outbound) IP addresses used by Azure Logic Apps in the Azure region where you create your logic app.
This example uses managed connectors that require your firewall to allow access for *all* the [managed connector outbound IP addresses](/connectors/common/outbound-ip-addresses) in the Azure region for your logic app resource.
In this tutorial, you learn how to:
|-|-|-|-| | **Subscription** | Yes | <*Azure-subscription-name*> | Select the same Azure subscription for all the services in this tutorial. | | **Resource Group** | Yes | <*Azure-resource-group*> | The Azure resource group name for your logic app, which you can select for all the services in this tutorial. |
- | **Type** | Yes | Consumption | The resource type for your logic app. For this tutorial, make sure that you select **Consumption**. |
| **Logic App name** | Yes | <*logic-app-name*> | Provide a unique name for your logic app. | | **Publish** | Yes | Workflow | Select the deployment destination for your logic app. For this tutorial, make sure that you select **Workflow**, which deploys to Azure. | | **Region** | Yes | <*Azure-region*> | Select the same region for all services in this tutorial. |
+ | **Plan type** | Yes | Consumption | The resource type for your logic app. For this tutorial, make sure that you select **Consumption**. |
||||| > [!NOTE]
- > If you later want to use the Event Grid operations with a Standard logic app resource instead, make sure that you create a *stateful* workflow, not a stateless workflow.
- > To add the Event Grid operations to your workflow in the designer, on the operations picker pane, make sure that you select the **Azure** tab.
+ >
+ > If you later want to use the Azure Event Grid operations with a Standard logic app resource instead,
+ > make sure that you create a *stateful* workflow, not a stateless workflow. This tutorial applies only
+ > to Consumption logic apps, which follow a different user experience. To add Azure Event Grid operations
+ > to your workflow in the designer, on the operations picker pane, make sure that you select the **Azure** tab.
> For more information about multi-tenant versus single-tenant Azure Logic Apps, review [Single-tenant versus multi-tenant and integration service environment](../logic-apps/single-tenant-overview-compare.md). 1. When you're done, select **Review + create**. On the next pane, confirm the provided information, and select **Create**.
In this tutorial, you learn how to:
1. Under **Templates**, select **Blank Logic App**.
- ![Screenshot of Logic Apps templates, showing selection to create a blank logic app.](./media/monitor-virtual-machine-changes-event-grid-logic-app/choose-logic-app-template.png)
+ > [!NOTE]
+ >
+ > The workflow templates gallery is available only for Consumption logic apps, not Standard logic apps.
+
+ ![Screenshot showing Azure Logic Apps templates with selected "Blank Logic App" template.](./media/monitor-virtual-machine-changes-event-grid-logic-app/choose-logic-app-template.png)
- The workflow designer now shows you the [*triggers*](../logic-apps/logic-apps-overview.md#logic-app-concepts) that you can use to start your logic app. Every logic app must start with a trigger, which fires when a specific event happens or when a specific condition is met. Each time the trigger fires, Azure Logic Apps creates a workflow instance that runs your logic app.
+ The workflow designer now shows you the [*triggers*](../logic-apps/logic-apps-overview.md#logic-app-concepts) that you can use to start your logic app. Every workflow must start with a trigger, which fires when a specific event happens or when a specific condition is met. Each time the trigger fires, Azure Logic Apps creates a workflow instance that runs your logic app.
-## Add an Event Grid trigger
+## Add an Azure Event Grid trigger
-Now add the Event Grid trigger, which you use to monitor the resource group for your virtual machine.
+Now add the Azure Event Grid trigger, which you use to monitor the resource group for your virtual machine.
1. On the designer, in the search box, enter `event grid`. From the triggers list, select the **When a resource event occurs** trigger.
- ![Screenshot that shows the workflow designer with the selected Event Grid trigger.](./media/monitor-virtual-machine-changes-event-grid-logic-app/logic-app-event-grid-trigger.png)
+ ![Screenshot that shows the workflow designer with the selected Azure Event Grid trigger.](./media/monitor-virtual-machine-changes-event-grid-logic-app/logic-app-event-grid-trigger.png)
1. When prompted, sign in to Azure Event Grid with your Azure account credentials. In the **Tenant** list, which shows the Azure Active Directory tenant that's associated with your Azure subscription, check that the correct tenant appears, for example:
- ![Screenshot that shows the workflow designer with the Azure sign-in prompt to connect to Event Grid.](./media/monitor-virtual-machine-changes-event-grid-logic-app/sign-in-event-grid.png)
+ ![Screenshot that shows the workflow designer with the Azure sign-in prompt to connect to Azure Event Grid.](./media/monitor-virtual-machine-changes-event-grid-logic-app/sign-in-event-grid.png)
> [!NOTE]
+ >
> If you're signed in with a personal Microsoft account, such as @outlook.com or @hotmail.com,
- > the Event Grid trigger might not appear correctly. As a workaround, select
+ > the Azure Event Grid trigger might not appear correctly. As a workaround, select
> [Connect with Service Principal](../active-directory/develop/howto-create-service-principal-portal.md), > or authenticate as a member of the Azure Active Directory that's associated with > your Azure subscription, for example, *user-name*@emailoutlook.onmicrosoft.com.
| **Subscription** | Yes | <*event-publisher-Azure-subscription-name*> | Select the name for the Azure subscription that's associated with the *event publisher*. For this tutorial, select the Azure subscription name for your virtual machine. | | **Resource Type** | Yes | <*event-publisher-Azure-resource-type*> | Select the Azure resource type for the event publisher. For more information about Azure resource types, see [Azure resource providers and types](../azure-resource-manager/management/resource-providers-and-types.md). For this tutorial, select the `Microsoft.Resources.ResourceGroups` value to monitor Azure resource groups. | | **Resource Name** | Yes | <*event-publisher-Azure-resource-name*> | Select the Azure resource name for the event publisher. This list varies based on the resource type that you selected. For this tutorial, select the name for the Azure resource group that includes your virtual machine. |
- | **Event Type Item** | No | <*event-types*> | Select one or more specific event types to filter and send to your event grid. For example, you can optionally add these event types to detect when resources are changed or deleted: <p><p>- `Microsoft.Resources.ResourceActionSuccess` <br>- `Microsoft.Resources.ResourceDeleteSuccess` <br>- `Microsoft.Resources.ResourceWriteSuccess` <p>For more information, see these topics: <p><p>- [Azure Event Grid event schema for resource groups](../event-grid/event-schema-resource-groups.md) <br>- [Understand event filtering](../event-grid/event-filtering.md) <br>- [Filter events for Event Grid](../event-grid/how-to-filter-events.md) |
+ | **Event Type Item** | No | <*event-types*> | Select one or more specific event types to filter and send to Azure Event Grid. For example, you can optionally add these event types to detect when resources are changed or deleted: <p><p>- `Microsoft.Resources.ResourceActionSuccess` <br>- `Microsoft.Resources.ResourceDeleteSuccess` <br>- `Microsoft.Resources.ResourceWriteSuccess` <p>For more information, see these topics: <p><p>- [Azure Event Grid event schema for resource groups](../event-grid/event-schema-resource-groups.md) <br>- [Understand event filtering](../event-grid/event-filtering.md) <br>- [Filter events for Azure Event Grid](../event-grid/how-to-filter-events.md) |
| To add optional properties, select **Add new parameter**, and then select the properties that you want. | No | {see descriptions} | * **Prefix Filter**: For this tutorial, leave this property empty. The default behavior matches all values. However, you can specify a prefix string as a filter, for example, a path and a parameter for a specific resource. <p>* **Suffix Filter**: For this tutorial, leave this property empty. The default behavior matches all values. However, you can specify a suffix string as a filter, for example, a file name extension, when you want only specific file types. <p>* **Subscription Name**: For this tutorial, you can provide a unique name for your event subscription. | |||
-1. Save your logic app. On the designer toolbar, select **Save**. To collapse and hide an action's details in your logic app, select the action's title bar.
+1. Save your logic app workflow. On the designer toolbar, select **Save**. To collapse and hide an action's details in your workflow, select the action's title bar.
![Screenshot that shows the workflow designer and the "Save" button selected.](./media/monitor-virtual-machine-changes-event-grid-logic-app/logic-app-event-grid-save.png)
- When you save your logic app with an event grid trigger, Azure automatically creates an event subscription for your logic app to your selected resource. So when the resource publishes an event to the event grid, that event grid automatically pushes the event to your logic app. This event triggers your logic app, then creates and runs an instance of the workflow that you define in these next steps.
+ When you save your logic app workflow with an Azure Event Grid trigger, Azure automatically creates an event subscription for your logic app to your selected resource. So when the resource publishes an event to the Azure Event Grid service, the service automatically pushes the event to your logic app. This event triggers and runs the logic app workflow you define in these next steps.
-Your logic app is now live and listens to events from the event grid, but doesn't do anything until you add actions to the workflow.
+Your logic app is now live and listens to events from Azure Event Grid, but doesn't do anything until you add actions to the workflow.
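
To confirm the event subscription that Azure created for the trigger, you can list Event Grid subscriptions scoped to the monitored resource group. A hedged sketch, assuming the Az.EventGrid module; the subscription ID and resource group name are placeholders.

```azurepowershell-interactive
# List event subscriptions on the resource group that the trigger monitors
Get-AzEventGridSubscription -ResourceId '/subscriptions/<subscription-id>/resourceGroups/<vm-resource-group>'
```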
## Add a condition
-If you want to your logic app to run only when a specific event or operation happens, add a condition that checks for the `Microsoft.Compute/virtualMachines/write` operation. When this condition is true, your logic app sends you email with details about the updated virtual machine.
+If you want your logic app workflow to run only when a specific event or operation happens, add a condition that checks for the **Microsoft.Compute/virtualMachines/write** operation. When this condition is true, your logic app workflow sends you an email with details about the updated virtual machine.
-1. In Logic App Designer, under the event grid trigger, select **New step**.
+1. In the workflow designer, under the Azure Event Grid trigger, select **New step**.
![Screenshot that shows the workflow designer with "New step" selected.](./media/monitor-virtual-machine-changes-event-grid-logic-app/choose-new-step-condition.png)
![Screenshot that shows the workflow designer with "Condition" selected.](./media/monitor-virtual-machine-changes-event-grid-logic-app/select-condition.png)
- The Logic App Designer adds an empty condition to your workflow, including action paths to follow based whether the condition is true or false.
+ The workflow designer adds an empty condition to your workflow, including action paths to follow based on whether the condition is true or false.
![Screenshot that shows the workflow designer with an empty condition added to the workflow.](./media/monitor-virtual-machine-changes-event-grid-logic-app/empty-condition.png)
![Screenshot that shows the workflow designer with the condition editor's context menu and "Rename" selected.](./media/monitor-virtual-machine-changes-event-grid-logic-app/rename-condition.png)
-1. Create a condition that checks the event `body` for a `data` object where the `operationName` property is equal to the `Microsoft.Compute/virtualMachines/write` operation. Learn more about [Event Grid event schema](../event-grid/event-schema.md).
+1. Create a condition that checks the event `body` for a `data` object where the `operationName` property is equal to the `Microsoft.Compute/virtualMachines/write` operation. Learn more about [Azure Event Grid event schema](../event-grid/event-schema.md).
1. On the first row under **And**, click inside the left box. In the dynamic content list that appears, select **Expression**.
For example:
- ![Screenshot of Logic Apps designer, showing condition editor with expression to extract the operation name.](./media/monitor-virtual-machine-changes-event-grid-logic-app/condition-add-data-operation-name.png)
+ ![Screenshot showing workflow designer and condition editor with expression to extract the operation name.](./media/monitor-virtual-machine-changes-event-grid-logic-app/condition-add-data-operation-name.png)
1. In the middle box, keep the operator **is equal to**.
> [!NOTE] > If you select a field that represents an array, the designer automatically adds a **For each** loop around
- > the action that references the array. That way, your logic app performs that action on each array item.
+ > the action that references the array. That way, your logic app workflow performs that action on each array item.
Now, your email action might look like this example: ![Screenshot that shows the workflow designer with selected outputs to send in email when VM is updated.](./media/monitor-virtual-machine-changes-event-grid-logic-app/logic-app-send-email-details.png)
- And your finished logic app might look like this example:
+ And your finished logic app workflow might look like the following example:
- ![Screenshot that shows the workflow designer with finished logic app and details for trigger and actions.](./media/monitor-virtual-machine-changes-event-grid-logic-app/logic-app-completed.png)
+ ![Screenshot showing designer with complete workflow and details for trigger and actions.](./media/monitor-virtual-machine-changes-event-grid-logic-app/logic-app-completed.png)
1. Save your logic app. To collapse and hide each action's details in your logic app, select the action's title bar.
- Your logic app is now live, but waits for changes to your virtual machine before doing anything. To test your logic app now, continue to the next section.
+ Your logic app is now live, but waits for changes to your virtual machine before doing anything. To test your workflow now, continue to the next section.
## Test your logic app workflow
-1. To check that your logic app is getting the specified events, update your virtual machine.
+1. To check that your workflow is getting the specified events, update your virtual machine.
For example, you can [resize your virtual machine](../virtual-machines/resize-vm.md).
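
If you'd rather trigger the test from PowerShell, resizing the VM issues the `Microsoft.Compute/virtualMachines/write` operation that the condition checks for. A hedged sketch; the resource group, VM name, and target size are placeholders.

```azurepowershell-interactive
$vm = Get-AzVM -ResourceGroupName '<vm-resource-group>' -Name '<vm-name>'
$vm.HardwareProfile.VmSize = 'Standard_DS2_v2'   # any size available in your region
Update-AzVM -ResourceGroupName '<vm-resource-group>' -VM $vm
```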
![Screenshot of logic app's runs history, showing details for each run.](./media/monitor-virtual-machine-changes-event-grid-logic-app/logic-app-run-history-details.png)
-Congratulations, you've created and run a logic app that monitors resource events through an event grid and emails you when those events happen. You also learned how easily you can create workflows that automate processes and integrate systems and cloud services.
+Congratulations, you've created and run a logic app workflow that monitors resource events through Azure Event Grid and emails you when those events happen. You also learned how easily you can create workflows that automate processes and integrate systems and cloud services.
You can monitor other configuration changes with event grids and logic apps, for example:
This tutorial uses resources and performs actions that incur charges on your Azure subscription. So when you're done with the tutorial and testing, make sure that you disable or delete any resources where you don't want to incur charges.
-* To stop running your logic app without deleting your work, disable your app. On your logic app menu, select **Overview**. On the toolbar, select **Disable**.
+* To stop running your workflow without deleting your work, disable your app. On your logic app menu, select **Overview**. On the toolbar, select **Disable**.
![Screenshot of logic app's overview, showing Disable button selected to disable the logic app.](./media/monitor-virtual-machine-changes-event-grid-logic-app/turn-off-disable-logic-app.png)
## Next steps
-* [Create and route custom events with Event Grid](../event-grid/custom-event-quickstart.md)
+* [Create and route custom events with Azure Event Grid](../event-grid/custom-event-quickstart.md)
-See the following samples to learn about publishing events to and consuming events from Event Grid using different programming languages.
+See the following samples to learn about publishing events to and consuming events from Azure Event Grid using different programming languages.
- [Azure Event Grid samples for .NET](/samples/azure/azure-sdk-for-net/azure-event-grid-sdk-samples/) - [Azure Event Grid samples for Java](/samples/azure/azure-sdk-for-java/eventgrid-samples/)
event-grid Partner Events Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/partner-events-overview.md
Last updated 03/31/2022
# Partner Events overview for customers - Azure Event Grid (preview)
-Event Grid's **Partner Events** allows customers to **subscribe to events** that originate in a registered system using the same mechanism they would use for any other event source on Azure, such as an Azure service. Those registered systems integrate with Event Grid are known as "partners". This feature also enables customers to **send events** to partner systems that support receiving and routing events to customer's solutions/endpoints in their platform. Typically, partners are software-as-a-service (SaaS) or [ERP](https://en.wikipedia.org/wiki/Enterprise_resource_planning) providers, but they might be corporate platforms wishing to make their events available to internal teams. They purposely integrate with Event Grid to realize end-to-end customer use cases that end on Azure (customers subscribe to events sent by partner) or end on a partner system (customers subscribe to Microsoft events sent by Azure Event Grid). Customers bank on Azure Event Grid to send events published by a partner to supported destinations such as webhooks, Azure Functions, Azure Event Hubs, or Azure Service Bus, to name a few. Customers also rely on Azure Event Grid to route events that originate in Microsoft services, such as Azure Storage, Outlook, Teams, or Azure AD, to partner systems where customer's solutions can react to them. With Partner Events, customers can build event-driven solutions across platforms and network boundaries to receive or send events reliably, securely and at a scale.
+Azure Event Grid's **Partner Events** allows customers to **subscribe to events** that originate in a registered system using the same mechanism they would use for any other event source on Azure, such as an Azure service. Those registered systems that integrate with Event Grid are known as "partners".
+
+This feature also enables customers to **send events** to partner systems that support receiving and routing events to customer's solutions/endpoints in their platform. Typically, partners are software-as-a-service (SaaS) or [ERP](https://en.wikipedia.org/wiki/Enterprise_resource_planning) providers, but they might be corporate platforms wishing to make their events available to internal teams.
+
+Partners purposely integrate with Event Grid to realize end-to-end customer use cases that end on Azure (customers subscribe to events sent by a partner) or end on a partner system (customers subscribe to Microsoft events sent by Azure Event Grid). Customers bank on Azure Event Grid to send events published by a partner to supported destinations such as webhooks, Azure Functions, Azure Event Hubs, or Azure Service Bus, to name a few.
+
+Customers also rely on Azure Event Grid to route events that originate in Microsoft services, such as Azure Storage, Outlook, Teams, or Azure AD, to partner systems where customer's solutions can react to them.
+
+With Partner Events, customers can build event-driven solutions across platforms and network boundaries to receive or send events reliably, securely, and at scale.
> [!NOTE] > If you're new to Event Grid, see the following articles that provide you with knowledge on foundational concepts:
You may want to use the Partner Events feature if you have one or more of the following requirements. - You want to subscribe to events that originate in a [partner](#available-partners) system and route them to event handlers on Azure or to any application or service with a public endpoint.-- You want to take advantage of the rich set Event Grid's[destinations/event handlers](overview.md#event-handlers) that react to events from partners.
+- You want to take advantage of the rich set of Event Grid's [destinations/event handlers](overview.md#event-handlers) that react to events from partners.
- You want to forward events raised by your custom application on Azure, an Azure service, or a Microsoft service to your application or service hosted by the [partner](#available-partners) system. For example, you may want to send Azure AD, Teams, SharePoint, or Azure Storage events to a partner system on which you're a tenant for processing. - You need a resilient push delivery mechanism with send-retry support and at-least once semantics. - You want to use [Cloud Events 1.0](https://cloudevents.io/) schema for your events.
expressroute About Fastpath https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/about-fastpath.md
To configure FastPath, the virtual network gateway must be either:
While FastPath supports most configurations, it doesn't support the following features:
-* UDR on the gateway subnet: FastPath doesn't honor UDRs configured on the gateway subnet. FastPath traffic bypasses any next-hops determined by UDRs configured on the gateway subnet.
- * Basic Load Balancer: If you deploy a Basic internal load balancer in your virtual network or the Azure PaaS service you deploy in your virtual network uses a Basic internal load balancer, the network traffic from your on-premises network to the virtual IPs hosted on the Basic load balancer will be sent to the virtual network gateway. The solution is to upgrade the Basic load balancer to a [Standard load balancer](../load-balancer/load-balancer-overview.md). * Private Link: If you connect to a [private endpoint](../private-link/private-link-overview.md) in your virtual network from your on-premises network, over a non-100Gbps ExpressRoute Direct circuit, the connection will go through the virtual network gateway. FastPath Connectivity to a private endpoint over a 100Gb ExpressRoute Direct circuit is supported.
The following FastPath features are in Public preview:
Available in all regions.
+**User Defined Routes (UDR)** - FastPath will honor UDRs configured on the GatewaySubnet and send traffic directly to an Azure Firewall or third-party NVA.
+
+Available in all regions.
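
As an illustration of the preview behavior, a GatewaySubnet UDR that FastPath would honor might look like the following hedged Azure PowerShell sketch; the prefixes, names, and the firewall/NVA IP address are placeholders.

```azurepowershell-interactive
# Route traffic destined for a workload subnet through a firewall or NVA
$route = New-AzRouteConfig -Name 'ToFirewall' -AddressPrefix '10.10.0.0/24' `
  -NextHopType VirtualAppliance -NextHopIpAddress '10.0.1.4'
New-AzRouteTable -Name 'GatewaySubnetRoutes' -ResourceGroupName '<resource-group>' `
  -Location '<region>' -Route $route
```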
+ **Private Link Connectivity for 10Gbps ExpressRoute Direct Connectivity** - Private Link traffic sent over ExpressRoute FastPath will bypass the ExpressRoute virtual network gateway in the data path. This preview is available in the following Azure Regions. - Australia East
expressroute Expressroute Howto Add Ipv6 Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-add-ipv6-portal.md
Set-AzVirtualNetworkGateway -VirtualNetworkGateway $gw >[!NOTE]
-> If you have an existing gateway that is not zone-redundant (meaning it is Standard, High Performance, or Ultra Performance SKU), you will need to delete and [recreate the gateway](expressroute-howto-add-gateway-portal-resource-manager.md#create-the-virtual-network-gateway) using any SKU and a Standard, Static public IP address.
+> If you have an existing gateway that is not zone-redundant (meaning it is Standard, High Performance, or Ultra Performance SKU) **and** uses a Basic public IP address, you will need to delete and [recreate the gateway](expressroute-howto-add-gateway-portal-resource-manager.md#create-the-virtual-network-gateway) using any SKU and a Standard, Static public IP address.
## Create a connection to a new virtual network
expressroute Expressroute Howto Linkvnet Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-linkvnet-arm.md
Register-AzProviderFeature -FeatureName ExpressRoutePrivateEndpointGatewayBypass
FastPath support for virtual network peering is now in Public preview, both IPv4 and IPv6 scenarios are supported. IPv4 FastPath and Vnet peering can be enabled on connections associated to both ExpressRoute Direct and ExpressRoute Partner circuits. IPv6 FastPath and Vnet peering support is limited to connections associated to ExpressRoute Direct.
-### FastPath and virtual network peering
+### FastPath virtual network peering and user defined routes (UDRs).
With FastPath and virtual network peering, you can enable ExpressRoute connectivity directly to VMs in a local or peered virtual network, bypassing the ExpressRoute virtual network gateway in the data path.
-To enroll in this preview, run the follow Azure PowerShell command in the target Azure subscription:
+With FastPath and UDR, you can configure a UDR on the GatewaySubnet to direct ExpressRoute traffic to an Azure Firewall or third-party NVA. FastPath will honor the UDR and send traffic directly to the target Azure Firewall or NVA, bypassing the ExpressRoute virtual network gateway in the data path.
+
+> [!NOTE]
+> The previews for virtual network peering and user defined routes (UDRs) are offered together. You cannot enable only one scenario.
+>
+
+To enroll in these previews, run the following Azure PowerShell command in the target Azure subscription:
```azurepowershell-interactive Register-AzProviderFeature -FeatureName ExpressRouteVnetPeeringGatewayBypass -ProviderNamespace Microsoft.Network
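
# A hedged follow-up check (assumes Az.Resources): the preview is enabled once
# RegistrationState reports 'Registered' for the feature
Get-AzProviderFeature -FeatureName ExpressRouteVnetPeeringGatewayBypass -ProviderNamespace Microsoft.Network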
frontdoor Front Door Quickstart Template Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-quickstart-template-samples.md
| [Azure Functions with Private Link](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.cdn/front-door-premium-function-private-link) | Creates an Azure Functions app with a private endpoint, and a Front Door profile. | |**API Management origins**| **Description** | | [API Management (external)](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.cdn/front-door-standard-premium-api-management-external) | Creates an API Management instance with external VNet integration, and a Front Door profile. |
-| [API Management with Private Link](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.cdn/front-door-premium-api-management-private-link) | Creates an API Management instance with a private endpoint, and a Front Door profile. |
|**Storage origins**| **Description** | | [Storage static website](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.cdn/front-door-standard-premium-storage-static-website) | Creates an Azure Storage account and static website with a public endpoint, and a Front Door profile. | | [Storage blobs with Private Link](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.cdn/front-door-premium-storage-blobs-private-link) | Creates an Azure Storage account and blob container with a private endpoint, and a Front Door profile. |
frontdoor How To Enable Private Link Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-enable-private-link-web-app.md
Title: 'Connect Azure Front Door Premium to an App service origin with Private Link'
+ Title: 'Connect Azure Front Door Premium to an App Service origin with Private Link'
description: Learn how to connect your Azure Front Door Premium to a webapp privately. Previously updated : 03/18/2022 Last updated : 06/09/2022
-# Connect Azure Front Door Premium to an App service origin with Private Link
+# Connect Azure Front Door Premium to an App Service origin with Private Link
This article will guide you through how to configure Azure Front Door Premium tier to connect to your App service privately using the Azure Private Link service.
* Create a [Private Link](../../private-link/create-private-link-service-portal.md) service for your origin web servers. > [!Note]
-> Private Endpoint is available in public regions for PremiumV2-tier, PremiumV3-tier Windows web apps, Linux web apps, and the Azure Functions Premium plan (sometimes referred to as the Elastic Premium plan).
+> Private endpoints require your App Service plan or function hosting plan to meet some requirements. For more information, see [Using Private Endpoints for Azure Web App](../../app-service/networking/private-endpoint.md).
## Sign in to Azure
governance Determine Non Compliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/how-to/determine-non-compliance.md
Title: Determine causes of non-compliance
-description: When a resource is non-compliant, there are many possible reasons. Learn to find out what caused the non-compliance.
Previously updated : 09/01/2021
+description: When a resource is non-compliant, there are many possible reasons. Discover what caused the non-compliance quickly and easily.
Last updated : 06/09/2022 ++ # Determine causes of non-compliance
|Current value must not case-insensitive match the target value. |notMatchInsensitively or **not** matchInsensitively | |No related resources match the effect details in the policy definition. |A resource of the type defined in **then.details.type** and related to the resource defined in the **if** portion of the policy rule doesn't exist. |
+#### Azure Policy Resource Provider mode compliance reasons
+
+The following table maps each `Microsoft.PolicyInsights`
+[Resource Provider mode](../concepts/definition-structure.md#resource-provider-modes) reason code to
+its corresponding explanation:
+
+|Compliance reason code |Error message and explanation |
+|-|-|
+|NonModifiablePolicyAlias |NonModifiableAliasConflict: The alias '{alias}' is not modifiable in requests using API version '{apiVersion}'. This error happens when a request uses an API version for which the alias doesn't support the 'modify' effect, or supports the 'modify' effect only with a different token type. |
+|AppendPoliciesNotApplicable |AppendPoliciesUnableToAppend: The aliases: '{ aliases }' are not modifiable in requests using API version: '{ apiVersion }'. This can happen in requests using API versions for which the aliases do not support the 'modify' effect, or support the 'modify' effect with a different token type. |
+|ConflictingAppendPolicies |ConflictingAppendPolicies: Found conflicting policy assignments that modify the '{notApplicableFields}' field. Policy identifiers: '{policy}'. Please contact the subscription administrator to update the policy assignments. |
+|AppendPoliciesFieldsExist |AppendPoliciesFieldsExistWithDifferentValues: Policy assignments attempted to append fields which already exist in the request with different values. Fields: '{existingFields}'. Policy identifiers: '{policy}'. Please contact the subscription administrator to update the policies. |
+|AppendPoliciesUndefinedFields |AppendPoliciesUndefinedFields: Found policy definition that refer to an undefined field property for API version '{apiVersion}'. Fields: '{nonExistingFields}'. Policy identifiers: '{policy}'. Please contact the subscription administrator to update the policies. |
+|MissingRegistrationForType |MissingRegistrationForResourceType: The subscription is not registered for the resource type '{ResourceType}'. Please check that the resource type exists and that the resource type is registered. |
+|AmbiguousPolicyEvaluationPaths |The request content has one or more ambiguous paths: '{0}' required by policies: '{1}'. |
+|InvalidResourceNameWildcardPosition |The policy assignment '{0}' associated with the policy definition '{1}' could not be evaluated. The resource name '{2}' within an ifNotExists condition contains the wildcard '?' character in an invalid position. Wildcards can only be located at the end of the name in a segment by themselves (ex. TopLevelResourceName/?). Please either fix the policy or remove the policy assignment to unblock. |
+|TooManyResourceNameSegments |The policy assignment '{0}' associated with the policy definition '{1}' could not be evaluated. The resource name '{2}' within an ifNotExists condition contains too many name segments. The number of name segments must be equal to or less than the number of type segments (excluding the resource provider namespace). Please either fix the policy definition or remove the policy assignment to unblock. |
+|InvalidPolicyFieldPath |The field path '{0}' within the policy definition is invalid. Field paths must contain no empty segments. They may contain only alphanumeric characters with the exception of the '.' character for splitting segments and the '[*]' character sequence to access array properties. |
#### AKS Resource Provider mode compliance reasons The following table maps each `Microsoft.Kubernetes.Data`
governance Guest Configuration Create Definition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/how-to/guest-configuration-create-definition.md
New-GuestConfigurationPolicy `
### Publish the Azure Policy definition
-Finally, publish the policy definitions using the `Publish-GuestConfigurationPolicy` cmdlet. The
-cmdlet only has the **Path** parameter that points to the location of the JSON files created by
-`New-GuestConfigurationPolicy`.
+Finally, you can publish the policy definition using the `New-AzPolicyDefinition` cmdlet. The following command publishes your guest configuration policy definition to Azure Policy.
-To run the Publish command, you need access to create policy definitions in Azure. The specific authorization
+To run the `New-AzPolicyDefinition` command, you need access to create policy definitions in Azure. The specific authorization
requirements are documented in the [Azure Policy Overview](../overview.md) page. The recommended built-in role is **Resource Policy Contributor**. ```powershell
-Publish-GuestConfigurationPolicy -Path '.\policies'
+# Point -Policy at the policy definition JSON file created by New-GuestConfigurationPolicy
+New-AzPolicyDefinition -Name 'mypolicydefinition' -Policy '.\policies\<policy-definition-file>.json'
``` With the policy definition created in Azure, the last step is to assign the definition. See how to assign the definition with [Portal](../assign-policy-portal.md), [Azure CLI](../assign-policy-azurecli.md), and [Azure PowerShell](../assign-policy-powershell.md).
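
As a hedged sketch of that assignment step with Azure PowerShell (the assignment name and scope are placeholders; guest configuration definitions that deploy settings need a managed identity):

```powershell
$definition = Get-AzPolicyDefinition -Name 'mypolicydefinition'

# Assign at resource group scope with a system-assigned identity for remediation
New-AzPolicyAssignment -Name 'myassignment' -PolicyDefinition $definition `
  -Scope '/subscriptions/<subscription-id>/resourceGroups/<resource-group>' `
  -AssignIdentity -Location 'eastus'
```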
-### Optional: Piping output from each command to the next
-
-The commands in the guest configuration support pipeline parameters by name.
-You can use the `|` operator to pipeline output from each command to the next.
-Piping is useful in developer environments when rapidly iterating because you
-won't need to copy/paste the output of each command.
-
-To run the sequence using the `|` operator:
-
-```powershell
-# End to end flow piping output of each command to the next
-$ConfigName = myConfigName
-$ResourceGroupName = myResourceGroupName
-$StorageAccountName = myStorageAccountName
-$DisplayName = 'Configure Linux machine per my scenario.'
-$Description = 'Details about my policy.'
-New-GuestConfigurationPackage -Name $ConfigName -Configuration ./$ConfigName.mof -Path ./package/ -Type AuditAndSet -Force |
-Publish-GuestConfigurationPackage -ResourceGroupName $ResourceGroupName -StorageAccountName $StorageAccountName -Force |
-New-GuestConfigurationPolicy -PolicyId 'My GUID' -DisplayName $DisplayName -Description $Description -Path './policies' -Platform 'Linux' -Version 1.0.0 -Mode 'ApplyAndAutoCorrect' |
-Publish-GuestConfigurationPolicy
-```
- ## Policy lifecycle If you would like to release an update to the policy definition, make the change for both the guest
governance Guest Configuration Create Publish https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/how-to/guest-configuration-create-publish.md
If you don't have a storage account, use the following example to create one.
```powershell # Creates a new resource group, storage account, and container New-AzResourceGroup -Name myResourceGroupName -Location WestUS
-New-AzStorageAccount -ResourceGroupName myResourceGroupName -Name myStorageAccountName -SkuName 'Standard_LRS' -Location 'WestUs' | New-AzStorageContainer -Name guestconfiguration -Permission Blob
+New-AzStorageAccount -ResourceGroupName myResourceGroupName -Name mystorageaccount -SkuName 'Standard_LRS' -Location 'WestUs' | New-AzStorageContainer -Name guestconfiguration -Permission Blob
```
-The `Publish-GuestConfigurationPackage` command uploads the configuration package
-to Azure Blob Storage and retrieves a SAS token so it can be accessed securely.
+To publish your configuration package to Azure Blob Storage, follow the steps below, which use the Az.Storage module.
-Parameters of the `Publish-GuestConfigurationPackage` cmdlet:
+First, obtain the context of the storage account in which the package will be stored. This example creates a context by specifying a connection string and saves the context in the variable `$Context`.
-- **Path**: Location of the package to be published-- **ResourceGroupName**: Name of the resource group where the storage account is located-- **StorageAccountName**: Name of the storage account where the package should be published-- **StorageContainerName**: (default: _guestconfiguration_) Name of the storage container in the
- storage account
-- **Force**: Overwrite existing package in the storage account with the same name
+```powershell
+$Context = New-AzStorageContext -ConnectionString "DefaultEndpointsProtocol=https;AccountName=ContosoGeneral;AccountKey=< Storage Key for ContosoGeneral ends with == >;"
+```
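Alternatively, if you're already signed in with Azure PowerShell, you can derive the context from the storage account object instead of a connection string (a sketch assuming the resource group and storage account created earlier):

```powershell
# Assumes the storage account created in the earlier example.
$Context = (Get-AzStorageAccount -ResourceGroupName 'myResourceGroupName' -Name 'mystorageaccount').Context
```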
+
+Next, add the configuration package to the storage account. This example uploads the zip file `./MyConfig.zip` to the blob `guestconfiguration`.
+
+```powershell
+Set-AzStorageBlobContent -Container "guestconfiguration" -File ./MyConfig.zip -Blob "guestconfiguration" -Context $Context
+```
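To confirm the upload succeeded, you can list the blobs in the container:

```powershell
# Lists the blobs in the container; the uploaded package should appear here.
Get-AzStorageBlob -Container "guestconfiguration" -Context $Context
```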
-The following example publishes the package to a storage container name 'guestconfiguration'.
+Optionally, you can add a SAS token to the URL to ensure that the content package is accessed securely. The following example generates a blob SAS token with full blob permissions and returns the full blob URI, including the shared access signature token.
```powershell
-Publish-GuestConfigurationPackage -Path ./MyConfig.zip -ResourceGroupName myResourceGroupName -StorageAccountName myStorageAccountName | % ContentUri
+$contenturi = New-AzStorageBlobSASToken -Context $Context -FullUri -Container guestconfiguration -Blob "guestconfiguration" -Permission rwd
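# Illustrative note: $contenturi now holds the full blob URI, including the SAS
# token, and can be supplied wherever the package's content URI is required.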
``` ## Next steps
governance Guest Configuration Create Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/how-to/guest-configuration-create-setup.md
versions of PowerShell listed below.
| OS | PowerShell Version | |-|-| |Windows|[PowerShell 7.1.3](https://github.com/PowerShell/PowerShell/releases/tag/v7.1.3)|
-|Ubuntu 18|[PowerShell 7.2.0-preview.6](https://github.com/PowerShell/PowerShell/releases/tag/v7.2.0-preview.6)|
+|Ubuntu 18|[PowerShell 7.2.4](https://github.com/PowerShell/PowerShell/releases/tag/v7.2.4)|
The `GuestConfiguration` module requires the following software: - Azure PowerShell 5.9.0 or higher. The required Az modules are installed automatically with the `GuestConfiguration` module, or you can follow [these instructions](/powershell/azure/install-az-ps).
- - Only the Az modules 'Az.Accounts', 'Az.Resources', and 'Az.Storage' are
- required.
-- `PSDesiredStateConfiguration` module.+ ### Install the module from the PowerShell Gallery
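If the module isn't installed yet, a minimal installation sketch using standard PowerShellGet syntax is:

```powershell
# Installs the GuestConfiguration module from the PowerShell Gallery.
Install-Module -Name GuestConfiguration -Repository PSGallery
```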
Validate that the module has been imported:
Get-Command -Module 'GuestConfiguration' ```
-## Update and import the PSDesiredStateConfiguration module on Linux
-
-Starting with PowerShell 7.2 Preview 6, DSC is released independently from
-PowerShell as a module in the PowerShell Gallery. To install DSC version 3 in
-your PowerShell environment on Linux, run the command below.
-
-```powershell
-# Install the DSC module before compiling using the Configuration keyword
-Install-Module PSDesiredStateConfiguration -AllowPreRelease -Force
-```
- ## Next steps - [Create a package artifact](./guest-configuration-create.md)
hdinsight Hdinsight 40 Component Versioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-40-component-versioning.md
Title: Apache Hadoop components and versions - Azure HDInsight 4.0
description: Learn about the Apache Hadoop components and versions in Azure HDInsight 4.0. Previously updated : 02/08/2021 Last updated : 06/10/2022 # HDInsight 4.0 component versions
hdinsight Hdinsight Apache Kafka Spark Structured Streaming https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-apache-kafka-spark-structured-streaming.md
description: Learn how to use Apache Spark streaming to get data into or out of
Previously updated : 04/22/2020 Last updated : 06/10/2022 #Customer intent: As a developer, I want to learn how to use Spark Structured Streaming with Kafka on HDInsight.
hdinsight Apache Spark Jupyter Notebook Use External Packages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-jupyter-notebook-use-external-packages.md
description: Step-by-step instructions on how to configure Jupyter Notebooks ava
Previously updated : 11/22/2019 Last updated : 06/10/2022 # Use external packages with Jupyter Notebooks in Apache Spark clusters on HDInsight
healthcare-apis Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/release-notes.md
Previously updated : 05/13/2022 Last updated : 06/10/2022
Azure API for FHIR provides a fully managed deployment of the Microsoft FHIR Server for Azure. The server is an implementation of the [FHIR](https://hl7.org/fhir) standard. This document provides details about the features and enhancements made to Azure API for FHIR.
+## May 2022
+
+### **Enhancement**
+
+|Enhancement |Related information |
+| :-- | : |
+|Prevent no-impact updates from creating a new version - SQL |If a user updates a record that already exists and nothing in the content has changed, the user should receive a 200 response and the record shouldn't be updated. We updated the `UpsertAsync` method to add validations in the code instead of in the `UpsertResource` stored procedure, and to compare the existing resource's rawData with the new resource's rawData while ignoring meta.versionId and meta.lastUpdated. If it's an exact match, we return 200 OK with the existing resource information without updating versionId or lastUpdated. If the strings don't match, we proceed with creating a new version. For more information, see [#2519](https://github.com/microsoft/fhir-server/pull/2519). |
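As an illustrative sketch of the new behavior (the endpoint URL, token, and resource ID below are placeholders, not values from the release note), sending the same update twice should now leave `meta.versionId` unchanged:

```powershell
# Illustrative only: $fhirUrl, $token, and Patient/example are placeholders.
$headers = @{ Authorization = "Bearer $token"; 'Content-Type' = 'application/fhir+json' }
$patient = Get-Content -Path ./patient.json -Raw
$first  = Invoke-RestMethod -Method Put -Uri "$fhirUrl/Patient/example" -Headers $headers -Body $patient
$second = Invoke-RestMethod -Method Put -Uri "$fhirUrl/Patient/example" -Headers $headers -Body $patient
# With this enhancement, the second (no-impact) update returns the existing version:
$first.meta.versionId -eq $second.meta.versionId   # expected: True
```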
## April 2022 ### **Enhancements**
-|Enhancements &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; |Related information |
-| :-- | : |
+|Enhancements |Related information |
+| :-- | : |
|FHIRPath Patch |FHIRPath Patch was added as a feature to both the Azure API for FHIR. This implements FHIRPath Patch as defined on the [HL7](http://hl7.org/fhir/fhirpatch.html) website. | |Move Bundle notification to Core |With the introduction of the Resource.Bundle namespace to Core, the Resource references to the string resources file had to be made more explicit. For more information, see [PR #2478](https://github.com/microsoft/fhir-server/pull/2478). | |Handles invalid header on versioned update |When the versioning policy is set to "versioned-update", we required that the most recent version of the resource is provided in the request's if-match header on an update. The specified version must be in ETag format. Previously, a 500 would be returned if the version was invalid or in an incorrect format. This update now returns a 400 Bad Request. For more information, see [PR #2467](https://github.com/microsoft/fhir-server/pull/2467). |
Azure API for FHIR provides a fully managed deployment of the Microsoft FHIR Ser
### **Bug fixes** |Bug fixes |Related information |
-| :-- | : |
+| :-- | : |
|Adds core to resource path |Part of the path to a string resource was accidentally removed in the versioning policy. This fix adds it back in. For more information, see [PR #2470](https://github.com/microsoft/fhir-server/pull/2470). | |SQL timeout is returning a 500 error |Fixed a bug when a SQL request hits a timeout and the request returns a 500. In the logs, this is a timeout from SQL compared to getting a 429 error from front end. For more information, see [PR #2497](https://github.com/microsoft/fhir-server/pull/2497). |
Azure API for FHIR provides a fully managed deployment of the Microsoft FHIR Ser
### **Features**
-|Feature &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; |Related information |
-| :-- | : |
+|Feature |Related information |
+| :-- | : |
|FHIRPath Patch |This new feature enables you to use the FHIRPath Patch operation on FHIR resources. For more information, see [FHIR REST API capabilities for Azure API for FHIR](../../healthcare-apis/azure-api-for-fhir/fhir-rest-api-capabilities.md). | ### **Bug fixes** |Bug fixes |Related information |
-| :-- | : |
+| :-- | : |
|Duplicate resources in search with `_include` |Fixed issue where a single resource can be returned twice in a search that has `_include`. For more information, see [PR #2448](https://github.com/microsoft/fhir-server/pull/2448). | |PUT creates on versioned update |Fixed issue where creates with PUT resulted in an error when the versioning policy is configured to `versioned-update`. For more information, see [PR #2457](https://github.com/microsoft/fhir-server/pull/2457). | |Invalid header handling on versioned update |Fixed issue where invalid `if-match` header would result in an HTTP 500 error. Now an HTTP Bad Request is returned instead. For more information, see [PR #2467](https://github.com/microsoft/fhir-server/pull/2467). |
Azure API for FHIR provides a fully managed deployment of the Microsoft FHIR Ser
### **Features and enhancements**
-|Enhancements &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; |Related information |
-| :-- | : |
+|Enhancements |Related information |
+| :-- | : |
|Added 429 retry and logging in BundleHandler |We sometimes encounter 429 errors when processing a bundle. If the FHIR service receives a 429 at the BundleHandler layer, we abort processing of the bundle and skip the remaining resources. We've added another retry (in addition to the retry present in the data store layer) that will execute one time per resource that encounters a 429. For more about this feature enhancement, see [PR #2400](https://github.com/microsoft/fhir-server/pull/2400).| |Billing for `$convert-data` and `$de-id` |Azure API for FHIR's data conversion and de-identified export features are now Generally Available. Billing for `$convert-data` and `$de-id` operations in Azure API for FHIR has been enabled. Billing meters were turned on March 1, 2022. | ### **Bug fixes** |Bug fixes |Related information |
-| :-- | : |
+| :-- | : |
|Update compartment search index |There was a corner case where the compartment search index wasn't being set on resources. Now we use the same index as the main search for compartment search to ensure all data is being returned. For more about the code fix, see [PR #2430](https://github.com/microsoft/fhir-server/pull/2430).|
Azure API for FHIR provides a fully managed deployment of the Microsoft FHIR Ser
### **Features and enhancements**
-|Enhancements &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; |Related information |
-| :- | : |
+|Enhancements |Related information |
+| :- | : |
|Added Publisher to `CapabilityStatement.name` |You can now find the publisher in the capability statement at `CapabilityStatement.name`. [#2319](https://github.com/microsoft/fhir-server/pull/2319) | |Log `FhirOperation` linked to anonymous calls to Request metrics |We weren't logging operations that didn't require authentication. We extended the ability to get `FhirOperation` type in `RequestMetrics` for anonymous calls. [#2295](https://github.com/microsoft/fhir-server/pull/2295) | ### **Bug fixes** |Bug fixes |Related information |
-| :-- | : |
+| :-- | : |
|Fixed 500 error when `SearchParameter` Code is null |Fixed an issue with `SearchParameter` if it had a null value for Code, the result would be a 500. Now it will result in an `InvalidResourceException` like the other values do. [#2343](https://github.com/microsoft/fhir-server/pull/2343) | |Returned `BadRequestException` with valid message when input JSON body is invalid |For invalid JSON body requests, the FHIR server was returning a 500 error. Now we'll return a `BadRequestException` with a valid message instead of 500. [#2239](https://github.com/microsoft/fhir-server/pull/2239) | |`_sort` can cause `ChainedSearch` to return incorrect results |Previously, the sort options from the chained search's `SearchOption` object wasn't cleared, causing the sorting options to be passed through to the chained subsearch, which aren't valid. This could result in no results when there should be results. This bug is now fixed [#2347](https://github.com/microsoft/fhir-server/pull/2347). It addressed GitHub bug [#2344](https://github.com/microsoft/fhir-server/issues/2344). |
Azure API for FHIR provides a fully managed deployment of the Microsoft FHIR Ser
### **Features and enhancements**
-|Enhancements &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; |Related information |
-| :- | : |
+|Enhancements |Related information |
+| :- | : |
|Process Patient-everything links |We've expanded the Patient-everything capabilities to process patient links [#2305](https://github.com/microsoft/fhir-server/pull/2305). For more information, see [Patient-everything in FHIR](../../healthcare-apis/fhir/patient-everything.md#processing-patient-links) documentation. | |Added software name and version to capability statement |In the capability statement, the software name now distinguishes if you're using Azure API for FHIR or Azure Health Data Services. The software version will now specify which open-source [release package](https://github.com/microsoft/fhir-server/releases) is live in the managed service [#2294](https://github.com/microsoft/fhir-server/pull/2294). Addresses: [#1778](https://github.com/microsoft/fhir-server/issues/1778) and [#2241](https://github.com/microsoft/fhir-server/issues/2241) | |Log 500's to `RequestMetric` |Previously, 500s or any unknown/unhandled errors weren't getting logged in `RequestMetric`. They're now getting logged [#2240](https://github.com/microsoft/fhir-server/pull/2240). For more information, see [Enable diagnostic settings in Azure API for FHIR](../../healthcare-apis/azure-api-for-fhir/enable-diagnostic-logging.md) |
Azure API for FHIR provides a fully managed deployment of the Microsoft FHIR Ser
### **Bug fixes** |Bug fixes |Related information |
-| :-- | : |
+| :-- | : |
|Resolved 500 error when the date was passed with a time zone. |This fixes a 500 error when a date with a time zone was passed into a datetime field [#2270](https://github.com/microsoft/fhir-server/pull/2270). | |Resolved issue when posting a bundle with incorrect Media Type returned a 500 error. |Previously when posting a search with a key that contains certain characters, a 500 error was returned. This fixes this issue [#2264](https://github.com/microsoft/fhir-server/pull/2264), and it addresses [#2148](https://github.com/microsoft/fhir-server/issues/2148). |
Azure API for FHIR provides a fully managed deployment of the Microsoft FHIR Ser
### **Bug fixes** | Infinite loop bug | Related information |
-| :-- | -: |
+| :-- | :- |
|Fixed issue where [Conditional Delete](./././../azure-api-for-fhir/fhir-rest-api-capabilities.md#conditional-delete) could result in an infinite loop. | [#2269](https://github.com/microsoft/fhir-server/pull/2269) | ## September 2021 ### **Features and enhancements**
-|Enhancements &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; |Related information |
-| :- | : |
-
+|Enhancements |Related information |
+| :- | : |
|Added support for conditional patch | [Conditional patch](././../azure-api-for-fhir/fhir-rest-api-capabilities.md#patch-and-conditional-patch)|
-| :-- | : |
|Conditional patch |[#2163](https://github.com/microsoft/fhir-server/pull/2163) | |Added conditional patch audit event. |[#2213](https://github.com/microsoft/fhir-server/pull/2213) | |Allow JSON patch in bundles | [JSON patch in bundles](././../azure-api-for-fhir/fhir-rest-api-capabilities.md#json-patch-in-bundles)|
-| :-- | : |
+| :-- | : |
|Allows for search history bundles with Patch requests. |[#2156](https://github.com/microsoft/fhir-server/pull/2156) | |Enabled JSON patch in bundles using Binary resources. |[#2143](https://github.com/microsoft/fhir-server/pull/2143) | |New audit event subtypes |Related information |
-| :-- | : |
+| :-- | : |
|Added new audit [OperationName subtypes](././../azure-api-for-fhir/enable-diagnostic-logging.md#audit-log-details).| [#2170](https://github.com/microsoft/fhir-server/pull/2170) | |[Reindex improvements](how-to-run-a-reindex.md) |Related information |
-| -- | : |
+| :-- | : |
|Added [boundaries for reindex](how-to-run-a-reindex.md) parameters. |[#2103](https://github.com/microsoft/fhir-server/pull/2103)| |Update error message for reindex parameter boundaries. |[#2109](https://github.com/microsoft/fhir-server/pull/2109)| |Added final reindex count check. |[#2099](https://github.com/microsoft/fhir-server/pull/2099)|
Azure API for FHIR provides a fully managed deployment of the Microsoft FHIR Ser
### **Bug fixes** |Bug fixes |Related information |
-| :-- | : |
+| :-- | : |
|Wider catch for exceptions when applying patch. |[#2192](https://github.com/microsoft/fhir-server/pull/2192)| |Fixes history with PATCH in STU3.| [#2177](https://github.com/microsoft/fhir-server/pull/2177)| |Custom search bugs |Related information |
-| -- | : |
+| :-- | : |
|Addresses the delete failure with Custom Search parameters. | [#2133](https://github.com/microsoft/fhir-server/pull/2133)| |Added retry logic while Deleting Search parameter. | [#2121](https://github.com/microsoft/fhir-server/pull/2121)| |Set max item count in search options in SearchParameterDefinitionManager. | [#2141](https://github.com/microsoft/fhir-server/pull/2141)| |Provides better exception if there's a bad expression in search parameter. |[#2157](https://github.com/microsoft/fhir-server/pull/2157)| |Resolved retry 503 error |Related information |
-| :-- | : |
+| :-- | : |
|Retry 503 error from Cosmos DB. |[#2106](https://github.com/microsoft/fhir-server/pull/2106)| |Fixes processing 429s from StoreProcedures. |[#2165](https://github.com/microsoft/fhir-server/pull/2165)| |GitHub issues closed |Related information |
-| :-- | : |
+| :-- | : |
|Unable to create custom search parameter for the CarePlan medical device. |[#2146](https://github.com/microsoft/fhir-server/issues/2146) | |Unclear error message for conditional create with no ID. | [#2168](https://github.com/microsoft/fhir-server/issues/2168)| ### IoT connector for FHIR (preview)
-|Bug fixes &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|Related information |
-| :-- | : |
+|Bug fixes |Related information |
+| :-- | : |
|Fixed broken link.| Updated link to the IoT connector Azure documentation in the Azure API for FHIR portal. | ## Next steps
For information about the features and bug fixes in Azure Health Data Services (
>[!div class="nextstepaction"] >[Release notes: Azure Health Data Services](../release-notes.md)+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Healthcare Apis Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/healthcare-apis-faqs.md
Previously updated : 06/06/2022 Last updated : 06/10/2022
Refer to the [Products by region](https://azure.microsoft.com/global-infrastruct
For more information, see [Azure Health Data Services service limits](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-health-data-services) for the most current information.
-### What is the backup and recovery policy for Azure Health Data Services?
-
-Data for the managed service is automatically backed up every 12 hours, and the backups are kept for seven days. Data can be restored by the support team. Customers can make a request to restore the data, or change the default data backup policy, through a support ticket.
- ## More frequently asked questions [FAQs about Azure Health Data Services FHIR service](./fhir/fhir-faq.md)
healthcare-apis Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/known-issues.md
Previously updated : 05/13/2022 Last updated : 06/07/2022
This article describes the currently known issues with Azure Health Data Service
Refer to the table below to find details about resolution dates or possible workarounds. For more information about the different feature enhancements and bug fixes in Azure Health Data Services, see [Release notes: Azure Health Data Services](release-notes.md). + ## FHIR service
-
+ |Issue | Date discovered | Status | Date resolved | | :- | : | :- | :- |
-|The SQL Provider will cause the `RawResource` column in the database to save incorrectly. This issue occurs in a few cases when a transient exception occurs that causes the provider to use its retry logic. |April 2022 |Doesn't have a workaround. |Not resolved |
-
+|Using [token type](https://www.hl7.org/fhir/search.html#token) fields longer than 128 characters can result in undesired behavior on create, search, update, and delete operations. | May 2022 |No workaround | Not resolved |
+|The SQL provider will cause the `RawResource` column in the database to save incorrectly. This occurs in a small number of cases when a transient exception occurs that causes the provider to use its retry logic. |April 2022 |Resolved [#2571](https://github.com/microsoft/fhir-server/pull/2571)|May 2022 |
## Next steps
For information about the features and bug fixes in Azure API for FHIR, see
>[!div class="nextstepaction"] >[Release notes: Azure API for FHIR](./azure-api-for-fhir/release-notes.md)+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/release-notes.md
Previously updated : 05/13/2022 Last updated : 06/10/2022
Azure Health Data Services is a set of managed API services based on open standards and frameworks for the healthcare industry. They enable you to build scalable and secure healthcare solutions by bringing protected health information (PHI) datasets together and connecting them end-to-end with tools for machine learning, analytics, and AI. This document provides details about the features and enhancements made to Azure Health Data Services including the different service types (FHIR service, DICOM service, and MedTech service) that seamlessly work with one another.
+## May 2022
+
+### FHIR service
+
+#### **Bug fixes**
+
+|Bug fixes |Related information |
+| :-- | : |
+|Removes SQL retry on upsert |Removes retry on SQL command for upsert. The error still occurs, but data is saved correctly in success cases. For more information, see [#2571](https://github.com/microsoft/fhir-server/pull/2571). |
+|Added handling for SqlTruncate errors |Added a check for SqlTruncate exceptions and tests. In particular, this will catch SqlTruncate exceptions for Decimal type based on the specified precision and scale. For more information, see [#2553](https://github.com/microsoft/fhir-server/pull/2553). |
+
+### DICOM service
+
+#### **Features**
+
+|Enhancements | Related information |
+| : | :- |
+|DICOM service supports cross-origin resource sharing (CORS) |DICOM service now supports [CORS](./../healthcare-apis/fhir/configure-cross-origin-resource-sharing.md). CORS allows you to configure settings so that applications from one domain (origin) can access resources from a different domain, known as a cross-domain request. |
+|DICOMcast supports Private Link |DICOMcast has been updated to support Azure Health Data Services workspaces that have been configured to use [Private Link](./../healthcare-apis/healthcare-apis-configure-private-link.md). |
+|UPS-RS supports Change and Retrieve work item |Modality worklist (UPS-RS) endpoints have been added to support Change and Retrieve operations for work items. |
+|API version is now required as part of the URI |All REST API requests to the DICOM service must now include the API version in the URI. For more details, see [API versioning for DICOM service](./../healthcare-apis/dicom/api-versioning-dicom-service.md). |
+
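For illustration, a versioned request might look like the following (the workspace and service names, and `$token`, are placeholders):

```powershell
# Illustrative only: replace the placeholders with your workspace and DICOM service names.
$dicomUrl = "https://<workspace-name>-<dicom-service-name>.dicom.azurehealthcareapis.com"
# Note the required 'v1' API version segment in the path.
Invoke-RestMethod -Method Get -Uri "$dicomUrl/v1/studies" -Headers @{ Authorization = "Bearer $token" }
```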
+#### **Bug fixes**
+
+|Bug fixes |Related information |
+| :-- | : |
+|Index the first value for DICOM tags that incorrectly specify multiple values |Attributes that are defined to have a single value but have specified multiple values will now be leniently accepted. The first value for such attributes will be indexed. |
+ ## April 2022 ### FHIR service
-### **Features and enhancements**
+#### **Features and enhancements**
-|Enhancements &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; |Related information |
-| :- | : |
+|Enhancements |Related information |
+| :- | : |
|FHIRPath Patch |FHIRPath Patch was added as a feature to both the Azure API for FHIR. This implements FHIRPath Patch as defined on the [HL7](http://hl7.org/fhir/fhirpatch.html) website. | |Handles invalid header on versioned update |When the versioning policy is set to "versioned-update", we required that the most recent version of the resource is provided in the request's if-match header on an update. The specified version must be in ETag format. Previously, a 500 would be returned if the version was invalid or in an incorrect format. This update now returns a 400 Bad Request. For more information, see [PR #2467](https://github.com/microsoft/fhir-server/pull/2467). | |Bulk import in public preview |The bulk-import feature enables importing FHIR data to the FHIR server at high throughput using the $import operation. It's designed for initial data load into the FHIR server. For more information, see [Bulk-import FHIR data (Preview)](./../healthcare-apis/fhir/import-data.md). |
-### **Bug fixes**
+#### **Bug fixes**
|Bug fixes |Related information |
-| :-- | : |
+| :-- | : |
|Adds core to resource path |Part of the path to a string resource was accidentally removed in the versioning policy. This fix adds it back in. For more information, see [PR #2470](https://github.com/microsoft/fhir-server/pull/2470). |
-### **Known issues**
+#### **Known issues**
For more information about the currently known issues with the FHIR service, see [Known issues: FHIR service](known-issues.md). ### DICOM service
-### **Bug fixes**
+#### **Bug fixes**
|Bug fixes |Related information |
-| :-- | : |
+| :-- | : |
|Reduce the strictness of validation applied to incoming DICOM files |When the value representation (VR) is a decimal string (DS) or integer string (IS), `fo-dicom` serialization treats the value as a number. Customer DICOM files can be old and contain invalid numbers. Our service blocked such file uploads due to the serialization exception. For more information, see [PR #1450](https://github.com/microsoft/dicom-server/pull/1450). | |Correctly parse a range of input in the content negotiation headers |Currently, WADO with Accept: multipart/related; type=application/dicom will throw an error. It will accept Accept: multipart/related; type="application/dicom", but they should be equivalent. For more information, see [PR #1462](https://github.com/microsoft/dicom-server/pull/1462). | |Fixed an issue where parallel upload of images in a study could fail under certain circumstances |Handle race conditions during parallel instance inserts in the same study. For more information, see [PR #1491](https://github.com/microsoft/dicom-server/pull/1491) and [PR #1496](https://github.com/microsoft/dicom-server/pull/1496). |
For more information about the currently known issues with the FHIR service, see
### Azure Health Data Services
-### **Features**
+#### **Features**
-|Feature &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; |Related information |
-| :- | : |
+|Feature |Related information |
+| :- | : |
|Private Link |The Private Link feature is now available. With Private Link, you can access Azure Health Data Services securely from your VNet as a first-party service without having to go through a public Domain Name System (DNS). For more information, see [Configure Private Link for Azure Health Data Services](./../healthcare-apis/healthcare-apis-configure-private-link.md). | ### FHIR service
-### **Features**
+#### **Features**
|Feature | Related information |
-| : | -: |
+| : | :- |
|FHIRPath Patch |This new feature enables you to use the FHIRPath Patch operation on FHIR resources. For more information, see [FHIR REST API capabilities for Azure Health Data Services FHIR service](./../healthcare-apis/fhir/fhir-rest-api-capabilities.md). |
-### **Bug fixes**
+#### **Bug fixes**
|Bug fixes |Related information |
-| :-- | : |
+| :-- | : |
|SQL timeout returns 408 |Previously, a SQL timeout would return a 500. Now a timeout in SQL will return a FHIR OperationOutcome with a 408 status code. For more information, see [PR #2497](https://github.com/microsoft/fhir-server/pull/2497). | |Duplicate resources in search with `_include` |Fixed issue where a single resource can be returned twice in a search that has `_include`. For more information, see [PR #2448](https://github.com/microsoft/fhir-server/pull/2448). | |PUT creates on versioned update |Fixed issue where creates with PUT resulted in an error when the versioning policy is configured to `versioned-update`. For more information, see [PR #2457](https://github.com/microsoft/fhir-server/pull/2457). |
For more information about the currently known issues with the FHIR service, see
### MedTech service
-### **Features and enhancements**
+#### **Features and enhancements**
|Enhancements | Related information |
-| : | -: |
+| : | :- |
|Events |The Events feature within Health Data Services is now generally available (GA). The Events feature allows customers to receive notifications and triggers when FHIR observations are created, updated, or deleted. For more information, see [Events message structure](events/events-message-structure.md) and [What are events?](events/events-overview.md). | |Events documentation for Azure Health Data Services |Updated docs to allow for better understanding, knowledge, and help for Events as it went GA. Updated troubleshooting for ease of use for the customer. | |One touch deploy button for MedTech service launch in the portal |Enables easier deployment and use of MedTech service for customers without the need to go back and forth between pages or interfaces. | ## January 2022
-### **Features and enhancements**
+#### **Features and enhancements**
-|Enhancements &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; |Related information |
-| :- | : |
+|Enhancement |Related information |
+| :- | : |
|Export FHIR data behind firewalls |This new feature enables exporting FHIR data to storage accounts behind firewalls. For more information, see [Configure export settings and set up a storage account](./././fhir/configure-export-data.md). | |Deploy Azure Health Data Services with Azure Bicep |This new feature enables you to deploy Azure Health Data Services using Azure Bicep. For more information, see [Deploy Azure Health Data Services using Azure Bicep](deploy-healthcare-apis-using-bicep.md). |
For more information about the currently known issues with the FHIR service, see
#### **Feature enhancements** |Enhancements | Related information |
-| : | -: |
+| : | :- |
|Customers can define their own query tags using the Extended Query Tags feature |With Extended Query Tags feature, customers now efficiently query non-DICOM metadata for capabilities like multitenancy and cohorts. It's available for all customers in Azure Health Data Services. | ## December 2021 ### Azure Health Data Services
-### **Features and enhancements**
+#### **Features and enhancements**
-|Enhancements &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; |Related information |
-| :- | : |
+|Enhancements |Related information |
+| :- | : |
|Quota details for support requests |We've updated the quota details for customer support requests with the latest information. | |Local RBAC |We've updated the local RBAC documentation to clarify the use of the secondary tenant and the steps to disable it. | |Deploy and configure Azure Health Data Services using scripts |We've started the process of providing PowerShell, CLI scripts, and ARM templates to configure app registration and role assignments. Note that scripts for deploying Azure Health Data Services will be available after GA. | ### FHIR service
-### **Features and enhancements**
+#### **Features and enhancements**
|Enhancements | Related information |
-| : | -: |
+| : | :- |
|Added Publisher to `CapabilityStatement.name` |You can now find the publisher in the capability statement at `CapabilityStatement.name`. [#2319](https://github.com/microsoft/fhir-server/pull/2319) | |Log `FhirOperation` linked to anonymous calls to Request metrics |We weren't logging operations that didn't require authentication. We extended the ability to get `FhirOperation` type in `RequestMetrics` for anonymous calls. [#2295](https://github.com/microsoft/fhir-server/pull/2295) |
-### **Bug fixes**
+#### **Bug fixes**
|Bug fixes |Related information |
-| :-- | : |
+| :-- | : |
|Fixed 500 error when `SearchParameter` Code is null |Fixed an issue with `SearchParameter` if it had a null value for Code, the result would be a 500. Now it will result in an `InvalidResourceException` like the other values do. [#2343](https://github.com/microsoft/fhir-server/pull/2343) | |Returned `BadRequestException` with valid message when input JSON body is invalid |For invalid JSON body requests, the FHIR server was returning a 500 error. Now we'll return a `BadRequestException` with a valid message instead of 500. [#2239](https://github.com/microsoft/fhir-server/pull/2239) | |Handled SQL Timeout issue |If SQL Server timed out, the PUT `/resource{id}` returned a 500 error. Now we handle the 500 error and return a timeout exception with an operation outcome. [#2290](https://github.com/microsoft/fhir-server/pull/2290) |
For more information about the currently known issues with the FHIR service, see
#### **Feature enhancements**
-| Enhancements &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Related information |
-| :- | --: |
+| Enhancements | Related information |
+| :- | :-- |
|Process Patient-everything links |We've expanded the Patient-everything capabilities to process patient links [#2305](https://github.com/microsoft/fhir-server/pull/2305). For more information, see [Patient-everything in FHIR](./../healthcare-apis/fhir/patient-everything.md#processing-patient-links) documentation. | |Added software name and version to capability statement. |In the capability statement, the software name now distinguishes if you're using Azure API for FHIR or Azure Health Data Services. The software version will now specify which open-source [release package](https://github.com/microsoft/fhir-server/releases) is live in the managed service [#2294](https://github.com/microsoft/fhir-server/pull/2294). Addresses: [#1778](https://github.com/microsoft/fhir-server/issues/1778) and [#2241](https://github.com/microsoft/fhir-server/issues/2241) | |Compress continuation tokens |In certain instances, the continuation token was too long to be able to follow the [next link](./../healthcare-apis/fhir/overview-of-search.md#pagination) in searches and would result in a 404. To resolve this, we compressed the continuation token to ensure it stays below the size limit [#2279](https://github.com/microsoft/fhir-server/pull/2279). Addresses issue [#2250](https://github.com/microsoft/fhir-server/issues/2250). | |FHIR service autoscale |The [FHIR service autoscale](./fhir/fhir-service-autoscale.md) is designed to provide optimized service scalability automatically to meet customer demands when they perform data transactions in consistent or various workloads at any time. It's available in all regions where the FHIR service is supported. |
-### **Bug fixes**
+#### **Bug fixes**
|Bug fixes |Related information |
-| :-- | : |
+| :-- | : |
|Resolved 500 error when the date was passed with a time zone. |This fixes a 500 error when a date with a time zone was passed into a datetime field [#2270](https://github.com/microsoft/fhir-server/pull/2270). | |Resolved issue when posting a bundle with incorrect Media Type returned a 500 error. |Previously when posting a search with a key that contains certain characters, a 500 error is returned. This fixes this issue [#2264](https://github.com/microsoft/fhir-server/pull/2264), and it addresses [#2148](https://github.com/microsoft/fhir-server/issues/2148). |
For more information about the currently known issues with the FHIR service, see
#### **Feature enhancements** |Enhancements | Related information |
-| : | -: |
+| : | :- |
|Content-Type header now includes transfer-syntax. |This enables the user to know which transfer syntax is used in case multiple accept headers are being supplied. | ## October 2021
For more information about the currently known issues with the FHIR service, see
#### **Feature enhancements**
-| Enhancements &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Related information |
-| :- | --: |
+| Enhancements | Related information |
+| :- | :-- |
|Test Data Generator tool |We've updated Azure Health Data Services GitHub samples repo to include a [Test Data Generator tool](https://github.com/microsoft/healthcare-apis-samples/blob/main/docs/HowToRunPerformanceTest.md) using Synthea data. This tool is an improvement to the open source [public test projects](https://github.com/ShadowPic/PublicTestProjects), based on Apache JMeter, that can be deployed to Azure AKS for performance tests. | ### FHIR service
For more information about the currently known issues with the FHIR service, see
#### **Feature enhancements** |Enhancements | Related information |
-| : | -: |
+| : | :- |
|Added support for [_sort](././../healthcare-apis/fhir/overview-of-search.md#search-result-parameters) on strings and dateTime. |[#2169](https://github.com/microsoft/fhir-server/pull/2169) | #### **Bug fixes** |Bug fixes | Related information |
-| : | -: |
+| : | :- |
|Fixed issue where [Conditional Delete](././../healthcare-apis/fhir/fhir-rest-api-capabilities.md#conditional-delete) could result in an infinite loop. | [#2269](https://github.com/microsoft/fhir-server/pull/2269) | |Resolved 500 error possibly caused by a malformed transaction body in a bundle POST. We've added a check that the URL is populated in the [transaction bundle](././..//healthcare-apis/fhir/fhir-features-supported.md#rest-api) requests. | [#2255](https://github.com/microsoft/fhir-server/pull/2255) | ### **DICOM service** |Added support | Related information |
-| : | -: |
+| : | :- |
|Regions | South Brazil and Central Canada. For more information about Azure regions and availability zones, see [Azure services that support availability zones](./../availability-zones/az-region.md). | |Extended Query tags |DateTime (DT) and Time (TM) Value Representation (VR) types | |Bug fixes | Related information |
-| : | -: |
+| : | :- |
|Implemented fix to workspace names. |Enabled DICOM service to work with workspaces that have names beginning with a letter. | ## September 2021
For more information about the currently known issues with the FHIR service, see
#### **Feature enhancements** |Enhancements | Related information |
-| :- | -: |
-
+| :- | :- |
|Added support for conditional patch | [Conditional patch](./././azure-api-for-fhir/fhir-rest-api-capabilities.md#patch-and-conditional-patch)|
-| :- | -:|
|Conditional patch | [#2163](https://github.com/microsoft/fhir-server/pull/2163) | |Added conditional patch audit event. | [#2213](https://github.com/microsoft/fhir-server/pull/2213) | |Allow JSON patch in bundles | [JSON patch in bundles](./././azure-api-for-fhir/fhir-rest-api-capabilities.md#json-patch-in-bundles)|
-| :- | -:|
+| :- | :-|
|Allows for search history bundles with Patch requests. |[#2156](https://github.com/microsoft/fhir-server/pull/2156) | |Enabled JSON patch in bundles using Binary resources. |[#2143](https://github.com/microsoft/fhir-server/pull/2143) | |Added new audit event [OperationName subtypes](./././azure-api-for-fhir/enable-diagnostic-logging.md#audit-log-details)| [#2170](https://github.com/microsoft/fhir-server/pull/2170) | | Running a reindex job | [Reindex improvements](./././fhir/how-to-run-a-reindex.md)|
-| :- | -:|
+| :- | :-|
|Added [boundaries for reindex](./././azure-api-for-fhir/how-to-run-a-reindex.md#performance-considerations) parameters. |[#2103](https://github.com/microsoft/fhir-server/pull/2103)| |Updated error message for reindex parameter boundaries. |[#2109](https://github.com/microsoft/fhir-server/pull/2109)| |Added final reindex count check. |[#2099](https://github.com/microsoft/fhir-server/pull/2099)|
For more information about the currently known issues with the FHIR service, see
#### **Bug fixes** |Bug fixes | Related information |
-| :- | --: |
+| :- | :-- |
| Wider catch for exceptions during applying patch | [#2192](https://github.com/microsoft/fhir-server/pull/2192)| |Fix history with PATCH in STU3 |[#2177](https://github.com/microsoft/fhir-server/pull/2177) | |Custom search bugs | Related information |
-| :- | -: |
+| :- | :- |
|Addresses the delete failure with Custom Search parameters |[#2133](https://github.com/microsoft/fhir-server/pull/2133) | |Added retry logic while Deleting Search parameter | [#2121](https://github.com/microsoft/fhir-server/pull/2121)| |Set max item count in search options in SearchParameterDefinitionManager |[#2141](https://github.com/microsoft/fhir-server/pull/2141) | |Better exception if there's a bad expression in a search parameter |[#2157](https://github.com/microsoft/fhir-server/pull/2157) | |Resolved SQL batch reindex if one resource fails | Related information |
-| :- | -: |
+| :- | :- |
|Updates SQL batch reindex retry logic |[#2118](https://github.com/microsoft/fhir-server/pull/2118) | |GitHub issues closed | Related information |
-| :- | -: |
+| :- | :- |
|Unclear error message for conditional create with no ID |[#2168](https://github.com/microsoft/fhir-server/issues/2168) | ### **DICOM service**
+#### **Bug fixes**
+ |Bug fixes | Related information |
-| :- | -: |
+| :- | :- |
|Implemented fix to resolve QIDO paging-ordering issues | [#989](https://github.com/microsoft/dicom-server/pull/989) |
-| :- | -: |
+| :- | :- |
### **MedTech service**
+#### **Bug fixes**
+ |Bug fixes | Related information |
-|:- | -: |
+| :- | :- |
| MedTech service normalized improvements with calculations to support and enhance health data standardization. | See: [Use Device mappings](./../healthcare-apis/iot/how-to-use-device-mappings.md) and [Calculated Functions](./../healthcare-apis/iot/how-to-use-calculated-functions-mappings.md) | ## Next steps
For information about the features and bug fixes in Azure API for FHIR, see
>[!div class="nextstepaction"] >[Release notes: Azure API for FHIR](./azure-api-for-fhir/release-notes.md)+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
internet-peering Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/faqs.md
Title: Internet peering - FAQs
description: Internet peering - FAQs -+ Last updated 11/27/2019-+ # Internet peering - FAQs
internet-peering How To Exchange Route Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/how-to-exchange-route-server-portal.md
Title: Peering Connection for Exchange partners with route server by using the P
description: Create or modify an Exchange peering with route server by using the Azure portal -+ Last updated 5/19/2020-+ # Create or modify an Exchange peering with route server in Azure portal
internet-peering Howto Direct Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/howto-direct-portal.md
Title: Create or modify a Direct peering by using the Azure portal
description: Create or modify a Direct peering by using the Azure portal -+ Last updated 5/19/2020-+ # Create or modify a Direct peering by using the Azure portal
internet-peering Howto Direct Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/howto-direct-powershell.md
Title: Create or modify a Direct peering by using PowerShell
description: Create or modify a Direct peering by using PowerShell -+ Last updated 11/27/2019-+
internet-peering Howto Exchange Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/howto-exchange-portal.md
Title: Create or modify an Exchange peering by using the Azure portal
description: Create or modify an Exchange peering by using the Azure portal -+ Last updated 5/2/2020-+ # Create or modify an Exchange peering by using the Azure portal
internet-peering Howto Exchange Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/howto-exchange-powershell.md
Title: Create or modify an Exchange peering by using PowerShell
description: Create or modify an Exchange peering by using PowerShell -+ Last updated 11/27/2019-+
internet-peering Howto Legacy Direct Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/howto-legacy-direct-portal.md
Title: Convert a legacy Direct peering to an Azure resource by using the Azure p
description: Convert a legacy Direct peering to an Azure resource by using the Azure portal -+ Last updated 11/27/2019-+ # Convert a legacy Direct peering to an Azure resource by using the Azure portal
internet-peering Howto Legacy Direct Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/howto-legacy-direct-powershell.md
Title: Convert a legacy Direct peering to an Azure resource by using PowerShell
description: Convert a legacy Direct peering to an Azure resource by using PowerShell -+ Last updated 11/27/2019-+
internet-peering Howto Legacy Exchange Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/howto-legacy-exchange-portal.md
Title: Convert a legacy Exchange peering to an Azure resource by using the Azure
description: Convert a legacy Exchange peering to an Azure resource by using the Azure portal -+ Last updated 5/21/2020-+ # Convert a legacy Exchange peering to an Azure resource by using the Azure portal
internet-peering Howto Legacy Exchange Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/howto-legacy-exchange-powershell.md
Title: Convert a legacy Exchange peering to an Azure resource by using PowerShel
description: Convert a legacy Exchange peering to an Azure resource by using PowerShell -+ Last updated 12/15/2020-+
internet-peering Howto Peering Service Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/howto-peering-service-portal.md
Title: Enable Azure Peering Service on a Direct peering by using the Azure porta
description: Enable Azure Peering Service on a Direct peering by using the Azure portal -+ Last updated 3/18/2020-+ # Enable Azure Peering Service on a Direct peering by using the Azure portal
internet-peering Howto Peering Service Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/howto-peering-service-powershell.md
Title: Enable Azure Peering Service on a Direct peering by using PowerShell
description: Enable Azure Peering Service on a Direct peering by using PowerShell -+ Last updated 11/27/2019-+
internet-peering Howto Subscription Association Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/howto-subscription-association-portal.md
Title: Associate peer ASN to Azure subscription using the portal
description: Associate peer ASN to Azure subscription using the portal -+ Last updated 5/18/2020-+ # Associate peer ASN to Azure subscription using the portal
internet-peering Howto Subscription Association Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/howto-subscription-association-powershell.md
Title: Associate peer ASN to Azure subscription using PowerShell
description: Associate peer ASN to Azure subscription using PowerShell -+ Last updated 12/15/2020-+
internet-peering Overview Peering Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/overview-peering-service.md
Title: Internet peering vs. Peering Service
description: Internet peering vs. Peering Service -+ Last updated 5/22/2020-+ # Internet peering vs. Peering Service
internet-peering Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/overview.md
Title: Set up peering with Microsoft
description: Overview of peering -+ Last updated 12/15/2020-+ # Internet peering overview
internet-peering Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/policy.md
Title: Microsoft peering policy
description: Microsoft peering policy -+ Last updated 12/15/2020--+ # Peering policy
internet-peering Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/prerequisites.md
Title: Prerequisites to set up peering with Microsoft
description: Prerequisites to set up peering with Microsoft -+ Last updated 12/15/2020-+ # Prerequisites to set up peering with Microsoft
internet-peering Walkthrough Communications Services Partner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/walkthrough-communications-services-partner.md
Title: Azure Internet peering for Communications Services walkthrough
description: Azure Internet peering for Communications Services walkthrough -+ Last updated 03/30/2021-+ # Azure Internet peering for Communications Services walkthrough
internet-peering Walkthrough Direct All https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/walkthrough-direct-all.md
Title: Direct peering walkthrough
description: Direct peering walkthrough -+ Last updated 12/15/2020-+ # Direct peering walkthrough
internet-peering Walkthrough Exchange All https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/walkthrough-exchange-all.md
Title: Exchange peering walkthrough
description: Exchange peering walkthrough -+ Last updated 12/15/2020-+
internet-peering Walkthrough Peering Service All https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/walkthrough-peering-service-all.md
Title: Peering Service partner walkthrough
description: Peering Service partner walkthrough -+ Last updated 12/15/2020-+ # Peering Service partner walkthrough
iot-edge Troubleshoot Common Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/troubleshoot-common-errors.md
By default, IoT Edge starts modules in their own isolated container network. The
**Option 1: Set DNS server in container engine settings**
-Specify the DNS server for your environment in the container engine settings, which will apply to all container modules started by the engine. Create a file named `daemon.json` specifying the DNS server to use. For example:
+Specify the DNS server for your environment in the container engine settings, which will apply to all container modules started by the engine. Create a file named `daemon.json`, then specify the DNS server to use. For example:
```json
{
    "dns": ["1.1.1.1"]
}
```
-The above example sets the DNS server to a publicly accessible DNS service. If the edge device can't access this IP from its environment, replace it with DNS server address that is accessible.
+This DNS server is set to a publicly accessible DNS service. However, some networks, such as corporate networks, have their own DNS servers installed and won't allow access to public DNS servers. Therefore, if your edge device can't access a public DNS server, replace it with an accessible DNS server address.
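Note that the container engine reads `daemon.json` only at startup, so restart the engine after editing the file for the new DNS setting to take effect (on a Linux device, for example, with `sudo systemctl restart docker`).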
<!-- 1.1 --> :::moniker range="iotedge-2018-06"
key-vault How To Integrate Certificate Authority https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/how-to-integrate-certificate-authority.md
Make sure you have the following information from your GlobalSign account:
- E-mail of Administrator
- Phone Number of Administrator

## Add the certificate authority in Key Vault

After you gather the preceding information from your DigiCert CertCentral account, you can add DigiCert to the certificate authority list in the key vault.
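If you prefer scripting this step, here's a minimal Azure PowerShell sketch; the vault name, issuer name, account ID, and API key are placeholder values you'd replace with your own.

```powershell
# The CA account API key must be passed as a secure string (placeholder value).
$apiKey = ConvertTo-SecureString -String "<api-key>" -AsPlainText -Force

# Register DigiCert as a certificate issuer in the key vault.
Set-AzKeyVaultCertificateIssuer -VaultName "myKeyVault" -Name "digicert01" `
    -IssuerProvider "DigiCert" -AccountId "<account-id>" -ApiKey $apiKey
```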
Merge the CSR signed by the certificate authority to complete the request. For i
For more information, see [Certificate operations in the Key Vault REST API reference](/rest/api/keyvault). For information on establishing permissions, see [Vaults - Create or update](/rest/api/keyvault/keyvault/vaults/create-or-update) and [Vaults - Update access policy](/rest/api/keyvault/keyvault/vaults/update-access-policy).
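To illustrate the permissions step, the following Azure PowerShell sketch grants a user the certificate permissions needed to create and merge certificate requests; the vault name and user principal name are placeholders.

```powershell
# Grant a user the certificate permissions needed to create and merge requests.
Set-AzKeyVaultAccessPolicy -VaultName "myKeyVault" `
    -UserPrincipalName "user@contoso.com" `
    -PermissionsToCertificates get,list,create,import,update
```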
-## Frequently asked questions
--- **Can I generate a DigiCert wildcard certificate by using Key Vault?**
-
- Yes, though it depends on how you configured your DigiCert account.
-- **How can I create an OV SSL or EV SSL certificate with DigiCert?**
-
- Key Vault supports the creation of OV and EV SSL certificates. When you create a certificate, select **Advanced Policy Configuration** and then specify the certificate type. Supported values: OV SSL, EV SSL
-
- You can create this type of certificate in Key Vault if your DigiCert account allows it. For this type of certificate, validation is performed by DigiCert. If validation fails, the DigiCert support team can help. You can add information when you create a certificate by defining the information in `subjectName`.
-
- For example,
- `SubjectName="CN = docs.microsoft.com, OU = Microsoft Corporation, O = Microsoft Corporation, L = Redmond, S = WA, C = US"`.
-
-- **Does it take longer to create a DigiCert certificate via integration than it does to acquire it directly from DigiCert?**
-
- No. When you create a certificate, the verification process might take time. DigiCert controls that process.
-- ## Next steps-
+- [Frequently asked questions: Integrate Key Vault with Integrated Certificate Authorities](faq.yml)
- [Authentication, requests, and responses](../general/authentication-requests-and-responses.md)
- [Key Vault Developer's Guide](../general/developers-guide.md)
key-vault Overview Renew Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/overview-renew-certificate.md
For more information about creating a new CSR, see [Create and merge a CSR in Ke
Azure Key Vault also handles autorenewal of self-signed certificates. To learn more about changing the issuance policy and updating a certificate's lifecycle attributes, see [Configure certificate autorotation in Key Vault](./tutorial-rotate-certificates.md#update-lifecycle-attributes-of-a-stored-certificate).
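For example, the following Azure PowerShell sketch (with placeholder names, assuming a self-signed certificate) configures autorenewal through the certificate policy's lifetime action:

```powershell
# Create a self-signed certificate that renews automatically
# when 80% of its 12-month lifetime has elapsed.
$policy = New-AzKeyVaultCertificatePolicy -SubjectName "CN=contoso.com" `
    -IssuerName "Self" -ValidityInMonths 12 -RenewAtPercentageLifetime 80

Add-AzKeyVaultCertificate -VaultName "myKeyVault" -Name "myCertificate" -CertificatePolicy $policy
```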
-## Troubleshoot
-* If the issued certificate is in *disabled* status in the Azure portal, go to **Certificate Operation** to view the certificate's error message.
-* Error type "The CSR used to get your certificate has already been used. Please try to generate a new certificate with a new CSR."
- Go to 'Advanced Policy' section of the certificate and check if **'reuse key on renewal'** option is turned off.
--
-## Frequently asked questions
-
-**How can I test the autorotation feature of the certificate?**
-
-Create a self-signed certificate with a validity of **1 month**, and then set the lifetime action for rotation at **1%**. You should be able to view certificate version history being created over next few days.
-
-**Will the tags be replicated after autorenewal of the certificate?**
-
-Yes, the tags are replicated after autorenewal.
- ## Next steps
-* [Integrate Key Vault with DigiCert certificate authority](how-to-integrate-certificate-authority.md)
-* [Tutorial: Configure certificate autorotation in Key Vault](tutorial-rotate-certificates.md)
+- [Azure Key Vault certificate renewal frequently asked questions](faq.yml)
+- [Integrate Key Vault with DigiCert certificate authority](how-to-integrate-certificate-authority.md)
+- [Tutorial: Configure certificate autorotation in Key Vault](tutorial-rotate-certificates.md)
key-vault Private Link Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/private-link-service.md
Your private endpoint uses a private IP address in your virtual network.
# [Azure portal](#tab/portal)
-## Establish a private link connection to Key Vault using the Azure portal
+## Establish a private link connection to Key Vault using the Azure portal
First, create a virtual network by following the steps in [Create a virtual network using the Azure portal](../../virtual-network/quick-create-portal.md)
Select the "Review + Create" button and create the key vault. It will take 5-10
If you already have a key vault, you can create a private link connection by following these steps:
-1. Sign in to the Azure portal.
-1. In the search bar, type in "key vaults"
+1. Sign in to the Azure portal.
+1. In the search bar, type in "key vaults".
1. Select the key vault from the list to which you want to add a private endpoint.
-1. Select the "Networking" tab under Settings
-1. Select the Private endpoint connections tab at the top of the page
-1. Select the "+ Private Endpoint" button at the top of the page.
+1. Select the "Networking" tab under Settings.
+1. Select the "Private endpoint connections" tab at the top of the page.
+1. Select the "+ Create" button at the top of the page.
![Screenshot that shows the '+ Private Endpoint' button on the 'Networking' page.](../media/private-link-service-3.png) ![Screenshot that shows the 'Basics' tab on the 'Create a private endpoint (Preview) page.](../media/private-link-service-4.png)
-You can choose to create a private endpoint for any Azure resource in using this blade. You can either use the dropdown menus to select a resource type and select a resource in your directory, or you can connect to any Azure resource using a resource ID. Leave the "integrate with the private zone DNS" option unchanged.
+1. Under "Project Details", select the Resource Group that contains the virtual network that you created as a prerequisite for this tutorial. Under "Instance details", enter "myPrivateEndpoint" as the Name, and select the same location as the virtual network that you created as a prerequisite for this tutorial.
+
+ You can choose to create a private endpoint for any Azure resource using this blade. You can either use the dropdown menus to select a resource type and select a resource in your directory, or you can connect to any Azure resource using a resource ID. Leave the "integrate with the private DNS zone" option unchanged.
+
+1. Advance to the "Resources" blade. For "Resource type", select "Microsoft.KeyVault/vaults"; for "Resource", select the key vault you created as a prerequisite for this tutorial. "Target sub-resource" will auto-populate with "vault".
+1. Advance to the "Virtual Network". Select the virtual network and subnet that you created as a prerequisite for this tutorial.
+1. Advance through the "DNS" and "Tags" blades, accepting the defaults.
+1. On the "Review + Create" blade, select "Create".
When you create a private endpoint, the connection must be approved. If the resource for which you are creating a private endpoint is in your directory, you will be able to approve the connection request provided you have sufficient permissions; if you are connecting to an Azure resource in another directory, you must wait for the owner of that resource to approve your connection request. There are four provisioning states:
-| Service provide action | Service consumer private endpoint state | Description |
+| Service action | Service consumer private endpoint state | Description |
|--|--|--|
| None | Pending | Connection is created manually and is pending approval from the Private Link resource owner. |
| Approve | Approved | Connection was automatically or manually approved and is ready to be used. |
Aliases: <your-key-vault-name>.vault.azure.net
2. Click Overview and check if there is an A record with the simple name of your key vault (i.e. fabrikam). Do not specify any suffix.
3. Make sure you check the spelling, and either create or fix the A record. You can use a TTL of 600 (10 mins).
4. Make sure you specify the correct private IP address.
-
* Check to make sure the A record has the correct IP address.
  1. You can confirm the IP address by opening the Private Endpoint resource in the Azure portal.
  2. Navigate to the Microsoft.Network/privateEndpoints resource in the Azure portal (not the Key Vault resource).
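As a quick check, assuming you run it from a VM inside the virtual network, resolving the vault's FQDN should return the private endpoint's IP address:

```powershell
# From a VM in the virtual network, this should resolve to the private IP
# (for example, 10.x.x.x) rather than a public address.
Resolve-DnsName -Name "fabrikam.vault.azure.net"
```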
lab-services Class Type Adobe Creative Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-adobe-creative-cloud.md
The size of VM that you need to use for your lab depends on the types of project
> [!WARNING]
> The **Small GPU (Visualization)** virtual machine size is configured to enable a high-performing graphics experience and meets [Adobe's system requirements for each application](https://helpx.adobe.com/creative-cloud/system-requirements.html). Make sure to choose Small GPU (Visualization), not Small GPU (Compute). For more information about this virtual machine size, see the article on [how to set up a lab with GPUs](./how-to-setup-lab-gpu.md).
+#### GPU drivers
+
+When you create the lab, we recommend that you install the GPU drivers by selecting the **Install GPU drivers** option in the lab creation wizard. You should also validate that the GPU drivers are correctly installed. For more information, read the following sections:
+- [Ensure that the appropriate GPU drivers are installed](../lab-services/how-to-setup-lab-gpu.md#ensure-that-the-appropriate-gpu-drivers-are-installed)
+- [Validate the installed drivers](../lab-services/how-to-setup-lab-gpu.md#validate-the-installed-drivers)
+
## Template machine configuration

### Creative Cloud deployment package
Consider saving your template VM for future use. To save the template VM, see [
- When self-service is *enabled*, the template VM's image will have Creative Cloud desktop installed. Teachers can then reuse this image to create labs and to choose which Creative Cloud apps to install. This helps reduce IT overhead since teachers can independently set up labs and have full control over installing the Creative Cloud apps required for their classes.
- When self-service is *disabled*, the template VM's image will already have the specified Creative Cloud apps installed. Teachers can reuse this image to create labs; however, they won't be able to install additional Creative Cloud apps.
+### Troubleshooting
+
+Adobe Creative Cloud may show an error saying *Your graphics processor is incompatible* when the GPU or its drivers aren't configured correctly.
++
+To fix this issue:
+- Ensure that you selected the Small GPU *(Visualization)* size when you created your lab. You can see the VM size used by the lab on the lab's [Template page](../lab-services/how-to-create-manage-template.md).
+- Try [manually installing the Small GPU Visualization drivers](../lab-services/how-to-setup-lab-gpu.md#install-the-small-gpu-visualization-drivers).
+
## Cost

In this section, we'll look at a possible cost estimate for this class. We'll use a class of 25 students with 20 hours of scheduled class time. Also, each student gets 10 hours of quota for homework or assignments outside scheduled class time. The virtual machine size we chose was **Small GPU (Visualization)**, which is 160 lab units.
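Assuming the example rate of USD 0.01 per lab unit per hour (verify against current Azure Lab Services pricing), the estimate works out as follows:

25 students × (20 scheduled hours + 10 quota hours) × 160 lab units × 0.01 USD per lab unit hour = 1,200.00 USD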
lab-services How To Enable Nested Virtualization Template Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-enable-nested-virtualization-template-vm.md
To use the automated setup for nested virtualization with Windows Server 2016 or
### Using Windows tools to enable nested virtualization

To configure nested virtualization for Windows Server 2016 or 2019 manually, see [Enable nested virtualization on a template virtual machine in Azure Lab Services manually](how-to-enable-nested-virtualization-template-vm-ui.md). Instructions will also cover configuring networking so the Hyper-V VMs have internet access.+
+### Processor compatibility
+
+The nested virtualization VM sizes may use different processors as shown in the following table:
+
+| Size | Series | Processor |
+| - | -- | -- |
+| Medium (nested virtualization) | [Standard_D4s_v4](../virtual-machines/dv4-dsv4-series.md) | 3rd Generation Intel® Xeon® Platinum 8370C (Ice Lake) or the Intel® Xeon® Platinum 8272CL (Cascade Lake) |
+| Large (nested virtualization) | [Standard_D8s_v4](../virtual-machines/dv4-dsv4-series.md) | 3rd Generation Intel® Xeon® Platinum 8370C (Ice Lake) or the Intel® Xeon® Platinum 8272CL (Cascade Lake) |
+
+Each time that a template VM or a student VM is stopped and started, the underlying processor may change. To help ensure that nested VMs work consistently across processors, try enabling [processor compatibility mode](/windows-server/virtualization/hyper-v/manage/processor-compatibility-mode-hyper-v) on the nested VMs. It's recommended to enable **Processor Compatibility** mode on the template VM's nested VMs before publishing or exporting the image. You should also test the performance of the nested VMs with the **Processor Compatibility** mode enabled to ensure performance isn't negatively impacted. For more information, see [ramifications of using processor compatibility mode](/windows-server/virtualization/hyper-v/manage/processor-compatibility-mode-hyper-v#ramifications-of-using-processor-compatibility-mode).
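As a sketch, assuming a nested VM named `NestedVM` on the template VM, the setting can be enabled with Hyper-V's PowerShell cmdlets:

```powershell
# Processor compatibility can only be changed while the VM is off.
Stop-VM -Name "NestedVM"

# Enable processor compatibility mode so the nested VM tolerates moving
# between Ice Lake and Cascade Lake hosts.
Set-VMProcessor -VMName "NestedVM" -CompatibilityForMigrationEnabled $true

Start-VM -Name "NestedVM"
```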
lab-services How To Get Started Create Lab Within Canvas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-get-started-create-lab-within-canvas.md
The student list for the course is automatically synced with the course roster.
This section outlines common error messages that you may see, along with the steps to resolve them.
+- Insufficient permissions to create lab.
+
+ In Canvas, an educator will see a message indicating that they don't have sufficient permission. Educators should contact their Azure admin so they can be [added as a **Lab Creator**](tutorial-setup-lab-plan.md#add-a-user-to-the-lab-creator-role). For example, educators can be added as a **Lab Creator** to the resource group that contains their lab.
+
+- Message that there isn't enough capacity to create lab VMs.
+
  [Request a limit increase](capacity-limits.md#request-a-limit-increase), which must be done by an Azure Lab Services administrator.
- Student sees warning that the lab isn't available yet.

  In Canvas, you'll see the following message if the educator hasn't published the lab yet. Educators must [publish the lab](tutorial-setup-lab.md#publish-a-lab) and [sync users](how-to-manage-user-lists-within-canvas.md#sync-users) for students to have access to a lab.

  :::image type="content" source="./media/how-to-get-started-create-labs-within-canvas/troubleshooting-lab-isnt-available-yet.png" alt-text="Troubleshooting -> This lab is not available yet":::

-- Insufficient permissions to create lab.
+- Student or educator is prompted to grant access.
- In Canvas, an educator will see a message indicating that they don't have sufficient permission. Educators should contact their Azure admin so they can be [added as a **Lab Creator**](tutorial-setup-lab-plan.md#add-a-user-to-the-lab-creator-role).
+ Before a student or educator can access their lab for the first time, some browsers require that they first grant Azure Lab Services access to the browser's local storage. To grant access, educators and students should click the **Grant access** button when they are prompted:
-- Message that there isn't enough capacity to create lab VMs.
+ :::image type="content" source="./media/how-to-get-started-create-labs-within-canvas/canvas-grant-access-prompt.png" alt-text="Screenshot of page to grant Azure Lab Services access to use local storage for the browser.":::
+
+ Educators and students will see the message **Access granted** when access is successfully granted to Azure Lab Services. The educator or student should then reload the browser window to start using Azure Lab Services.
+
+ :::image type="content" source="./media/how-to-get-started-create-labs-within-canvas/canvas-access-granted-success.png" alt-text="Screenshot of access granted page in Azure Lab Services.":::
+
+ > [!IMPORTANT]
+ > Ensure that students and educators are using an up-to-date version of their browser. For older browser versions, students and educators may experience issues with being able to successfully grant access to Azure Lab Services.
+
+ - Educator isn't prompted for their credentials after they click sign-in.
+
  When an educator accesses Azure Lab Services within their course, they may be prompted to sign in. Ensure that the browser's settings allow pop-ups from the URL of your Canvas instance; otherwise, the pop-up may be blocked by default.
- [Request a limit increase](capacity-limits.md#request-a-limit-increase).
+ :::image type="content" source="./media/how-to-get-started-create-labs-within-canvas/canvas-sign-in.png" alt-text="Azure Lab Services sign-in screen.":::
## Next steps
lab-services How To Setup Lab Gpu 1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-setup-lab-gpu-1.md
+
+ Title: Set up a lab with GPUs in Azure Lab Services when using lab accounts | Microsoft Docs
+description: Learn how to set up a lab with graphics processing unit (GPU) virtual machines when using lab accounts.
++ Last updated : 06/26/2020+++
+# Set up GPU virtual machines in labs contained within lab accounts
++
+This article shows you how to do the following tasks:
+
+- Choose between *visualization* and *compute* graphics processing units (GPUs).
+- Ensure that the appropriate GPU drivers are installed.
+
+## Choose between visualization and compute GPU sizes
+
+On the first page of the lab creation wizard, in the **Which virtual machine size do you need?** drop-down list, you select the size of the VMs that are needed for your class.
+
+![Screenshot of the "New lab" pane for selecting a VM size](./media/how-to-setup-gpu-1/lab-gpu-selection.png)
+
+In this process, you have the option of selecting either **Visualization** or **Compute** GPUs. It's important to choose the type of GPU based on the software that your students will use.
+
+As described in the following table, the *compute* GPU size is intended for compute-intensive applications. For example, the [Deep Learning in Natural Language Processing class type](./class-type-deep-learning-natural-language-processing.md) uses the **Small GPU (Compute)** size. The compute GPU is suitable for this type of class, because students use deep learning frameworks and tools that are provided by the [Data Science Virtual Machine image](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.ubuntu-1804) to train deep learning models with large sets of data.
+
+| Size | vCPUs | RAM | Description |
+| - | -- | -- | -- |
+| Small GPU (Compute) | 6 vCPUs | 56 GB RAM | [Standard_NC6](../virtual-machines/nc-series.md). This size is best suited for compute-intensive applications such as artificial intelligence (AI) and deep learning. |
+
+The *visualization* GPU sizes are intended for graphics-intensive applications. For example, the [SOLIDWORKS engineering class type](./class-type-solidworks.md) shows using the **Small GPU (Visualization)** size. The visualization GPU is suitable for this type of class, because students interact with the SOLIDWORKS 3D computer-aided design (CAD) environment for modeling and visualizing solid objects.
+
+| Size | vCPUs | RAM | Description |
+| - | -- | -- | -- |
+| Small GPU (Visualization) | 6 vCPUs | 56 GB RAM | [Standard_NV6](../virtual-machines/nv-series.md). This size is best suited for remote visualization, streaming, gaming, and encoding that use frameworks such as OpenGL and DirectX. |
+| Medium GPU (Visualization) | 12 vCPUs | 112 GB RAM | [Standard_NV12](../virtual-machines/nv-series.md?bc=%2fazure%2fvirtual-machines%2flinux%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json). This size is best suited for remote visualization, streaming, gaming, and encoding that use frameworks such as OpenGL and DirectX. |
+
+> [!NOTE]
+> You may not see some of these VM sizes in the list when creating a lab. The list is populated based on the current capacity of the lab's location. For availability of VMs, see [Products available by region](https://azure.microsoft.com/regions/services/?products=virtual-machines).
+
+## Ensure that the appropriate GPU drivers are installed
+
+To take advantage of the GPU capabilities of your lab VMs, ensure that the appropriate GPU drivers are installed. In the lab creation wizard, when you select a GPU VM size, you can select the **Install GPU drivers** option.
+
+![Screenshot of the "New lab" showing the "Install GPU drivers" option](./media/how-to-setup-gpu-1/lab-gpu-drivers.png)
+
+As shown in the preceding image, this option is enabled by default, which ensures that recently released drivers are installed for the type of GPU and image that you selected:
+
+- When you select a *compute* GPU size, your lab VMs are powered by the [NVIDIA Tesla K80](https://www.nvidia.com/content/dam/en-zz/Solutions/Data-Center/tesla-product-literature/Tesla-K80-BoardSpec-07317-001-v05.pdf) GPU. In this case, recent [Compute Unified Device Architecture (CUDA)](http://developer.download.nvidia.com/compute/cuda/2_0/docs/CudaReferenceManual_2.0.pdf) drivers are installed, which enables high-performance computing.
+- When you select a *visualization* GPU size, your lab VMs are powered by the [NVIDIA Tesla M60](https://images.nvidia.com/content/tesla/pdf/188417-Tesla-M60-DS-A4-fnl-Web.pdf) GPU and [GRID technology](https://www.nvidia.com/content/dam/en-zz/Solutions/design-visualization/solutions/resources/documents1/NVIDIA_GRID_vPC_Solution_Overview.pdf). In this case, recent GRID drivers are installed, which enables the use of graphics-intensive applications.
+
+> [!IMPORTANT]
+> The **Install GPU drivers** option only installs the drivers when they aren't present on your lab's image. For example, the GPU drivers are already installed on the Azure marketplace's [Data Science image](../machine-learning/data-science-virtual-machine/overview.md#whats-included-on-the-dsvm). If you create a lab using the Data Science image and choose to **Install GPU drivers**, the drivers won't be updated to a more recent version. To update the drivers, you will need to manually install them as explained in the next section.
+
+### Install the drivers manually
+
+You might need to install a different version of the drivers than the version that Azure Lab Services installs for you. This section shows how to manually install the appropriate drivers, depending on whether you're using a *compute* GPU or a *visualization* GPU.
+
+#### Install the compute GPU drivers
+
+To manually install drivers for the *compute* GPU size, follow these steps:
+
+1. In the lab creation wizard, when you're [creating your lab](./how-to-manage-labs.md), disable the **Install GPU drivers** setting.
+
+1. After your lab is created, connect to the template VM to install the appropriate drivers.
+
+ ![Screenshot of the NVIDIA Driver Downloads page](./media/how-to-setup-gpu-1/nvidia-driver-download.png)
+
+ a. In a browser, go to the [NVIDIA Driver Downloads page](https://www.nvidia.com/Download/index.aspx).
+ b. Set the **Product Type** to **Tesla**.
+ c. Set the **Product Series** to **K-Series**.
+ d. Set the **Operating System** according to the type of base image you selected when you created your lab.
+ e. Set the **CUDA Toolkit** to the version of CUDA driver that you need.
+ f. Select **Search** to look for your drivers.
+ g. Select **Download** to download the installer.
+ h. Run the installer so that the drivers are installed on the template VM.
+1. Validate that the drivers are installed correctly by following the instructions in the [Validate the installed drivers](how-to-setup-lab-gpu.md#validate-the-installed-drivers) section.
+1. After you've installed the drivers and other software that are required for your class, select **Publish** to create your students' VMs.
+
+> [!NOTE]
+> If you're using a Linux image, after you've downloaded the installer, install the drivers by following the instructions in [Install CUDA drivers on Linux](../virtual-machines/linux/n-series-driver-setup.md?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json#install-cuda-drivers-on-n-series-vms).
+
+#### Install the visualization GPU drivers
+
+To manually install drivers for the *visualization* GPU sizes, follow these steps:
+
+1. In the lab creation wizard, when you're [creating your lab](./how-to-manage-labs.md), disable the **Install GPU drivers** setting.
+1. After your lab is created, connect to the template VM to install the appropriate drivers.
+1. Install the GRID drivers that are provided by Microsoft on the template VM by following the instructions for your operating system:
+ - [Windows NVIDIA GRID drivers](../virtual-machines/windows/n-series-driver-setup.md#nvidia-grid-drivers)
+ - [Linux NVIDIA GRID drivers](../virtual-machines/linux/n-series-driver-setup.md?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json#nvidia-grid-drivers)
+
+1. Restart the template VM.
+1. Validate that the drivers are installed correctly by following the instructions in the [Validate the installed drivers](how-to-setup-lab-gpu.md#validate-the-installed-drivers) section.
+1. After you've installed the drivers and other software that are required for your class, select **Publish** to create your students' VMs.
+
+### Validate the installed drivers
+
+This section describes how to validate that your GPU drivers are properly installed.
+
+#### Windows images
+
+1. Follow the instructions in the "Verify driver installation" section of [Install NVIDIA GPU drivers on N-series VMs running Windows](../virtual-machines/windows/n-series-driver-setup.md#verify-driver-installation).
+1. If you're using a *visualization* GPU, you can also:
+ - View and adjust your GPU settings in the NVIDIA Control Panel. To do so, in **Windows Control Panel**, select **Hardware**, and then select **NVIDIA Control Panel**.
+
+ ![Screenshot of Windows Control Panel showing the NVIDIA Control Panel link](./media/how-to-setup-gpu-1/control-panel-nvidia-settings.png)
+
+ - View your GPU performance by using **Task Manager**. To do so, select the **Performance** tab, and then select the **GPU** option.
+
+ ![Screenshot showing the Task Manager GPU Performance tab](./media/how-to-setup-gpu-1/task-manager-gpu.png)
+
+ > [!IMPORTANT]
+ > The NVIDIA Control Panel settings can be accessed only for *visualization* GPUs. If you attempt to open the NVIDIA Control Panel for a compute GPU, you'll get the following error: "NVIDIA Display settings are not available. You are not currently using a display attached to an NVIDIA GPU." Similarly, the GPU performance information in Task Manager is provided only for visualization GPUs.
+
+ Depending on your scenario, you may also need to do additional validation to ensure the GPU is properly configured. For an example where specific driver versions are needed, see the [Python and Jupyter Notebooks](class-type-jupyter-notebook.md#template-machine-configuration) class type.
+
+#### Linux images
+
+Follow the instructions in the "Verify driver installation" section of [Install NVIDIA GPU drivers on N-series VMs running Linux](../virtual-machines/linux/n-series-driver-setup.md#verify-driver-installation).
+
+## Next steps
+
+See the following articles:
+
+- [Create and manage labs](how-to-manage-labs.md)
+- [SOLIDWORKS computer-aided design (CAD) class type](class-type-solidworks.md)
+- [MATLAB (matrix laboratory) class type](class-type-matlab.md)
lab-services How To Setup Lab Gpu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-setup-lab-gpu.md
Title: Set up a lab with GPUs in Azure Lab Services | Microsoft Docs
description: Learn how to set up a lab with graphics processing unit (GPU) virtual machines. Previously updated : 06/26/2020 Last updated : 06/09/2022 # Set up a lab with GPU virtual machines + This article shows you how to do the following tasks: - Choose between *visualization* and *compute* graphics processing units (GPUs).
This article shows you how to do the following tasks:
## Choose between visualization and compute GPU sizes
-On the first page of the lab creation wizard, in the **Which virtual machine size do you need?** drop-down list, you select the size of the VMs that are needed for your class.
+On the first page of the lab creation wizard, in the **Virtual machine size** drop-down list, you select the size of the VMs that are needed for your class.
![Screenshot of the "New lab" pane for selecting a VM size](./media/how-to-setup-gpu/lab-gpu-selection.png)
As described in the following table, the *compute* GPU size is intended for comp
| Size | vCPUs | RAM | Description |
| - | -- | -- | -- |
-| Small GPU (Compute) | 6 vCPUs | 56 GB RAM | [Standard_NC6](../virtual-machines/nc-series.md). This size is best suited for compute-intensive applications such as artificial intelligence (AI) and deep learning. |
+| Small GPU (Compute) | 6 vCPUs | 112 GB RAM | [Standard_NC6s_v3](../virtual-machines/ncv3-series.md). This size supports both Windows and Linux and is best suited for compute-intensive applications such as artificial intelligence (AI) and deep learning. |
The *visualization* GPU sizes are intended for graphics-intensive applications. For example, the [SOLIDWORKS engineering class type](./class-type-solidworks.md) shows using the **Small GPU (Visualization)** size. The visualization GPU is suitable for this type of class, because students interact with the SOLIDWORKS 3D computer-aided design (CAD) environment for modeling and visualizing solid objects. | Size | vCPUs | RAM | Description | | - | -- | | -- |
-| Small GPU (Visualization) | 6 vCPUs | 56 GB RAM | [Standard_NV6](../virtual-machines/nv-series.md). This size is best suited for remote visualization, streaming, gaming, and encoding that use frameworks such as OpenGL and DirectX. |
-| Medium GPU (Visualization) | 12 vCPUs | 112 GB RAM | [Standard_NV12](../virtual-machines/nv-series.md?bc=%2fazure%2fvirtual-machines%2flinux%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json). This size is best suited for remote visualization, streaming, gaming, and encoding that use frameworks such as OpenGL and DirectX. |
+| Small GPU (Visualization) | 8 vCPUs | 28 GB RAM | [Standard_NV8as_v4](../virtual-machines/nvv4-series.md). This size is best suited for remote visualization, streaming, gaming, and encoding that use frameworks such as OpenGL and DirectX. Currently, this size supports Windows only. |
+| Medium GPU (Visualization) | 12 vCPUs | 112 GB RAM | [Standard_NV12s_v3](../virtual-machines/nvv3-series.md). This size supports both Windows and Linux. It's best suited for remote visualization, streaming, gaming, and encoding that use frameworks such as OpenGL and DirectX. |
> [!NOTE]
-> You may not see some of these VM sizes in the list when creating a lab. The list is populated based on the current capacity of the lab's location. For availability of VMs, see [Products available by region](https://azure.microsoft.com/regions/services/?products=virtual-machines).
+> You may not see some of these VM sizes in the list when creating a lab. This list is populated based on the capacity assigned to your Microsoft-managed Azure subscription. For more information about capacity, see [Capacity limits in Azure Lab Services](../lab-services/capacity-limits.md). For availability of VM sizes, see [Products available by region](https://azure.microsoft.com/regions/services/?products=virtual-machines).
## Ensure that the appropriate GPU drivers are installed
-To take advantage of the GPU capabilities of your lab VMs, ensure that the appropriate GPU drivers are installed. In the lab creation wizard, when you select a GPU VM size, you can select the **Install GPU drivers** option.
+To take advantage of the GPU capabilities of your lab VMs, ensure that the appropriate GPU drivers are installed. In the lab creation wizard, when you select a GPU VM size, you can select the **Install GPU drivers** option. This option is enabled by default.
![Screenshot of the "New lab" showing the "Install GPU drivers" option](./media/how-to-setup-gpu/lab-gpu-drivers.png)
-As shown in the preceding image, this option is enabled by default, which ensures that recently released drivers are installed for the type of GPU and image that you selected:
+Selecting **Install GPU drivers** ensures that recently released drivers are installed for the type of GPU and image that you selected.
-- When you select a *compute* GPU size, your lab VMs are powered by the [NVIDIA Tesla K80](https://www.nvidia.com/content/dam/en-zz/Solutions/Data-Center/tesla-product-literature/Tesla-K80-BoardSpec-07317-001-v05.pdf) GPU. In this case, recent [Compute Unified Device Architecture (CUDA)](http://developer.download.nvidia.com/compute/cuda/2_0/docs/CudaReferenceManual_2.0.pdf) drivers are installed, which enables high-performance computing.-- When you select a *visualization* GPU size, your lab VMs are powered by the [NVIDIA Tesla M60](https://images.nvidia.com/content/tesla/pdf/188417-Tesla-M60-DS-A4-fnl-Web.pdf) GPU and [GRID technology](https://www.nvidia.com/content/dam/en-zz/Solutions/design-visualization/solutions/resources/documents1/NVIDIA_GRID_vPC_Solution_Overview.pdf). In this case, recent GRID drivers are installed, which enables the use of graphics-intensive applications.
+- When you select the Small GPU *(Compute)* size, your lab VMs are powered by the [NVIDIA Tesla V100](https://www.nvidia.com/en-us/data-center/v100/) GPU. In this case, recent Compute Unified Device Architecture (CUDA) drivers are installed, which enables high-performance computing.
+- When you select the Small GPU *(Visualization)* size, your lab VMs are powered by the [AMD Radeon Instinct MI25 Accelerator GPU](https://www.amd.com/en/products/professional-graphics/instinct-mi25). In this case, recent AMD GPU drivers are installed, which enables the use of graphics-intensive applications.
+- When you select the Medium GPU *(Visualization)* size, your lab VMs are powered by the [NVIDIA Tesla M60](https://images.nvidia.com/content/tesla/pdf/188417-Tesla-M60-DS-A4-fnl-Web.pdf) GPU and [GRID technology](https://www.nvidia.com/content/dam/en-zz/Solutions/design-visualization/solutions/resources/documents1/NVIDIA_GRID_vPC_Solution_Overview.pdf). In this case, recent GRID drivers are installed, which enables the use of graphics-intensive applications.
> [!IMPORTANT]
-> The **Install GPU drivers** option only installs the drivers when they aren't present on your lab's image. For example, the GPU drivers are already installed on the Azure marketplace's [Data Science image](../machine-learning/data-science-virtual-machine/overview.md#whats-included-on-the-dsvm). If you create a lab using the Data Science image and choose to **Install GPU drivers**, the drivers won't be updated to a more recent version. To update the drivers, you will need to manually install them as explained in the next section.
+> The **Install GPU drivers** option only installs the drivers when they aren't present on your lab's image. For example, NVIDIA GPU drivers are already installed on the Azure marketplace's [Data Science image](../machine-learning/data-science-virtual-machine/overview.md#whats-included-on-the-dsvm). If you create a Small GPU (Compute) lab using the Data Science image and choose to **Install GPU drivers**, the drivers won't be updated to a more recent version. To update the drivers, you will need to manually install them as explained in the next section.
### Install the drivers manually
-You might need to install a different version of the drivers than the version that Azure Lab Services installs for you. This section shows how to manually install the appropriate drivers, depending on whether you're using a *compute* GPU or a *visualization* GPU.
+You might need to install a different version of the drivers than the version that Azure Lab Services installs for you. This section shows how to manually install the appropriate drivers.
-#### Install the compute GPU drivers
+#### Install the Small GPU (Compute) drivers
-To manually install drivers for the *compute* GPU size, do the following:
+To manually install drivers for the Small GPU *(Compute)* size, follow these steps:
1. In the lab creation wizard, when you're [creating your lab](./how-to-manage-labs.md), disable the **Install GPU drivers** setting.
-1. After your lab is created, connect to the template VM to install the appropriate drivers.
+1. After your lab is created, connect to the template VM to install the appropriate drivers. Read [NVIDIA Tesla (CUDA) drivers](../virtual-machines/windows/n-series-driver-setup.md#nvidia-tesla-cuda-drivers) for more information about the specific driver versions that are recommended for the Windows OS version being used.
![Screenshot of the NVIDIA Driver Downloads page](./media/how-to-setup-gpu/nvidia-driver-download.png)
- a. In a browser, go to the [NVIDIA Driver Downloads page](https://www.nvidia.com/Download/index.aspx).
- b. Set the **Product Type** to **Tesla**.
- c. Set the **Product Series** to **K-Series**.
- d. Set the **Operating System** according to the type of base image you selected when you created your lab.
- e. Set the **CUDA Toolkit** to the version of CUDA driver that you need.
- f. Select **Search** to look for your drivers.
- g. Select **Download** to download the installer.
- h. Run the installer so that the drivers are installed on the template VM.
-1. Validate that the drivers are installed correctly by following the instructions in the [Validate the installed drivers](how-to-setup-lab-gpu.md#validate-the-installed-drivers) section.
-1. After you've installed the drivers and other software that are required for your class, select **Publish** to create your students' VMs.
+ Otherwise, follow the steps below to install the latest NVIDIA drivers:
+
+ a. In a browser, go to the [NVIDIA Driver Downloads page](https://www.nvidia.com/Download/index.aspx).
+ b. Set the **Product Type** to **Tesla**.
+ c. Set the **Product Series** to **V-Series**.
+ d. Set the **Operating System** according to the type of base image you selected when you created your lab.
+ e. Set the **CUDA Toolkit** to the version of CUDA driver that you need.
+ f. Select **Search** to look for your drivers.
+ g. Select **Download** to download the installer.
+ h. Run the installer so that the drivers are installed on the template VM.
+
+1. Validate that the drivers are installed correctly by following instructions in the [Validate the installed drivers](how-to-setup-lab-gpu.md#validate-the-installed-drivers) section.
+1. After you've installed the drivers and other software that is required for your class, select **Publish** to create your students' VMs.
> [!NOTE] > If you're using a Linux image, after you've downloaded the installer, install the drivers by following the instructions in [Install CUDA drivers on Linux](../virtual-machines/linux/n-series-driver-setup.md?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json#install-cuda-drivers-on-n-series-vms).
-#### Install the visualization GPU drivers
+#### Install the Small GPU (Visualization) drivers
+
+To manually install drivers for the Small GPU *(Visualization)* size, follow these steps:
+
+1. In the lab creation wizard, when you're [creating your lab](./how-to-manage-labs.md), disable the **Install GPU drivers** setting.
+1. After your lab is created, connect to the template VM to install the appropriate drivers.
+1. Install the AMD drivers on the template VM by following the instructions in the [Install AMD GPU drivers on N-series VMs running Windows](../virtual-machines/windows/n-series-amd-driver-setup.md) article.
+1. Restart the template VM.
+1. Validate that the drivers are installed correctly by following the instructions in the [Validate the installed drivers](./how-to-setup-lab-gpu.md#validate-the-installed-drivers) section.
+1. After you've installed the drivers and other software that are required for your class, select **Publish** to create your students' VMs.
-To manually install drivers for the *visualization* GPU sizes, do the following:
+#### Install the Medium GPU (Visualization) drivers
1. In the lab creation wizard, when you're [creating your lab](./how-to-manage-labs.md), disable the **Install GPU drivers** setting. 1. After your lab is created, connect to the template VM to install the appropriate drivers.
To manually install drivers for the *visualization* GPU sizes, do the following:
This section describes how to validate that your GPU drivers are properly installed.
-#### Windows images
+#### Small GPU (Visualization) Windows images
+
+To verify driver installation for the **Small GPU (Visualization)** size, see [validate the AMD GPU drivers on N-series VMs running Windows](../virtual-machines/windows/n-series-amd-driver-setup.md).
+
+#### Small GPU (Compute) and Medium GPU (Visualization) Windows images
+
+To verify driver installation for the **Small GPU (Compute)** and **Medium GPU (Visualization)** sizes, see [validate the NVIDIA GPU drivers on N-series VMs running Windows](../virtual-machines/windows/n-series-driver-setup.md#verify-driver-installation).
+
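One quick check, assuming the NVIDIA drivers installed successfully, is to run the `nvidia-smi` utility from PowerShell on the template VM:

```powershell
# Reports the detected GPU, driver version, and supported CUDA version.
# If nvidia-smi isn't on the PATH, it's typically installed under
# C:\Program Files\NVIDIA Corporation\NVSMI\.
nvidia-smi
```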
+You can also validate the NVIDIA control panel settings, which only apply to the **Medium GPU (visualization)** VM size:
-1. Follow the instructions in the "Verify driver installation" section of [Install NVIDIA GPU drivers on N-series VMs running Windows](../virtual-machines/windows/n-series-driver-setup.md#verify-driver-installation).
-1. If you're using a *visualization* GPU, you can also:
- - View and adjust your GPU settings in the NVIDIA Control Panel. To do so, in **Windows Control Panel**, select **Hardware**, and then select **NVIDIA Control Panel**.
+1. View and adjust your GPU settings in the NVIDIA Control Panel. To do so, in **Windows Control Panel**, select **Hardware**, and then select **NVIDIA Control Panel**.
- ![Screenshot of Windows Control Panel showing the NVIDIA Control Panel link](./media/how-to-setup-gpu/control-panel-nvidia-settings.png)
+ :::image type="content" source="./media/how-to-setup-gpu/control-panel-nvidia-settings.png" alt-text="Screenshot of Windows Control Panel showing the NVIDIA Control Panel link.":::
- - View your GPU performance by using **Task Manager**. To do so, select the **Performance** tab, and then select the **GPU** option.
+1. View your GPU performance by using **Task Manager**. To do so, select the **Performance** tab, and then select the **GPU** option.
- ![Screenshot showing the Task Manager GPU Performance tab](./media/how-to-setup-gpu/task-manager-gpu.png)
+ :::image type="content" source="./media/how-to-setup-gpu/task-manager-gpu.png" alt-text="Screenshot of the Task Manager GPU Performance tab.":::
- > [!IMPORTANT]
- > The NVIDIA Control Panel settings can be accessed only for *visualization* GPUs. If you attempt to open the NVIDIA Control Panel for a compute GPU, you'll get the following error: "NVIDIA Display settings are not available. You are not currently using a display attached to an NVIDIA GPU." Similarly, the GPU performance information in Task Manager is provided only for visualization GPUs.
+ > [!IMPORTANT]
+ > The NVIDIA Control Panel settings can be accessed only for the Medium GPU (visualization) VM size. If you attempt to open the NVIDIA Control Panel for a compute GPU, you'll get the error: "NVIDIA Display settings are not available. You are not currently using a display attached to an NVIDIA GPU." Similarly, the GPU performance information in Task Manager is provided only for visualization GPUs.
- Depending on your scenario, you may also need to do additional validation to ensure the GPU is properly configured. Read the class type about [Python and Jupyter Notebooks](class-type-jupyter-notebook.md#template-machine-configuration) that explains an example where specific versions of drivers are needed.
+Depending on your scenario, you may also need to do more validation to ensure the GPU is properly configured. For an example where specific driver versions are needed, see the [Python and Jupyter Notebooks](class-type-jupyter-notebook.md#template-machine-configuration) class type.
-#### Linux images
+#### Small GPU (Compute) and Medium GPU (Visualization) Linux images
-Follow the instructions in the "Verify driver installation" section of [Install NVIDIA GPU drivers on N-series VMs running Linux](../virtual-machines/linux/n-series-driver-setup.md#verify-driver-installation).
+To verify driver installation for Linux images, see [verify driver installation for NVIDIA GPU drivers on N-series VMs running Linux](../virtual-machines/linux/n-series-driver-setup.md#verify-driver-installation).
## Next steps

See the following articles:

-- [Create and manage labs](how-to-manage-labs.md)
-- [SOLIDWORKS computer-aided design (CAD) class type](class-type-solidworks.md)
-- [MATLAB (matrix laboratory) class type](class-type-matlab.md)
+- As an administrator, [create and manage labs](how-to-manage-labs.md).
+- As an educator, create a class using [SOLIDWORKS computer-aided design (CAD)](class-type-solidworks.md) software.
+- As an educator, create a class using [MATLAB (matrix laboratory)](class-type-matlab.md) software.
load-balancer Upgrade Basic Standard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/upgrade-basic-standard.md
An Azure PowerShell script is available that does the following procedures:
* If the load balancer doesn't have a frontend IP configuration or backend pool, you'll encounter an error running the script. Ensure the load balancer has a frontend IP and backend pool
-* The script cannot migrate Virtual Machine Scale Set from Basic Load Balancer's backend to Standard Load Balancer's backend. We recommend manually creating a Standard Load Balancer and follow [Update or delete a load balancer used by virtual machine scale sets](update-load-balancer-with-vm-scale-set.md) to complete the migration.
+* The script cannot migrate a Virtual Machine Scale Set from a Basic Load Balancer's backend to a Standard Load Balancer's backend. We recommend manually creating a Standard Load Balancer and following [Update or delete a load balancer used by virtual machine scale sets](./update-load-balancer-with-vm-scale-set.md) to complete the migration.
### Change allocation method of the public IP address to static
load-balancer Upgrade Internalbasic To Publicstandard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/upgrade-internalbasic-to-publicstandard.md
An Azure PowerShell script is available that does the following procedures:
### Constraints
-* The script supports an internal load balancer upgrade where outbound connectivity is required. If outbound connectivity isn't required, see [Upgrade an internal basic load balancer - Outbound connections not required](upgrade-basicinternal-standard.md).
+* The script supports an internal load balancer upgrade where outbound connectivity is required. If outbound connectivity isn't required, see [Upgrade an internal basic load balancer - Outbound connections not required](./upgrade-basicinternal-standard.md).
* The standard load balancer has a new public address. It's impossible to move the IP addresses associated with the existing basic internal load balancer to a standard public load balancer because of different SKUs.
An Azure PowerShell script is available that does the following procedures:
* If the load balancer doesn't have a frontend IP configuration or backend pool, you'll encounter an error running the script. Ensure the load balancer has a frontend IP and backend pool
-* The script cannot migrate Virtual Machine Scale Set from Basic Load Balancer's backend to Standard Load Balancer's backend. We recommend manually creating a Standard Load Balancer and follow [Update or delete a load balancer used by virtual machine scale sets](update-load-balancer-with-vm-scale-set.md) to complete the migration.
+* The script cannot migrate a Virtual Machine Scale Set from a Basic Load Balancer's backend to a Standard Load Balancer's backend. We recommend manually creating a Standard Load Balancer and following [Update or delete a load balancer used by virtual machine scale sets](./update-load-balancer-with-vm-scale-set.md) to complete the migration.
## Download the script
load-testing How To Configure Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-configure-customer-managed-keys.md
Azure Load Testing uses the customer-managed key to encrypt the following data i
- An Azure account with an active subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. -- An existing user-assigned managed identity. For more information about creating a user-assigned managed identity, see (Manage user-assigned managed identities)[/azure/active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities?pivots=identity-mi-methods-azp#create-a-user-assigned-managed-identity].
+- An existing user-assigned managed identity. For more information about creating a user-assigned managed identity, see [Manage user-assigned managed identities](/azure/active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities?pivots=identity-mi-methods-azp#create-a-user-assigned-managed-identity).
## Limitations
To configure customer-managed keys for a new Azure Load Testing resource, follow
1. In the Azure portal, navigate to the **Azure Load Testing** page, and select the **Create** button to create a new resource.
-1. Follow the steps outlined in [create an Azure Load Testing resource](./quickstart-create-and-run-load-test.md#create_resource) to fill out the fields on the **Basics** tab.
+1. Follow the steps outlined in [create an Azure Load Testing resource](./quickstart-create-and-run-load-test.md#create-an-azure-load-testing-resource) to fill out the fields on the **Basics** tab.
1. Go to the **Encryption** tab. In the **Encryption type** field, select **Customer-managed keys (CMK)**.
When you revoke the encryption key you may be able to run tests for about 10 min
## Next steps

- Learn how to [Monitor server-side application metrics](./how-to-monitor-server-side-metrics.md).
-- Learn how to [Parameterize a load test](./how-to-parameterize-load-tests.md).
+- Learn how to [Parameterize a load test](./how-to-parameterize-load-tests.md).
load-testing How To Create And Run Load Test With Jmeter Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-create-and-run-load-test-with-jmeter-script.md
Title: Create and run a load test with Azure Load Testing with a JMeter script
-description: 'Learn how to load test a website with Azure Load Testing in the Azure portal by using an existing Apache JMeter script.'
+ Title: Create a JMeter-based load test
+
+description: 'Learn how to load test a website by using an existing Apache JMeter script and Azure Load Testing.'
Previously updated : 05/16/2022 Last updated : 06/10/2022 adobe-target: true
-# Load test a website with Azure Load Testing Preview in the Azure portal by using an existing JMeter script
+# Load test a website by using an existing JMeter script in Azure Load Testing Preview
-This article describes how to load test a web application with Azure Load Testing Preview from the Azure portal. You'll use an existing Apache JMeter script to configure the load test.
+Learn how to use an Apache JMeter script to load test a web application with Azure Load Testing Preview from the Azure portal.
-After you complete this, you'll have a resource and load test that you can use for other tutorials.
+Azure Load Testing enables you to take an existing Apache JMeter script, and use it to run a load test at cloud scale. Alternatively, you can also [create a URL-based load test in the Azure portal](./quickstart-create-and-run-load-test.md).
-Learn more about the [key concepts for Azure Load Testing](./concept-load-testing-concepts.md).
+Use cases for creating a load test with an existing JMeter script include:
+
+- You want to reuse existing JMeter scripts to test your application.
+- You want to test multiple endpoints in a single load test.
+- You have a data-driven load test. For example, you want to [read CSV data in a load test](./how-to-read-csv-data.md).
> [!IMPORTANT] > Azure Load Testing is currently in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
Learn more about the [key concepts for Azure Load Testing](./concept-load-testin
## Create an Apache JMeter script
-In this section, you'll create a sample Apache JMeter script that you'll use in the next section to load test a web endpoint. If you already have a script, you can skip to [Create a load test](#create-a-load-test).
+If you don't have an existing Apache JMeter script, you'll create a sample script to load test a single web application endpoint. If you already have a script, you can skip to [Create a load test](#create-a-load-test).
1. Create a *SampleTest.jmx* file on your local machine:
In this section, you'll create a sample Apache JMeter script that you'll use in
1. Open *SampleTest.jmx* in a text editor and paste the following code snippet in the file:
+ This script simulates a load test of five virtual users that simultaneously access a web endpoint. The test takes 2 minutes to complete.
+ ```xml <?xml version="1.0" encoding="UTF-8"?> <jmeterTestPlan version="1.2" properties="5.0" jmeter="5.4.1">
In this section, you'll create a sample Apache JMeter script that you'll use in
<stringProp name="TestPlan.user_define_classpath"></stringProp> </TestPlan> <hashTree>
- <ThreadGroup guiclass="ThreadGroupGui" testclass="ThreadGroup" testname="Thread Group" enabled="true">
+ <ThreadGroup guiclass="ThreadGroupGui" testclass="ThreadGroup" testname="Web endpoint test" enabled="true">
<stringProp name="ThreadGroup.on_sample_error">continue</stringProp> <elementProp name="ThreadGroup.main_controller" elementType="LoopController" guiclass="LoopControlPanel" testclass="LoopController" testname="Loop Controller" enabled="true"> <boolProp name="LoopController.continue_forever">false</boolProp>
In this section, you'll create a sample Apache JMeter script that you'll use in
<boolProp name="ThreadGroup.same_user_on_next_iteration">true</boolProp> </ThreadGroup> <hashTree>
- <HTTPSamplerProxy guiclass="HttpTestSampleGui" testclass="HTTPSamplerProxy" testname="Homepage" enabled="true">
+ <HTTPSamplerProxy guiclass="HttpTestSampleGui" testclass="HTTPSamplerProxy" testname="HTTP request" enabled="true">
<elementProp name="HTTPsampler.Arguments" elementType="Arguments" guiclass="HTTPArgumentsPanel" testclass="Arguments" testname="User Defined Variables" enabled="true"> <collectionProp name="Arguments.arguments"/> </elementProp>
- <stringProp name="HTTPSampler.domain">your-endpoint-url</stringProp>
+ <stringProp name="HTTPSampler.domain"></stringProp>
<stringProp name="HTTPSampler.port"></stringProp> <stringProp name="HTTPSampler.protocol"></stringProp> <stringProp name="HTTPSampler.contentEncoding"></stringProp>
In this section, you'll create a sample Apache JMeter script that you'll use in
</jmeterTestPlan> ```
- This sample Apache JMeter script simulates a load test of five virtual users simultaneously accessing a web endpoint. It takes less than two minutes to complete.
-
-1. In the file, replace the placeholder text `your-endpoint-url` with your own endpoint URL.
+1. In the file, set the value of the `HTTPSampler.domain` node to the host name of your endpoint. For example, if you want to test the endpoint `https://www.contoso.com/app/products`, the host name is `www.contoso.com`.
> [!IMPORTANT] > Don't include `https` or `http` in the endpoint URL.
+1. In the file, set the value of the `HTTPSampler.path` node to the path of your endpoint. For example, the path for the URL `https://www.contoso.com/app/products` is `/app/products`.
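    For example, for the endpoint `https://www.contoso.com/app/products`, the two configured nodes would look like this (a sketch showing only the two changed values):

    ```xml
    <stringProp name="HTTPSampler.domain">www.contoso.com</stringProp>
    <stringProp name="HTTPSampler.path">/app/products</stringProp>
    ```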
+ 1. Save and close the file. ## Create a load test
-With Azure Load Testing, you can use an Apache JMeter script to create a load test. This script defines the application test plan. It contains information about the web endpoint, the number of virtual users, and other test configuration settings.
+To create a load test in Azure Load Testing, you have to specify a JMeter script. This script defines the [test plan](./how-to-create-manage-test.md#test-plan) for the load test. You can create multiple load tests in an Azure Load Testing resource.
-To create a load test by using an existing Apache JMeter script:
+> [!NOTE]
+> When you [create a quick test by using a URL](./quickstart-create-and-run-load-test.md), Azure Load Testing automatically generates the JMeter script.
-1. Go to your Azure Load Testing resource, select **Tests** from the left pane, and then select **+ Create new test**.
+To create a load test using an existing JMeter script in the Azure portal:
+
+1. Sign in to the [Azure portal](https://portal.azure.com) by using the credentials for your Azure subscription.
+
+1. Go to your Azure Load Testing resource, select **Tests** from the left pane, select **+ Create**, and then select **Upload a JMeter script**.
:::image type="content" source="./media/how-to-create-and-run-load-test-with-jmeter-script/create-new-test.png" alt-text="Screenshot that shows the Azure Load Testing page and the button for creating a new test." :::
To create a load test by using an existing Apache JMeter script:
:::image type="content" source="./media/how-to-create-and-run-load-test-with-jmeter-script/create-new-test-test-plan.png" alt-text="Screenshot that shows the Test plan tab." ::: > [!NOTE]
- > You can select and upload additional Apache JMeter configuration files or other files that are referenced in the JMX file. For example, if your test script uses CSV data sets, you can upload the corresponding *.csv* file(s).
-
-1. (Optional) On the **Parameters** tab, configure input parameters for your Apache JMeter script.
-
-1. For this quickstart, you can leave the default value on the **Load** tab:
-
- |Field |Default value |Description |
- ||||
- |**Engine instances** |**1** |The number of parallel test engines that run the Apache JMeter script. |
-
- :::image type="content" source="./media/how-to-create-and-run-load-test-with-jmeter-script/create-new-test-load.png" alt-text="Screenshot that shows the Load tab for creating a test." :::
-
-1. (Optional) On the **Test criteria** tab, configure criteria to determine when your load test should fail.
-
-1. (Optional) On the **Monitoring** tab, you can specify the Azure application components to capture server-side metrics for. For this quickstart, you're not testing an Azure-hosted application.
+ > You can upload additional JMeter configuration files or other files that you reference in the JMX file. For example, if your test script uses CSV data sets, you can upload the corresponding *.csv* file(s). See also how to [read data from a CSV file](./how-to-read-csv-data.md).
1. Select **Review + create**. Review all settings, and then select **Create** to create the load test.
To create a load test by using an existing Apache JMeter script:
## Run the load test
-In this section, you'll run the load test that you just created.
+When Azure Load Testing starts your load test, it first deploys the JMeter script and any other files onto the test engine instances, and then runs the test.
-1. Go to your Load Testing resource, select **Tests** from the left pane, and then select the test that you created.
+If you selected **Run test after creation**, your load test will start automatically. To manually start the load test you created earlier, perform the following steps:
+
+1. Go to your Load Testing resource, select **Tests** from the left pane, and then select the test that you created earlier.
:::image type="content" source="./media/how-to-create-and-run-load-test-with-jmeter-script/tests.png" alt-text="Screenshot that shows the list of load tests." :::
-1. On the test details page, select **Run** or **Run test**. Then, select **Run** on the **Run test** confirmation pane to start the load test.
+1. On the test details page, select **Run** or **Run test**. Then, select **Run** on the confirmation pane to start the load test. Optionally, provide a test run description.
:::image type="content" source="./media/how-to-create-and-run-load-test-with-jmeter-script/run-test-confirm.png" alt-text="Screenshot that shows the run confirmation page." ::: > [!TIP] > You can stop a load test at any time from the Azure portal.
+While the test runs and after it finishes, you can view the test run details, statistics, and metrics in the test run dashboard.
++ ## Next steps
+- Learn more about [creating and managing tests](./how-to-create-manage-test.md).
+ - To learn how to export test results, see [Export test results](./how-to-export-test-results.md). - To learn how to monitor server side metrics, see [Monitor server side metrics](./how-to-monitor-server-side-metrics.md).
load-testing How To Create Manage Test https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-create-manage-test.md
Learn how to create and manage [tests](./concept-load-testing-concepts.md#test)
## Prerequisites * An Azure account with an active subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-* An Azure Load Testing resource. To create a Load Testing resource, see [Create and run a load test](./quickstart-create-and-run-load-test.md#create_resource).
+* An Azure Load Testing resource. To create a Load Testing resource, see [Create and run a load test](./quickstart-create-and-run-load-test.md#create-an-azure-load-testing-resource).
## Create a test
load-testing How To Read Csv Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-read-csv-data.md
In this article, you learn how to:
## Prerequisites * An Azure account with an active subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-* An Azure Load Testing resource. To create a Load Testing resource, see [Create and run a load test](./quickstart-create-and-run-load-test.md#create_resource).
+* An Azure Load Testing resource. To create a Load Testing resource, see [Create and run a load test](./quickstart-create-and-run-load-test.md#create-an-azure-load-testing-resource).
* An Apache JMeter test script (JMX). * (Optional) Apache JMeter GUI to author your test script. To install Apache JMeter, see [Apache JMeter Getting Started](https://jmeter.apache.org/usermanual/get-started.html).
load-testing Quickstart Create And Run Load Test https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/quickstart-create-and-run-load-test.md
Learn more about the [key concepts for Azure Load Testing](./concept-load-testin
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - Azure RBAC role with permission to create and manage resources in the subscription, such as [Contributor](../role-based-access-control/built-in-roles.md#contributor) or [Owner](../role-based-access-control/built-in-roles.md#owner)
-## <a name="create_resource"></a> Create an Azure Load Testing resource
+## Create an Azure Load Testing resource
First, you'll create the top-level resource for Azure Load Testing. It provides a centralized place to view and manage test plans, test results, and related artifacts.
Azure Load Testing enables you to quickly create a load test from the Azure port
> [!NOTE] > Azure Load Testing auto-generates an Apache JMeter script for your load test.
-> You can download the JMeter script from the test run dashboard. Select **Download**, and then select **Input file**. To run the script, you have to provide environment variables to configure the URL and test parameters.```
+> You can download the JMeter script from the test run dashboard. Select **Download**, and then select **Input file**. To run the script locally, you have to provide environment variables to configure the URL and test parameters.
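As a sketch of what a local run might look like, the following commands set configuration values and run the script with Apache JMeter in non-GUI mode. The exact variable names are defined in the downloaded script; `domain`, `protocol`, and the file names below are hypothetical placeholders.

```bash
# Hypothetical variable names; check the downloaded script for the real ones.
export domain="www.contoso.com"
export protocol="https"

# Run the script in non-GUI mode and write the results to a file.
jmeter -n -t quick_test.jmx -l results.jtl
```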
## View the test results
You now have an Azure Load Testing resource, which you used to load test an exte
You can reuse this resource to learn how to identify performance bottlenecks in an Azure-hosted application by using server-side metrics. > [!div class="nextstepaction"]
-> [Identify performance bottlenecks](./tutorial-identify-bottlenecks-azure-portal.md)
+> [Tutorial: Identify performance bottlenecks](./tutorial-identify-bottlenecks-azure-portal.md)
machine-learning How To Deploy With Triton https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-with-triton.md
description: 'Learn to deploy your model with NVIDIA Triton Inference Server in
Previously updated : 03/31/2022 Last updated : 06/10/2022
Triton is multi-framework, open-source software that is optimized for inference.
In this article, you will learn how to deploy Triton and a model to a managed online endpoint. Information is provided on using both the CLI (command line) and Azure Machine Learning studio. > [!NOTE]
-> [NVIDIA Triton Inference Server](https://aka.ms/nvidia-triton-docs) is an open-source third-party software that is integrated in Azure Machine Learning.
+> * [NVIDIA Triton Inference Server](https://aka.ms/nvidia-triton-docs) is an open-source third-party software that is integrated in Azure Machine Learning.
+> * While Azure Machine Learning online endpoints are generally available, _using Triton with an online endpoint deployment is still in preview_.
## Prerequisites
This section shows how you can deploy Triton to managed online endpoint using [A
# [Endpoints page](#tab/endpoint)
- 1. From the __Endpoints__ page, Select **+Create (preview)**.
+ 1. From the __Endpoints__ page, select **Create**.
:::image type="content" source="media/how-to-deploy-with-triton/create-option-from-endpoints-page.png" lightbox="media/how-to-deploy-with-triton/create-option-from-endpoints-page.png" alt-text="Screenshot showing create option on the Endpoints UI page.":::
This section shows how you can deploy Triton to managed online endpoint using [A
# [Models page](#tab/models)
- 1. Select the Triton model, and then select __Deploy__. When prompted, select __Deploy to real-time endpoint (preview)__.
+ 1. Select the Triton model, and then select __Deploy__. When prompted, select __Deploy to real-time endpoint__.
- :::image type="content" source="media/how-to-deploy-with-triton/deploy-from-models-page.png" lightbox="media/how-to-deploy-with-triton/deploy-from-models-page.png" alt-text="Screenshot showing how to deploy model from Models UI":::
+ :::image type="content" source="media/how-to-deploy-with-triton/deploy-from-models-page.png" lightbox="media/how-to-deploy-with-triton/deploy-from-models-page.png" alt-text="Screenshot showing how to deploy model from Models UI.":::
1. Complete the wizard to deploy the model to the endpoint.
This section shows how you can deploy Triton to managed online endpoint using [A
To learn more, review these articles:

-- [Deploy models with REST (preview)](how-to-deploy-with-rest.md)
-- [Create and use managed online endpoints (preview) in the studio](how-to-use-managed-online-endpoint-studio.md)
-- [Safe rollout for online endpoints (preview)](how-to-safely-rollout-managed-endpoints.md)
+- [Deploy models with REST](how-to-deploy-with-rest.md)
+- [Create and use managed online endpoints in the studio](how-to-use-managed-online-endpoint-studio.md)
+- [Safe rollout for online endpoints](how-to-safely-rollout-managed-endpoints.md)
- [How to autoscale managed online endpoints](how-to-autoscale-endpoints.md)
-- [View costs for an Azure Machine Learning managed online endpoint (preview)](how-to-view-online-endpoints-costs.md)
-- [Access Azure resources with a managed online endpoint and managed identity (preview)](how-to-access-resources-from-endpoints-managed-identities.md)
+- [View costs for an Azure Machine Learning managed online endpoint](how-to-view-online-endpoints-costs.md)
+- [Access Azure resources with a managed online endpoint and managed identity](how-to-access-resources-from-endpoints-managed-identities.md)
- [Troubleshoot managed online endpoints deployment](how-to-troubleshoot-managed-online-endpoints.md)
machine-learning How To High Availability Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-high-availability-machine-learning.md
Runs in Azure Machine Learning are defined by a run specification. This specific
* Manage configurations as code.
- * Avoid hardcoded references to the workspace. Instead, configure a reference to the workspace instance using a [config file](how-to-configure-environment.md#workspace) and use [Workspace.from_config()](/python/api/azureml-core/azureml.core.workspace.workspace#remarks) to initialize the workspace. To automate the process, use the [Azure CLI extension for machine learning](v1/reference-azure-machine-learning-cli.md) command [az ml folder attach](/cli/azure/ml(v1)/folder#ext_azure_cli_ml_az_ml_folder_attach).
+ * Avoid hardcoded references to the workspace. Instead, configure a reference to the workspace instance using a [config file](how-to-configure-environment.md#workspace) and use [Workspace.from_config()](/python/api/azureml-core/azureml.core.workspace.workspace#remarks) to initialize the workspace. To automate the process, use the [Azure CLI extension for machine learning](v1/reference-azure-machine-learning-cli.md) command [az ml folder attach](/cli/azure/ml(v1)/folder#az-ml(v1)-folder-attach).
* Use run submission helpers such as [ScriptRunConfig](/python/api/azureml-core/azureml.core.scriptrunconfig) and [Pipeline](/python/api/azureml-pipeline-core/azureml.pipeline.core.pipeline(class)). * Use [Environments.save_to_directory()](/python/api/azureml-core/azureml.core.environment(class)#save-to-directory-path--overwrite-false-) to save your environment definitions. * Use a Dockerfile if you use custom Docker images.
The following artifacts can be exported and imported between workspaces by using
| Artifact | Export | Import | | -- | -- | -- |
-| Models | [az ml model download --model-id {ID} --target-dir {PATH}](/cli/azure/ext/azure-cli-ml/ml/model#ext_azure_cli_ml_az_ml_model_download) | [az ml model register --name {NAME} --path {PATH}](/cli/azure/ext/azure-cli-ml/ml/model) |
-| Environments | [az ml environment download -n {NAME} -d {PATH}](/cli/azure/ext/azure-cli-ml/ml/environment#ext_azure_cli_ml_az_ml_environment_download) | [az ml environment register -d {PATH}](/cli/azure/ext/azure-cli-ml/ml/environment#ext_azure_cli_ml_az_ml_environment_register) |
-| Azure ML pipelines (code-generated) | [az ml pipeline get --path {PATH}](/cli/azure/ml(v1)/pipeline#ext_azure_cli_ml_az_ml_pipeline_get) | [az ml pipeline create --name {NAME} -y {PATH}](/cli/azure/ml(v1)/pipeline#ext_azure_cli_ml_az_ml_pipeline_create)
+| Models | [az ml model download --model-id {ID} --target-dir {PATH}](/cli/azure/ml/model#az-ml-model-download) | [az ml model register --name {NAME} --path {PATH}](/cli/azure/ml/model) |
+| Environments | [az ml environment download -n {NAME} -d {PATH}](/cli/azure/ml/environment#ml-az-ml-environment-download) | [az ml environment register -d {PATH}](/cli/azure/ml/environment#ml-az-ml-environment-register) |
+| Azure ML pipelines (code-generated) | [az ml pipeline get --path {PATH}](/cli/azure/ml(v1)/pipeline#az-ml(v1)-pipeline-get) | [az ml pipeline create --name {NAME} -y {PATH}](/cli/azure/ml(v1)/pipeline#az-ml(v1)-pipeline-create)
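For example, a minimal sketch of moving a model between workspaces with the commands from the table. The model name and paths are hypothetical, and each command runs against the workspace your CLI context is attached to:

```azurecli
# Export the model from the source workspace (hypothetical ID and path)
az ml model download --model-id mymodel:1 --target-dir ./export/mymodel

# Register the exported model in the target workspace
az ml model register --name mymodel --path ./export/mymodel
```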
> [!TIP]
-> * __Registered datasets__ cannot be downloaded or moved. This includes datasets generated by Azure ML, such as intermediate pipeline datasets. However datasets that refer to a shared file location that both workspaces can access, or where the underlying data storage is replicated, can be registered on both workspaces. Use the [az ml dataset register](/cli/azure/ml(v1)/dataset#ext_azure_cli_ml_az_ml_dataset_register) to register a dataset.
+> * __Registered datasets__ cannot be downloaded or moved. This includes datasets generated by Azure ML, such as intermediate pipeline datasets. However, datasets that refer to a shared file location that both workspaces can access, or where the underlying data storage is replicated, can be registered on both workspaces. Use the [az ml dataset register](/cli/azure/ml(v1)/dataset#ml-az-ml-dataset-register) command to register a dataset.
> * __Run outputs__ are stored in the default storage account associated with a workspace. While run outputs might become inaccessible from the studio UI in the case of a service outage, you can directly access the data through the storage account. For more information on working with data stored in blobs, see [Create, download, and list blobs with Azure CLI](../storage/blobs/storage-quickstart-blobs-cli.md). ## Recovery options
machine-learning How To Manage Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-workspace.md
The Workspace.list(..) method does not return the full workspace object. It incl
## Search for assets across a workspace (preview)
-With the public preview search capability, you can search for machine learning assets such as jobs, models, components, environments, and datasets across all workspaces, resource groups, and subscriptions in your organization through a unified global view.
+With the public preview search capability, you can search for machine learning assets such as jobs, models, components, environments, and data across all workspaces, resource groups, and subscriptions in your organization through a unified global view.
### Free text search
If an asset filter (job, model, component, environment, dataset) is present, res
### View search results
-You can view your search results in the individual **Jobs**, **Models**, **Components**, **Environments**, and **Datasets** tabs. Select an asset to open its **Details** page in the context of the relevant workspace. Results from workspaces you don't have permissions to view are not displayed.
+You can view your search results in the individual **Jobs**, **Models**, **Components**, **Environments**, and **Data** tabs. Select an asset to open its **Details** page in the context of the relevant workspace. Results from workspaces you don't have permissions to view are not displayed.
:::image type="content" source="./media/how-to-manage-workspace/results.png" alt-text="Results displayed after search":::
machine-learning How To Monitor Datasets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-monitor-datasets.md
monitor = monitor.enable_schedule()
<a name="studio-monitor"></a> 1. Navigate to the [studio's homepage](https://ml.azure.com).
-1. Select the **Datasets** tab on the left.
+1. Select the **Data** tab on the left.
1. Select **Dataset monitors**. ![Monitor list](./media/how-to-monitor-datasets/monitor-list.png)
machine-learning How To Secure Inferencing Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-inferencing-vnet.md
Previously updated : 04/04/2022 Last updated : 06/06/2022 # Secure an Azure Machine Learning inferencing environment with virtual networks + In this article, you learn how to secure inferencing environments with a virtual network in Azure Machine Learning. > [!TIP]
In this article you learn how to secure the following inferencing resources in a
> - Default Azure Kubernetes Service (AKS) cluster > - Private AKS cluster > - AKS cluster with private link
-> - Azure Container Instances (ACI)
## Prerequisites
In this article you learn how to secure the following inferencing resources in a
### Azure Container Instances
-* When using Azure Container Instances in a virtual network, the virtual network must be in the same resource group as your Azure Machine Learning workspace. Otherwise, the virtual network can be in a different resource group.
-* If your workspace has a __private endpoint__, the virtual network used for Azure Container Instances must be the same as the one used by the workspace private endpoint.
-
-> [!WARNING]
-> When using Azure Container Instances inside the virtual network, the Azure Container Registry (ACR) for your workspace can't be in the virtual network. Because of this limitation, we do not recommend Azure Container instances for secure deployments with Azure Machine Learning.
+When your Azure Machine Learning workspace is configured with a private endpoint, deploying to Azure Container Instances in a VNet is not supported. Instead, consider using a [Managed online endpoint with network isolation](how-to-secure-online-endpoint.md).
### Azure Kubernetes Service
When __attaching an existing cluster__ to your workspace, use the `load_balancer
For information on attaching a cluster, see [Attach an existing AKS cluster](how-to-create-attach-kubernetes.md).
-## Enable Azure Container Instances (ACI)
-
-Azure Container Instances are dynamically created when deploying a model. To enable Azure Machine Learning to create ACI inside the virtual network, you must enable __subnet delegation__ for the subnet used by the deployment. To use ACI in a virtual network to your workspace, use the following steps:
-
-1. Your user account must have permissions to the following actions in Azure role-based access control (Azure RBAC):
-
- * `Microsoft.Network/virtualNetworks/*/read` on the virtual network resource. This permission isn't needed for Azure Resource Manager template deployments.
- * `Microsoft.Network/virtualNetworks/subnet/join/action` on the subnet resource.
-
-1. In the [Azure portal](https://portal.azure.com), search for the name of the virtual network. When it appears in the search results, select it.
-1. Select **Subnets**, under **SETTINGS**, and then select the subnet.
-1. On the subnet page, for the **Subnet delegation** list, select `Microsoft.ContainerInstance/containerGroups`.
-1. Deploy the model using [AciWebservice.deploy_configuration()](/python/api/azureml-core/azureml.core.webservice.aci.aciwebservice#deploy-configuration-cpu-cores-none--memory-gb-none--tags-none--properties-none--description-none--location-none--auth-enabled-none--ssl-enabled-none--enable-app-insights-none--ssl-cert-pem-file-none--ssl-key-pem-file-none--ssl-cname-none--dns-name-label-none--primary-key-none--secondary-key-none--collect-model-data-none--cmk-vault-base-url-none--cmk-key-name-none--cmk-key-version-none--vnet-name-none--subnet-name-none-), use the `vnet_name` and `subnet_name` parameters. Set these parameters to the virtual network name and subnet where you enabled delegation.
-
-For more information, see [Add or remove a subnet delegation](../virtual-network/manage-subnet-delegation.md).
## Limit outbound connectivity from the virtual network

If you don't want to use the default outbound rules and you do want to limit the outbound access of your virtual network, you must allow access to Azure Container Registry. For example, make sure that your Network Security Groups (NSG) contain a rule that allows access to the __AzureContainerRegistry.RegionName__ service tag, where `{RegionName}` is the name of an Azure region.
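As a sketch, an outbound NSG rule for the service tag might be created like this. The resource names and priority are hypothetical, and `AzureContainerRegistry.WestUS2` assumes the West US 2 region:

```azurecli
az network nsg rule create --resource-group my-rg --nsg-name my-nsg \
    --name AllowAzureContainerRegistry --priority 500 \
    --direction Outbound --access Allow --protocol Tcp \
    --destination-address-prefixes AzureContainerRegistry.WestUS2 \
    --destination-port-ranges 443
```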
machine-learning How To Secure Workspace Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-workspace-vnet.md
In this article you learn how to enable the following workspaces resources in a
+ An existing virtual network and subnet to use with your compute resources.
- > [!TIP]
- > If you plan on using Azure Container Instances in the virtual network (to deploy models), then the workspace and virtual network must be in the same resource group. Otherwise, they can be in different groups.
- + To deploy resources into a virtual network or subnet, your user account must have permissions to the following actions in Azure role-based access control (Azure RBAC): - "Microsoft.Network/virtualNetworks/join/action" on the virtual network resource.
In this article you learn how to enable the following workspaces resources in a
* If the storage account uses a __service endpoint__, the workspace private endpoint and storage service endpoint must be in the same subnet of the VNet. * If the storage account uses a __private endpoint__, the workspace private endpoint and storage private endpoint must be in the same VNet. In this case, they can be in different subnets.
+### Azure Container Instances
+
+When your Azure Machine Learning workspace is configured with a private endpoint, deploying to Azure Container Instances in a VNet is not supported. Instead, consider using a [Managed online endpoint with network isolation](how-to-secure-online-endpoint.md).
+ ### Azure Container Registry When ACR is behind a virtual network, Azure Machine Learning can't use it to directly build Docker images. Instead, the compute cluster is used to build the images.
machine-learning How To Track Designer Experiments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-track-designer-experiments.md
For more information on how to use the Python SDK to log values, see [Enable log
After the pipeline run completes, you can see the *Mean_Absolute_Error* in the Experiments page.
-1. Navigate to the **Experiments** section.
+1. Navigate to the **Jobs** section.
1. Select your experiment. 1. Select the run in your experiment you want to view. 1. Select **Metrics**.
machine-learning How To Troubleshoot Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-online-endpoints.md
Below is a list of reasons you might run into this error:
* [Startup task failed due to incorrect role assignments on resource](#authorization-error) * [Unable to download user container image](#unable-to-download-user-container-image) * [Unable to download user model or code artifacts](#unable-to-download-user-model-or-code-artifacts)
+* [azureml-fe for kubernetes online endpoint is not ready](#azureml-fe-not-ready)
#### Resource requests greater than limits
Make sure model and code artifacts are registered to the same workspace as the d
`az storage blob exists --account-name foobar --container-name 210212154504-1517266419 --name WebUpload/210212154504-1517266419/GaussianNB.pkl --subscription <sub-name>`
+#### azureml-fe not ready
+The front-end component (azureml-fe) that routes incoming inference requests to deployed services automatically scales as needed. It's installed during your k8s-extension installation.
+
+This component should be healthy on the cluster, with at least one healthy replica. You'll get this error message if it's not available when you trigger a Kubernetes online endpoint or deployment creation/update request.
+
+To fix this issue, check the pod status and logs. You can also try to update the k8s-extension installed on the cluster.
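For example, a sketch of inspecting the component with kubectl, assuming the extension components run in the `azureml` namespace (the pod name below is hypothetical; use one from the list):

```bash
# List the azureml-fe pods and check that at least one replica is Running
kubectl get pods -n azureml

# Inspect the logs of a specific azureml-fe pod
kubectl logs -n azureml azureml-fe-7d4c9b8f6d-abcde
```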
++ ### ERROR: ResourceNotReady To run the `score.py` provided as part of the deployment, Azure creates a container that includes all the resources that the `score.py` needs, and runs the scoring script on that container. The error in this scenario is that this container is crashing when running, which means scoring can't happen. This error happens when:
machine-learning How To Deploy Azure Container Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-deploy-azure-container-instance.md
For information on quota and region availability for ACI, see [Quotas and region
## Limitations
-* When using Azure Container Instances in a virtual network, the virtual network must be in the same resource group as your Azure Machine Learning workspace.
-* When using Azure Container Instances inside the virtual network, the Azure Container Registry (ACR) for your workspace cannot also be in the virtual network.
-
-For more information, see [How to secure inferencing with virtual networks](../how-to-secure-inferencing-vnet.md#enable-azure-container-instances-aci).
+When your Azure Machine Learning workspace is configured with a private endpoint, deploying to Azure Container Instances in a VNet is not supported. Instead, consider using a [Managed online endpoint with network isolation](/azure/machine-learning/how-to-secure-online-endpoint).
## Deploy to ACI
marketplace Isv App License https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/isv-app-license.md
The ISV creates a solution package for the offer that includes license plan info
| Transactable offers | Licensable-only offers | | | - |
-| Customers discover the ISV's offer in AppSource, purchase a subscription to the offer from AppSource, and get licenses for the ISV app. | - Customers discover the ISV's offer in AppSource or directly on the ISV's website. Customers purchase licenses for the plans they want directly from the ISV.<br>- The ISV registers the purchase with Microsoft in Partner Center. As part of [deal registration](/partner-center/csp-commercial-marketplace-licensing#register-isv-connect-deal-in-deal-registration), the ISV will specify the type and quantity of each licensing plan purchased by the customer. |
+| Customers discover the ISV's offer in AppSource, purchase a subscription to the offer from AppSource, and get licenses for the ISV app. | - Customers discover the ISV's offer in AppSource or directly on the ISV's website. Customers purchase licenses for the plans they want directly from the ISV.<br>- The ISV registers the purchase with Microsoft in Partner Center. As part of [deal registration](/partner-center/register-deals), the ISV will specify the type and quantity of each licensing plan purchased by the customer. |
### Step 4: Manage subscription | Transactable offers | Licensable-only offers | | | - |
-| Customers can manage subscriptions for the Apps they purchased in [Microsoft 365 admin center](https://admin.microsoft.com/), just like they normally do for any of their Microsoft Office or Dynamics subscriptions. | ISVs activate and manage deals in Partner Center ([deal registration portal(https://partner.microsoft.com/)]) |
+| Customers can manage subscriptions for the apps they purchased in the [Microsoft 365 admin center](https://admin.microsoft.com/), just like they normally do for any of their Microsoft Office or Dynamics subscriptions. | ISVs activate and manage deals in the Partner Center [deal registration portal](https://partner.microsoft.com/). |
### Step 5: Assign licenses
mysql How To Read Replicas Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-read-replicas-portal.md
To delete a source server from the Azure portal, use the following steps:
## Next steps

-- Learn more about [read replicas](concepts-read-replicas.md)
+- Learn more about [read replicas](concepts-read-replicas.md)
+- You can also monitor replication latency by following the steps in [Monitor replication latency](../single-server/how-to-troubleshoot-replication-latency.md#monitoring-replication-latency).
+- To troubleshoot high replication latency observed in Metrics, see [Common scenarios for high replication latency](../single-server/how-to-troubleshoot-replication-latency.md#common-scenarios-for-high-replication-latency).
mysql Concepts Azure Advisor Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-azure-advisor-recommendations.md
description: Learn about Azure Advisor recommendations for MySQL.
- Previously updated : 04/08/2021 Last updated : 06/03/2022 # Azure Advisor for MySQL Learn about how Azure Advisor is applied to Azure Database for MySQL and get answers to common questions. ## What is Azure Advisor for MySQL?
mysql Quickstart Create Server Up Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/quickstart-create-server-up-azure-cli.md
az account set --subscription <subscription id>
## Create an Azure Database for MySQL server
-To use the commands, install the [db-up](/cli/azure/ext/db-up/mysql
+To use the commands, install the [db-up](/cli/azure/mysql
) extension. If an error is returned, ensure you have installed the latest version of the Azure CLI. See [Install Azure CLI](/cli/azure/install-azure-cli). ```azurecli
open-datasets Dataset 1000 Genomes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/open-datasets/dataset-1000-genomes.md
West Central US: 'https://dataset1000genomes-secondary.blob.core.windows.net/dat
## Data Access: Curated 1000 genomes dataset in parquet format
-East US: https://curated1000genomes.blob.core.windows.net/dataset
+East US: `https://curated1000genomes.blob.core.windows.net/dataset`
SAS Token: `sv=2018-03-28&si=prod&sr=c&sig=BgIomQanB355O4FhxqBL9xUgKzwpcVlRZdBewO5%2FM4E%3D`
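To read the data, append the SAS token to the container URL as a query string. For example, a sketch using AzCopy (assuming it's installed):

```bash
# List the contents of the curated dataset container by using the SAS token
azcopy list "https://curated1000genomes.blob.core.windows.net/dataset?sv=2018-03-28&si=prod&sr=c&sig=BgIomQanB355O4FhxqBL9xUgKzwpcVlRZdBewO5%2FM4E%3D"
```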
open-datasets Dataset Seattle Safety https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/open-datasets/dataset-seattle-safety.md
This dataset is stored in the East US Azure region. We recommend locating comput
## Additional information
-This dataset is sourced from city of Seattle government. For more information, see the [city of Seattle website](http://web5.seattle.gov/MNM/incidentresponse.aspx). View the [Licensing and Attribution for the terms of using this dataset](https://creativecommons.org/publicdomain/zero/1.0/legalcode). Email open.data@seattle.gov if you have any questions about the data source.
+This dataset is sourced from the city of Seattle government. For more information, see the [city of Seattle website](http://www.seattle.gov/). See [Licensing and Attribution](https://creativecommons.org/publicdomain/zero/1.0/legalcode) for the terms of using this dataset. Email open.data@seattle.gov if you have any questions about the data source.
## Columns
FROM
## Next steps
-View the rest of the datasets in the [Open Datasets catalog](dataset-catalog.md).
+View the rest of the datasets in the [Open Datasets catalog](dataset-catalog.md).
peering-service About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/peering-service/about.md
Title: Azure Peering Service overview description: Learn about Azure Peering Service overview -+ na Last updated 05/18/2020-+ # Azure Peering Service Overview
peering-service Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/peering-service/azure-portal.md
Title: Create Azure Peering Service Connection - Azure portal description: Learn how to create Azure Peering Service by using the Azure portal -+ na Last updated 04/07/2021-+ # Create Peering Service Connection using the Azure portal
peering-service Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/peering-service/cli.md
Title: Register a Peering Service Preview connection by using the Azure CLI description: Learn how to register a Peering Service connection by using the Azure CLI -+ na Last updated 05/2/2020-+ # Register a Peering Service connection by using the Azure CLI
peering-service Connection Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/peering-service/connection-telemetry.md
Title: 'Azure Peering Service: How to access connection telemetry ' description: In this tutorial learn how to access connection telemetry. -+ Last updated 04/06/2021-+ # Customer intent: Customer wants to access their connection telemetry per prefix to Microsoft services with Azure Peering Service. # Tutorial: Accessing Peering Service connection telemetry
peering-service Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/peering-service/connection.md
Title: Azure Peering Service connection description: Learn about Microsoft Azure Peering Service connection -+ na Last updated 04/07/2021-+ # Peering Service connection
peering-service Location Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/peering-service/location-partners.md
Title: Azure Peering Service locations and partners description: Learn about Azure Peering Service locations and partners -+ na Last updated 11/06/2020-+ # Peering Service partners
peering-service Measure Connection Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/peering-service/measure-connection-telemetry.md
Title: 'Azure Peering Service: How to measure connection telemetry ' description: In this tutorial learn how to measure connection telemetry. -+ Last updated 05/18/2020-+ # Customer intent: Customer wants to measure their connection telemetry per prefix to Microsoft services with Azure Peering Service. # Tutorial: Measure Peering Service connection telemetry
peering-service Onboarding Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/peering-service/onboarding-model.md
Title: Azure Peering Service onboarding model description: Learn about on how to onboard Azure Peering Service -+ na Last updated 05/18/2020-+ # Onboarding Peering Service model
peering-service Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/peering-service/powershell.md
Title: 'Register a Peering Service connection - Azure PowerShell ' description: In this tutorial learn how to register a Peering Service connection with PowerShell. -+ Last updated 05/18/2020-+ # Customer intent: Customer wants to measure their connection telemetry per prefix to Microsoft services with Azure Peering Service .
postgresql Concepts Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-extensions.md
Using the [Azure portal](https://portal.azure.com):
Using [Azure CLI](/cli/azure/):
- You can allow-list extensions via CLI parameter set [command]( https://docs.microsoft.com/cli/azure/postgres/flexible-server/parameter?view=azure-cli-latest&preserve-view=true).
+ You can allow-list extensions via CLI parameter set [command](/cli/azure/postgres/flexible-server/parameter?view=azure-cli-latest&preserve-view=true).
```bash az postgres flexible-server parameter set --resource-group <your resource group> --server-name <your server name> --subscription <your subscription id> --name azure.extensions --value <extension name>,<extension name>
Using the [Azure portal](https://portal.azure.com):
Using [Azure CLI](/cli/azure/):
- You can set `shared_preload_libraries` via CLI parameter set [command]( https://docs.microsoft.com/cli/azure/postgres/flexible-server/parameter?view=azure-cli-latest&preserve-view=true).
+ You can set `shared_preload_libraries` via CLI parameter set [command](/cli/azure/postgres/flexible-server/parameter?view=azure-cli-latest&preserve-view=true).
```bash az postgres flexible-server parameter set --resource-group <your resource group> --server-name <your server name> --subscription <your subscription id> --name shared_preload_libraries --value <extension name>,<extension name>
For more details on the restore method with a Timescale-enabled database, see [Timesca
## Next steps
-If you don't see an extension that you'd like to use, let us know. Vote for existing requests or create new feedback requests in our [feedback forum](https://feedback.azure.com/d365community/forum/c5e32b97-ee24-ec11-b6e6-000d3a4f0da0).
+If you don't see an extension that you'd like to use, let us know. Vote for existing requests or create new feedback requests in our [feedback forum](https://feedback.azure.com/d365community/forum/c5e32b97-ee24-ec11-b6e6-000d3a4f0da0).
postgresql Quickstart Create Server Up Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/quickstart-create-server-up-azure-cli.md
Azure Database for PostgreSQL is a managed service that enables you to run, mana
[!INCLUDE [cli-launch-cloud-shell-sign-in.md](../../../includes/cli-launch-cloud-shell-sign-in.md)]
-Install the [db-up](/cli/azure/ext/db-up/mysql) extension. If an error is returned, ensure you have installed the latest version of the Azure CLI. See [Install Azure CLI](/cli/azure/install-azure-cli).
+Install the [db-up](/cli/azure/mysql) extension. If an error is returned, ensure you have installed the latest version of the Azure CLI. See [Install Azure CLI](/cli/azure/install-azure-cli).
```azurecli az extension add --name db-up
search Search Indexer Howto Access Private https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-indexer-howto-access-private.md
Previously updated : 05/03/2022 Last updated : 06/09/2022 # Make outbound connections through a private endpoint
-Many Azure resources, such as Azure storage accounts, can be configured to accept connections from a list of virtual networks and refuse outside connections that originate from a public network. If you're using an indexer and your Azure PaaS data source is on a private network, you can create an outbound [private endpoint connection](../private-link/private-endpoint-overview.md) used by Azure Cognitive Search to reach the data.
+Many Azure resources, such as Azure storage accounts, can be configured to accept connections from a list of virtual networks and refuse outside connections that originate from a public network. If you're using an indexer and your Azure PaaS data source is on a private network, you can create an outbound [private endpoint connection](../private-link/private-endpoint-overview.md) used by Azure Cognitive Search to reach the data.
-Private endpoints created through Azure Cognitive Search APIs are referred to as *shared private links* or *managed outbound private endpoints*. The concept of a "shared private link" is that an Azure PaaS resource already has a private endpoint through [Azure Private Link service](https://azure.microsoft.com/services/private-link/), and Azure Cognitive Search is sharing access. Although access is shared, a shared private link creates its own private connection. The shared private link is the mechanism by which Azure Cognitive Search makes the connection to resources in a private network.
+For [Azure Storage](../storage/common/storage-network-security.md?tabs=azure-portal), if both the storage account and the search service are in the same region, outbound traffic uses a private IP address to communicate to storage and occurs over the Microsoft backbone network. For this scenario, you can omit private endpoints through Azure Cognitive Search. For other Azure PaaS resources, we suggest that you review the networking documentation for those resources to determine whether a private endpoint is helpful.
To create a shared private link, use the Azure portal or the [Create Or Update Shared Private Link](/rest/api/searchmanagement/2020-08-01/shared-private-link-resources/create-or-update) operation in the Azure Cognitive Search Management REST API.
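As a sketch, the REST call has the following shape. The subscription, resource group, service, and resource names are hypothetical, and the group ID depends on the target resource type (`blob` here assumes an Azure Storage blob data source):

```http
PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{rg}/providers/Microsoft.Search/searchServices/{serviceName}/sharedPrivateLinkResources/{sharedPrivateLinkName}?api-version=2020-08-01

{
    "properties": {
        "privateLinkResourceId": "/subscriptions/{subscriptionId}/resourceGroups/{rg}/providers/Microsoft.Storage/storageAccounts/{storageAccountName}",
        "groupId": "blob",
        "requestMessage": "Please approve this connection for Azure Cognitive Search."
    }
}
```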
+## Terminology
+
+Private endpoints created through Azure Cognitive Search APIs are referred to as *shared private links* or *managed outbound private endpoints*. The concept of a "shared private link" is that an Azure PaaS resource already has a private endpoint through [Azure Private Link service](https://azure.microsoft.com/services/private-link/), and Azure Cognitive Search is sharing access. Although access is shared, a shared private link creates its own private connection. The shared private link is the mechanism by which Azure Cognitive Search makes the connection to resources in a private network.
+ ## Prerequisites + The Azure resource that provides content or code must be previously registered with the [Azure Private Link service](https://azure.microsoft.com/services/private-link/).
To create a shared private link, use the Azure portal or the [Create Or Update S
+ If you're connecting to a preview data source, such as Azure Database for MySQL or Azure Functions, use a preview version of the Management REST API to create the shared private link. Preview versions that support a shared private link include `2020-08-01-preview` or `2021-04-01-preview`.
-+ If you're using the [Azure portal](https://portal.azure.com/), make sure that access to all public networks is enabled in the data source resource firewall while going through the instructions below. Otherwise, you need to enable access to all public networks during this setup and then disable it again, or instead, you must use REST API from a device with an authorized IP in the firewall rules, to perform these operations. If the supported data source resource has public networks access disabled, there will be errors when connecting from the portal to it.
++ Connections from the search client should be programmatic, using either the REST APIs or an Azure SDK, rather than through the Azure portal. The device must connect using an authorized IP in the Azure PaaS resource's firewall rules. > [!NOTE]
-> When using Private Link for data sources, [Import data](search-import-data-portal.md) wizard is not supported.
+> When using Private Link for data sources, Azure portal access (from Cognitive Search to your content) - such as through the [Import data](search-import-data-portal.md) wizard - is not supported.
<a name="group-ids"></a>
search Search Limits Quotas Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-limits-quotas-capacity.md
Previously updated : 03/31/2022 Last updated : 06/10/2022 # Service limits in Azure Cognitive Search
Maximum running times exist to provide balance and stability to the service as a
| Blob indexer: maximum blob size, MB |16 |16 |128 |256 |256 |N/A |256 |256 | | Blob indexer: maximum characters of content extracted from a blob |32,000 |64,000 |4&nbsp;million |8&nbsp;million |16&nbsp;million |N/A |4&nbsp;million |4&nbsp;million |
-<sup>1</sup> Free services have indexer maximum execution time of 3 minutes for blob sources and 1 minute for all other data sources. For AI indexing that calls into Cognitive Services, free services are limited to 20 free transactions per day, where a transaction is defined as a document that successfully passes through the enrichment pipeline.
+<sup>1</sup> Free services have indexer maximum execution time of 3 minutes for blob sources and 1 minute for all other data sources. Indexer invocation is once every 180 seconds. For AI indexing that calls into Cognitive Services, free services are limited to 20 free transactions per indexer per day, where a transaction is defined as a document that successfully passes through the enrichment pipeline (tip: you can reset an indexer to reset its count).
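For example, resetting an indexer through the REST API clears its change-tracking state and resets its count, per the tip above (a sketch with hypothetical service and indexer names):

```http
POST https://my-service.search.windows.net/indexers/my-indexer/reset?api-version=2020-06-30
api-key: <admin-key>
```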
<sup>2</sup> Basic services created before December 2017 have lower limits (5 instead of 15) on indexers, data sources, and skillsets.
search Search Sku Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-sku-tier.md
Most features are available on all tiers, including the free tier. In a few case
| Feature | Limitations | ||-|
-| [indexers](search-indexer-overview.md) | Indexers are not available on S3 HD. |
+| [indexers](search-indexer-overview.md) | Indexers are not available on S3 HD. Indexers have [more limitations](search-limits-quotas-capacity.md#indexer-limits) on the free tier. |
| [AI enrichment](cognitive-search-concept-intro.md) | Runs on the Free tier but not recommended. | | [Managed or trusted identities for outbound (indexer) access](search-howto-managed-identities-data-sources.md) | Not available on the Free tier.| | [Customer-managed encryption keys](search-security-manage-encryption-keys.md) | Not available on the Free tier. |
service-connector Tutorial Java Spring Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/tutorial-java-spring-mysql.md
The following procedure uses the Azure CLI extension to provision an instance of
The following procedure uses the Azure CLI extension to provision an instance of Azure Database for MySQL.
-1. Install the [db-up](/cli/azure/ext/db-up/mysql) extension.
+1. Install the [db-up](/cli/azure/mysql) extension.
```azurecli az extension add --name db-up
service-fabric Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/release-notes.md
The following resources are also available:
- <a href="/azure/service-fabric/service-fabric-versions" target="blank">Supported Versions</a> - <a href="https://azure.microsoft.com/resources/samples/?service=service-fabric&sort=0" target="blank">Code Samples</a>
+## Service Fabric 9.0
+
+We are excited to announce that the 9.0 release of the Service Fabric runtime has started rolling out to the various Azure regions along with tooling and SDK updates. The updates for .NET SDK, Java SDK and Service Fabric runtime are available through Web Platform Installer, NuGet packages and Maven repositories.
+
+### Key announcements
+- **General Availability** Support for .NET 6.0
+- **General Availability** Support for Ubuntu 20.04
+- **General Availability** Support for Multi-AZ within a single VM Scale Set (VMSS)
+- Added support for IHost, IHostBuilder and Minimal Hosting Model
+- Enabled an opt-in option for Data Contract Serialization (DCS) based remoting exceptions
+- Support creation of End-to-End Developer Experience for Linux development on Windows using WSL2
+- Support for parallel recursive queries to Service Fabric DNS Service
+- Support for Managed KeyVaultReference
+- Expose Container ID for currently deployed code packages
+- Added Fabric_InstanceId environment variable for stateless guest applications
+- Exposed API for reporting MoveCost
+- Enforce a configurable Max value on the InstanceCloseDelayDuration
+- Added ability to enumerate actor reminders
+- Made updates to platform events
+- Introduced a property in Service Fabric runtime that can be set via SFRP as the ARM resource ID
+- Exposed application type provision timestamp
+- Support added for Service Fabric Resource Provider (SFRP) metadata to application type + version entities, starting with ARM resource ID
+
+### Service Fabric 9.0 releases
+| Release date | Release | More info |
+||||
+| April 29, 2022 | [Azure Service Fabric 9.0](https://techcommunity.microsoft.com/t5/azure-service-fabric-blog/azure-service-fabric-9-0-release/ba-p/3299108) | [Release notes](https://github.com/microsoft/service-fabric/blob/master/release_notes/Service_Fabric_ReleaseNotes_90.md)|
+| June 06, 2022 | [Azure Service Fabric 9.0 First Refresh Release](https://techcommunity.microsoft.com/t5/azure-service-fabric-blog/microsoft-azure-service-fabric-9-0-first-refresh-release/ba-p/3469489) | [Release notes](https://github.com/microsoft/service-fabric/blob/master/release_notes/Service_Fabric_ReleaseNotes_90CU1.md)|
+ ## Service Fabric 8.2
We are excited to announce that 8.2 release of the Service Fabric runtime has st
| Release date | Release | More info | |||| | October 29, 2021 | [Azure Service Fabric 8.2](https://techcommunity.microsoft.com/t5/azure-service-fabric/azure-service-fabric-8-2-release/ba-p/2895108) | [Release notes](https://github.com/microsoft/service-fabric/blob/master/release_notes/Service_Fabric_ReleaseNotes_82.md)|
+| December 16, 2021 | [Azure Service Fabric 8.2 First Refresh Release](https://techcommunity.microsoft.com/t5/azure-service-fabric-blog/azure-service-fabric-8-2-first-refresh-release/ba-p/3040415) | [Release notes](https://github.com/microsoft/service-fabric/blob/master/release_notes/Service_Fabric_ReleaseNotes_82CU1.md)|
+| February 12, 2022 | [Azure Service Fabric 8.2 Second Refresh Release](https://techcommunity.microsoft.com/t5/azure-service-fabric-blog/azure-service-fabric-8-2-second-refresh-release/ba-p/3095454) | [Release notes](https://github.com/microsoft/service-fabric/blob/master/release_notes/Service_Fabric_ReleaseNotes_82CU2.md)|
+| June 06, 2022 | [Azure Service Fabric 8.2 Third Refresh Release](https://techcommunity.microsoft.com/t5/azure-service-fabric-blog/azure-service-fabric-8-2-third-refresh-release/ba-p/3469508) | [Release notes](https://github.com/microsoft/service-fabric/blob/master/release_notes/Service_Fabric_ReleaseNotes_82CU3.md)|
## Service Fabric 8.1
We are excited to announce that 8.1 release of the Service Fabric runtime has st
| October 06 2021 | [Azure Service Fabric 8.1 Third Refresh Release](https://techcommunity.microsoft.com/t5/azure-service-fabric/azure-service-fabric-8-1-third-refresh-release/ba-p/2816117) | [Release notes](https://github.com/microsoft/service-fabric/blob/master/release_notes/Service_Fabric_ReleaseNotes_81CU3.md) | + ## Service Fabric 8.0 We are excited to announce that 8.0 release of the Service Fabric runtime has started rolling out to the various Azure regions along with tooling and SDK updates. The updates for .NET SDK, Java SDK and Service Fabric runtime are available through Web Platform Installer, NuGet packages and Maven repositories.
service-fabric Run To Completion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/run-to-completion.md
Title: RunToCompletion semantics in Service Fabric
-description: Describes RunToCompletion semantics in Service Fabric.
+ Title: RunToCompletion semantics and specifications
+description: Learn about Service Fabric RunToCompletion semantics and specifications, and see complete code examples and considerations.
- Previously updated : 03/11/2020 Last updated : 06/08/2022 + # RunToCompletion
-Starting with version 7.1, Service Fabric supports **RunToCompletion** semantics for [containers][containers-introduction-link] and [guest executable][guest-executables-introduction-link] applications. These semantics enable applications and services which complete a task and exit, in contrast to, always running applications and services.
+Starting with version 7.1, Service Fabric supports **RunToCompletion** semantics for [containers][containers-introduction-link] and [guest executable applications][guest-executables-introduction-link]. These semantics enable applications and services that complete a task and exit, in contrast to always running applications and services.
-Before proceeding with this article, we recommend getting familiar with the [Service Fabric application model][application-model-link] and the [Service Fabric hosting model][hosting-model-link].
+Before you proceed with this article, be familiar with the [Service Fabric application model][application-model-link] and the [Service Fabric hosting model][hosting-model-link].
> [!NOTE]
-> RunToCompletion semantics are currently not supported for services written using the [Reliable Services][reliable-services-link] programming model.
+> RunToCompletion semantics aren't supported for services that use the [Reliable Services][reliable-services-link] programming model.
## RunToCompletion semantics and specification
-RunToCompletion semantics can be specified as an **ExecutionPolicy** when [importing the ServiceManifest][application-and-service-manifests-link]. The specified policy is inherited by all the CodePackages comprising the ServiceManifest. The following ApplicationManifest.xml snippet provides an example.
+
+You can specify RunToCompletion semantics as an `ExecutionPolicy` when you [import the ServiceManifest][application-and-service-manifests-link]. All the CodePackages comprising the ServiceManifest inherit the specified policy. The following snippet from *ApplicationManifest.xml* provides an example:
```xml
<ServiceManifestImport>
RunToCompletion semantics can be specified as an **ExecutionPolicy** when [impor
</Policies>
</ServiceManifestImport>
```
-**ExecutionPolicy** allows the following two attributes:
-* **Type:** **RunToCompletion** is currently the only allowed value for this attribute.
-* **Restart:** This attribute specifies the restart policy that is applied to CodePackages comprising the ServicePackage, on failure. A CodePackage exiting with a **non-zero exit code** is considered to have failed. Allowed values for this attribute are **OnFailure** and **Never** with **OnFailure** being the default.
+`ExecutionPolicy` allows two attributes:
+
+- `Type` has `RunToCompletion` as the only allowed value.
+- `Restart` specifies the restart policy to apply to CodePackages in the ServicePackage on failure. A CodePackage that exits with a non-zero exit code is considered to have failed. Allowed values for this attribute are `OnFailure` and `Never`, with `OnFailure` as the default.
+
+ - With restart policy set to `OnFailure`, any CodePackage that fails with a non-zero exit code restarts, with back-offs between repeated failures.
+
+ - With restart policy set to `Never`, if any CodePackage fails, the deployment status of the DeployedServicePackage is marked **Failed**, but other CodePackages continue execution.
-With restart policy set to **OnFailure**, if any CodePackage fails **(non-zero exit code)**, it is restarted, with back-offs between repeated failures. With restart policy set to **Never**, if any CodePackage fails, the deployment status of the DeployedServicePackage is marked as **Failed** but other CodePackages are allowed to continue execution. If all the CodePackages comprising the ServicePackage run to successful completion **(exit code 0)**, the deployment status of the DeployedServicePackage is marked as **RanToCompletion**.
+If all the CodePackages in the ServicePackage run to successful completion with exit code `0`, the deployment status of the DeployedServicePackage is marked **RanToCompletion**.
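Putting the two attributes together, the policy element itself is small. The following is a minimal sketch of how `ExecutionPolicy` sits inside the `ServiceManifestImport` (the manifest version here is a placeholder; the article's full snippet is abbreviated above):

```xml
<ServiceManifestImport>
  <ServiceManifestRef ServiceManifestName="WindowsRunToCompletionServicePackage" ServiceManifestVersion="1.0.0"/>
  <Policies>
    <!-- Run all CodePackages to completion; restart any that exit with a non-zero code. -->
    <ExecutionPolicy Type="RunToCompletion" Restart="OnFailure"/>
  </Policies>
</ServiceManifestImport>
```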
-## Complete example using RunToCompletion semantics
+## Code example using RunToCompletion semantics
-Let's look at a complete example using RunToCompletion semantics.
+Let's look at a complete example that uses RunToCompletion semantics.
> [!IMPORTANT]
-> The following example assumes familiarity with creating [Windows container applications using Service Fabric and Docker][containers-getting-started-link].
+> The following example assumes familiarity with [creating Windows container applications using Service Fabric and Docker][containers-getting-started-link].
>
-> This example references mcr.microsoft.com/windows/nanoserver:1809. Windows Server containers are not compatible across all versions of a host OS. To learn more, see [Windows Container Version Compatibility](/virtualization/windowscontainers/deploy-containers/version-compatibility).
+> Windows Server containers aren't compatible across all versions of a host OS. This example references `mcr.microsoft.com/windows/nanoserver:1809`. For more information, see [Windows container version compatibility](/virtualization/windowscontainers/deploy-containers/version-compatibility).
-The following ServiceManifest.xml describes a ServicePackage consisting of two CodePackages, which represent containers. *RunToCompletionCodePackage1* just logs a message to **stdout** and exits. *RunToCompletionCodePackage2* pings the loopback address for a while and then exits with an exit code of either **0**, **1** or **2**.
+The following *ServiceManifest.xml* describes a ServicePackage consisting of two CodePackages, which represent containers. `RunToCompletionCodePackage1` just logs a message to **stdout** and exits. `RunToCompletionCodePackage2` pings the loopback address for a while and then exits with an exit code of either `0`, `1`, or `2`.
```xml
<?xml version="1.0" encoding="UTF-8"?>
The following ServiceManifest.xml describes a ServicePackage consisting of two C
</ServiceManifest>
```
-The following ApplicationManifest.xml describes an application based on the ServiceManifest.xml discussed above. It specifies **RunToCompletion** **ExecutionPolicy** for *WindowsRunToCompletionServicePackage* with a restart policy of **OnFailure**. Upon activation of *WindowsRunToCompletionServicePackage*, its constituent CodePackages will be started. *RunToCompletionCodePackage1* should exit successfully on the first activation. However, *RunToCompletionCodePackage2* can fail **(non-zero exit code)**, in which case it will be restarted since the restart policy is **OnFailure**.
+The following *ApplicationManifest.xml* describes an application based on the *ServiceManifest.xml* discussed above. The code specifies RunToCompletion ExecutionPolicy for `WindowsRunToCompletionServicePackage` with a restart policy of `OnFailure`.
+
+Upon `WindowsRunToCompletionServicePackage` activation, its constituent CodePackages are started. `RunToCompletionCodePackage1` should exit successfully on the first activation. `RunToCompletionCodePackage2` can fail with a non-zero exit code, and will restart because the restart policy is `OnFailure`.
```xml
<?xml version="1.0" encoding="UTF-8"?>
The following ApplicationManifest.xml describes an application based on the Serv
</DefaultServices>
</ApplicationManifest>
```
-## Querying deployment status of a DeployedServicePackage
-Deployment status of a DeployedServicePackage can be queried from PowerShell using [Get-ServiceFabricDeployedServicePackage][deployed-service-package-link] or from C# using [FabricClient][fabric-client-link] API [GetDeployedServicePackageListAsync(String, Uri, String)][deployed-service-package-fabricclient-link]
+## Query deployment status of a DeployedServicePackage
-## Considerations when using RunToCompletion semantics
+You can query the deployment status of a DeployedServicePackage.
-The following points should be noted for the current RunToCompletion support.
-* These semantics are only supported for [containers][containers-introduction-link] and [guest executable][guest-executables-introduction-link] applications.
-* Upgrade scenarios for applications with RunToCompletion semantics are not allowed. Users should delete and recreate such applications, if necessary.
-* Failover events can cause CodePackages to re-execute after successful completion, on the same node, or other nodes of the cluster. Examples of failover events are, node restarts and Service Fabric runtime upgrades on a node.
-* RunToCompletion is incompatible with ServicePackageActivationMode="SharedProcess". Users must specify ServicePackageActivationMode="ExclusiveProcess", given that SharedProcess is the default value. Service Fabric runtime version 9.0 and higher will fail validation for such services.
+- From PowerShell, use the [Get-ServiceFabricDeployedServicePackage][deployed-service-package-link] cmdlet.
+- From C#, use the [FabricClient][fabric-client-link] API [GetDeployedServicePackageListAsync(String, Uri, String)][deployed-service-package-fabricclient-link].
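For example, a quick PowerShell check might look like the following sketch; the node and application names are hypothetical, and it assumes you've already connected to the cluster:

```powershell
# Connect to the cluster (local development cluster shown here).
Connect-ServiceFabricCluster

# List the deployed service packages for the application on one node.
# The output includes the DeployedServicePackage status, such as RanToCompletion or Failed.
Get-ServiceFabricDeployedServicePackage -NodeName "_Node_0" -ApplicationName "fabric:/WindowsRunToCompletionApplication"
```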
-## Next steps
+## Considerations for RunToCompletion semantics
+
+Consider the following points about RunToCompletion support:
-See the following articles for related information.
+- RunToCompletion semantics are supported only for [containers][containers-introduction-link] and [guest executable applications][guest-executables-introduction-link].
+- Upgrade scenarios for applications with RunToCompletion semantics aren't allowed. You need to delete and recreate such applications if necessary.
+- Failover events can cause CodePackages to re-execute after successful completion, on the same node or on other nodes of the cluster. Examples of failover events are node restarts and Service Fabric runtime upgrades on a node.
+- RunToCompletion is incompatible with `ServicePackageActivationMode="SharedProcess"`. Service Fabric runtime version 9.0 and higher fails validation for such services. `SharedProcess` is the default value, so you must specify `ServicePackageActivationMode="ExclusiveProcess"` to use RunToCompletion semantics.
+
+## Next steps
-* [Service Fabric and containers.][containers-introduction-link]
-* [Service Fabric and guest executables.][guest-executables-introduction-link]
+- [Service Fabric and containers][containers-introduction-link]
+- [Service Fabric and guest executables][guest-executables-introduction-link]
<!-- Links -->
[containers-introduction-link]: service-fabric-containers-overview.md
service-fabric Service Fabric Application Secret Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-application-secret-store.md
Formalizing the model, the following are the rules implemented and enforced in t
- Deleting a secret resource causes the deletion of all its versions
- The value of a secret version is immutable
-The full set of REST management APIs for secret resources can be found [here](/rest/api/servicefabric/sfclient-index-meshsecrets), and for secret versions, [here](/rest/api/servicefabric/sfclient-index-meshsecretvalues).
### Declare a secret resource

You can create a secret resource by using the REST API.
Central Secret Service should no longer be running in the cluster, and will not
- Learn more about [application and service security](service-fabric-application-and-service-security.md).
- Get introduced to [Managed Identity for Service Fabric Applications](concepts-managed-identity.md).
-- Expand CSS's functionality with [KeyVaultReferences](service-fabric-keyvault-references.md)
+- Expand CSS's functionality with [KeyVaultReferences](service-fabric-keyvault-references.md)
spatial-anchors Tutorial Share Anchors Across Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/tutorials/tutorial-share-anchors-across-devices.md
In this tutorial, you deployed an ASP.NET Core web app in Azure, and you configu
You can improve your ASP.NET Core web app so that it uses Azure Cosmos DB to persist your shared spatial anchor identifiers. By adding Azure Cosmos DB support, you can have your ASP.NET Core web app create an anchor today. Then, by using the anchor identifier that's stored in your web app, you can have the app return days later to locate the anchor again.

> [!div class="nextstepaction"]
-> [Use Azure Cosmos DB to store anchors](./tutorial-use-cosmos-db-to-store-anchors.md)
+> [Use Azure Cosmos DB to store anchors](./tutorial-use-cosmos-db-to-store-anchors.md)
spring-cloud Quickstart Deploy Apps Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/quickstart-deploy-apps-enterprise.md
Use the following steps to provision an Azure Spring Apps service instance.
az term accept \
    --publisher vmware-inc \
    --product azure-spring-cloud-vmware-tanzu-2 \
- --plan tanzu-asc-ent-mtr
+ --plan asa-ent-hr-mtr
```
1. Select a location. This location must support Azure Spring Apps Enterprise tier. For more information, see the [Azure Spring Apps FAQ](faq.md).
storage Data Lake Storage Query Acceleration How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-query-acceleration-how-to.md
Previously updated : 04/11/2022 Last updated : 06/09/2022
private static async Task DumpQueryCsv(BlockBlobClient blob, string query, bool
{ try {
- var options = new BlobQueryOptions() {
- InputTextConfiguration = new BlobQueryCsvTextOptions() { HasHeaders = headers },
- OutputTextConfiguration = new BlobQueryCsvTextOptions() { HasHeaders = true },
- ProgressHandler = new Progress<long>((finishedBytes) => Console.Error.WriteLine($"Data read: {finishedBytes}"))
+ var options = new BlobQueryOptions()
+ {
+ InputTextConfiguration = new BlobQueryCsvTextOptions()
+ {
+                HasHeaders = headers,
+ RecordSeparator = "\n",
+ ColumnSeparator = ",",
+ EscapeCharacter = '\\',
+ QuotationCharacter = '"'
+ },
+ OutputTextConfiguration = new BlobQueryCsvTextOptions()
+ {
+ HasHeaders = true,
+ RecordSeparator = "\n",
+ ColumnSeparator = ",",
+ EscapeCharacter = '\\',
+                QuotationCharacter = '"'
+            },
+ ProgressHandler = new Progress<long>((finishedBytes) =>
+ Console.Error.WriteLine($"Data read: {finishedBytes}"))
};
options.ErrorHandler += (BlobQueryError err) => {
    Console.ForegroundColor = ConsoleColor.Red;
private static async Task DumpQueryCsv(BlockBlobClient blob, string query, bool
query, options)).Value.Content)) {
- using (var parser = new CsvReader(reader, new CsvConfiguration(CultureInfo.CurrentCulture, hasHeaderRecord: true) { HasHeaderRecord = true }))
+                using (var parser = new CsvReader(reader, new CsvConfiguration(CultureInfo.CurrentCulture) { HasHeaderRecord = true }))
{ while (await parser.ReadAsync()) {
private static async Task DumpQueryCsv(BlockBlobClient blob, string query, bool
} catch (Exception ex) {
- Console.Error.WriteLine("Exception: " + ex.ToString());
+ System.Windows.Forms.MessageBox.Show("Exception: " + ex.ToString());
    }
}
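As a usage illustration, the helper above can be driven with a query acceleration SQL statement. This is a hedged sketch assumed to live in the same class as `DumpQueryCsv`; `SELECT * FROM BlobStorage` selects every row of the CSV blob:

```csharp
// A hypothetical caller for the DumpQueryCsv helper shown above.
// "blob" is an authenticated BlockBlobClient for a CSV blob;
// the final argument says the first CSV row is a header row.
private static async Task RunQuerySample(BlockBlobClient blob)
{
    await DumpQueryCsv(blob, "SELECT * FROM BlobStorage", true);
}
```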
stream-analytics Stream Analytics Define Inputs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-define-inputs.md
The following table explains each property in the **New input** page in the Azur
| **Storage account** | The name of the storage account where the blob files are located. |
| **Storage account key** | The secret key associated with the storage account. This option is automatically populated unless you select the option to provide the settings manually. |
| **Container** | Containers provide a logical grouping for blobs. You can choose either **Use existing** container or **Create new** to have a new container created.|
-| **Path pattern** (optional) | The file path used to locate the blobs within the specified container. If you want to read blobs from the root of the container, do not set a path pattern. Within the path, you can specify one or more instances of the following three variables: `{date}`, `{time}`, or `{partition}`<br/><br/>Example 1: `cluster1/logs/{date}/{time}/{partition}`<br/><br/>Example 2: `cluster1/logs/{date}`<br/><br/>The `*` character is not an allowed value for the path prefix. Only valid <a HREF="/rest/api/storageservices/Naming-and-Referencing-Containers--Blobs--and-Metadata">Azure blob characters</a> are allowed. Do not include container names or file names. |
+| **Path pattern** (optional) | The file path used to locate the blobs within the specified container. If you want to read blobs from the root of the container, do not set a path pattern. Within the path, you can specify one or more instances of the following three variables: `{date}`, `{time}`, or `{partition}`<br/><br/>Example 1: `cluster1/logs/{date}/{time}/{partition}`<br/><br/>Example 2: `cluster1/logs/{date}`<br/><br/>The `*` character is not an allowed value for the path prefix. Only valid <a href="/rest/api/storageservices/Naming-and-Referencing-Containers--Blobs--and-Metadata">Azure blob characters</a> are allowed. Do not include container names or file names. |
| **Date format** (optional) | If you use the date variable in the path, the date format in which the files are organized. Example: `YYYY/MM/DD` <br/><br/> When blob input has `{date}` or `{time}` in its path, the folders are looked at in ascending time order.|
| **Time format** (optional) | If you use the time variable in the path, the time format in which the files are organized. Currently the only supported value is `HH` for hours. |
| **Partition key** | This is an optional field that is available only if your job is configured to use [compatibility level](./stream-analytics-compatibility-level.md) 1.2 or higher. If your input is partitioned by a property, you can add the name of this property here. This is used for improving the performance of your query if it includes a PARTITION BY or GROUP BY clause on this property. If this job uses compatibility level 1.2 or higher, this field defaults to "PartitionId". |
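As an illustration of how these settings combine (the blob name is hypothetical, not from the article): with **Path pattern** `cluster1/logs/{date}/{time}`, **Date format** `YYYY/MM/DD`, and **Time format** `HH`, a blob written at 00:xx UTC on June 10, 2022 would be read from:

```
cluster1/logs/2022/06/10/00/entry.csv
```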
stream-analytics Stream Analytics Previews https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-previews.md
Previously updated : 03/23/2022 Last updated : 06/10/2022

# Azure Stream Analytics preview features

This article summarizes all the features currently in preview for Azure Stream Analytics. Using preview features in a production environment isn't recommended.
-## Authenticate to SQL Database output with managed identities (preview)
-
-Azure Stream Analytics supports [Managed Identity authentication](../active-directory/managed-identities-azure-resources/overview.md) for Azure SQL Database output sinks. Managed identities eliminate the limitations of user-based authentication methods, like the need to reauthenticate due to password changes.
-
## C# custom de-serializers

Developers can leverage the power of Azure Stream Analytics to process data in Protobuf, XML, or any custom format. You can implement [custom de-serializers](custom-deserializer-examples.md) in C#, which can then be used to de-serialize events received by Azure Stream Analytics.
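To give a sense of the shape involved, the following is a minimal sketch, assuming the `StreamDeserializer<T>` base class from the Stream Analytics .NET custom deserializer SDK; the event type and line-per-event format are hypothetical:

```csharp
using System.Collections.Generic;
using System.IO;
using Microsoft.Azure.StreamAnalytics;
using Microsoft.Azure.StreamAnalytics.Serialization;

// Hypothetical event type produced by the deserializer.
public class LineEvent
{
    public string Line { get; set; }
}

// Emits one LineEvent per line of the incoming stream.
public class LineDeserializer : StreamDeserializer<LineEvent>
{
    // No per-job setup is needed for this sketch.
    public override void Initialize(StreamingContext streamingContext) { }

    public override IEnumerable<LineEvent> Deserialize(Stream stream)
    {
        using (var reader = new StreamReader(stream))
        {
            string line;
            while ((line = reader.ReadLine()) != null)
            {
                yield return new LineEvent { Line = line };
            }
        }
    }
}
```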
synapse-analytics Sql Data Warehouse Tables Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-tables-overview.md
Previously updated : 11/02/2021 Last updated : 06/10/2022
INNER JOIN sys.dm_pdw_nodes_db_partition_stats nps
ON nt.[object_id] = nps.[object_id] AND nt.[pdw_node_id] = nps.[pdw_node_id] AND nt.[distribution_id] = nps.[distribution_id]
+ AND i.[index_id] = nps.[index_id]
LEFT OUTER JOIN (select * from sys.pdw_column_distribution_properties where distribution_ordinal = 1) cdp ON t.[object_id] = cdp.[object_id]
LEFT OUTER JOIN sys.columns c
virtual-desktop Create Fslogix Profile Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/create-fslogix-profile-container.md
Title: FSLogix profile containers NetApp Azure Virtual Desktop - Azure
description: How to create an FSLogix profile container using Azure NetApp Files in Azure Virtual Desktop. Previously updated : 06/05/2020 Last updated : 06/09/2022
The instructions in this guide are specifically for Azure Virtual Desktop users.
Before you can create an FSLogix profile container for a host pool, you must:

- Set up and configure Azure Virtual Desktop
-- Provision a Azure Virtual Desktop host pool
+- Provision an Azure Virtual Desktop host pool
## Set up your Azure NetApp Files account
After you create the volume, configure the volume access parameters.
2. Under Configuration in the **Active Directory** drop-down menu, select the same directory that you originally connected in [Join an Active Directory connection](create-fslogix-profile-container.md#join-an-active-directory-connection). Keep in mind that there's a limit of one Active Directory per subscription.
3. In the **Share name** text box, enter the name of the share used by the session host pool and its users.
+ If you want to enable Continuous Availability for the SMB volume, select **Enable Continuous Availability**.
+
+ The SMB Continuous Availability feature is currently in public preview. You need to submit a waitlist request to access the feature through the **[Azure NetApp Files SMB Continuous Availability Shares Public Preview waitlist submission page](https://aka.ms/anfsmbcasharespreviewsignup)**. Wait for an official confirmation email from the Azure NetApp Files team before using the Continuous Availability feature.
+
+ Using SMB Continuous Availability shares is only supported for workloads using:
+ * Citrix App Layering
+ * FSLogix user profile containers
+ * Microsoft SQL Server (not Linux SQL Server)
+
+ If you are using a non-administrator (domain) account to install SQL Server, ensure that the account has the required security privilege assigned. If the domain account does not have the required security privilege (`SeSecurityPrivilege`), and the privilege cannot be set at the domain level, you can grant the privilege to the account by using the *Security privilege users* field of Active Directory connections. See [Create an Active Directory connection](../azure-netapp-files/create-active-directory-connections.md).
+
4. Select **Review + create** at the bottom of the page. This opens the validation page. After your volume is validated successfully, select **Create**.
5. At this point, the new volume will start to deploy. Once deployment is complete, you can use the Azure NetApp Files share.
virtual-machine-scale-sets Instance Generalized Image Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/instance-generalized-image-version.md
To create a VM using an image shared to a community gallery, use the unique ID o
/CommunityGalleries/<community gallery name>/Images/<image name>/Versions/latest
```
-To list all of the image definitions that are available in a community gallery using [az sig image-definition list-community](/cli/azure/sig/image-definition#az_sig_image_definition_list_community). In this example, we list all of the images in the *ContosoImage* gallery in *West US* and by name, the unique ID that is needed to create a VM, OS and OS state.
+To list all of the image definitions that are available in a community gallery, use [az sig image-definition list-community](/cli/azure/sig/image-definition#az-sig-image-definition-list-community). In this example, we list all of the images in the *ContosoImage* gallery in *West US*, showing each image's name, the unique ID that's needed to create a VM, the OS, and the OS state.
```azurecli-interactive
az sig image-definition list-community \
virtual-machine-scale-sets Instance Specialized Image Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/instance-specialized-image-version.md
To create a VM using an image shared to a community gallery, use the unique ID o
/CommunityGalleries/<community gallery name>/Images/<image name>/Versions/latest
```
-To list all of the image definitions that are available in a community gallery using [az sig image-definition list-community](/cli/azure/sig/image-definition#az_sig_image_definition_list_community). In this example, we list all of the images in the *ContosoImage* gallery in *West US* and by name, the unique ID that is needed to create a VM, OS and OS state.
+To list all of the image definitions that are available in a community gallery, use [az sig image-definition list-community](/cli/azure/sig/image-definition#az-sig-image-definition-list-community). In this example, we list all of the images in the *ContosoImage* gallery in *West US*, showing each image's name, the unique ID that's needed to create a VM, the OS, and the OS state.
```azurecli-interactive
az sig image-definition list-community \
virtual-machines Ddv4 Ddsv4 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/ddv4-ddsv4-series.md
The new Ddv4 VM sizes include fast, larger local SSD storage (up to 2,400 GiB) a
| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max temp storage throughput: IOPS/MBps<sup>*</sup> | Max NICs|Expected network bandwidth (Mbps) |
|||||||||
-| Standard_D2d_v4<sup>1</sup> | 2 | 8 | 75 | 4 | 9000/125 | 2 | 1000 |
-| Standard_D4d_v4 | 4 | 16 | 150 | 8 | 19000/250 | 2 | 2000 |
-| Standard_D8d_v4 | 8 | 32 | 300 | 16 | 38000/500 | 4 | 4000 |
-| Standard_D16d_v4 | 16 | 64 | 600 | 32 | 75000/1000 | 8 | 8000 |
+| Standard_D2d_v4<sup>1</sup> | 2 | 8 | 75 | 4 | 9000/125 | 2 | 5000 |
+| Standard_D4d_v4 | 4 | 16 | 150 | 8 | 19000/250 | 2 | 10000 |
+| Standard_D8d_v4 | 8 | 32 | 300 | 16 | 38000/500 | 4 | 12500 |
+| Standard_D16d_v4 | 16 | 64 | 600 | 32 | 75000/1000 | 8 | 12500 |
| Standard_D32d_v4 | 32 | 128 | 1200 | 32 | 150000/2000 | 8 | 16000 |
| Standard_D48d_v4 | 48 | 192 | 1800 | 32 | 225000/3000 | 8 | 24000 |
| Standard_D64d_v4 | 64 | 256 | 2400 | 32 | 300000/4000 | 8 | 30000 |
The new Ddsv4 VM sizes include fast, larger local SSD storage (up to 2,400 GiB)
| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max temp storage throughput: IOPS/MBps<sup>*</sup> | Max uncached disk throughput: IOPS/MBps | Max burst uncached disk throughput: IOPS/MBps<sup>1</sup> | Max NICs|Expected network bandwidth (Mbps) |
|||||||||||
-| Standard_D2ds_v4<sup>2</sup> | 2 | 8 | 75 | 4 | 9000/125 | 3200/48 | 4000/200 | 2 | 1000 |
-| Standard_D4ds_v4 | 4 | 16 | 150 | 8 | 19000/250 | 6400/96 | 8000/200 | 2 | 2000 |
-| Standard_D8ds_v4 | 8 | 32 | 300 | 16 | 38000/500 | 12800/192 | 16000/400 | 4 | 4000 |
-| Standard_D16ds_v4 | 16 | 64 | 600 | 32 | 85000/1000 | 25600/384 | 32000/800 | 8 | 8000 |
+| Standard_D2ds_v4<sup>2</sup> | 2 | 8 | 75 | 4 | 9000/125 | 3200/48 | 4000/200 | 2 | 5000 |
+| Standard_D4ds_v4 | 4 | 16 | 150 | 8 | 19000/250 | 6400/96 | 8000/200 | 2 | 10000 |
+| Standard_D8ds_v4 | 8 | 32 | 300 | 16 | 38000/500 | 12800/192 | 16000/400 | 4 | 12500 |
+| Standard_D16ds_v4 | 16 | 64 | 600 | 32 | 85000/1000 | 25600/384 | 32000/800 | 8 | 12500 |
| Standard_D32ds_v4 | 32 | 128 | 1200 | 32 | 150000/2000 | 51200/768 | 64000/1600 | 8 | 16000 |
| Standard_D48ds_v4 | 48 | 192 | 1800 | 32 | 225000/3000 | 76800/1152 | 80000/2000 | 8 | 24000 |
| Standard_D64ds_v4 | 64 | 256 | 2400 | 32 | 300000/4000 | 80000/1200 | 80000/2000 | 8 | 30000 |
virtual-machines Edv4 Edsv4 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/edv4-edsv4-series.md
Edv4-series sizes run on the 3rd Generation Intel® Xeon® Platinum 8370C (Ice L
| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max temp storage throughput: IOPS/MBps<sup>*</sup> | Max NICs|Max network bandwidth (Mbps) |
|||||||||
-| Standard_E2d_v4<sup>1</sup> | 2 | 16 | 75 | 4 | 9000/125 | 2 | 1000 |
-| Standard_E4d_v4 | 4 | 32 | 150 | 8 | 19000/250 | 2 | 2000 |
-| Standard_E8d_v4 | 8 | 64 | 300 | 16 | 38000/500 | 4 | 4000 |
-| Standard_E16d_v4 | 16 | 128 | 600 | 32 | 75000/1000 | 8 | 8000 |
-| Standard_E20d_v4 | 20 | 160 | 750 | 32 | 94000/1250 | 8 | 10000 |
+| Standard_E2d_v4<sup>1</sup> | 2 | 16 | 75 | 4 | 9000/125 | 2 | 5000 |
+| Standard_E4d_v4 | 4 | 32 | 150 | 8 | 19000/250 | 2 | 10000 |
+| Standard_E8d_v4 | 8 | 64 | 300 | 16 | 38000/500 | 4 | 12500 |
+| Standard_E16d_v4 | 16 | 128 | 600 | 32 | 75000/1000 | 8 | 12500 |
+| Standard_E20d_v4 | 20 | 160 | 750 | 32 | 94000/1250 | 8 | 16000 |
| Standard_E32d_v4 | 32 | 256 | 1200 | 32 | 150000/2000 | 8 | 16000 |
| Standard_E48d_v4 | 48 | 384 | 1800 | 32 | 225000/3000 | 8 | 24000 |
| Standard_E64d_v4 | 64 | 504 | 2400 | 32 | 300000/4000 | 8 | 30000 |
Edsv4-series sizes run on the 3rd Generation Intel® Xeon® Platinum 8370C (Ice
| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max temp storage throughput: IOPS/MBps<sup>*</sup> | Max uncached disk throughput: IOPS/MBps | Max burst uncached disk throughput: IOPS/MBps<sup>1</sup> | Max NICs|Max network bandwidth (Mbps) |
|||||||||||
-| Standard_E2ds_v4<sup>4</sup> | 2 | 16 | 75 | 4 | 9000/125 | 3200/48 | 4000/200 | 2 | 1000 |
-| Standard_E4ds_v4 | 4 | 32 | 150 | 8 | 19000/250 | 6400/96 | 8000/200 | 2 | 2000 |
-| Standard_E8ds_v4 | 8 | 64 | 300 | 16 | 38000/500 | 12800/192 | 16000/400 | 4 | 4000 |
-| Standard_E16ds_v4 | 16 | 128 | 600 | 32 | 75000/1000 | 25600/384 | 32000/800 | 8 | 8000 |
-| Standard_E20ds_v4 | 20 | 160 | 750 | 32 | 94000/1250 | 32000/480 | 40000/1000 | 8 | 10000 |
+| Standard_E2ds_v4<sup>4</sup> | 2 | 16 | 75 | 4 | 9000/125 | 3200/48 | 4000/200 | 2 | 5000 |
+| Standard_E4ds_v4 | 4 | 32 | 150 | 8 | 19000/250 | 6400/96 | 8000/200 | 2 | 10000 |
+| Standard_E8ds_v4 | 8 | 64 | 300 | 16 | 38000/500 | 12800/192 | 16000/400 | 4 | 12500 |
+| Standard_E16ds_v4 | 16 | 128 | 600 | 32 | 75000/1000 | 25600/384 | 32000/800 | 8 | 12500 |
+| Standard_E20ds_v4 | 20 | 160 | 750 | 32 | 94000/1250 | 32000/480 | 40000/1000 | 8 | 16000 |
| Standard_E32ds_v4 | 32 | 256 | 1200 | 32 | 150000/2000 | 51200/768 | 64000/1600 | 8 | 16000 |
| Standard_E48ds_v4 | 48 | 384 | 1800 | 32 | 225000/3000 | 76800/1152 | 80000/2000 | 8 | 24000 |
| Standard_E64ds_v4 <sup>2</sup> | 64 | 504 | 2400 | 32 | 300000/4000 | 80000/1200 | 80000/2000 | 8 | 30000 |
virtual-machines Disk Encryption Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disk-encryption-linux.md
You can disable encryption using Azure PowerShell, the Azure CLI, or with a Reso
- **Disable disk encryption with Azure PowerShell:** To disable the encryption, use the [Disable-AzVMDiskEncryption](/powershell/module/az.compute/disable-azvmdiskencryption) cmdlet.

    ```azurepowershell-interactive
- Disable-AzVMDiskEncryption -ResourceGroupName "MyVirtualMachineResourceGroup" -VMName "MySecureVM" -VolumeType "all"
+ Disable-AzVMDiskEncryption -ResourceGroupName "MyVirtualMachineResourceGroup" -VMName "MySecureVM" -VolumeType "data"
    ```

- **Disable encryption with the Azure CLI:** To disable encryption, use the [az vm encryption disable](/cli/azure/vm/encryption#az-vm-encryption-disable) command.

    ```azurecli-interactive
- az vm encryption disable --name "MySecureVM" --resource-group "MyVirtualMachineResourceGroup" --volume-type "all"
+ az vm encryption disable --name "MySecureVM" --resource-group "MyVirtualMachineResourceGroup" --volume-type "data"
    ```

- **Disable encryption with a Resource Manager template:**
virtual-machines Using Cloud Init https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/using-cloud-init.md
Once the VM has been provisioned, cloud-init will run through all the modules an
For more details of cloud-init logging, refer to the [cloud-init documentation](https://cloudinit.readthedocs.io/en/latest/topics/logging.html)

## Telemetry
-cloud-init collects usage data and sends it to Microsoft to help improve our products and services. Telemetry is only collected during the provisioning process (first boot of the VM). The data collected helps us investigate provisioning failures and monitor performance and reliability. Data collected does not include any personally identifiable information. Read our [privacy statement](http://go.microsoft.com/fwlink/?LinkId=521839) to learn more. Some examples of telemetry being collected are (this is not an exhaustive list): OS-related information (cloud-init version, distro version, kernel version), performance metrics of essential VM provisioning actions (time to obtain DHCP lease, time to retrieve metadata necessary to configure the VM, etc.), cloud-init log, and dmesg log.
+cloud-init collects usage data and sends it to Microsoft to help improve our products and services. Telemetry is only collected during the provisioning process (first boot of the VM). The data collected helps us investigate provisioning failures and monitor performance and reliability. Data collected does not include any personally identifiable information. Read our [privacy statement](https://go.microsoft.com/fwlink/?LinkId=521839) to learn more. Some examples of telemetry being collected are (this is not an exhaustive list): OS-related information (cloud-init version, distro version, kernel version), performance metrics of essential VM provisioning actions (time to obtain DHCP lease, time to retrieve metadata necessary to configure the VM, etc.), cloud-init log, and dmesg log.
Telemetry collection is currently enabled for a majority of our marketplace images that use cloud-init. It is enabled by specifying KVP telemetry reporter for cloud-init. In a majority of Azure marketplace images, this configuration can be found in the file /etc/cloud/cloud.cfg.d/10-azure-kvp.cfg. Removing this file during image preparation will disable telemetry collection for any VM created from this image.
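Concretely, disabling collection during image preparation is a single deletion (the file path comes from the paragraph above):

```bash
# Remove the KVP telemetry reporter config so VMs created from this image don't send cloud-init telemetry.
sudo rm /etc/cloud/cloud.cfg.d/10-azure-kvp.cfg
```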
virtual-machines Redhat Rhui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/redhat/redhat-rhui.md
At the time of this writing, EUS support has ended for RHEL <= 7.4. See the "Red
* RHEL 7.6 EUS support ends May 31, 2021
* RHEL 7.7 EUS support ends August 30, 2021
+### Switch a RHEL 7.x VM to EUS (version-lock to a specific minor version)
+Use the following instructions to lock a RHEL 7.x VM to a particular minor release (run as root):
+
+>[!NOTE]
+> This only applies for RHEL 7.x versions for which EUS is available. At the time of this writing, this includes RHEL 7.2-7.7. More details are available at the [Red Hat Enterprise Linux Life Cycle](https://access.redhat.com/support/policy/updates/errata) page.
+1. Disable non-EUS repos:
+ ```bash
+ yum --disablerepo='*' remove 'rhui-azure-rhel7'
+ ```
+
+1. Add EUS repos:
+ ```bash
+ yum --config='https://rhelimage.blob.core.windows.net/repositories/rhui-microsoft-azure-rhel7-eus.config' install 'rhui-azure-rhel7-eus'
+ ```
+
+1. Lock the `releasever` variable (run as root):
+ ```bash
+ echo $(. /etc/os-release && echo $VERSION_ID) > /etc/yum/vars/releasever
+ ```
+
+ >[!NOTE]
+ > The above instruction will lock the RHEL minor release to the current minor release. Enter a specific minor release if you are looking to upgrade and lock to a later minor release that is not the latest. For example, `echo 7.5 > /etc/yum/vars/releasever` will lock your RHEL version to RHEL 7.5.
+1. Update your RHEL VM:
+ ```bash
+ sudo yum update
+ ```
+
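To confirm the lock took effect, you can inspect the pinned value and the repos now in use; this verification step isn't part of the original instructions:

```bash
# The minor release written by the lock step above.
cat /etc/yum/vars/releasever

# The EUS repos should now appear in the repo list.
yum repolist
```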
+### Switch a RHEL 8.x VM to EUS (version-lock to a specific minor version)
+Use the following instructions to lock a RHEL 8.x VM to a particular minor release (run as root):
+
+>[!NOTE]
+> This only applies for RHEL 8.x versions for which EUS is available. At the time of this writing, this includes RHEL 8.1-8.2. More details are available at the [Red Hat Enterprise Linux Life Cycle](https://access.redhat.com/support/policy/updates/errata) page.
+1. Disable non-EUS repos:
+ ```bash
+ yum --disablerepo='*' remove 'rhui-azure-rhel8'
+ ```
+
+1. Get the EUS repos config file:
+ ```bash
+ wget https://rhelimage.blob.core.windows.net/repositories/rhui-microsoft-azure-rhel8-eus.config
+ ```
+
+1. Add EUS repos:
+ ```bash
+ yum --config=rhui-microsoft-azure-rhel8-eus.config install rhui-azure-rhel8-eus
+ ```
+
+1. Lock the `releasever` variable (run as root):
+ ```bash
+ echo $(. /etc/os-release && echo $VERSION_ID) > /etc/yum/vars/releasever
+ ```
+
+ >[!NOTE]
+ > The above instruction will lock the RHEL minor release to the current minor release. Enter a specific minor release if you are looking to upgrade and lock to a later minor release that is not the latest. For example, `echo 8.1 > /etc/yum/vars/releasever` will lock your RHEL version to RHEL 8.1.
+ >[!NOTE]
+ > If you run into permission issues when writing to the releasever file, you can edit it using 'nano /etc/yum/vars/releasever', add the image version details, and save ('Ctrl+o', then press Enter, then 'Ctrl+x').
+1. Update your RHEL VM:
+ ```bash
+ sudo yum update
+ ```
+
+### Switch a RHEL 7.x VM back to non-EUS (remove a version lock)
+Run the following as root:
+1. Remove the `releasever` file:
+ ```bash
+ rm /etc/yum/vars/releasever
+ ```
+
+1. Disable EUS repos:
+ ```bash
+ yum --disablerepo='*' remove 'rhui-azure-rhel7-eus'
+ ```
+
+1. Add non-EUS repos:
+ ```bash
+ yum --config='https://rhelimage.blob.core.windows.net/repositories/rhui-microsoft-azure-rhel7.config' install 'rhui-azure-rhel7'
+ ```
+
+1. Update your RHEL VM:
+ ```bash
+ sudo yum update
+ ```
+
+### Switch a RHEL 8.x VM back to non-EUS (remove a version lock)
+Run the following as root:
+1. Remove the `releasever` file:
+ ```bash
+ rm /etc/yum/vars/releasever
+ ```
+
+1. Disable EUS repos:
+ ```bash
+ yum --disablerepo='*' remove 'rhui-azure-rhel8-eus'
+ ```
+
+1. Get the regular repos config file:
+ ```bash
+ wget https://rhelimage.blob.core.windows.net/repositories/rhui-microsoft-azure-rhel8.config
+ ```
+
+1. Add non-EUS repos:
+ ```bash
+ yum --config=rhui-microsoft-azure-rhel8.config install rhui-azure-rhel8
+ ```
+
+1. Update your RHEL VM:
+ ```bash
+ sudo yum update
+ ```
+
## The IPs for the RHUI content delivery servers

RHUI is available in all regions where RHEL on-demand images are available. It currently includes all public regions listed on the [Azure status dashboard](https://azure.microsoft.com/status/) page, Azure US Government, and Microsoft Azure Germany regions.
virtual-network Create Custom Ip Address Prefix Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-custom-ip-address-prefix-cli.md
az network custom-ip prefix update \
   --state commission
```
-As before, the operation is asynchronous. Use [az network custom-ip prefix show](/cli/azure/network/custom-ip/prefix#az_network_custom_ip_prefix_show) to retrieve the status. The **CommissionedState** field will initially show the prefix as **Commissioning**, followed in the future by **Commissioned**. The advertisement rollout isn't binary and the range will be partially advertised while still in **Commissioning**.
+As before, the operation is asynchronous. Use [az network custom-ip prefix show](/cli/azure/network/custom-ip/prefix#az-network-custom-ip-prefix-show) to retrieve the status. The **CommissionedState** field will initially show the prefix as **Commissioning**, followed in the future by **Commissioned**. The advertisement rollout isn't binary and the range will be partially advertised while still in **Commissioning**.
> [!NOTE]
> The estimated time to fully complete the commissioning process is 3-4 hours.
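While you wait, you can poll the state; a small sketch (the prefix and resource group names are hypothetical):

```azurecli-interactive
az network custom-ip prefix show \
    --name myCustomIpPrefix \
    --resource-group myResourceGroup \
    --query commissionedState
```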
virtual-network Create Public Ip Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-public-ip-cli.md
ms.devlang: azurecli
# Quickstart: Create a public IP address using the Azure CLI
-In this quickstart, you'll learn how to create an Azure public IP address. Public IP addresses in Azure are used for public connections to Azure resources. Public IP addresses are available in two SKUs, basic, and standard. Two tiers of public IP addresses are available, regional, and global. The routing preference of a public IP address is set when created. Internet routing and Microsoft Network routing are the available choices.
+In this quickstart, you'll learn how to create an Azure public IP address. Public IP addresses in Azure are used for public connections to Azure resources. Public IP addresses are available in two SKUs: basic and standard. Two tiers of public IP addresses are available: regional and global. The routing preference of a public IP address is set when created. Internet routing and Microsoft Network routing are the available choices.
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
virtual-network Create Public Ip Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-public-ip-portal.md
# Quickstart: Create a public IP address using the Azure portal
-In this quickstart, you'll learn how to create an Azure public IP address. Public IP addresses in Azure are used for public connections to Azure resources. Public IP addresses are available in two SKUs, basic, and standard. Two tiers of public IP addresses are available, regional, and global. The routing preference of a public IP address is set when created. Internet routing and Microsoft Network routing are the available choices.
+In this quickstart, you'll learn how to create an Azure public IP address. Public IP addresses in Azure are used for public connections to Azure resources. Public IP addresses are available in two SKUs: basic and standard. Two tiers of public IP addresses are available: regional and global. The routing preference of a public IP address is set when created. Internet routing and Microsoft Network routing are the available choices.
## Prerequisites
virtual-network Create Public Ip Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-public-ip-powershell.md
# Quickstart: Create a public IP address using PowerShell
-In this quickstart, you'll learn how to create an Azure public IP address. Public IP addresses in Azure are used for public connections to Azure resources. Public IP addresses are available in two SKUs, basic, and standard. Two tiers of public IP addresses are available, regional, and global. The routing preference of a public IP address is set when created. Internet routing and Microsoft Network routing are the available choices.
+In this quickstart, you'll learn how to create an Azure public IP address. Public IP addresses in Azure are used for public connections to Azure resources. Public IP addresses are available in two SKUs: basic and standard. Two tiers of public IP addresses are available: regional and global. The routing preference of a public IP address is set when created. Internet routing and Microsoft Network routing are the available choices.
## Prerequisites
virtual-network Manage Custom Ip Address Prefix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/manage-custom-ip-address-prefix.md
Use the following CLI and PowerShell commands to create public IP prefixes with
|Tool|Command|
|||
-|CLI|[az network public-ip prefix create](/cli/azure/network/public-ip/prefix#az_network_public_ip_prefix_create)|
+|CLI|[az network public-ip prefix create](/cli/azure/network/public-ip/prefix#az-network-public-ip-prefix-create)|
|PowerShell|[New-AzPublicIpPrefix](/powershell/module/az.network/new-azpublicipprefix)|

> [!NOTE]
To view a custom IP prefix, the following commands can be used in Azure CLI and
|Tool|Command|
|||
-|CLI|[az network custom-ip prefix list](/cli/azure/network/public-ip/prefix#az_network_custom_ip_prefix_list) to list custom IP prefixes<br>[az network custom-ip prefix show](/cli/azure/network/public-ip/prefix#az_network_custom_ip_prefix_show) to show settings and any derived public IP prefixes<br>
+|CLI|[az network custom-ip prefix list](/cli/azure/network/public-ip/prefix#az-network-custom-ip-prefix-list) to list custom IP prefixes<br>[az network custom-ip prefix show](/cli/azure/network/public-ip/prefix#az-network-custom-ip-prefix-show) to show settings and any derived public IP prefixes<br>
|PowerShell|[Get-AzCustomIpPrefix](/powershell/module/az.network/get-azcustomipprefix) to retrieve a custom IP prefix object and view its settings and any derived public IP prefixes|

## Decommission a custom IP prefix
virtual-network Nat Gateway Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/nat-gateway-resource.md
Design recommendations for configuring timers:
- To upgrade a basic load balancer to standard, see [Upgrade Azure Public Load Balancer](../../load-balancer/upgrade-basic-standard.md)
- - To upgrade a basic public IP address too standard, see [Upgrade a public IP address](../ip-services/public-ip-upgrade-portal.md)
+ - To upgrade a basic public IP address to standard, see [Upgrade a public IP address](../ip-services/public-ip-upgrade-portal.md)
- NAT gateway does not support ICMP
Design recommendations for configuring timers:
- Learn about [metrics and alerts for NAT gateway](nat-metrics.md).
-- Learn how to [troubleshoot NAT gateway](troubleshoot-nat.md).
+- Learn how to [troubleshoot NAT gateway](troubleshoot-nat.md).
virtual-wan About Virtual Hub Routing Preference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/about-virtual-hub-routing-preference.md
A Virtual WAN virtual hub connects to virtual networks (VNets) and on-premises using connectivity gateways, such as site-to-site (S2S) VPN gateway, ExpressRoute (ER) gateway, point-to-site (P2S) gateway, and SD-WAN Network Virtual Appliance (NVA). The virtual hub router provides central route management and enables advanced routing scenarios using route propagation, route association, and custom route tables.
-The virtual hub router takes routing decisions using built-in route selection algorithm. To influence routing decisions in virtual hub router towards on-premises, we now have a new Virtual WAN hub feature called **Hub routing preference** (HRP). When a virtual hub router learns multiple routes across S2S VPN, ER and SD-WAN NVA connections for a destination route-prefix in on-premises, the virtual hub router's route selection algorithm will adapt based on the hub routing preference configuration and selects the best routes. You can now configure **Hub routing preference** using the [Azure Preview portal](https://portal.azure.com/?feature.customRouterAsn=true&feature.virtualWanRoutingPreference=true#home).
+The virtual hub router makes routing decisions using a built-in route selection algorithm. To influence routing decisions in the virtual hub router towards on-premises, we now have a new Virtual WAN hub feature called **Hub routing preference** (HRP). When a virtual hub router learns multiple routes across S2S VPN, ER, and SD-WAN NVA connections for a destination route-prefix on-premises, the virtual hub router's route selection algorithm adapts based on the hub routing preference configuration and selects the best routes. You can now configure **Hub routing preference** using the Azure Preview portal. For steps, see [How to configure virtual hub routing preference](howto-virtual-hub-routing-preference.md).
> [!IMPORTANT]
> The Virtual WAN feature **Hub routing preference** is currently in public preview. If you are interested in trying this feature, please follow the documentation below.
This section explains the route selection algorithm in a virtual hub along with
* When there are multiple virtual hubs in a Virtual WAN scenario, a virtual hub selects the best routes using the route selection algorithm described above, and then advertises them to the other virtual hubs in the virtual WAN.
-* **Limitation:** If a route-prefix is reachable via ER or VPN connections, and via virtual hub SD-WAN NVA, then the latter route is ignored by the route-selection algorithm. Therefore, the flows to prefixes reachable only via virtual hub SD-WAN NVA will ever take the route through the NVA. This is a limitation during the Preview phase of the **Hub routing preference** feature.
+* **Limitation:** If a route-prefix is reachable via ER or VPN connections, and via virtual hub SD-WAN NVA, then the latter route is ignored by the route-selection algorithm. Therefore, the flows to prefixes reachable only via virtual hub SD-WAN NVA will take the route through the NVA. This is a limitation during the Preview phase of the **Hub routing preference** feature.
## Routing scenarios
virtual-wan Howto Virtual Hub Routing Preference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/howto-virtual-hub-routing-preference.md
This public preview is provided without a service-level agreement and shouldn't
## Configure
-You can configure a new virtual hub to include the virtual hub routing preference setting by using the [Azure Preview portal](https://portal.azure.com/?feature.customRouterAsn=true&feature.virtualWanRoutingPreference=true#home). Follow the steps in the [Tutorial: Create a site-to-site connection](virtual-wan-site-to-site-portal.md) article.
+You can configure a new virtual hub to include the virtual hub routing preference setting by using the [Azure Preview portal](https://portal.azure.com/?feature.virtualWanRoutingPreference=true#home). Follow the steps in the [Tutorial: Create a site-to-site connection](virtual-wan-site-to-site-portal.md) article.
To configure virtual hub routing preference for an existing virtual hub, use the following steps.
-1. Open the [Azure Preview portal](https://portal.azure.com/?feature.customRouterAsn=true&feature.virtualWanRoutingPreference=true#home). You can't use the regular Azure portal yet for this feature.
+1. Open the [Azure Preview portal](https://portal.azure.com/?feature.virtualWanRoutingPreference=true#home). You can't use the regular Azure portal yet for this feature.
1. Go to your virtual WAN. In the left pane, under the **Connectivity** section, click **Hubs** to view the list of hubs. Select **… > Edit virtual hub** to open the **Edit virtual hub** dialog box.
To configure virtual hub routing preference for an existing virtual hub, use the
## Next steps
-To learn more about virtual hub routing preference, see [About virtual hub routing preference](about-virtual-hub-routing-preference.md).
+To learn more about virtual hub routing preference, see [About virtual hub routing preference](about-virtual-hub-routing-preference.md).
virtual-wan Hub Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/hub-settings.md
For pricing information, see [Azure Virtual WAN pricing](https://azure.microsoft
A Virtual WAN virtual hub connects to virtual networks (VNets) and on-premises sites using connectivity gateways, such as site-to-site (S2S) VPN gateway, ExpressRoute (ER) gateway, point-to-site (P2S) gateway, and SD-WAN Network Virtual Appliance (NVA). The virtual hub router provides central route management and enables advanced routing scenarios using route propagation, route association, and custom route tables. When a virtual hub router makes routing decisions, it considers the configuration of such capabilities.
-Previously, there wasn't a configuration option for you to use to influence routing decisions within virtual hub router for prefixes in on-premises sites. These decisions relied on the virtual hub router's built-in route selection algorithm and the options available within gateways to manage routes before they reach the virtual hub router. To influence routing decisions in virtual hub router for prefixes in on-premises sites, you can now adjust the **Hub routing preference** using the [Azure Preview portal](https://portal.azure.com/?feature.customRouterAsn=true&feature.virtualWanRoutingPreference=true#home).
+Previously, there wasn't a configuration option for you to use to influence routing decisions within virtual hub router for prefixes in on-premises sites. These decisions relied on the virtual hub router's built-in route selection algorithm and the options available within gateways to manage routes before they reach the virtual hub router. To influence routing decisions in virtual hub router for prefixes in on-premises sites, you can now adjust the **Hub routing preference**.
For more information, see [About virtual hub routing preference](about-virtual-hub-routing-preference.md).
virtual-wan Virtual Wan About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-about.md
The following features are currently in gated public preview. If, after working
| Routing intent and policies enabling Inter-hub security | This feature allows you to configure internet-bound, private, or inter-hub traffic flow through the Azure Firewall. For more information, see [Routing intent and policies](../virtual-wan/how-to-routing-policies.md).| previewinterhub@microsoft.com |
| Hub-to-hub over ER preview link | This feature allows traffic between 2 hubs to traverse through the Azure Virtual WAN router in each hub and uses a hub-to-hub path instead of the ExpressRoute path (which traverses through the Microsoft edge routers/MSEE). For more information, see [Hub-to-hub over ER preview link](virtual-wan-faq.md#expressroute-bow-tie).| previewpreferh2h@microsoft.com |
| BGP peering with a virtual hub | This feature provides the ability for the virtual hub to pair with and directly exchange routing information through Border Gateway Protocol (BGP) routing protocol. For more information, see [BGP peering with a virtual hub](create-bgp-peering-hub-portal.md) and [How to peer BGP with a virtual hub](scenario-bgp-peering-hub.md).| previewbgpwithvhub@microsoft.com |
-| Virtual hub routing preference | This features allows you to influence routing decisions for the virtual hub router. For more information, see [Virtual hub routing preference](about-virtual-hub-routing-preference.md). | Coming soon |
+| Virtual hub routing preference | This feature allows you to influence routing decisions for the virtual hub router. For more information, see [Virtual hub routing preference](about-virtual-hub-routing-preference.md). | preview-vwan-hrp@microsoft.com |
## <a name="faq"></a>FAQ
vpn-gateway Vpn Gateway About Vpn Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-about-vpn-devices.md
Previously updated : 12/10/2021 Last updated : 06/10/2022
In partnership with device vendors, we have validated a set of standard VPN devi
To help configure your VPN device, refer to the links that correspond to the appropriate device family. The links to configuration instructions are provided on a best-effort basis. For VPN device support, contact your device manufacturer.

|**Vendor** |**Device family** |**Minimum OS version** |**PolicyBased configuration instructions** |**RouteBased configuration instructions** |
-| | | | | |
+| | | | | |
| A10 Networks, Inc. |Thunder CFW |ACOS 4.1.1 |Not compatible |[Configuration guide](https://www.a10networks.com/wp-content/uploads/A10-DG-16161-EN.pdf)|
| AhnLab | TrusGuard | TG 2.7.6<br>TG 3.5.x | Not tested | [Configuration guide](https://help.ahnlab.com/trusguard/cloud/azure/install/en_us/start.htm) |
| Allied Telesis |AR Series VPN Routers |AR-Series 5.4.7+ | [Configuration guide](https://www.alliedtelesis.com/documents/how-to/configure/site-to-site-vpn-between-azure-and-ar-series-router) |[Configuration guide](https://www.alliedtelesis.com/documents/how-to/configure/site-to-site-vpn-between-azure-and-ar-series-router)|
To help configure your VPN device, refer to the links that correspond to the app
| Citrix |NetScaler MPX, SDX, VPX |10.1 and above |[Configuration guide](https://docs.citrix.com/en-us/netscaler/11-1/system/cloudbridge-connector-introduction/cloudbridge-connector-azure.html) |Not compatible |
| F5 |BIG-IP series |12.0 |[Configuration guide](https://community.f5.com/t5/technical-articles/connecting-to-windows-azure-with-the-big-ip/ta-p/282476) |[Configuration guide](https://community.f5.com/t5/technical-articles/big-ip-to-azure-dynamic-ipsec-tunneling/ta-p/282665) |
| Fortinet |FortiGate |FortiOS 5.6 | Not tested |[Configuration guide](https://docs.fortinet.com/document/fortigate/5.6.0/cookbook/255100/ipsec-vpn-to-azure) |
+| Fujitsu | Si-R G series | V04: V04.12<br>V20: V20.14 | [Configuration guide](https://www.fujitsu.com/jp/products/network/router/sir/example/#cloud00) | [Configuration guide](https://www.fujitsu.com/jp/products/network/router/sir/example/#cloud00) |
| Hillstone Networks | Next-Gen Firewalls (NGFW) | 5.5R7 | Not tested | [Configuration guide](https://www.hillstonenet.com/wp-content/uploads/How-to-setup-Site-to-Site-VPN-between-Microsoft-Azure-and-an-on-premise-Hillstone-Networks-Security-Gateway.pdf) |
| Internet Initiative Japan (IIJ) |SEIL Series |SEIL/X 4.60<br>SEIL/B1 4.60<br>SEIL/x86 3.20 |[Configuration guide](https://www.iij.ad.jp/biz/seil/ConfigAzureSEILVPN.pdf) |Not compatible |
| Juniper |SRX |PolicyBased: JunOS 10.2<br>Routebased: JunOS 11.4 |Supported |[Configuration script](vpn-gateway-download-vpndevicescript.md) |
After you download the provided VPN device configuration sample, you'll need t
### To edit a sample:

1. Open the sample using Notepad.
-2. Search and replace all <*text*> strings with the values that pertain to your environment. Be sure to include < and >. When a name is specified, the name you select should be unique. If a command does not work, consult your device manufacturer documentation.
+2. Search and replace all <*text*> strings with the values that pertain to your environment. Be sure to include < and >. When a name is specified, the name you select should be unique. If a command doesn't work, consult your device manufacturer documentation.
| **Sample text** | **Change to** |
| | |
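If a sample has many placeholders, the search-and-replace in step 2 can also be scripted. A minimal PowerShell sketch follows; the file name and the `<...>` tokens below are hypothetical examples, not values from the actual samples.

```azurepowershell
# Hypothetical file name and tokens; replaces each <text> placeholder, angle brackets included
(Get-Content -Path .\vpn-device-sample.cfg) `
    -replace '<AzureGatewayPublicIP>', '203.0.113.10' `
    -replace '<PreSharedKey>', 'MySharedKey123' |
    Set-Content -Path .\vpn-device-edited.cfg
```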
After you download the provided VPN device configuration sample, you'll need t
The tables below contain the combinations of algorithms and parameters Azure VPN gateways use in default configuration (**Default policies**). For route-based VPN gateways created using the Azure Resource Management deployment model, you can specify a custom policy on each individual connection. Refer to [Configure IPsec/IKE policy](vpn-gateway-ipsecikepolicy-rm-powershell.md) for detailed instructions.
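For reference, a minimal custom-policy sketch with Azure PowerShell is shown below. The connection and resource group names are hypothetical, and the algorithm choices are just one example; see the linked article for the full procedure.

```azurepowershell
# Hypothetical names; defines a custom IPsec/IKE policy and applies it to one connection
$ipsecPolicy = New-AzIpsecPolicy -IkeEncryption AES256 -IkeIntegrity SHA256 -DhGroup DHGroup14 `
    -IpsecEncryption AES256 -IpsecIntegrity SHA256 -PfsGroup PFS2048 `
    -SALifeTimeSeconds 27000 -SADataSizeKilobytes 102400000

$connection = Get-AzVirtualNetworkGatewayConnection -Name "MyConnection" -ResourceGroupName "MyRG"
Set-AzVirtualNetworkGatewayConnection -VirtualNetworkGatewayConnection $connection -IpsecPolicies $ipsecPolicy
```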
-Additionally, you must clamp TCP **MSS** at **1350**. Or if your VPN devices do not support MSS clamping, you can alternatively set the **MTU** on the tunnel interface to **1400** bytes instead.
+Additionally, you must clamp TCP **MSS** at **1350**. If your VPN devices don't support MSS clamping, you can instead set the **MTU** on the tunnel interface to **1400** bytes.
In the following tables:
The following table lists IPsec SA (IKE Quick Mode) Offers. Offers are listed th
| 25|AES128 |SHA256 |14 |
| 26|3DES |SHA1 |14 |
-* You can specify IPsec ESP NULL encryption with RouteBased and HighPerformance VPN gateways. Null based encryption does not provide protection to data in transit, and should only be used when maximum throughput and minimum latency is required. Clients may choose to use this in VNet-to-VNet communication scenarios, or when encryption is being applied elsewhere in the solution.
+* You can specify IPsec ESP NULL encryption with RouteBased and HighPerformance VPN gateways. Null-based encryption doesn't provide protection for data in transit, and should only be used when maximum throughput and minimum latency are required. Clients may choose to use this in VNet-to-VNet communication scenarios, or when encryption is being applied elsewhere in the solution.
* For cross-premises connectivity through the Internet, use the default Azure VPN gateway settings with encryption and hashing algorithms listed in the tables above to ensure security of your critical communication.

## <a name="known"></a>Known device compatibility issues
The following table lists IPsec SA (IKE Quick Mode) Offers. Offers are listed th
### Feb. 16, 2017
-**Palo Alto Networks devices with version prior to 7.1.4** for Azure route-based VPN: If you are using VPN devices from Palo Alto Networks with PAN-OS version prior to 7.1.4 and are experiencing connectivity issues to Azure route-based VPN gateways, perform the following steps:
+**Palo Alto Networks devices with version prior to 7.1.4** for Azure route-based VPN: If you're using VPN devices from Palo Alto Networks with PAN-OS version prior to 7.1.4 and are experiencing connectivity issues to Azure route-based VPN gateways, perform the following steps:
1. Check the firmware version of your Palo Alto Networks device. If your PAN-OS version is older than 7.1.4, upgrade to 7.1.4.
2. On the Palo Alto Networks device, change the Phase 2 SA (or Quick Mode SA) lifetime to 28,800 seconds (8 hours) when connecting to the Azure VPN gateway.
-3. If you are still experiencing connectivity issues, open a support request from the Azure portal.
+3. If you're still experiencing connectivity issues, open a support request from the Azure portal.
web-application-firewall Create Waf Policy Ag https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/create-waf-policy-ag.md
Previously updated : 02/08/2020 Last updated : 06/10/2022
Associating a WAF policy with listeners allows for multiple sites behind a singl
You can make as many policies as you want. Once you create a policy, it must be associated to an Application Gateway to go into effect, but it can be associated with any combination of Application Gateways and listeners.
-If your Application Gateway has a policy applied, and then you apply a different policy to a listener on that Application Gateway, the listener's policy will take effect, but just for the listener(s) that they're assigned to. The Application Gateway policy still applies to all other listeners that don't have a specific policy assigned to them.
+If your Application Gateway has an associated policy, and then you associate a different policy to a listener on that Application Gateway, the listener's policy will take effect, but just for the listener(s) it's assigned to. The Application Gateway policy still applies to all other listeners that don't have a specific policy assigned to them.
> [!NOTE] > Once a Firewall Policy is associated to a WAF, there must always be a policy associated to that WAF. You may overwrite that policy, but disassociating a policy from the WAF entirely isn't supported.
-All new Web Application Firewall's WAF settings (custom rules, managed rulset configurations, exclusions, etc.) live inside of a WAF Policy. If you have an existing WAF, these settings may still exist in your WAF config. For steps on how to move to the new WAF Policy, see [Migrate your WAF Config to a WAF Policy](#migrate) later in this article.
+All new Web Application Firewall's WAF settings (custom rules, managed ruleset configurations, exclusions, etc.) live inside of a WAF Policy. If you have an existing WAF, these settings may still exist in your WAF config. For steps on how to move to the new WAF Policy, see [Migrate your WAF Config to a WAF Policy](#migrate) later in this article.
## Create a policy

First, create a basic WAF policy with a managed Default Rule Set (DRS) using the Azure portal. A PowerShell equivalent is sketched after these steps.

1. On the upper left side of the portal, select **Create a resource**. Search for **WAF**, select **Web Application Firewall**, then select **Create**.
-2. On **Create a WAF policy** page, **Basics** tab, enter or select the following information, accept the defaults for the remaining settings, and then select **Review + create**:
+2. On **Create a WAF policy** page, **Basics** tab, enter or select the following information and accept the defaults for the remaining settings:
|Setting |Value |
|||
First, create a basic WAF policy with a managed Default Rule Set (DRS) using the
|Subscription |Select your subscription name|
|Resource group |Select your resource group|
|Policy name |Type a unique name for your WAF policy.|
-3. On the **Association** tab, enter one of the following settings, then select **Add**:
+3. On the **Association** tab, select **Add association**, then select one of the following settings:
|Setting |Value |
|||
- |Associate Application Gateway |Select your Application Gateway profile name.|
- |Associate Listeners |Select the name of your Application Gateway Listener, then select **Add**.|
+ |Application Gateway |Select the application gateway, and then select **Add**.|
+ |HTTP Listener |Select the application gateway, select the listeners, then select **Add**.|
+ |Route Path|Select the application gateway, select the listener, select the routing rule, and then select **Add**.|
> [!NOTE]
> If you assign a policy to your Application Gateway (or listener) that already has a policy in place, the original policy is overwritten and replaced by the new policy.
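If you script your deployments, a rough Azure PowerShell equivalent of the portal steps above is sketched here. The policy name, resource group, and location are hypothetical placeholders.

```azurepowershell
# Hypothetical names; creates a basic WAF policy that can then be
# associated with an application gateway, listener, or route path
New-AzApplicationGatewayFirewallPolicy -Name "MyWafPolicy" `
    -ResourceGroupName "MyRG" `
    -Location "eastus"
```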
To create a custom rule, select **Add custom rule** under the **Custom rules** t
If you have an existing WAF, you may have noticed some changes in the portal. First you need to identify what kind of Policy you've enabled on your WAF. There are three potential states:
-- No WAF Policy
-- Custom Rules only Policy
-- WAF Policy
+1. No WAF Policy
+2. Custom Rules only Policy
+3. WAF Policy
You can tell which state your WAF is in by looking at it in the portal. If the WAF settings are visible and can be changed from within the Application Gateway view, your WAF is in state 1.
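You can also check from Azure PowerShell by inspecting the gateway object; a minimal sketch follows, with hypothetical gateway and resource group names.

```azurepowershell
# Hypothetical names; a populated WebApplicationFirewallConfiguration indicates a legacy
# WAF config (state 1), while FirewallPolicy references an associated policy resource
$appgw = Get-AzApplicationGateway -Name "MyAppGw" -ResourceGroupName "MyRG"
$appgw.WebApplicationFirewallConfiguration
$appgw.FirewallPolicy
```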
If you have a Custom Rules only WAF Policy, then you may want to move to the new
Edits to the custom rule only WAF policy are disabled. To edit any WAF settings, such as disabling rules or adding exclusions, you have to migrate to a new top-level firewall policy resource.
-To do so, create a *Web Application Firewall Policy* and associate it to your Application Gateway(s) and listener(s) of choice. This new Policy must be exactly the same as the current WAF config, meaning every custom rule, exclusion, disabled rule, etc. must be copied into the new Policy you are creating. Once you have a Policy associated with your Application Gateway, then you can continue to make changes to your WAF rules and settings. You can also do this with Azure PowerShell. For more information, see [Associate a WAF policy with an existing Application Gateway](associate-waf-policy-existing-gateway.md).
+To do so, create a *Web Application Firewall Policy* and associate it to your Application Gateway(s) and listener(s) of choice. This new Policy must be exactly the same as the current WAF config, meaning every custom rule, exclusion, disabled rule, etc. must be copied into the new Policy you're creating. Once you have a Policy associated with your Application Gateway, then you can continue to make changes to your WAF rules and settings. You can also do this with Azure PowerShell. For more information, see [Associate a WAF policy with an existing Application Gateway](associate-waf-policy-existing-gateway.md).
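For the Azure PowerShell path mentioned above, the association itself can be sketched as follows; the resource names are hypothetical, and the linked article covers the full procedure.

```azurepowershell
# Hypothetical names; attaches an existing WAF policy at the Application Gateway level
$appgw  = Get-AzApplicationGateway -Name "MyAppGw" -ResourceGroupName "MyRG"
$policy = Get-AzApplicationGatewayFirewallPolicy -Name "MyWafPolicy" -ResourceGroupName "MyRG"
$appgw.FirewallPolicy = $policy
Set-AzApplicationGateway -ApplicationGateway $appgw
```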
Optionally, you can use a migration script to migrate to a WAF policy. For more information, see [Migrate Web Application Firewall policies using Azure PowerShell](migrate-policy.md).
$appgw = Get-AzApplicationGateway -Name <your Application Gateway name> -Resourc
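# Force the policy association for this gateway before proceeding
# with the association steps referenced below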
$appgw.ForceFirewallPolicyAssociation = $true
```
-Then procees with the steps to associate a WAF Policy to your application gateway. For more information, see [Associate a WAF Policy with an existing Application Gateway.](associate-waf-policy-existing-gateway.md)
+Then proceed with the steps to associate a WAF Policy to your application gateway. For more information, see [Associate a WAF Policy with an existing Application Gateway](associate-waf-policy-existing-gateway.md).
## Next steps
web-application-firewall Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/overview.md
description: This article provides an overview of Azure Web Application Firewall
Previously updated : 05/11/2021 Last updated : 06/10/2022