Updates from: 08/04/2022 01:08:53
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Json Transformations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/json-transformations.md
The following example generates a JSON string based on the claim value of "email
<InputClaims>
  <InputClaim ClaimTypeReferenceId="email" TransformationClaimType="personalizations.0.to.0.email" />
  <InputClaim ClaimTypeReferenceId="otp" TransformationClaimType="personalizations.0.dynamic_template_data.otp" />
+  <InputClaim ClaimTypeReferenceId="email" TransformationClaimType="personalizations.0.dynamic_template_data.verify-email" />
</InputClaims>
<InputParameters>
  <InputParameter Id="template_id" DataType="string" Value="d-4c56ffb40fa648b1aa6822283df94f60"/>
Output claim:
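The dotted `TransformationClaimType` paths in the example above place each claim value into a nested JSON structure, with numeric segments indexing into arrays. As an illustrative sketch of that path semantics only (the helper and sample values here are hypothetical, not the B2C implementation):

```python
import json

def set_path(doc, dotted_path, value):
    """Expand a dotted path like 'personalizations.0.to.0.email' into
    nested dicts/lists, mimicking the transformation's path semantics."""
    keys = dotted_path.split(".")
    node = doc
    for i, key in enumerate(keys[:-1]):
        next_is_index = keys[i + 1].isdigit()
        if key.isdigit():
            idx = int(key)
            while len(node) <= idx:           # grow the list as needed
                node.append([] if next_is_index else {})
            node = node[idx]
        else:
            if key not in node:
                node[key] = [] if next_is_index else {}
            node = node[key]
    last = keys[-1]
    if last.isdigit():
        idx = int(last)
        while len(node) <= idx:
            node.append(None)
        node[idx] = value
    else:
        node[last] = value
    return doc

# Hypothetical claim values, mirroring the paths in the policy snippet above
doc = {}
set_path(doc, "personalizations.0.to.0.email", "someone@example.com")
set_path(doc, "personalizations.0.dynamic_template_data.otp", "123456")
print(json.dumps(doc, indent=2))
```

Running this yields a nested `personalizations` array of the shape the transformation produces, which is the structure SendGrid-style dynamic templates expect.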
## Next steps
-- Find more [claims transformation samples](https://github.com/azure-ad-b2c/unit-tests/tree/main/claims-transformation/json) on the Azure AD B2C community GitHub repo
+- Find more [claims transformation samples](https://github.com/azure-ad-b2c/unit-tests/tree/main/claims-transformation/json) on the Azure AD B2C community GitHub repo
active-directory Active Directory Certificate Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-certificate-credentials.md
To compute the assertion, you can use one of the many JWT libraries in the langu
Claim type | Value | Description
- | - | -
-`aud` | `https://login.microsoftonline.com/{tenantId}/V2.0/token` | The "aud" (audience) claim identifies the recipients that the JWT is intended for (here Azure AD) See [RFC 7519, Section 4.1.3](https://tools.ietf.org/html/rfc7519#section-4.1.3). In this case, that recipient is the login server (login.microsoftonline.com).
+`aud` | `https://login.microsoftonline.com/{tenantId}/oauth2/V2.0/token` | The "aud" (audience) claim identifies the recipients that the JWT is intended for (here Azure AD) See [RFC 7519, Section 4.1.3](https://tools.ietf.org/html/rfc7519#section-4.1.3). In this case, that recipient is the login server (login.microsoftonline.com).
`exp` | 1601519414 | The "exp" (expiration time) claim identifies the expiration time on or after which the JWT MUST NOT be accepted for processing. See [RFC 7519, Section 4.1.4](https://tools.ietf.org/html/rfc7519#section-4.1.4). This allows the assertion to be used until then, so keep it short - 5-10 minutes after `nbf` at most. Azure AD does not place restrictions on the `exp` time currently.
`iss` | {ClientID} | The "iss" (issuer) claim identifies the principal that issued the JWT, in this case your client application. Use the GUID application ID.
`jti` | (a Guid) | The "jti" (JWT ID) claim provides a unique identifier for the JWT. The identifier value MUST be assigned in a manner that ensures that there is a negligible probability that the same value will be accidentally assigned to a different data object; if the application uses multiple issuers, collisions MUST be prevented among values produced by different issuers as well. The "jti" value is a case-sensitive string. [RFC 7519, Section 4.1.7](https://tools.ietf.org/html/rfc7519#section-4.1.7)
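The claim set above can be sketched as follows. This is a minimal illustration of assembling the assertion payload only; the tenant and client IDs are placeholders, and a real assertion must also be signed with your certificate's private key via a JWT library before it can be sent:

```python
import time
import uuid

tenant_id = "contoso-tenant-id"   # placeholder: your directory (tenant) ID
client_id = "contoso-client-id"   # placeholder: your application (client) ID

now = int(time.time())
claims = {
    # Audience: the v2.0 token endpoint that will receive the assertion
    "aud": f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token",
    "iss": client_id,             # issuer: the client application itself
    "sub": client_id,             # subject: same as issuer for client assertions
    "jti": str(uuid.uuid4()),     # unique ID so the assertion can't be replayed
    "nbf": now,                   # not valid before now
    "exp": now + 600,             # keep it short: 5-10 minutes after nbf
}
```

The payload would then be signed (for example with a library's RS256 signer using the certificate key) and passed as `client_assertion` on the token request.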
active-directory Reference Third Party Cookies Spas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reference-third-party-cookies-spas.md
For the Microsoft identity platform, SPAs and native clients follow similar prot
SPAs have two additional restrictions:
-- [The redirect URI must be marked as type `spa`](v2-oauth2-auth-code-flow.md#redirect-uri-setup-required-for-single-page-apps) to enable CORS on login endpoints.
+- [The redirect URI must be marked as type `spa`](v2-oauth2-auth-code-flow.md#redirect-uris-for-single-page-apps-spas) to enable CORS on login endpoints.
- Refresh tokens issued through the authorization code flow to `spa` redirect URIs have a 24-hour lifetime rather than a 90-day lifetime. :::image type="content" source="media/v2-oauth-auth-code-spa/active-directory-oauth-code-spa.svg" alt-text="Diagram showing the OAuth 2 authorization code flow between a single-page app and the security token service endpoint." border="false":::
active-directory Sample V2 Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/sample-v2-code.md
The following samples show public client desktop applications that access the Mi
> | Language/<br/>Platform | Code sample(s) <br/> on GitHub | Auth<br/> libraries | Auth flow |
> | - | -- | - | -- |
> | .NET Core | &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-dotnet-desktop-tutorial/tree/master/1-Calling-MSGraph/1-1-AzureAD) <br/> &#8226; [Call Microsoft Graph with token cache](https://github.com/Azure-Samples/ms-identity-dotnet-desktop-tutorial/tree/master/2-TokenCache) <br/> &#8226; [Call Microsoft Graph with custom web UI HTML](https://github.com/Azure-Samples/ms-identity-dotnet-desktop-tutorial/tree/master/3-CustomWebUI/3-1-CustomHTML) <br/> &#8226; [Call Microsoft Graph with custom web browser](https://github.com/Azure-Samples/ms-identity-dotnet-desktop-tutorial/tree/master/3-CustomWebUI/3-2-CustomBrowser) <br/> &#8226; [Sign in users with device code flow](https://github.com/Azure-Samples/ms-identity-dotnet-desktop-tutorial/tree/master/4-DeviceCodeFlow) | MSAL.NET | &#8226; Authorization code with PKCE <br/> &#8226; Device code |
-> | .NET | &#8226; [Call Microsoft Graph with daemon console](https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2/tree/master/1-Call-MSGraph) <br/> &#8226; [Call web API with daemon console](https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2/blob/master/2-Call-OwnApi/README.md) | MSAL.NET | Authorization code with PKCE |
> | .NET | [Invoke protected API with integrated Windows authentication](https://github.com/azure-samples/active-directory-dotnet-iwa-v2) | MSAL.NET | Integrated Windows authentication |
> | Java | [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/2.%20Client-Side%20Scenarios/Integrated-Windows-Auth-Flow) | MSAL Java | Integrated Windows authentication |
> | Node.js | [Sign in users](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-desktop) | MSAL Node | Authorization code with PKCE |
active-directory V2 Oauth2 Auth Code Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-oauth2-auth-code-flow.md
Title: Microsoft identity platform and OAuth 2.0 authorization code flow
-description: Build web applications using the Microsoft identity platform implementation of the OAuth 2.0 authentication protocol.
-
+description: Protocol reference for the Microsoft identity platform's implementation of the OAuth 2.0 authorization code grant
- Previously updated : 02/02/2022
+ Last updated : 07/29/2022
# Microsoft identity platform and OAuth 2.0 authorization code flow
-The OAuth 2.0 authorization code grant can be used in apps that are installed on a device to gain access to protected resources, such as web APIs. Using the Microsoft identity platform implementation of OAuth 2.0 and Open ID Connect (OIDC), you can add sign in and API access to your mobile and desktop apps.
+The OAuth 2.0 authorization code grant type, or _auth code flow_, enables a client application to obtain authorized access to protected resources like web APIs. The auth code flow requires a user-agent that supports redirection from the authorization server (the Microsoft identity platform) back to your application. For example, a web browser, desktop, or mobile application operated by a user to sign in to your app and access their data.
-This article describes how to program directly against the protocol in your application using any language. When possible, we recommend you use the supported Microsoft Authentication Libraries (MSAL) to [acquire tokens and call secured web APIs](authentication-flows-app-scenarios.md#scenarios-and-supported-authentication-flows). For more information, look at [sample apps that use MSAL](sample-v2-code.md).
+This article describes low-level protocol details usually required only when manually crafting and issuing raw HTTP requests to execute the flow, which we do **not** recommend. Instead, use a [Microsoft-built and supported authentication library](reference-v2-libraries.md) to get security tokens and call protected web APIs in your apps.
-The OAuth 2.0 authorization code flow is described in [section 4.1 of the OAuth 2.0 specification](https://tools.ietf.org/html/rfc6749). With OIDC, this flow does authentication and authorization for most app types. These types include [single page apps](v2-app-types.md#single-page-apps-javascript), [web apps](v2-app-types.md#web-apps), and [natively installed apps](v2-app-types.md#mobile-and-native-apps). The flow enables apps to securely acquire an `access_token` that can be used to access resources secured by the Microsoft identity platform. Apps can refresh tokens to get other access tokens and ID tokens for the signed in user.
+## Applications that support the auth code flow
+Use the auth code flow paired with Proof Key for Code Exchange (PKCE) and OpenID Connect (OIDC) to get access tokens and ID tokens in these types of apps:
+
+- [Single-page web application (SPA)](v2-app-types.md#single-page-apps-javascript)
+- [Standard (server-based) web application](v2-app-types.md#web-apps)
+- [Desktop and mobile apps](v2-app-types.md#mobile-and-native-apps)
+
+## Protocol details
-## Protocol diagram
+The OAuth 2.0 authorization code flow is described in [section 4.1 of the OAuth 2.0 specification](https://tools.ietf.org/html/rfc6749). Apps using the OAuth 2.0 authorization code flow acquire an `access_token` to include in requests to resources protected by the Microsoft identity platform (typically APIs). Apps can also request new ID and access tokens for previously authenticated entities by using a refresh mechanism.
-This diagram provides a high-level overview of the authentication flow for an application:
+
+This diagram shows a high-level view of the authentication flow:
![Diagram shows OAuth authorization code flow. Native app and Web A P I interact by using tokens as described in this article.](./media/v2-oauth2-auth-code-flow/convergence-scenarios-native.svg)
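The PKCE pairing mentioned above works by deriving a `code_challenge` from a random `code_verifier`; the client sends the challenge on the authorization request and later proves possession of the verifier when redeeming the code. A minimal sketch of the S256 derivation per RFC 7636 (not tied to any particular library):

```python
import base64
import hashlib
import secrets

# Random high-entropy verifier (must be 43-128 unreserved characters)
code_verifier = secrets.token_urlsafe(64)

# S256 challenge: base64url(SHA-256(verifier)) with '=' padding stripped
digest = hashlib.sha256(code_verifier.encode("ascii")).digest()
code_challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

# The app sends code_challenge (with code_challenge_method=S256) on the
# /authorize request, then sends code_verifier when redeeming the
# authorization code at the /token endpoint.
```

Because only the hash travels on the front channel, an attacker who intercepts the authorization code cannot redeem it without the original verifier.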
-## Redirect URI setup required for single-page apps
+## Redirect URIs for single-page apps (SPAs)
+
+Redirect URIs for SPAs that use the auth code flow require special configuration.
-The authorization code flow for single page applications requires additional setup. Follow the instructions for [creating your single-page application](scenario-spa-app-registration.md#redirect-uri-msaljs-20-with-auth-code-flow) to correctly mark your redirect URI as enabled for Cross-Origin Resource Sharing (CORS). To update an existing redirect URI to enable CORS, open the manifest editor and set the `type` field for your redirect URI to `spa` in the `replyUrlsWithType` section. Or, you can select the redirect URI in **Authentication** > **Web** and select URIs to migrate to using the authorization code flow.
+- **Add a redirect URI** that supports auth code flow with PKCE and cross-origin resource sharing (CORS): Follow the steps in [Redirect URI: MSAL.js 2.0 with auth code flow](scenario-spa-app-registration.md#redirect-uri-msaljs-20-with-auth-code-flow).
+- **Update a redirect URI**: Set the redirect URI's `type` to `spa` by using the [application manifest editor](reference-app-manifest.md) in the Azure portal.
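As an illustration of the manifest change, a redirect URI entry marked for a SPA looks roughly like the following sketch (the URL is a placeholder, and the exact casing of the `type` value follows the app manifest schema):

```json
"replyUrlsWithType": [
    {
        "url": "https://contoso.example.com/auth-redirect",
        "type": "Spa"
    }
]
```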
-The `spa` redirect type is backwards compatible with the implicit flow. Apps currently using the implicit flow to get tokens can move to the `spa` redirect URI type without issues and continue using the implicit flow.
+The `spa` redirect type is backward-compatible with the implicit flow. Apps currently using the implicit flow to get tokens can move to the `spa` redirect URI type without issues and continue using the implicit flow.
If you attempt to use the authorization code flow without setting up CORS for your redirect URI, you will see this error in the console:
This example is an Error response:
|`invalid_scope` | The scope requested by the app is invalid. | Update the value of the `scope` parameter in the authentication request to a valid value. |
> [!NOTE]
-> Single page apps may receive an `invalid_request` error indicating that cross-origin token redemption is permitted only for the 'Single-Page Application' client-type. This indicates that the redirect URI used to request the token has not been marked as a `spa` redirect URI. Review the [application registration steps](#redirect-uri-setup-required-for-single-page-apps) on how to enable this flow.
+> Single page apps may receive an `invalid_request` error indicating that cross-origin token redemption is permitted only for the 'Single-Page Application' client-type. This indicates that the redirect URI used to request the token has not been marked as a `spa` redirect URI. Review the [application registration steps](#redirect-uris-for-single-page-apps-spas) on how to enable this flow.
## Use the access token
active-directory V2 Oauth2 On Behalf Of Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-oauth2-on-behalf-of-flow.md
A service-to-service request for a SAML assertion contains the following paramet
The response contains a SAML token encoded in UTF8 and Base64url.
-- **SubjectConfirmationData for a SAML assertion sourced from an OBO call**: If the target application requires a recipient value in **SubjectConfirmationData**, then the value must be a non-wildcard Reply URL in the resource application configuration.
-- **The SubjectConfirmationData node**: The node can't contain an **InResponseTo** attribute since it's not part of a SAML response. The application receiving the SAML token must be able to accept the SAML assertion without an **InResponseTo** attribute.
+- **SubjectConfirmationData for a SAML assertion sourced from an OBO call**: If the target application requires a `Recipient` value in `SubjectConfirmationData`, then the value must be configured as the first non-wildcard Reply URL in the resource application configuration. Since the default Reply URL isn't used to determine the `Recipient` value, you might have to reorder the Reply URLs in the application configuration.
+- **The SubjectConfirmationData node**: The node can't contain an `InResponseTo` attribute since it's not part of a SAML response. The application receiving the SAML token must be able to accept the SAML assertion without an `InResponseTo` attribute.
- **API permissions**: You have to [add the necessary API permissions](quickstart-configure-app-access-web-apis.md) on the middle-tier application to allow access to the SAML application, so that it can request a token for the `/.default` scope of the SAML application.
- **Consent**: Consent must have been granted to receive a SAML token containing user data on an OAuth flow. For information, see [Gaining consent for the middle-tier application](#gaining-consent-for-the-middle-tier-application) below.
active-directory Device Management Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/device-management-azure-portal.md
To view or copy BitLocker keys, you need to be the owner of the device or have o
- Security Administrator - Security Reader
+## Block users from viewing their BitLocker keys (preview)
+In this preview, admins can block self-service BitLocker key access for the registered owner of the device. Default users without the BitLocker read permission are unable to view or copy the BitLocker key(s) for their owned devices.
+
+To disable/enable self-service BitLocker recovery:
+
+```PowerShell
+Connect-MgGraph -Scopes Policy.ReadWrite.Authorization
+$authPolicyUri = "https://graph.microsoft.com/beta/policies/authorizationPolicy/authorizationPolicy"
+$body = @{
+ defaultUserRolePermissions = @{
+ allowedToReadBitlockerKeysForOwnedDevice = $false #Set this to $true to allow BitLocker self-service recovery
+ }
+}| ConvertTo-Json
+Invoke-MgGraphRequest -Uri $authPolicyUri -Method PATCH -Body $body
+# Show current policy setting
+$authPolicy = Invoke-MgGraphRequest -Uri $authPolicyUri
+$authPolicy.defaultUserRolePermissions
+```
+## View and filter your devices (preview)
+
+In this preview, you can infinitely scroll, reorder columns, and select all devices. You can filter the device list by these device attributes:
active-directory Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new-archive.md
For more information about how to better secure your organization by using autom
**Service category:** Enterprise Apps **Product capability:** 3rd Party Integration
-In January 2022, weΓÇÖve added the following 47 new applications in our App gallery with Federation support:
+In January 2022, we've added the following 47 new applications in our App gallery with Federation support:
[Jooto](../saas-apps/jooto-tutorial.md), [Proprli](https://app.proprli.com/), [Pace Scheduler](https://www.pacescheduler.com/accounts/login/), [DRTrack](../saas-apps/drtrack-tutorial.md), [Dining Sidekick](../saas-apps/dining-sidekick-tutorial.md), [Cryotos](https://app.cryotos.com/oauth2/authorization/azure-client), [Emergency Management Systems](https://secure.emsystems.com.au/), [Manifestly Checklists](../saas-apps/manifestly-checklists-tutorial.md), [eLearnPOSH](../saas-apps/elearnposh-tutorial.md), [Scuba Analytics](../saas-apps/scuba-analytics-tutorial.md), [Athena Systems sign-in Platform](../saas-apps/athena-systems-login-platform-tutorial.md), [TimeTrack](../saas-apps/timetrack-tutorial.md), [MiHCM](../saas-apps/mihcm-tutorial.md), [Health Note](https://www.healthnote.com/), [Active Directory SSO for DoubleYou](../saas-apps/active-directory-sso-for-doubleyou-tutorial.md), [Emplifi platform](../saas-apps/emplifi-platform-tutorial.md), [Flexera One](../saas-apps/flexera-one-tutorial.md), [Hypothesis](https://web.hypothes.is/help/authorizing-hypothesis-from-the-azure-ad-app-gallery/), [Recurly](../saas-apps/recurly-tutorial.md), [XpressDox AU Cloud](https://au.xpressdox.com/Authentication/Login.aspx), [Zoom for Intune](https://zoom.us/), [UPWARD AGENT](https://app.upward.jp/login/), [Linux Foundation ID](https://openprofile.dev/), [Asset Planner](../saas-apps/asset-planner-tutorial.md), [Kiho](https://v3.kiho.fi/index/sso), [chezie](https://app.chezie.co/), [Excelity HCM](../saas-apps/excelity-hcm-tutorial.md), [yuccaHR](https://app.yuccahr.com/), [Blue Ocean Brain](../saas-apps/blue-ocean-brain-tutorial.md), [EchoSpan](../saas-apps/echospan-tutorial.md), [Archie](../saas-apps/archie-tutorial.md), [Equifax Workforce Solutions](../saas-apps/equifax-workforce-solutions-tutorial.md), [Palantir Foundry](../saas-apps/palantir-foundry-tutorial.md), [ATP SpotLight and ChronicX](../saas-apps/atp-spotlight-and-chronicx-tutorial.md), 
[DigiSign](https://app.digisign.org/selfcare/sso), [mConnect](https://mconnect.skooler.com/), [BrightHR](https://login.brighthr.com/), [Mural Identity](../saas-apps/mural-identity-tutorial.md), [NordPass SSO](https://app.nordpass.com/login%20use%20%22Log%20in%20to%20business%22%20option), [CloudClarity](https://portal.cloudclarity.app/dashboard), [Twic](../saas-apps/twic-tutorial.md), [Eduhouse Online](https://app.eduhouse.fi/palvelu/kirjaudu/microsoft), [Bealink](../saas-apps/bealink-tutorial.md), [Time Intelligence Bot](https://teams.microsoft.com/), [SentinelOne](https://sentinelone.com/)
For listing your application in the Azure AD app gallery, read the details in: h
**Service category:** Access Reviews **Product capability:** Identity Governance
-Azure AD access reviews reviewer recommendations now account for non-interactive sign-in information, improving upon original recommendations based on interactive last sign-ins only. Reviewers can now make more accurate decisions based on the last sign-in activity of the users theyΓÇÖre reviewing. To learn more about how to create access reviews, go to [Create an access review of groups and applications in Azure AD](../governance/create-access-review.md).
+Azure AD access reviews reviewer recommendations now account for non-interactive sign-in information, improving upon original recommendations based on interactive last sign-ins only. Reviewers can now make more accurate decisions based on the last sign-in activity of the users they're reviewing. To learn more about how to create access reviews, go to [Create an access review of groups and applications in Azure AD](../governance/create-access-review.md).
To prevent accidental notification approvals, admins can now require users to e
**Service category:** Reporting **Product capability:** Monitoring & Reporting
-WeΓÇÖre no longer publishing sign-in logs with the following error codes because these events are pre-authentication events that occur before our service has authenticated a user. Because these events happen before authentication, our service isnΓÇÖt always able to correctly identify the user. If a user continues on to authenticate, the user sign-in will show up in your tenant Sign-in logs. These logs are no longer visible in the Azure portal UX, and querying these error codes in the Graph API will no longer return results.
+We're no longer publishing sign-in logs with the following error codes because these events are pre-authentication events that occur before our service has authenticated a user. Because these events happen before authentication, our service isn't always able to correctly identify the user. If a user continues on to authenticate, the user sign-in will show up in your tenant Sign-in logs. These logs are no longer visible in the Azure portal UX, and querying these error codes in the Graph API will no longer return results.
|Error code | Failure reason| | | |
-|50058| Session information isnΓÇÖt sufficient for single-sign-on.|
-|16000| Either multiple user identities are available for the current request or selected account isnΓÇÖt supported for the scenario.|
+|50058| Session information isn't sufficient for single-sign-on.|
+|16000| Either multiple user identities are available for the current request or selected account isn't supported for the scenario.|
|500581| Rendering JavaScript. Fetching sessions for single-sign-on on V2 with prompt=none requires JavaScript to verify if any MSA accounts are signed in.| |81012| The user trying to sign in to Azure AD is different from the user signed into the device.|
The new Conditional Access overview dashboard enables all tenants to see insight
**Service category:** Azure AD Connect Cloud Sync **Product capability:** Identity Lifecycle Management
-The Public Preview feature for Azure AD Connect Cloud Sync Password writeback provides customers the capability to write back a userΓÇÖs password changes in the cloud to the on-premises directory in real time using the lightweight Azure AD cloud provisioning agent.[Learn more](../authentication/tutorial-enable-cloud-sync-sspr-writeback.md).
+The Public Preview feature for Azure AD Connect Cloud Sync Password writeback provides customers the capability to write back a user's password changes in the cloud to the on-premises directory in real time using the lightweight Azure AD cloud provisioning agent.[Learn more](../authentication/tutorial-enable-cloud-sync-sspr-writeback.md).
We have recently added another property to the sign-in logs called "Session Lifeti
**Service category:** User Access Management **Product capability:** Entitlement Management
-Entitlement ManagementΓÇÖs enriched review experience allows even more flexibility on access packages reviews. Admins can now choose what happens to access if the reviewers don't respond, provide helper information to reviewers, or decide whether a justification is necessary. [Learn more](../governance/entitlement-management-access-reviews-create.md).
+Entitlement Management's enriched review experience allows even more flexibility on access packages reviews. Admins can now choose what happens to access if the reviewers don't respond, provide helper information to reviewers, or decide whether a justification is necessary. [Learn more](../governance/entitlement-management-access-reviews-create.md).
Updated "switch organizations" user interface in My Account. This visually impro
Sometimes, application developers configure their apps to require more permissions than it's possible to grant. To prevent this from happening, a limit on the total number of required permissions that can be configured for an app registration will be enforced.
-The total number of required permissions for any single application registration mustn't exceed 400 permissions, across all APIs. The change to enforce this limit will begin rolling out mid-October 2021. Applications exceeding the limit can't increase the number of permissions theyΓÇÖre configured for. The existing limit on the number of distinct APIs for which permissions are required remains unchanged and may not exceed 50 APIs.
+The total number of required permissions for any single application registration mustn't exceed 400 permissions, across all APIs. The change to enforce this limit will begin rolling out mid-October 2021. Applications exceeding the limit can't increase the number of permissions they're configured for. The existing limit on the number of distinct APIs for which permissions are required remains unchanged and may not exceed 50 APIs.
In the Azure portal, the required permissions are listed under API permissions for the application you wish to configure. Using Microsoft Graph or Microsoft Graph PowerShell, the required permissions are listed in the requiredResourceAccess property of an [application](/graph/api/resources/application) entity. [Learn more](../enterprise-users/directory-service-limits-restrictions.md).
Previously, we announced that starting October 31, 2021, Microsoft Azure Active
**Service category:** Conditional Access **Product capability:** End User Experiences
-If there's no trust relation between a home and resource tenant, a guest user would have previously been asked to re-register their device, which would break the previous registration. However, the user would end up in a registration loop because only home tenant device registration is supported. In this specific scenario, instead of this loop, weΓÇÖve created a new conditional access blocking page. The page tells the end user that they can't get access to conditional access protected resources as a guest user. [Learn more](../external-identities/b2b-quickstart-add-guest-users-portal.md#prerequisites).
+If there's no trust relation between a home and resource tenant, a guest user would have previously been asked to re-register their device, which would break the previous registration. However, the user would end up in a registration loop because only home tenant device registration is supported. In this specific scenario, instead of this loop, we've created a new conditional access blocking page. The page tells the end user that they can't get access to conditional access protected resources as a guest user. [Learn more](../external-identities/b2b-quickstart-add-guest-users-portal.md#prerequisites).
The load time of My Apps has been improved. Users going to myapps.microsoft.com
**Service category:** Authentications (Logins) **Product capability:** Developer Experience
-The modern Edge browser is now included in the requirement to provide an `Origin` header when redeeming a [single page app authorization code](../develop/v2-oauth2-auth-code-flow.md#redirect-uri-setup-required-for-single-page-apps). A compatibility fixes accidentally exempted the modern Edge browser from CORS controls, and that bug is being fixed during October. A subset of applications depended on CORS being disabled in the browser, which has the side effect of removing the `Origin` header from traffic. This is an unsupported configuration for using Azure AD, and these apps that depended on disabling CORS can no longer use modern Edge as a security workaround. All modern browsers must now include the `Origin` header per HTTP spec, to ensure CORS is enforced. [Learn more](../develop/reference-breaking-changes.md#the-device-code-flow-ux-will-now-include-an-app-confirmation-prompt).
+The modern Edge browser is now included in the requirement to provide an `Origin` header when redeeming a [single page app authorization code](../develop/v2-oauth2-auth-code-flow.md#redirect-uris-for-single-page-apps-spas). A compatibility fix accidentally exempted the modern Edge browser from CORS controls, and that bug is being fixed during October. A subset of applications depended on CORS being disabled in the browser, which has the side effect of removing the `Origin` header from traffic. This is an unsupported configuration for using Azure AD, and these apps that depended on disabling CORS can no longer use modern Edge as a security workaround. All modern browsers must now include the `Origin` header per HTTP spec, to ensure CORS is enforced. [Learn more](../develop/reference-breaking-changes.md#the-device-code-flow-ux-will-now-include-an-app-confirmation-prompt).
For more information about how to better secure your organization by using autom
**Product capability:** Identity Security & Protection
-To help administrators understand that their users are blocked for multi-factor authentication as a result of fraud report, weΓÇÖve added a new audit event. This audit event is tracked when the user reports fraud. The audit log is available in addition to the existing information in the sign-in logs about fraud report. To learn how to get the audit report, see [multi-factor authentication Fraud alert](../authentication/howto-mfa-mfasettings.md#fraud-alert).
+To help administrators understand that their users are blocked for multi-factor authentication as a result of fraud report, we've added a new audit event. This audit event is tracked when the user reports fraud. The audit log is available in addition to the existing information in the sign-in logs about fraud report. To learn how to get the audit report, see [multi-factor authentication Fraud alert](../authentication/howto-mfa-mfasettings.md#fraud-alert).
Currently, this user action only allows you to enable Azure AD MFA as a control
**Service category:** App Proxy **Product capability:** Access Control
-With this new capability, connector groups can be assigned to the closest regional Application Proxy service an application is hosted in. This can improve app performance in scenarios where apps are hosted in regions other than the home tenantΓÇÖs region. [Learn more](../app-proxy/application-proxy-network-topology.md#optimize-connector-groups-to-use-closest-application-proxy-cloud-service).
+With this new capability, connector groups can be assigned to the closest regional Application Proxy service an application is hosted in. This can improve app performance in scenarios where apps are hosted in regions other than the home tenant's region. [Learn more](../app-proxy/application-proxy-network-topology.md#optimize-connector-groups-to-use-closest-application-proxy-cloud-service).
For guidance to remove deprecating protocols dependencies, please refer to [EEna
In November 2020, we added the following 52 new applications in our App gallery with Federation support:
-[Travel & Expense Management](https://app.expenseonce.com/Account/Login), [Tribeloo](../saas-apps/tribeloo-tutorial.md), [Itslearning File Picker](https://pmteam.itslearning.com/), [Crises Control](../saas-apps/crises-control-tutorial.md), [CourtAlert](https://www.courtalert.com/), [StealthMail](https://stealthmail.com/), [Edmentum - Study Island](https://app.studyisland.com/cfw/login/), [Virtual Risk Manager](../saas-apps/virtual-risk-manager-tutorial.md), [TIMU](../saas-apps/timu-tutorial.md), [Looker Analytics Platform](../saas-apps/looker-analytics-platform-tutorial.md), [Talview - Recruit](https://recruit.talview.com/login), Real Time Translator, [Klaxoon](https://access.klaxoon.com/login), [Podbean](../saas-apps/podbean-tutorial.md), [zcal](https://zcal.co/signup), [expensemanager](https://api.expense-manager.com/), [Netsparker Enterprise](../saas-apps/netsparker-enterprise-tutorial.md), [En-trak Tenant Experience Platform](https://portal.en-trak.app/), [Appian](../saas-apps/appian-tutorial.md), [Panorays](../saas-apps/panorays-tutorial.md), [Builterra](https://portal.builterra.com/), [EVA Check-in](https://my.evacheckin.com/organization), [HowNow WebApp SSO](../saas-apps/hownow-webapp-sso-tutorial.md), [Coupa Risk Assess](../saas-apps/coupa-risk-assess-tutorial.md), [Lucid (All Products)](../saas-apps/lucid-tutorial.md), [GoBright](https://portal.brightbooking.eu/), [SailPoint IdentityNow](../saas-apps/sailpoint-identitynow-tutorial.md),[Resource Central](../saas-apps/resource-central-tutorial.md), [UiPathStudioO365App](https://www.uipath.com/product/platform), [Jedox](../saas-apps/jedox-tutorial.md), [Cequence Application Security](../saas-apps/cequence-application-security-tutorial.md), [PerimeterX](../saas-apps/perimeterx-tutorial.md), [TrendMiner](../saas-apps/trendminer-tutorial.md), [Lexion](../saas-apps/lexion-tutorial.md), [WorkWare](../saas-apps/workware-tutorial.md), [ProdPad](../saas-apps/prodpad-tutorial.md), [AWS 
ClientVPN](../saas-apps/aws-clientvpn-tutorial.md), [AppSec Flow SSO](../saas-apps/appsec-flow-sso-tutorial.md), [Luum](../saas-apps/luum-tutorial.md), [Freight Measure](https://www.gpcsl.com/freight.html), [Terraform Cloud](../saas-apps/terraform-cloud-tutorial.md), [Nature Research](../saas-apps/nature-research-tutorial.md), [Play Digital Signage](https://login.playsignage.com/login), [RemotePC](../saas-apps/remotepc-tutorial.md), [Prolorus](../saas-apps/prolorus-tutorial.md), [Hirebridge ATS](../saas-apps/hirebridge-ats-tutorial.md), [Teamgage](https://teamgage.com), [Roadmunk](../saas-apps/roadmunk-tutorial.md), [Sunrise Software Relations CRM](https://cloud.relations-crm.com/), [Procaire](../saas-apps/procaire-tutorial.md), [Mentor® by eDriving: Business](https://www.edriving.com/), [Gradle Enterprise](https://gradle.com/)
+[Travel & Expense Management](https://app.expenseonce.com/Account/Login), [Tribeloo](../saas-apps/tribeloo-tutorial.md), [Itslearning File Picker](https://pmteam.itslearning.com/), [Crises Control](../saas-apps/crises-control-tutorial.md), [CourtAlert](https://www.courtalert.com/), [StealthMail](https://stealthmail.com/), [Edmentum - Study Island](https://app.studyisland.com/cfw/login/), [Virtual Risk Manager](../saas-apps/virtual-risk-manager-tutorial.md), [TIMU](../saas-apps/timu-tutorial.md), [Looker Analytics Platform](../saas-apps/looker-analytics-platform-tutorial.md), [Talview - Recruit](https://recruit.talview.com/login), Real Time Translator, [Klaxoon](https://access.klaxoon.com/login), [Podbean](../saas-apps/podbean-tutorial.md), [zcal](https://zcal.co/signup), [expensemanager](https://api.expense-manager.com/), [Netsparker Enterprise](../saas-apps/netsparker-enterprise-tutorial.md), [En-trak Tenant Experience Platform](https://portal.en-trak.app/), [Appian](../saas-apps/appian-tutorial.md), [Panorays](../saas-apps/panorays-tutorial.md), [Builterra](https://portal.builterra.com/), [EVA Check-in](https://my.evacheckin.com/organization), [HowNow WebApp SSO](../saas-apps/hownow-webapp-sso-tutorial.md), [Coupa Risk Assess](../saas-apps/coupa-risk-assess-tutorial.md), [Lucid (All Products)](../saas-apps/lucid-tutorial.md), [GoBright](https://portal.brightbooking.eu/), [SailPoint IdentityNow](../saas-apps/sailpoint-identitynow-tutorial.md),[Resource Central](../saas-apps/resource-central-tutorial.md), [UiPathStudioO365App](https://www.uipath.com/product/platform), [Jedox](../saas-apps/jedox-tutorial.md), [Cequence Application Security](../saas-apps/cequence-application-security-tutorial.md), [PerimeterX](../saas-apps/perimeterx-tutorial.md), [TrendMiner](../saas-apps/trendminer-tutorial.md), [Lexion](../saas-apps/lexion-tutorial.md), [WorkWare](../saas-apps/workware-tutorial.md), [ProdPad](../saas-apps/prodpad-tutorial.md), [AWS 
ClientVPN](../saas-apps/aws-clientvpn-tutorial.md), [AppSec Flow SSO](../saas-apps/appsec-flow-sso-tutorial.md), [Luum](../saas-apps/luum-tutorial.md), [Freight Measure](https://www.gpcsl.com/freight.html), [Terraform Cloud](../saas-apps/terraform-cloud-tutorial.md), [Nature Research](../saas-apps/nature-research-tutorial.md), [Play Digital Signage](https://login.playsignage.com/login), [RemotePC](../saas-apps/remotepc-tutorial.md), [Prolorus](../saas-apps/prolorus-tutorial.md), [Hirebridge ATS](../saas-apps/hirebridge-ats-tutorial.md), [Teamgage](https://teamgage.com), [Roadmunk](../saas-apps/roadmunk-tutorial.md), [Sunrise Software Relations CRM](https://cloud.relations-crm.com/), [Procaire](../saas-apps/procaire-tutorial.md), [Mentor&reg; by eDriving: Business](https://www.edriving.com/), [Gradle Enterprise](https://gradle.com/)
You can also find the documentation for all the applications here: https://aka.ms/AppsTutorial
Manually created connected organizations will have a default setting of "configu
Risk-based Conditional Access and risk detection features of Identity Protection are now available in [Azure AD B2C](../..//active-directory-b2c/conditional-access-identity-protection-overview.md). With these advanced security features, customers can now: - Leverage intelligent insights to assess risk with B2C apps and end user accounts. Detections include atypical travel, anonymous IP addresses, malware-linked IP addresses, and Azure AD threat intelligence. Portal and API-based reports are also available. - Automatically address risks by configuring adaptive authentication policies for B2C users. App developers and administrators can mitigate real-time risk by requiring Azure Active Directory Multi-Factor Authentication (MFA) or blocking access depending on the user risk level detected, with additional controls available based on location, group, and app.-- Integrate with Azure AD B2C user flows and custom policies. Conditions can be triggered from built-in user flows in Azure AD B2C or can be incorporated into B2C custom policies. As with other aspects of the B2C user flow, end user experience messaging can be customized. Customization is according to the organizationΓÇÖs voice, brand, and mitigation alternatives.
+- Integrate with Azure AD B2C user flows and custom policies. Conditions can be triggered from built-in user flows in Azure AD B2C or can be incorporated into B2C custom policies. As with other aspects of the B2C user flow, end user experience messaging can be customized. Customization is according to the organization's voice, brand, and mitigation alternatives.
If you have outbound firewall rules in your organization, update the rules so th
**Service category:** Identity Protection **Product capability:** Identity Security & Protection
-We're updating the Identity Secure Score portal to align with the changes introduced in Microsoft Secure ScoreΓÇÖs [new release](/microsoft-365/security/mtp/microsoft-secure-score-whats-new).
+We're updating the Identity Secure Score portal to align with the changes introduced in Microsoft Secure Score's [new release](/microsoft-365/security/mtp/microsoft-secure-score-whats-new).
The preview version with the changes will be available at the beginning of September. The changes in the preview version include:-- ΓÇ£Identity Secure ScoreΓÇ¥ renamed to ΓÇ£Secure Score for IdentityΓÇ¥ for brand alignment with Microsoft Secure Score
+- "Identity Secure Score" renamed to "Secure Score for Identity" for brand alignment with Microsoft Secure Score
- Points normalized to a standard scale and reported in percentages instead of points

In this preview, customers can toggle between the existing experience and the new experience. This preview will last until the end of November 2020. After the preview, customers will automatically be directed to the new UX experience.
You can expand a managed domain to have more than one replica set per Azure AD t
**Service category:** Authentications (Logins) **Product capability:** End User Experiences
-Azure AD My sign-ins is a new feature that allows enterprise users to review their sign-in history to check for any unusual activity. Additionally, this feature allows end users to report ΓÇ£This wasnΓÇÖt meΓÇ¥ or ΓÇ£This was meΓÇ¥ on suspicious activities. To learn more about using this feature, see [View and search your recent sign-in activity from the My sign-ins page](https://support.microsoft.com/account-billing/view-and-search-your-work-or-school-account-sign-in-activity-from-my-sign-ins-9e7d108c-8e3f-42aa-ac3a-bca892898972#confirm-unusual-activity).
+Azure AD My sign-ins is a new feature that allows enterprise users to review their sign-in history to check for any unusual activity. Additionally, this feature allows end users to report "This wasn't me" or "This was me" on suspicious activities. To learn more about using this feature, see [View and search your recent sign-in activity from the My sign-ins page](https://support.microsoft.com/account-billing/view-and-search-your-work-or-school-account-sign-in-activity-from-my-sign-ins-9e7d108c-8e3f-42aa-ac3a-bca892898972#confirm-unusual-activity).
For more information about users flows, see [User flow versions in Azure Active
In July 2020, we added the following 55 new applications to our App gallery with Federation support:
-[Appreiz](https://microsoftteams.appreiz.com/), [Inextor Vault](https://inexto.com/inexto-suite/inextor), [Beekast](https://my.beekast.com/), [Templafy OpenID Connect](https://app.templafy.com/), [PeterConnects receptionist](https://msteams.peterconnects.com/), [AlohaCloud](https://www.alohacloud.com/), Control Tower, [Cocoom](https://start.cocoom.com/), [COINS Construction Cloud](https://sso.coinsconstructioncloud.com/#login/), [Medxnote MT](https://task.teamsmain.medx.im/authorization), [Reflekt](https://reflekt.konsolute.com/login), [Rever](https://app.reverscore.net/access), [MyCompanyArchive](https://login.mycompanyarchive.com/), [GReminders](https://app.greminders.com/o365-oauth), [Titanfile](../saas-apps/titanfile-tutorial.md), [Wootric](../saas-apps/wootric-tutorial.md), [SolarWinds Orion](https://support.solarwinds.com/SuccessCenter/s/orion-platform?language=en_US), [OpenText Directory Services](../saas-apps/opentext-directory-services-tutorial.md), [Datasite](../saas-apps/datasite-tutorial.md), [BlogIn](../saas-apps/blogin-tutorial.md), [IntSights](../saas-apps/intsights-tutorial.md), [kpifire](../saas-apps/kpifire-tutorial.md), [Textline](../saas-apps/textline-tutorial.md), [Cloud Academy - SSO](../saas-apps/cloud-academy-sso-tutorial.md), [Community Spark](../saas-apps/community-spark-tutorial.md), [Chatwork](../saas-apps/chatwork-tutorial.md), [CloudSign](../saas-apps/cloudsign-tutorial.md), [C3M Cloud Control](../saas-apps/c3m-cloud-control-tutorial.md), [SmartHR](https://smarthr.jp/), [NumlyEngageΓäó](../saas-apps/numlyengage-tutorial.md), [Michigan Data Hub Single Sign-On](../saas-apps/michigan-data-hub-single-sign-on-tutorial.md), [Egress](../saas-apps/egress-tutorial.md), [SendSafely](../saas-apps/sendsafely-tutorial.md), [Eletive](https://app.eletive.com/), [Right-Hand Cybersecurity ADI](https://right-hand.ai/), [Fyde Enterprise Authentication](https://enterprise.fyde.com/), [Verme](../saas-apps/verme-tutorial.md), 
[Lenses.io](../saas-apps/lensesio-tutorial.md), [Momenta](../saas-apps/momenta-tutorial.md), [Uprise](https://app.uprise.co/sign-in), [Q](https://q.moduleq.com/login), [CloudCords](../saas-apps/cloudcords-tutorial.md), [TellMe Bot](https://tellme365liteweb.azurewebsites.net/), [Inspire](https://app.inspiresoftware.com/), [Maverics Identity Orchestrator SAML Connector](https://www.strata.io/identity-fabric/), [Smartschool (School Management System)](https://smartschoolz.com/login), [Zepto - Intelligent timekeeping](https://user.zepto-ai.com/signin), [Studi.ly](https://studi.ly/), [Trackplan](http://www.trackplanfm.com/), [Skedda](../saas-apps/skedda-tutorial.md), [WhosOnLocation](../saas-apps/whos-on-location-tutorial.md), [Coggle](../saas-apps/coggle-tutorial.md), [Kemp LoadMaster](https://kemptechnologies.com/cloud-load-balancer/), [BrowserStack Single Sign-on](../saas-apps/browserstack-single-sign-on-tutorial.md)
+[Appreiz](https://microsoftteams.appreiz.com/), [Inextor Vault](https://inexto.com/inexto-suite/inextor), [Beekast](https://my.beekast.com/), [Templafy OpenID Connect](https://app.templafy.com/), [PeterConnects receptionist](https://msteams.peterconnects.com/), [AlohaCloud](https://www.alohacloud.com/), Control Tower, [Cocoom](https://start.cocoom.com/), [COINS Construction Cloud](https://sso.coinsconstructioncloud.com/#login/), [Medxnote MT](https://task.teamsmain.medx.im/authorization), [Reflekt](https://reflekt.konsolute.com/login), [Rever](https://app.reverscore.net/access), [MyCompanyArchive](https://login.mycompanyarchive.com/), [GReminders](https://app.greminders.com/o365-oauth), [Titanfile](../saas-apps/titanfile-tutorial.md), [Wootric](../saas-apps/wootric-tutorial.md), [SolarWinds Orion](https://support.solarwinds.com/SuccessCenter/s/orion-platform?language=en_US), [OpenText Directory Services](../saas-apps/opentext-directory-services-tutorial.md), [Datasite](../saas-apps/datasite-tutorial.md), [BlogIn](../saas-apps/blogin-tutorial.md), [IntSights](../saas-apps/intsights-tutorial.md), [kpifire](../saas-apps/kpifire-tutorial.md), [Textline](../saas-apps/textline-tutorial.md), [Cloud Academy - SSO](../saas-apps/cloud-academy-sso-tutorial.md), [Community Spark](../saas-apps/community-spark-tutorial.md), [Chatwork](../saas-apps/chatwork-tutorial.md), [CloudSign](../saas-apps/cloudsign-tutorial.md), [C3M Cloud Control](../saas-apps/c3m-cloud-control-tutorial.md), [SmartHR](https://smarthr.jp/), [NumlyEngage&trade;](../saas-apps/numlyengage-tutorial.md), [Michigan Data Hub Single Sign-On](../saas-apps/michigan-data-hub-single-sign-on-tutorial.md), [Egress](../saas-apps/egress-tutorial.md), [SendSafely](../saas-apps/sendsafely-tutorial.md), [Eletive](https://app.eletive.com/), [Right-Hand Cybersecurity ADI](https://right-hand.ai/), [Fyde Enterprise Authentication](https://enterprise.fyde.com/), [Verme](../saas-apps/verme-tutorial.md), 
[Lenses.io](../saas-apps/lensesio-tutorial.md), [Momenta](../saas-apps/momenta-tutorial.md), [Uprise](https://app.uprise.co/sign-in), [Q](https://q.moduleq.com/login), [CloudCords](../saas-apps/cloudcords-tutorial.md), [TellMe Bot](https://tellme365liteweb.azurewebsites.net/), [Inspire](https://app.inspiresoftware.com/), [Maverics Identity Orchestrator SAML Connector](https://www.strata.io/identity-fabric/), [Smartschool (School Management System)](https://smartschoolz.com/login), [Zepto - Intelligent timekeeping](https://user.zepto-ai.com/signin), [Studi.ly](https://studi.ly/), [Trackplan](http://www.trackplanfm.com/), [Skedda](../saas-apps/skedda-tutorial.md), [WhosOnLocation](../saas-apps/whos-on-location-tutorial.md), [Coggle](../saas-apps/coggle-tutorial.md), [Kemp LoadMaster](https://kemptechnologies.com/cloud-load-balancer/), [BrowserStack Single Sign-on](../saas-apps/browserstack-single-sign-on-tutorial.md)
You can also find the documentation for all the applications here: https://aka.ms/AppsTutorial
For listing your application in the Azure AD app gallery, please read the detail
**Service category:** Conditional Access **Product capability:** Identity Security & Protection
-[Report-only mode for Azure AD Conditional Access](../conditional-access/concept-conditional-access-report-only.md) lets you evaluate the result of a policy without enforcing access controls. You can test report-only policies across your organization and understand their impact before enabling them, making deployment safer and easier. Over the past few months, weΓÇÖve seen strong adoption of report-only modeΓÇöover 26M users are already in scope of a report-only policy. With the announcement today, new Azure AD Conditional Access policies will be created in report-only mode by default. This means you can monitor the impact of your policies from the moment theyΓÇÖre created. And for those of you who use the MS Graph APIs, you can [manage report-only policies programmatically](/graph/api/resources/conditionalaccesspolicy) as well.
+[Report-only mode for Azure AD Conditional Access](../conditional-access/concept-conditional-access-report-only.md) lets you evaluate the result of a policy without enforcing access controls. You can test report-only policies across your organization and understand their impact before enabling them, making deployment safer and easier. Over the past few months, we've seen strong adoption of report-only mode; over 26M users are already in scope of a report-only policy. With the announcement today, new Azure AD Conditional Access policies will be created in report-only mode by default. This means you can monitor the impact of your policies from the moment they're created. And for those of you who use the MS Graph APIs, you can [manage report-only policies programmatically](/graph/api/resources/conditionalaccesspolicy) as well.
With External Identities in Azure AD, you can allow people outside your organiza
**Service category:** Conditional Access **Product capability:** Identity Security & Protection
-The [insights and reporting workbook](../conditional-access/howto-conditional-access-insights-reporting.md) gives admins a summary view of Azure AD Conditional Access in their tenant. With the capability to select an individual policy, admins can better understand what each policy does and monitor any changes in real time. The workbook streams data stored in Azure Monitor, which you can set up in a few minutes [following these instructions](../reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md). To make the dashboard more discoverable, weΓÇÖve moved it to the new insights and reporting tab within the Azure AD Conditional Access menu.
+The [insights and reporting workbook](../conditional-access/howto-conditional-access-insights-reporting.md) gives admins a summary view of Azure AD Conditional Access in their tenant. With the capability to select an individual policy, admins can better understand what each policy does and monitor any changes in real time. The workbook streams data stored in Azure Monitor, which you can set up in a few minutes [following these instructions](../reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md). To make the dashboard more discoverable, we've moved it to the new insights and reporting tab within the Azure AD Conditional Access menu.
Continuous Access Evaluation is a new security feature that enables near real-ti
**Product capability:** User Authentication
-Office is launching a series of mobile-first business apps that cater to non-traditional organizations, and to employees in large organizations that donΓÇÖt use email as their primary communication method. These apps target frontline employees, deskless workers, field agents, or retail employees that may not get an email address from their employer, have access to a computer, or to IT. This project will let these employees sign in to business applications by entering a phone number and roundtripping a code. For more details, please see our [admin documentation](../authentication/howto-authentication-sms-signin.md) and [end user documentation](https://support.microsoft.com/account-billing/set-up-sms-sign-in-as-a-phone-verification-method-0aa5b3b3-a716-4ff2-b0d6-31d2bcfbac42).
+Office is launching a series of mobile-first business apps that cater to non-traditional organizations, and to employees in large organizations that don't use email as their primary communication method. These apps target frontline employees, deskless workers, field agents, or retail employees that may not get an email address from their employer, have access to a computer, or to IT. This project will let these employees sign in to business applications by entering a phone number and roundtripping a code. For more details, please see our [admin documentation](../authentication/howto-authentication-sms-signin.md) and [end user documentation](https://support.microsoft.com/account-billing/set-up-sms-sign-in-as-a-phone-verification-method-0aa5b3b3-a716-4ff2-b0d6-31d2bcfbac42).
We're expanding B2B invitation capability to allow existing internal accounts to
**Product capability:** Identity Security & Protection
-[Report-only mode for Azure AD Conditional Access](../conditional-access/concept-conditional-access-report-only.md) lets you evaluate the result of a policy without enforcing access controls. You can test report-only policies across your organization and understand their impact before enabling them, making deployment safer and easier. Over the past few months, weΓÇÖve seen strong adoption of report-only mode, with over 26M users already in scope of a report-only policy. With this announcement, new Azure AD Conditional Access policies will be created in report-only mode by default. This means you can monitor the impact of your policies from the moment theyΓÇÖre created. And for those of you who use the MS Graph APIs, you can also [manage report-only policies programmatically](/graph/api/resources/conditionalaccesspolicy).
+[Report-only mode for Azure AD Conditional Access](../conditional-access/concept-conditional-access-report-only.md) lets you evaluate the result of a policy without enforcing access controls. You can test report-only policies across your organization and understand their impact before enabling them, making deployment safer and easier. Over the past few months, we've seen strong adoption of report-only mode, with over 26M users already in scope of a report-only policy. With this announcement, new Azure AD Conditional Access policies will be created in report-only mode by default. This means you can monitor the impact of your policies from the moment they're created. And for those of you who use the MS Graph APIs, you can also [manage report-only policies programmatically](/graph/api/resources/conditionalaccesspolicy).
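As a hedged illustration of managing report-only policies programmatically, the sketch below builds a minimal `conditionalAccessPolicy` request body of the kind you would send to the Graph API; the `"enabledForReportingButNotEnforced"` state is what the portal calls report-only mode, while the display name, conditions, and grant control are illustrative placeholders rather than a recommended policy.

```python
import json

# Illustrative body for POST /identity/conditionalAccess/policies.
# The conditions and controls below are placeholders; only the state
# value is the point: it creates the policy in report-only mode.
def build_report_only_policy(display_name: str) -> dict:
    return {
        "displayName": display_name,
        "state": "enabledForReportingButNotEnforced",  # report-only mode
        "conditions": {
            "users": {"includeUsers": ["All"]},
            "applications": {"includeApplications": ["All"]},
        },
        "grantControls": {"operator": "OR", "builtInControls": ["mfa"]},
    }

policy = build_report_only_policy("Require MFA (report-only)")
print(json.dumps(policy, indent=2))
```

Once you're satisfied with what the sign-in logs report, switching the policy to enforcement is just a matter of changing `state` to `"enabled"`.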
We're expanding B2B invitation capability to allow existing internal accounts to
**Product capability:** Identity Security & Protection
-The Conditional Access [insights and reporting workbook](../conditional-access/howto-conditional-access-insights-reporting.md) gives admins a summary view of Azure AD Conditional Access in their tenant. With the capability to select an individual policy, admins can better understand what each policy does and monitor any changes in real time. The workbook streams data stored in Azure Monitor, which you can set up in a few minutes [following these instructions](../reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md). To make the dashboard more discoverable, weΓÇÖve moved it to the new insights and reporting tab within the Azure AD Conditional Access menu.
+The Conditional Access [insights and reporting workbook](../conditional-access/howto-conditional-access-insights-reporting.md) gives admins a summary view of Azure AD Conditional Access in their tenant. With the capability to select an individual policy, admins can better understand what each policy does and monitor any changes in real time. The workbook streams data stored in Azure Monitor, which you can set up in a few minutes [following these instructions](../reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md). To make the dashboard more discoverable, we've moved it to the new insights and reporting tab within the Azure AD Conditional Access menu.
Delta query for administrative units is available for public preview! You can no
**Product capability:** Developer Experience
-These APIs are a key tool for managing your usersΓÇÖ authentication methods. Now you can programmatically pre-register and manage the authenticators used for multifactor authentication (MFA) and self-service password reset (SSPR). This has been one of the most-requested features in the Azure AD Multi-Factor Authentication (MFA), SSPR, and Microsoft Graph spaces. The new APIs weΓÇÖve released in this wave give you the ability to:
+These APIs are a key tool for managing your users' authentication methods. Now you can programmatically pre-register and manage the authenticators used for multifactor authentication (MFA) and self-service password reset (SSPR). This has been one of the most-requested features in the Azure AD Multi-Factor Authentication (MFA), SSPR, and Microsoft Graph spaces. The new APIs we've released in this wave give you the ability to:
-- Read, add, update, and remove a userΓÇÖs authentication phones-- Reset a userΓÇÖs password
+- Read, add, update, and remove a user's authentication phones
+- Reset a user's password
- Turn on and off SMS sign-in

For more information, see [Azure AD authentication methods API overview](/graph/api/resources/authenticationmethods-overview).
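The capabilities above map onto a small set of Graph calls against the phone authentication method endpoints. The sketch below is a hypothetical helper (not part of any SDK); the user ID, phone number, and method ID are placeholders you would replace with real values.

```python
GRAPH = "https://graph.microsoft.com/v1.0"

def phone_method_requests(user_id: str, phone_number: str, method_id: str):
    """Return (HTTP method, URL, JSON body) tuples for the phone
    authentication method operations described above."""
    base = f"{GRAPH}/users/{user_id}/authentication/phoneMethods"
    return [
        ("GET", base, None),                                    # read phones
        ("POST", base, {"phoneNumber": phone_number,
                        "phoneType": "mobile"}),                # add a phone
        ("POST", f"{base}/{method_id}/enableSmsSignIn", None),  # SMS sign-in
    ]

reqs = phone_method_requests("b.simon@contoso.com", "+1 2065551234", "<method-id>")
for http_method, url, body in reqs:
    print(http_method, url, body)
```

Each tuple would be issued with an access token carrying the appropriate `UserAuthenticationMethod.ReadWrite.All` permission; the requests here are only constructed, not sent.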
For more information, check out the following:
**Product capability:**
-My Staff enables Firstline Managers, such as a store manager, to ensure that their staff members are able to access their Azure AD accounts. Instead of relying on a central helpdesk, organizations can delegate common tasks, such as resetting passwords or changing phone numbers, to a Firstline Manager. With My Staff, a user who canΓÇÖt access their account can re-gain access in just a couple of selections, with no helpdesk or IT staff required. For more information, see the [Manage your users with My Staff (preview)](../roles/my-staff-configure.md) and [Delegate user management with My Staff (preview)](https://support.microsoft.com/account-billing/manage-front-line-users-with-my-staff-c65b9673-7e1c-4ad6-812b-1a31ce4460bd).
+My Staff enables Firstline Managers, such as a store manager, to ensure that their staff members are able to access their Azure AD accounts. Instead of relying on a central helpdesk, organizations can delegate common tasks, such as resetting passwords or changing phone numbers, to a Firstline Manager. With My Staff, a user who can't access their account can re-gain access in just a couple of selections, with no helpdesk or IT staff required. For more information, see the [Manage your users with My Staff (preview)](../roles/my-staff-configure.md) and [Delegate user management with My Staff (preview)](https://support.microsoft.com/account-billing/manage-front-line-users-with-my-staff-c65b9673-7e1c-4ad6-812b-1a31ce4460bd).
Azure Monitor integration with Azure AD logs is now available in Azure Governmen
**Service category:** Identity Protection **Product capability:** Identity Security & Protection
-We’re excited to share that we've now rolled out the refreshed [Azure AD Identity Protection](../identity-protection/overview-identity-protection.md) experience in the [Microsoft Azure Government portal](https://portal.azure.us/). For more information, see our [announcement blog post](https://techcommunity.microsoft.com/t5/public-sector-blog/identity-protection-refresh-in-microsoft-azure-government/ba-p/1223667).
+We're excited to share that we've now rolled out the refreshed [Azure AD Identity Protection](../identity-protection/overview-identity-protection.md) experience in the [Microsoft Azure Government portal](https://portal.azure.us/). For more information, see our [announcement blog post](https://techcommunity.microsoft.com/t5/public-sector-blog/identity-protection-refresh-in-microsoft-azure-government/ba-p/1223667).
active-directory Tutorial Linux Vm Access Nonaad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-linux-vm-access-nonaad.md
To complete these steps, you need an SSH client.  If you are using Windows, you
2. **Connect** to the VM with the SSH client of your choice.
3. In the terminal window, use `curl` to make a request to the local managed identities for Azure resources endpoint to get an access token for Azure Key Vault.
+
The `curl` request for the access token is below.

```bash
Alternatively you may also do this via [PowerShell or the CLI](../../azure-resou
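For illustration, the sketch below builds the same token request the tutorial's `curl` command sends to the local Instance Metadata Service endpoint; it only constructs the URL (the actual call works only from inside an Azure VM and must carry a `Metadata: true` header), with the `resource` value targeting Key Vault.

```python
import urllib.parse

# Fixed, well-known IMDS address inside an Azure VM; not reachable elsewhere.
IMDS_ENDPOINT = "http://169.254.169.254/metadata/identity/oauth2/token"

def token_request_url(resource: str = "https://vault.azure.net") -> str:
    """Build the IMDS access-token request URL (send with 'Metadata: true')."""
    query = urllib.parse.urlencode({
        "api-version": "2018-02-01",
        "resource": resource,
    })
    return f"{IMDS_ENDPOINT}?{query}"

print(token_request_url())
```

The JSON response from this endpoint contains an `access_token` field, which you then present as a bearer token when calling your Key Vault's data-plane URL.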
In this tutorial, you learned how to use a Linux VM system-assigned managed identity to access Azure Key Vault. To learn more about Azure Key Vault, see:

> [!div class="nextstepaction"]
->[Azure Key Vault](../../key-vault/general/overview.md)
+>[Azure Key Vault](../../key-vault/general/overview.md)
active-directory Sonarqube Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sonarqube-tutorial.md
Title: 'Tutorial: Azure AD SSO integration with Sonarqube'
-description: Learn how to configure single sign-on between Azure Active Directory and Sonarqube.
+ Title: 'Tutorial: Azure AD SSO integration with SonarQube'
+description: Learn how to configure single sign-on between Azure Active Directory and SonarQube.
Last updated 06/25/2021
-# Tutorial: Azure AD SSO integration with Sonarqube
+# Tutorial: Azure AD SSO integration with SonarQube
-In this tutorial, you'll learn how to integrate Sonarqube with Azure Active Directory (Azure AD). When you integrate Sonarqube with Azure AD, you can:
+In this tutorial, you'll learn how to integrate SonarQube with Azure Active Directory (Azure AD). When you integrate SonarQube with Azure AD, you can:
-* Control in Azure AD who has access to Sonarqube.
-* Enable your users to be automatically signed-in to Sonarqube with their Azure AD accounts.
+* Control in Azure AD who has access to SonarQube.
+* Enable your users to be automatically signed-in to SonarQube with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal.

## Prerequisites
In this tutorial, you'll learn how to integrate Sonarqube with Azure Active Dire
To get started, you need the following items:

* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-* Sonarqube single sign-on (SSO) enabled subscription.
+* SonarQube single sign-on (SSO) enabled subscription.
+
+> [!NOTE]
+> Help on installing SonarQube can be found in the [online documentation](https://docs.sonarqube.org/latest/setup/install-server/).
## Scenario description

In this tutorial, you configure and test Azure AD SSO in a test environment.
-* Sonarqube supports **SP** initiated SSO.
+* SonarQube supports **SP** initiated SSO.
> [!NOTE]
> The Identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
-## Add Sonarqube from the gallery
+## Add SonarQube from the gallery
-To configure the integration of Sonarqube into Azure AD, you need to add Sonarqube from the gallery to your list of managed SaaS apps.
+To configure the integration of SonarQube into Azure AD, you need to add SonarQube from the gallery to your list of managed SaaS apps.
1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service.
1. Navigate to **Enterprise Applications** and then select **All Applications**.
1. To add a new application, select **New application**.
-1. In the **Add from the gallery** section, type **Sonarqube** in the search box.
-1. Select **Sonarqube** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+1. In the **Add from the gallery** section, type **SonarQube** in the search box.
+1. Select **SonarQube** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD SSO for Sonarqube
+## Configure and test Azure AD SSO for SonarQube
-Configure and test Azure AD SSO with Sonarqube using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Sonarqube.
+Configure and test Azure AD SSO with SonarQube using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in SonarQube.
-To configure and test Azure AD SSO with Sonarqube, perform the following steps:
+To configure and test Azure AD SSO with SonarQube, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
-1. **[Configure Sonarqube SSO](#configure-sonarqube-sso)** - to configure the single sign-on settings on application side.
- 1. **[Create Sonarqube test user](#create-sonarqube-test-user)** - to have a counterpart of B.Simon in Sonarqube that is linked to the Azure AD representation of user.
+1. **[Configure SonarQube SSO](#configure-sonarqube-sso)** - to configure the single sign-on settings on the application side.
+ 1. **[Create SonarQube test user](#create-sonarqube-test-user)** - to have a counterpart of B.Simon in SonarQube that is linked to the Azure AD representation of the user.
1. **[Test SSO](#test-sso)** - to verify whether the configuration works.

## Configure Azure AD SSO

Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the Azure portal, on the **Sonarqube** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **SonarQube** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
Follow these steps to enable Azure AD SSO in the Azure portal.
![The Certificate download link](common/certificatebase64.png)
-1. On the **Set up Sonarqube** section, copy the appropriate URL(s) based on your requirement.
+1. On the **Set up SonarQube** section, copy the appropriate URL(s) based on your requirement.
![Copy configuration URLs](common/copy-configuration-urls.png)
In this section, you'll create a test user in the Azure portal called B.Simon.
### Assign the Azure AD test user
-In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Sonarqube.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to SonarQube.
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
-1. In the applications list, select **Sonarqube**.
+1. In the applications list, select **SonarQube**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**.
1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
1. In the **Add Assignment** dialog, click the **Assign** button.
-## Configure Sonarqube SSO
+## Configure SonarQube SSO
-1. Open a new web browser window and sign into your Sonarqube company site as an administrator.
+1. Open a new web browser window and sign in to your SonarQube company site as an administrator.
1. Click on **Administration > Configuration > Security**, go to the **SAML Plugin**, and perform the following steps.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
j. Click **Save**.
-### Create Sonarqube test user
+### Create SonarQube test user
-In this section, you create a user called B.Simon in Sonarqube. Work with [Sonarqube Client support team](https://sonarsource.com/company/contact/) to add the users in the Sonarqube platform. Users must be created and activated before you use single sign-on.
+In this section, you create a user called B.Simon in SonarQube. Work with the [SonarQube Client support team](https://sonarsource.com/company/contact/) to add the users to the SonarQube platform. Users must be created and activated before you use single sign-on.
## Test SSO

In this section, you test your Azure AD single sign-on configuration with the following options.
-* Click on **Test this application** in Azure portal. This will redirect to Sonarqube Sign-on URL where you can initiate the login flow.
+* Click on **Test this application** in Azure portal. This will redirect to the SonarQube Sign-on URL where you can initiate the login flow.
-* Go to Sonarqube Sign-on URL directly and initiate the login flow from there.
+* Go to the SonarQube Sign-on URL directly and initiate the login flow from there.
-* You can use Microsoft My Apps. When you click the Sonarqube tile in the My Apps, this will redirect to Sonarqube Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+* You can use Microsoft My Apps. When you click the SonarQube tile in My Apps, you will be redirected to the SonarQube Sign-on URL. For more information about My Apps, see [Introduction to My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
## Next steps
-* Once you configure the Sonarqube you can enforce session controls, which protects exfiltration and infiltration of your organizationΓÇÖs sensitive data in real time. Session controls extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
+* Once you configure SonarQube, you can enforce session controls, which protect against exfiltration and infiltration of your organization's sensitive data in real time. Session controls extend from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
aks Cluster Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cluster-configuration.md
By using `containerd` for AKS nodes, pod startup latency improves and node resou
`Containerd` works on every GA version of Kubernetes in AKS, in every upstream Kubernetes version above v1.19, and supports all Kubernetes and AKS features.

> [!IMPORTANT]
-> Clusters with Linux node pools created on Kubernetes v1.19 or greater default to `containerd` for its container runtime. Clusters with node pools on a earlier supported Kubernetes versions receive Docker for their container runtime. Linux node pools will be updated to `containerd` once the node pool Kubernetes version is updated to a version that supports `containerd`. You can still use Docker node pools and clusters on versions below 1.23, but Docker is no longer supported as of September 2022.
+> Clusters with Linux node pools created on Kubernetes v1.19 or greater default to `containerd` as the container runtime. Clusters with node pools on earlier supported Kubernetes versions receive Docker as their container runtime. Linux node pools will be updated to `containerd` once the node pool Kubernetes version is updated to a version that supports `containerd`.
>
-> Using `containerd` with Windows Server 2019 node pools is generally available, and will be the only container runtime option in Kubernetes 1.21 and greater. For more details, see [Add a Windows Server node pool with `containerd`][/learn/aks-add-np-containerd].
+> Using `containerd` with Windows Server 2019 node pools is generally available, and will be the only container runtime option in Kubernetes 1.21 and greater. You can still use Docker node pools and clusters on versions below 1.23, but Docker is no longer supported as of September 2022. For more details, see [Add a Windows Server node pool with `containerd`][aks-add-np-containerd].
> > It is highly recommended to test your workloads on AKS node pools with `containerd` prior to using clusters with a Kubernetes version that supports `containerd` for your node pools.
az aks show -n aks -g myResourceGroup --query "oidcIssuerProfile.issuerUrl" -ots
[az-feature-register]: /cli/azure/feature#az_feature_register
[az-feature-list]: /cli/azure/feature#az_feature_list
[az-provider-register]: /cli/azure/provider#az_provider_register
-[aks-add-np-containerd]: windows-container-cli.md#add-a-windows-server-node-pool-with-containerd
+[aks-add-np-containerd]: ./learn/quick-windows-container-deploy-cli.md#add-a-windows-server-node-pool-with-containerd
aks Integrations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/integrations.md
Azure Kubernetes Service (AKS) provides additional, supported functionality for
## Add-ons
-Add-ons are a fully-supported way to provide extra capabilities for your AKS cluster. Add-ons' installation, configuration, and lifecycle is managed by AKS. Use `az aks addon` to install an add-on or manage the add-ons for your cluster.
+Add-ons are a fully supported way to provide extra capabilities for your AKS cluster. The installation, configuration, and lifecycle of add-ons are managed by AKS. Use `az aks addon` to install an add-on or manage the add-ons for your cluster.
The following rules are used by AKS for applying updates to installed add-ons:
Cluster extensions build on top of certain Helm charts and provide an Azure Reso
Both extensions and add-ons are supported ways to add functionality to your AKS cluster. When you install an add-on, the functionality is added as part of the AKS resource provider in the Azure API. When you install an extension, the functionality is added as part of a separate resource provider in the Azure API.
+## GitHub Actions
+
+GitHub Actions helps you automate your software development workflows from within GitHub. For more details on using GitHub Actions with Azure, see [What is GitHub Actions for Azure][github-actions]. For an example of using GitHub Actions with an AKS cluster, see [Build, test, and deploy containers to Azure Kubernetes Service using GitHub Actions][github-actions-aks].
+
## Open source and third-party integrations

You can install many open source and third-party integrations on your AKS cluster, but these open-source and third-party integrations are not covered by the [AKS support policy][aks-support-policy].
The below table shows a few examples of open-source and third-party integrations
[keda]: keda-about.md
[web-app-routing]: web-app-routing.md
[maintenance-windows]: planned-maintenance.md
-[release-tracker]: release-tracker.md
+[release-tracker]: release-tracker.md
+[github-actions]: /azure/developer/github/github-actions
+[azure/aks-set-context]: https://github.com/Azure/aks-set-context
+[azure/k8s-set-context]: https://github.com/Azure/k8s-set-context
+[azure/k8s-bake]: https://github.com/Azure/k8s-bake
+[azure/k8s-create-secret]: https://github.com/Azure/k8s-create-secret
+[azure/k8s-deploy]: https://github.com/Azure/k8s-deploy
+[azure/k8s-lint]: https://github.com/Azure/k8s-lint
+[azure/setup-helm]: https://github.com/Azure/setup-helm
+[azure/setup-kubectl]: https://github.com/Azure/setup-kubectl
+[azure/k8s-artifact-substitute]: https://github.com/Azure/k8s-artifact-substitute
+[azure/aks-create-action]: https://github.com/Azure/aks-create-action
+[azure/aks-github-runner]: https://github.com/Azure/aks-github-runner
+[github-actions-aks]: kubernetes-action.md
aks Kubernetes Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/kubernetes-action.md
Title: Build, test, and deploy containers to Azure Kubernetes Service using GitHub Actions
description: Learn how to use GitHub Actions to deploy your container to Kubernetes
- Previously updated : 05/16/2022
+ Last updated : 08/02/2022

# GitHub Actions for deploying to Kubernetes service
-[GitHub Actions](https://docs.github.com/en/actions) gives you the flexibility to build an automated software development lifecycle workflow. You can use multiple Kubernetes actions to deploy to containers from Azure Container Registry to Azure Kubernetes Service with GitHub Actions.
+[GitHub Actions](https://docs.github.com/en/actions) gives you the flexibility to build an automated software development lifecycle workflow. You can use multiple Kubernetes actions to deploy to containers from Azure Container Registry to Azure Kubernetes Service with GitHub Actions.
-## Prerequisites
+## Prerequisites
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-- A GitHub account. If you don't have one, sign up for [free](https://github.com/join).
-- A working Kubernetes cluster
- - [Tutorial: Prepare an application for Azure Kubernetes Service](tutorial-kubernetes-prepare-app.md)
+- A GitHub account. If you don't have one, sign up for [free](https://github.com/join).
+- An existing AKS cluster with an attached Azure Container Registry (ACR).
-## Workflow file overview
+## Configure integration between Azure and your GitHub repository
-A workflow is defined by a YAML (.yml) file in the `/.github/workflows/` path in your repository. This definition contains the various steps and parameters that make up the workflow.
+When using GitHub Actions, you need to configure the integration between Azure and your GitHub repository. For more details on connecting your GitHub repository to Azure, see [Use GitHub Actions to connect to Azure][connect-gh-azure].
-For a workflow targeting AKS, the file has three sections:
+## Available actions
-|Section |Tasks |
-|||
-|**Authentication** | Generate deployment credentials. |
-|**Build** | Build & push the container image |
-|**Deploy** | 1. Set the target AKS cluster |
-| |2. Create a generic/docker-registry secret in Kubernetes cluster |
-||3. Deploy to the Kubernetes cluster|
-
-## Create a service principal
-
-# [Service principal](#tab/userlevel)
-
-You can create a [service principal](../active-directory/develop/app-objects-and-service-principals.md#service-principal-object) by using the [az ad sp create-for-rbac](/cli/azure/ad/sp#az-ad-sp-create-for-rbac) command in the [Azure CLI](/cli/azure/). You can run this command using [Azure Cloud Shell](https://shell.azure.com/) in the Azure portal or by selecting the **Try it** button.
-
-```azurecli-interactive
-az ad sp create-for-rbac --name "myApp" --role contributor --scopes /subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP> --sdk-auth
-```
-
-In the above command, replace the placeholders with your subscription ID, and resource group. The output is the role assignment credentials that provide access to your resource. The command should output a JSON object similar to this.
-
-```json
- {
- "clientId": "<GUID>",
- "clientSecret": "<GUID>",
- "subscriptionId": "<GUID>",
- "tenantId": "<GUID>",
- (...)
- }
-```
-Copy this JSON object, which you can use to authenticate from GitHub.
-
-# [Open ID Connect](#tab/openid)
-
-Open ID Connect is an authentication method that uses short-lived tokens. Setting up [Open ID Connect with GitHub Actions](https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/about-security-hardening-with-openid-connect) is more complex process that offers hardened security.
-
-1. If you do not have an existing application, register a [new Active Directory application and service principal that can access resources](../active-directory/develop/howto-create-service-principal-portal.md). Create the Active Directory application.
-
- ```azurecli-interactive
- az ad app create --display-name myApp
- ```
-
- This command will output JSON with an `appId` that is your `client-id`. Save the value to use as the `AZURE_CLIENT_ID` GitHub secret later.
-
- You'll use the `objectId` value when creating federated credentials with Graph API and reference it as the `APPLICATION-OBJECT-ID`.
-
-1. Create a service principal. Replace the `$appID` with the appId from your JSON output.
-
- This command generates JSON output with a different `objectId` and will be used in the next step. The new `objectId` is the `assignee-object-id`.
-
- Copy the `appOwnerTenantId` to use as a GitHub secret for `AZURE_TENANT_ID` later.
-
- ```azurecli-interactive
- az ad sp create --id $appId
- ```
-
-1. Create a new role assignment by subscription and object. By default, the role assignment will be tied to your default subscription. Replace `$subscriptionId` with your subscription ID, `$resourceGroupName` with your resource group name, and `$assigneeObjectId` with the generated `assignee-object-id`. Learn [how to manage Azure subscriptions with the Azure CLI](/cli/azure/manage-azure-subscriptions-azure-cli).
-
- ```azurecli-interactive
- az role assignment create --role contributor --subscription $subscriptionId --assignee-object-id $assigneeObjectId --scopes /subscriptions/$subscriptionId/resourceGroups/$resourceGroupName/providers/Microsoft.Web/sites/--assignee-principal-type ServicePrincipal
- ```
-
-1. Run the following command to [create a new federated identity credential](/graph/api/application-post-federatedidentitycredentials?view=graph-rest-beta&preserve-view=true) for your active directory application.
-
- * Replace `APPLICATION-OBJECT-ID` with the **objectId (generated while creating app)** for your Active Directory application.
- * Set a value for `CREDENTIAL-NAME` to reference later.
- * Set the `subject`. The value of this is defined by GitHub depending on your workflow:
- * Jobs in your GitHub Actions environment: `repo:< Organization/Repository >:environment:< Name >`
- * For Jobs not tied to an environment, include the ref path for branch/tag based on the ref path used for triggering the workflow: `repo:< Organization/Repository >:ref:< ref path>`. For example, `repo:n-username/ node_express:ref:refs/heads/my-branch` or `repo:n-username/ node_express:ref:refs/tags/my-tag`.
- * For workflows triggered by a pull request event: `repo:< Organization/Repository >:pull_request`.
-
- ```azurecli
- az rest --method POST --uri 'https://graph.microsoft.com/beta/applications/<APPLICATION-OBJECT-ID>/federatedIdentityCredentials' --body '{"name":"<CREDENTIAL-NAME>","issuer":"https://token.actions.githubusercontent.com","subject":"repo:organization/repository:ref:refs/heads/main","description":"Testing","audiences":["api://AzureADTokenExchange"]}'
- ```
-
-To learn how to create a Create an active directory application, service principal, and federated credentials in Azure portal, see [Connect GitHub and Azure](/azure/developer/github/connect-from-azure#use-the-azure-login-action-with-openid-connect).
----
-## Configure the GitHub secrets
-
-# [Service principal](#tab/userlevel)
-
-Follow the steps to configure the secrets:
-
-1. In [GitHub](https://github.com/), browse to your repository, select **Settings > Secrets > New repository secret**.
-
- :::image type="content" source="media/kubernetes-action/secrets.png" alt-text="Screenshot shows the Add a new secret link for a repository.":::
-
-2. Paste the contents of the above `az cli` command as the value of secret variable. For example, `AZURE_CREDENTIALS`.
+GitHub Actions helps you automate your software development workflows from within GitHub. For more details on using GitHub Actions with Azure, see [What is GitHub Actions for Azure][github-actions].
-3. Similarly, define the following additional secrets for the container registry credentials and set them in Docker login action.
+The following table shows the available GitHub Actions that integrate specifically with AKS.
- - REGISTRY_USERNAME
- - REGISTRY_PASSWORD
+| Name | Description | More details |
+||||
+|`azure/aks-set-context`|Set the target AKS cluster context, which will be used by other actions or to run any kubectl commands.|[azure/aks-set-context][azure/aks-set-context]|
+|`azure/k8s-set-context`|Set the target Kubernetes cluster context, which will be used by other actions or to run any kubectl commands.|[azure/k8s-set-context][azure/k8s-set-context]|
+|`azure/k8s-bake`|Bake manifest file to be used for deployments using Helm, kustomize or kompose.|[azure/k8s-bake][azure/k8s-bake]|
+|`azure/k8s-create-secret`|Create a generic secret or docker-registry secret in the Kubernetes cluster.|[azure/k8s-create-secret][azure/k8s-create-secret]|
+|`azure/k8s-deploy`|Deploy manifests to Kubernetes clusters.|[azure/k8s-deploy][azure/k8s-deploy]|
+|`azure/k8s-lint`|Validate/lint your manifest files.|[azure/k8s-lint][azure/k8s-lint]|
+|`azure/setup-helm`|Install a specific version of Helm binary on the runner.|[azure/setup-helm][azure/setup-helm]|
+|`azure/setup-kubectl`|Install a specific version of kubectl on the runner.|[azure/setup-kubectl][azure/setup-kubectl]|
+|`azure/k8s-artifact-substitute`|Update the tag or digest for container images.|[azure/k8s-artifact-substitute][azure/k8s-artifact-substitute]|
+|`azure/aks-create-action`|Create an AKS cluster using Terraform.|[azure/aks-create-action][azure/aks-create-action]|
+|`azure/aks-github-runner`|Set up self-hosted agents for GitHub Actions.|[azure/aks-github-runner][azure/aks-github-runner]|
-4. You will see the secrets as shown below once defined.
+In addition, the example in the next section uses the [azure/acr-build][azure/acr-build] action.
- :::image type="content" source="media/kubernetes-action/kubernetes-secrets.png" alt-text="Screenshot shows existing secrets for a repository.":::
+## Example of using GitHub Actions with AKS
-# [OpenID Connect](#tab/openid)
+As an example, you can use GitHub Actions to deploy an application to your AKS cluster every time a change is pushed to your GitHub repository. This example uses the [Azure Vote][gh-azure-vote] application.
-You need to provide your application's **Client ID**, **Tenant ID**, and **Subscription ID** to the login action. These values can either be provided directly in the workflow or can be stored in GitHub secrets and referenced in your workflow. Saving the values as GitHub secrets is the more secure option.
-
-1. Open your GitHub repository and go to **Settings**.
+> [!NOTE]
+> This example uses a service principal for authentication with your ACR and AKS cluster. Alternatively, you can configure Open ID Connect (OIDC) and update the `azure/login` action to use OIDC. For more details, see [Set up Azure Login with OpenID Connect authentication][oidc-auth].
-1. Select **Settings > Secrets > New secret**.
+### Fork and update the repository
-1. Create secrets for `AZURE_CLIENT_ID`, `AZURE_TENANT_ID`, and `AZURE_SUBSCRIPTION_ID`. Use these values from your Active Directory application for your GitHub secrets:
+Navigate to the [Azure Vote][gh-azure-vote] repository and click the **Fork** button.
- |GitHub Secret | Active Directory Application |
- |||
- |AZURE_CLIENT_ID | Application (client) ID |
- |AZURE_TENANT_ID | Directory (tenant) ID |
- |AZURE_SUBSCRIPTION_ID | Subscription ID |
+Once the repository is forked, update `azure-vote-all-in-one-redis.yaml` to use your ACR for the `azure-vote-front` image:
-1. Similarly, define the following additional secrets for the container registry credentials and set them in Docker login action.
+```yaml
+...
+ containers:
+ - name: azure-vote-front
+ image: <registryName>.azurecr.io/azuredocs/azure-vote-front:v1
+...
+```
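If you prefer to script this substitution instead of editing the file by hand, a small sketch using GNU `sed` can swap in your registry name (here `myregistry` stands in for your own ACR name, which is an assumption for illustration):

```shell
# Replace the <registryName> placeholder with your ACR name (example: myregistry).
# Note: the -i (in-place) flag as used here assumes GNU sed.
sed -i 's/<registryName>/myregistry/g' azure-vote-all-in-one-redis.yaml

# Verify the image reference now points at your registry.
grep 'image:' azure-vote-all-in-one-redis.yaml
```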
- - REGISTRY_USERNAME
- - REGISTRY_PASSWORD
+> [!IMPORTANT]
+> The update to `azure-vote-all-in-one-redis.yaml` must be committed to your repository before you can complete the later steps.
-
+### Create secrets
-## Build a container image and deploy to Azure Kubernetes Service cluster
+Create a service principal to access your resource group with the `Contributor` role using the following command, replacing:
-The build and push of the container images is done using `azure/docker-login@v1` action.
+- `<SUBSCRIPTION_ID>` with the subscription ID of your Azure account
+- `<RESOURCE_GROUP>` with the name of the resource group where your ACR is located
+```azurecli-interactive
+az ad sp create-for-rbac \
+ --name "ghActionAzureVote" \
+ --scope /subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP> \
+ --role Contributor \
+ --sdk-auth
+```
-```yml
-env:
- REGISTRY_NAME: {registry-name}
- CLUSTER_NAME: {cluster-name}
- CLUSTER_RESOURCE_GROUP: {resource-group-name}
- NAMESPACE: {namespace-name}
- APP_NAME: {app-name}
-
-jobs:
- build:
- runs-on: ubuntu-latest
- steps:
- - uses: actions/checkout@main
-
- # Connect to Azure Container Registry (ACR)
- - uses: azure/docker-login@v1
- with:
- login-server: ${{ env.REGISTRY_NAME }}.azurecr.io
- username: ${{ secrets.REGISTRY_USERNAME }}
- password: ${{ secrets.REGISTRY_PASSWORD }}
-
- # Container build and push to a Azure Container Registry (ACR)
- - run: |
- docker build . -t ${{ env.REGISTRY_NAME }}.azurecr.io/${{ env.APP_NAME }}:${{ github.sha }}
- docker push ${{ env.REGISTRY_NAME }}.azurecr.io/${{ env.APP_NAME }}:${{ github.sha }}
- working-directory: ./<path-to-Dockerfile-directory>
+The following shows example output from the above command.
+```output
+{
+ "clientId": <clientId>,
+ "clientSecret": <clientSecret>,
+ "subscriptionId": <subscriptionId>,
+ "tenantId": <tenantId>,
+ ...
+}
```
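If you save that JSON output to a file, a short script can print the individual field values you need for the per-field secrets below. This is a sketch: the filename `creds.json` is an assumption, and Python 3 is assumed to be available on your PATH.

```shell
# Print the service principal JSON fields that map to individual GitHub secrets:
# clientId -> service_principal, clientSecret -> service_principal_password,
# subscriptionId -> subscription, tenantId -> tenant.
# creds.json is assumed to hold the az ad sp create-for-rbac output.
python3 - <<'EOF'
import json

with open("creds.json") as f:
    creds = json.load(f)

for field in ("clientId", "clientSecret", "subscriptionId", "tenantId"):
    print(f"{field}: {creds[field]}")
EOF
```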
-### Deploy to Azure Kubernetes Service cluster
+In your GitHub repository, create the following secrets for your action to use. To create a secret:
+1. Navigate to the repository's settings, and click *Secrets* then *Actions*.
+1. For each secret, click *New Repository Secret* and enter the name and value of the secret.
-To deploy a container image to AKS, you will need to use the `azure/k8s-deploy@v1` action. This action has five parameters:
+For more details on creating secrets, see [Encrypted Secrets][github-actions-secrets].
-| **Parameter** | **Explanation** |
+|Secret name |Secret value |
|||
-| **namespace** | (Optional) Choose the target Kubernetes namespace. If the namespace is not provided, the commands will run in the default namespace |
-| **manifests** | (Required) Path to the manifest files, that will be used for deployment |
-| **images** | (Optional) Fully qualified resource URL of the image(s) to be used for substitutions on the manifest files |
-| **imagepullsecrets** | (Optional) Name of a docker-registry secret that has already been set up within the cluster. Each of these secret names is added under imagePullSecrets field for the workloads found in the input manifest files |
-| **kubectl-version** | (Optional) Installs a specific version of kubectl binary |
-
-> [!NOTE]
-> The manifest files should be created manually by you. Currently there are no tools that will generate such files in an automated way, for more information see [this sample repository with example manifest files](https://github.com/MicrosoftDocs/mslearn-aks-deploy-container-app/tree/master/kubernetes).
-
-Before you can deploy to AKS, you'll need to set target Kubernetes namespace and create an image pull secret. See [Pull images from an Azure container registry to a Kubernetes cluster](../container-registry/container-registry-auth-kubernetes.md), to learn more about how pulling images works.
-
-```yaml
- # Create namespace if doesn't exist
- - run: |
- kubectl create namespace ${{ env.NAMESPACE }} --dry-run=client -o json | kubectl apply -f -
-
- # Create image pull secret for ACR
- - uses: azure/k8s-create-secret@v1
- with:
- container-registry-url: ${{ env.REGISTRY_NAME }}.azurecr.io
- container-registry-username: ${{ secrets.REGISTRY_USERNAME }}
- container-registry-password: ${{ secrets.REGISTRY_PASSWORD }}
- secret-name: ${{ env.SECRET }}
- namespace: ${{ env.NAMESPACE }}
- arguments: --force true
-```
+|AZURE_CREDENTIALS|The entire JSON output from the `az ad sp create-for-rbac` command|
+|service_principal | The value of `<clientId>`|
+|service_principal_password| The value of `<clientSecret>`|
+|subscription| The value of `<subscriptionId>`|
+|tenant|The value of `<tenantId>`|
+|registry|The name of your registry|
+|repository|azuredocs|
+|resource_group|The name of your resource group|
+|cluster_name|The name of your cluster|
-Complete your deployment with the `azure/k8s-deploy@v1` action. Replace the environment variables with values for your application.
+### Create actions file
-# [Service principal](#tab/userlevel)
+Create a `.github/workflows/main.yml` file in your repository with the following contents:
```yaml
-on: [push]
-
-# Environment variables available to all jobs and steps in this workflow
-env:
- REGISTRY_NAME: {registry-name}
- CLUSTER_NAME: {cluster-name}
- CLUSTER_RESOURCE_GROUP: {resource-group-name}
- NAMESPACE: {namespace-name}
- SECRET: {secret-name}
- APP_NAME: {app-name}
-
+name: build_deploy_aks
+on:
+ push:
+ paths:
+ - "azure-vote/**"
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
- - uses: actions/checkout@main
-
- # Connect to Azure Container Registry (ACR)
- - uses: azure/docker-login@v1
- with:
- login-server: ${{ env.REGISTRY_NAME }}.azurecr.io
- username: ${{ secrets.REGISTRY_USERNAME }}
- password: ${{ secrets.REGISTRY_PASSWORD }}
-
- # Container build and push to a Azure Container Registry (ACR)
- - run: |
- docker build . -t ${{ env.REGISTRY_NAME }}.azurecr.io/${{ env.APP_NAME }}:${{ github.sha }}
- docker push ${{ env.REGISTRY_NAME }}.azurecr.io/${{ env.APP_NAME }}:${{ github.sha }}
- working-directory: ./<path-to-Dockerfile-directory>
-
- # Set the target Azure Kubernetes Service (AKS) cluster.
- - uses: azure/aks-set-context@v1
- with:
- creds: '${{ secrets.AZURE_CREDENTIALS }}'
- cluster-name: ${{ env.CLUSTER_NAME }}
- resource-group: ${{ env.CLUSTER_RESOURCE_GROUP }}
-
- # Create namespace if doesn't exist
- - run: |
- kubectl create namespace ${{ env.NAMESPACE }} --dry-run=client -o json | kubectl apply -f -
-
- # Create image pull secret for ACR
- - uses: azure/k8s-create-secret@v1
- with:
- container-registry-url: ${{ env.REGISTRY_NAME }}.azurecr.io
- container-registry-username: ${{ secrets.REGISTRY_USERNAME }}
- container-registry-password: ${{ secrets.REGISTRY_PASSWORD }}
- secret-name: ${{ env.SECRET }}
- namespace: ${{ env.NAMESPACE }}
- arguments: --force true
-
- # Deploy app to AKS
- - uses: azure/k8s-deploy@v1
- with:
- manifests: |
- ${{ github.workspace }}/manifests/deployment.yaml
- ${{ github.workspace }}/manifests/service.yaml
- images: |
- ${{ env.REGISTRY_NAME }}.azurecr.io/${{ env.APP_NAME }}:${{ github.sha }}
- imagepullsecrets: |
- ${{ env.SECRET }}
- namespace: ${{ env.NAMESPACE }}
+ - name: Checkout source code
+ uses: actions/checkout@v3
+ - name: ACR build
+ id: build-push-acr
+ uses: azure/acr-build@v1
+ with:
+ service_principal: ${{ secrets.service_principal }}
+ service_principal_password: ${{ secrets.service_principal_password }}
+ tenant: ${{ secrets.tenant }}
+ registry: ${{ secrets.registry }}
+ repository: ${{ secrets.repository }}
+ image: azure-vote-front
+ folder: azure-vote
+ branch: master
+ tag: ${{ github.sha }}
+ - name: Azure login
+ id: login
+ uses: azure/login@v1.4.3
+ with:
+ creds: ${{ secrets.AZURE_CREDENTIALS }}
+ - name: Set AKS context
+ id: set-context
+ uses: azure/aks-set-context@v3
+ with:
+ resource-group: '${{ secrets.resource_group }}'
+ cluster-name: '${{ secrets.cluster_name }}'
+ - name: Setup kubectl
+ id: install-kubectl
+ uses: azure/setup-kubectl@v3
+ - name: Deploy to AKS
+ id: deploy-aks
+ uses: Azure/k8s-deploy@v4
+ with:
+ namespace: 'default'
+ manifests: |
+ azure-vote-all-in-one-redis.yaml
+ images: '${{ secrets.registry }}.azurecr.io/${{ secrets.repository }}/azure-vote-front:${{ github.sha }}'
+ pull: false
```
-# [Open ID Connect](#tab/openid)
+> [!IMPORTANT]
+> The `.github/workflows/main.yml` file must be committed to your repository before you can run the action.
-The Azure Kubernetes Service set context action ([azure/aks-set-context](https://github.com/Azure/aks-set-context)) can be used to set cluster context before other actions like [k8s-deploy](https://github.com/Azure/k8s-deploy). For Open ID Connect, you'll use the Azure Login action before set context.
-
-```yaml
+The `on` section contains the event that triggers the action. In the above file, the action is triggered when a change is pushed to the `azure-vote` directory.
-on: [push]
-
-# Environment variables available to all jobs and steps in this workflow
-env:
- REGISTRY_NAME: {registry-name}
- CLUSTER_NAME: {cluster-name}
- CLUSTER_RESOURCE_GROUP: {resource-group-name}
- NAMESPACE: {namespace-name}
- SECRET: {secret-name}
- APP_NAME: {app-name}
-
-jobs:
- build:
- runs-on: ubuntu-latest
- steps:
- - uses: actions/checkout@main
-
- # Connect to Azure Container Registry (ACR)
- - uses: azure/docker-login@v1
- with:
- login-server: ${{ env.REGISTRY_NAME }}.azurecr.io
- username: ${{ secrets.REGISTRY_USERNAME }}
- password: ${{ secrets.REGISTRY_PASSWORD }}
-
- # Container build and push to a Azure Container Registry (ACR)
- - run: |
- docker build . -t ${{ env.REGISTRY_NAME }}.azurecr.io/${{ env.APP_NAME }}:${{ github.sha }}
- docker push ${{ env.REGISTRY_NAME }}.azurecr.io/${{ env.APP_NAME }}:${{ github.sha }}
- working-directory: ./<path-to-Dockerfile-directory>
-
- - uses: azure/login@v1
- with:
- client-id: ${{ secrets.AZURE_CLIENT_ID }}
- tenant-id: ${{ secrets.AZURE_TENANT_ID }}
- subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
-
- # Set the target Azure Kubernetes Service (AKS) cluster.
- - uses: azure/aks-set-context@v2.0
- with:
- cluster-name: ${{ env.CLUSTER_NAME }}
- resource-group: ${{ env.CLUSTER_RESOURCE_GROUP }}
-
- # Create namespace if doesn't exist
- - run: |
- kubectl create namespace ${{ env.NAMESPACE }} --dry-run=client -o json | kubectl apply -f -
-
- # Create image pull secret for ACR
- - uses: azure/k8s-create-secret@v1
- with:
- container-registry-url: ${{ env.REGISTRY_NAME }}.azurecr.io
- container-registry-username: ${{ secrets.REGISTRY_USERNAME }}
- container-registry-password: ${{ secrets.REGISTRY_PASSWORD }}
- secret-name: ${{ env.SECRET }}
- namespace: ${{ env.NAMESPACE }}
- arguments: --force true
-
- # Deploy app to AKS
- - uses: azure/k8s-deploy@v1
- with:
- manifests: |
- ${{ github.workspace }}/manifests/deployment.yaml
- ${{ github.workspace }}/manifests/service.yaml
- images: |
- ${{ env.REGISTRY_NAME }}.azurecr.io/${{ env.APP_NAME }}:${{ github.sha }}
- imagepullsecrets: |
- ${{ env.SECRET }}
- namespace: ${{ env.NAMESPACE }}
-```
-
+In the above file, the `steps` section contains each distinct action, which is executed in order:
+1. *Checkout source code* uses the [GitHub Actions Checkout Action][actions/checkout] to clone the repository.
+1. *ACR build* uses the [Azure Container Registry Build Action][azure/acr-build] to build the image and upload it to your registry.
+1. *Azure login* uses the [Azure Login Action][azure/login] to sign in to your Azure account.
+1. *Set AKS context* uses the [Azure AKS Set Context Action][azure/aks-set-context] to set the context for your AKS cluster.
+1. *Setup kubectl* uses the [Azure AKS Setup Kubectl Action][azure/setup-kubectl] to install kubectl on your runner.
+1. *Deploy to AKS* uses the [Azure Kubernetes Deploy Action][azure/k8s-deploy] to deploy the application to your Kubernetes cluster.
+Confirm that the action is working by updating `azure-vote/azure-vote/config_file.cfg` to the following and pushing the changes to your repository:
-## Clean up resources
+```output
+# UI Configurations
+TITLE = 'Azure Voting App'
+VOTE1VALUE = 'Fish'
+VOTE2VALUE = 'Dogs'
+SHOWHOST = 'false'
+```
-When your Kubernetes cluster, container registry, and repository are no longer needed, clean up the resources you deployed by deleting the resource group and your GitHub repository.
+In your repository, select *Actions* and confirm that a workflow is running. Once it completes, confirm that the workflow shows a green checkmark and that the updated application is deployed to your cluster.
## Next steps
-> [!div class="nextstepaction"]
-> [Learn about Azure Kubernetes Service](/azure/architecture/reference-architectures/containers/aks-start-here)
+Review the following starter workflows for AKS. For more details on using starter workflows, see [Using starter workflows][use-starter-workflows].
+
+- [Azure Kubernetes Service (Basic)][aks-swf-basic]
+- [Azure Kubernetes Service Helm][aks-swf-helm]
+- [Azure Kubernetes Service Kustomize][aks-swf-kustomize]
+- [Azure Kubernetes Service Kompose][aks-swf-kompose]
> [!div class="nextstepaction"]
> [Learn how to create multiple pipelines on GitHub Actions with AKS](/learn/modules/aks-deployment-pipeline-github-actions)
-### More Kubernetes GitHub Actions
+> [!div class="nextstepaction"]
+> [Learn about Azure Kubernetes Service](/azure/architecture/reference-architectures/containers/aks-start-here)
-* [Kubectl tool installer](https://github.com/Azure/setup-kubectl) (`azure/setup-kubectl`): Installs a specific version of kubectl on the runner.
-* [Kubernetes set context](https://github.com/Azure/k8s-set-context) (`azure/k8s-set-context`): Set the target Kubernetes cluster context which will be used by other actions or run any kubectl commands.
-* [AKS set context](https://github.com/Azure/aks-set-context) (`azure/aks-set-context`): Set the target Azure Kubernetes Service cluster context.
-* [Kubernetes create secret](https://github.com/Azure/k8s-create-secret) (`azure/k8s-create-secret`): Create a generic secret or docker-registry secret in the Kubernetes cluster.
-* [Kubernetes deploy](https://github.com/Azure/k8s-deploy) (`azure/k8s-deploy`): Bake and deploy manifests to Kubernetes clusters.
-* [Setup Helm](https://github.com/Azure/setup-helm) (`azure/setup-helm`): Install a specific version of Helm binary on the runner.
-* [Kubernetes bake](https://github.com/Azure/k8s-bake) (`azure/k8s-bake`): Bake manifest file to be used for deployments using helm2, kustomize or kompose.
-* [Kubernetes lint](https://github.com/azure/k8s-lint) (`azure/k8s-lint`): Validate/lint your manifest files.
+[oidc-auth]: /azure/developer/github/connect-from-azure?tabs=azure-cli%2Clinux#use-the-azure-login-action-with-openid-connect
+[aks-swf-basic]: https://github.com/actions/starter-workflows/blob/main/deployments/azure-kubernetes-service.yml
+[aks-swf-helm]: https://github.com/actions/starter-workflows/blob/main/deployments/azure-kubernetes-service-helm.yml
+[aks-swf-kustomize]: https://github.com/actions/starter-workflows/blob/main/deployments/azure-kubernetes-service-kustomize.yml
+[aks-swf-kompose]: https://github.com/actions/starter-workflows/blob/main/deployments/azure-kubernetes-service-kompose.yml
+[use-starter-workflows]: https://docs.github.com/actions/using-workflows/using-starter-workflows#using-starter-workflows
+[github-actions]: /azure/developer/github/github-actions
+[github-actions-secrets]: https://docs.github.com/actions/security-guides/encrypted-secrets#creating-encrypted-secrets-for-a-repository
+[azure/aks-set-context]: https://github.com/Azure/aks-set-context
+[azure/k8s-set-context]: https://github.com/Azure/k8s-set-context
+[azure/k8s-bake]: https://github.com/Azure/k8s-bake
+[azure/k8s-create-secret]: https://github.com/Azure/k8s-create-secret
+[azure/k8s-deploy]: https://github.com/Azure/k8s-deploy
+[azure/k8s-lint]: https://github.com/Azure/k8s-lint
+[azure/setup-helm]: https://github.com/Azure/setup-helm
+[azure/setup-kubectl]: https://github.com/Azure/setup-kubectl
+[azure/k8s-artifact-substitute]: https://github.com/Azure/k8s-artifact-substitute
+[azure/aks-create-action]: https://github.com/Azure/aks-create-action
+[azure/aks-github-runner]: https://github.com/Azure/aks-github-runner
+[azure/acr-build]: https://github.com/Azure/acr-build
+[azure/login]: https://github.com/Azure/login
+[connect-gh-azure]: /azure/developer/github/connect-from-azure?tabs=azure-cli%2Clinux
+[gh-azure-vote]: https://github.com/Azure-Samples/azure-voting-app-redis
+[actions/checkout]: https://github.com/actions/checkout
api-management Api Management Howto Aad B2c https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-aad-b2c.md
Azure Active Directory B2C is a cloud identity management solution for consumer-
In this tutorial, you'll learn the configuration required in your API Management service to integrate with Azure Active Directory B2C. As noted later in this article, if you are using the deprecated legacy developer portal, some steps will differ.

> [!IMPORTANT]
-> * This article has been updated with steps to configure an Azure AD B2C app using the Microsoft Authentication Library ([MSAL](../active-directory/develop/msal-overview.md)) v2.0.
+> * This article has been updated with steps to configure an Azure AD B2C app using the Microsoft Authentication Library ([MSAL](../active-directory/develop/msal-overview.md)).
> * If you previously configured an Azure AD B2C app for user sign-in using the Azure AD Authentication Library (ADAL), we recommend that you [migrate to MSAL](#migrate-to-msal). For information about enabling access to the developer portal by using classic Azure Active Directory, see [How to authorize developer accounts using Azure Active Directory](api-management-howto-aad.md).
app-service Configure Language Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-java.md
Next, determine if the data source should be available to one application or to
<Resource name="jdbc/dbconnection" type="javax.sql.DataSource"
- url="${dbuser}"
+ url="${connURL}"
driverClassName="<insert your driver class name>"
- username="${dbpassword}"
- password="${connURL}"
+ username="${dbuser}"
+ password="${dbpassword}"
/> </Context> ```
application-gateway Proxy Buffers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/proxy-buffers.md
Previously updated : 12/01/2021 Last updated : 08/03/2022 #Customer intent: As a user, I want to know how can I disable/enable proxy buffers.
For reference, visit [Azure SDK for .NET](/dotnet/api/microsoft.azure.management
## Limitations

- API version 2020-01-01 or later should be used to configure buffers.
- Currently, these changes are supported only through ARM templates.
-- Request and Response Buffers cannot be disabled for WAF v2 SKU.
+- Request and Response Buffers can be disabled for the WAF v2 SKU only if request body checking is disabled.
availability-zones Migrate App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/availability-zones/migrate-app-service.md
description: Learn how to migrate Azure App Service to availability zone support
Previously updated : 06/07/2022 Last updated : 08/03/2022
Availability zone support is a property of the App Service plan. The following a
  - France Central
  - UK South
  - Japan East
+  - East Asia
  - Southeast Asia
  - Australia East
- Availability zones can only be specified when creating a **new** App Service plan. A pre-existing App Service plan can't be converted to use availability zones.
There's no additional cost associated with enabling availability zones. Pricing
> [Overview of autoscale in Microsoft Azure](../azure-monitor/autoscale/autoscale-overview.md)

> [!div class="nextstepaction"]
-> [Manage disaster recovery](../app-service/manage-disaster-recovery.md)
+> [Manage disaster recovery](../app-service/manage-disaster-recovery.md)
azure-app-configuration Howto Disable Public Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-disable-public-access.md
Previously updated : 05/25/2022 Last updated : 07/12/2022 # Disable public access in Azure App Configuration
-In this article, you'll learn how to disable public access for your Azure App Configuration store. Setting up private access can offer a better security for your configuration store.
+In this article, you'll learn how to disable public access for your Azure App Configuration store. Setting up private access can offer better security for your configuration store.
## Prerequisites
az appconfig update --name <name-of-the-appconfig-store> --enable-public-network
## Next steps

> [!div class="nextstepaction"]
->[Using private endpoints for Azure App Configuration](./concept-private-endpoint.md)
+> [Use private endpoints for Azure App Configuration](./concept-private-endpoint.md)
+
+> [!div class="nextstepaction"]
+> [Set up private access to an Azure App Configuration store](howto-set-up-private-access.md)
azure-app-configuration Howto Set Up Private Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-set-up-private-access.md
+
+ Title: How to set up private access to an Azure App Configuration store
+description: How to set up private access to an Azure App Configuration store in the Azure portal and in the CLI.
++++ Last updated : 07/12/2022+++
+# Set up private access in Azure App Configuration
+
+In this article, you'll learn how to set up private access for your Azure App Configuration store, by creating a [private endpoint](/azure/private-link/private-endpoint-overview) with Azure Private Link. Private endpoints allow access to your App Configuration store using a private IP address from a virtual network.
+
+## Prerequisites
+
+* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/dotnet).
+* We assume you already have an App Configuration store. If you want to create one, go to [create an App Configuration store](quickstart-aspnet-core-app.md).
+
+## Sign in to Azure
+
+You'll need to sign in to Azure first to access the App Configuration service.
+
+### [Portal](#tab/azure-portal)
+
+Sign in to the Azure portal at [https://portal.azure.com/](https://portal.azure.com/) with your Azure account.
+
+### [Azure CLI](#tab/azure-cli)
+
+Sign in to Azure using the `az login` command in the [Azure CLI](/cli/azure/install-azure-cli).
+
+```azurecli-interactive
+az login
+```
+
+This command opens your default web browser and loads an Azure sign-in page. If the browser fails to open, use the device code flow with `az login --use-device-code`. For more sign-in options, go to [sign in with the Azure CLI](/cli/azure/authenticate-azure-cli).
+++
+## Create a private endpoint
+
+### [Portal](#tab/azure-portal)
+
+1. In your App Configuration store, under **Settings**, select **Networking**.
+
+1. Select the **Private Access** tab and then **Create** to start setting up a new private endpoint.
+
+ :::image type="content" source="./media/private-endpoint/create-private-endpoint.png" alt-text="Screenshot of the Azure portal, select create a private endpoint.":::
+
+1. Fill out the form with the following information:
+
+ | Parameter | Description | Example |
+ ||--|-|
+ | Subscription | Select an Azure subscription. Your private endpoint must be in the same subscription as your virtual network. You'll select a virtual network later in this how-to guide. | *MyAzureSubscription* |
+ | Resource group | Select a resource group or create a new one. | *MyResourceGroup* |
+ | Name | Enter a name for the new private endpoint for your App Configuration store. | *MyPrivateEndpoint* |
+ | Network Interface Name | This field is completed automatically. Optionally edit the name of the network interface. | *MyPrivateEndpoint-nic* |
+ | Region | Select a region. Your private endpoint must be in the same region as your virtual network. | *Central US* |
+
+ :::image type="content" source="./media/private-endpoint/basics.png" alt-text="Screenshot of the Azure portal, create a private endpoint, basics tab.":::
+
+1. Select **Next : Resource >**. Private Link offers options to create private endpoints for different types of Azure resources, such as SQL servers, Azure storage accounts or App Configuration stores. The current App Configuration store is automatically filled in the **Resource** field as that is the resource the private endpoint is connecting to.
+
+ 1. The resource type **Microsoft.AppConfiguration/configurationStores** and the target subresource **configurationStores** indicate that you're creating an endpoint for an App Configuration store.
+
+ 1. The name of your configuration store is listed under **Resource**.
+
+ :::image type="content" source="./media/private-endpoint/resource.png" alt-text="Screenshot of the Azure portal, create a private endpoint, resource tab.":::
+
+1. Select **Next : Virtual Network >**.
+
+ 1. Select an existing **Virtual network** to deploy the private endpoint to. If you don't have a virtual network, [create a virtual network](../private-link/create-private-endpoint-portal.md#create-a-virtual-network-and-bastion-host).
+
+ 1. Select a **Subnet** from the list.
+
+ 1. Leave the box **Enable network policies for all private endpoints in this subnet** checked.
+
+ 1. Under **Private IP configuration**, select the option to allocate IP addresses dynamically. For more information, refer to [Private IP addresses](/azure/virtual-network/ip-services/private-ip-addresses#allocation-method).
+
+ 1. Optionally, you can select or create an **Application security group**. Application security groups allow you to group virtual machines and define network security policies based on those groups.
+
+ :::image type="content" source="./media/private-endpoint/virtual-network.png" alt-text="Screenshot of the Azure portal, create a private endpoint, virtual network tab.":::
+
+1. Select **Next : DNS >** to configure a DNS record. If you don't want to make changes to the default settings, you can move forward to the next tab.
+
+ 1. For **Integrate with private DNS zone**, select **Yes** to integrate your private endpoint with a private DNS zone. You may also use your own DNS servers or create DNS records using the host files on your virtual machines.
+
+ 1. A subscription and resource group for your private DNS zone are preselected. You can change them optionally.
+
+ :::image type="content" source="./media/private-endpoint/dns.png" alt-text="Screenshot of the Azure portal, create a private endpoint, DNS tab.":::
+
+ To learn more about DNS configuration, go to [Name resolution for resources in Azure virtual networks](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server) and [DNS configuration for Private Endpoints](../private-link/private-endpoint-overview.md#dns-configuration).
+
+1. Select **Next : Tags >** and optionally create tags. Tags are name/value pairs that enable you to categorize resources and view consolidated billing by applying the same tag to multiple resources and resource groups.
+
+ :::image type="content" source="./media/private-endpoint/tags.png" alt-text="Screenshot of the Azure portal, create a private endpoint, tags tab.":::
+
+1. Select **Next : Review + create >** to review information about your App Configuration store, private endpoint, virtual network and DNS. You can also select **Download a template for automation** to reuse JSON data from this form later.
+
+ :::image type="content" source="./media/private-endpoint/review.png" alt-text="Screenshot of the Azure portal, create a private endpoint, review tab.":::
+
+1. Select **Create**.
+
+Once deployment is complete, you'll get a notification that your endpoint has been created. If it's auto-approved, you can start accessing your App Configuration store privately; otherwise, you'll have to wait for approval.
+
+### [Azure CLI](#tab/azure-cli)
+
+1. To set up your private endpoint, you need a virtual network. If you don't have one yet, create a virtual network with [az network vnet create](/cli/azure/network/vnet#az-network-vnet-create). Replace the placeholder texts `<vnet-name>`, `<rg-name>`, and `<subnet-name>` with a name for your new virtual network, a resource group name, and a subnet name.
+
+ ```azurecli-interactive
+ az network vnet create --name <vnet-name> --resource-group <rg-name> --subnet-name <subnet-name> --location <vnet-location>
+ ```
+
+ > [!div class="mx-tdBreakAll"]
+ > | Placeholder | Description | Example |
+ > ||-|-|
+ > | `<vnet-name>` | Enter a name for your new virtual network. A virtual network enables Azure resources to communicate privately with each other, and with the internet. | `MyVNet` |
+ > | `<rg-name>` | Enter the name of an existing resource group for your virtual network. | `MyResourceGroup` |
+ > | `<subnet-name>` | Enter a name for your new subnet. A subnet is a network inside a network. This is where the private IP address is assigned. | `MySubnet` |
+ > | `<vnet-location>`| Enter an Azure region. Your virtual network must be in the same region as your private endpoint. | `centralus` |
+
+1. Run the command [az appconfig show](/cli/azure/appconfig/#az-appconfig-show) to retrieve the properties of the App Configuration store for which you want to set up private access. Replace the placeholder `<name>` with the name of the App Configuration store.
+
+ ```azurecli-interactive
+ az appconfig show --name <name>
+ ```
+
+ This command generates an output with information about your App Configuration store. Note down the *id* value. For instance: */subscriptions/123/resourceGroups/MyResourceGroup/providers/Microsoft.AppConfiguration/configurationStores/MyAppConfigStore*.
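    Alternatively, to capture just the *id* value, you can add the CLI's built-in JMESPath `--query` parameter (a convenience sketch that uses the same placeholder):

    ```azurecli-interactive
    az appconfig show --name <name> --query id --output tsv
    ```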
+
+1. Run the command [az network private-endpoint create](/cli/azure/network/private-endpoint#az-network-private-endpoint-create) to create a private endpoint for your App Configuration store. Replace the placeholder texts `<resource-group>`, `<private-endpoint-name>`, `<vnet-name>`, `<private-connection-resource-id>`, `<connection-name>`, and `<location>` with your own information.
+
+ ```azurecli-interactive
+ az network private-endpoint create --resource-group <resource-group> --name <private-endpoint-name> --vnet-name <vnet-name> --subnet Default --private-connection-resource-id <private-connection-resource-id> --connection-name <connection-name> --location <location> --group-id configurationStores
+ ```
+
+ > [!div class="mx-tdBreakAll"]
+ > | Placeholder | Description | Example |
+ > ||-||
+ > | `<resource-group>` | Enter the name of an existing resource group for your private endpoint. | `MyResourceGroup` |
+ > | `<private-endpoint-name>` | Enter a name for your new private endpoint. | `MyPrivateEndpoint` |
+ > | `<vnet-name>` | Enter the name of an existing vnet. | `Myvnet` |
+ > | `<private-connection-resource-id>` | Enter your App Configuration store's private connection resource ID. This is the ID you saved from the output of the previous step. | `/subscriptions/123/resourceGroups/MyResourceGroup/providers/Microsoft.AppConfiguration/configurationStores/MyAppConfigStore`|
+ > | `<connection-name>` | Enter a connection name. |`MyConnection` |
+ > | `<location>` | Enter an Azure region. Your private endpoint must be in the same region as your virtual network. |`centralus` |
+++
+## Manage private link connection
+
+### [Portal](#tab/azure-portal)
+
+Go to **Networking** > **Private Access** in your App Configuration store to access the private endpoints linked to your App Configuration store.
+
+1. Check the connection state of your private link connection. When you create a private endpoint, the connection must be approved. If the resource for which you're creating a private endpoint is in your directory and you have [sufficient permissions](/azure/private-link/rbac-permissions), the connection request will be auto-approved. Otherwise, you must wait for the owner of that resource to approve your connection request. For more information about the connection approval models, go to [Manage Azure Private Endpoints](/azure/private-link/manage-private-endpoint#private-endpoint-connections).
+
+1. To manually approve, reject or remove a connection, select the checkbox next to the endpoint you want to edit and select an action item from the top menu.
+
+ :::image type="content" source="./media/private-endpoint/review-endpoints.png" alt-text="Screenshot of the Azure portal, review existing endpoints.":::
+
+1. Select the name of the private endpoint to open the private endpoint resource and access more information or to edit the private endpoint.
+
+### [Azure CLI](#tab/azure-cli)
+
+#### Review private endpoint connection details
+
+Run the [az network private-endpoint-connection list](/cli/azure/network/private-endpoint-connection#az-network-private-endpoint-connection-list) command to review all private endpoint connections linked to your App Configuration store and check their connection state. Replace the placeholder texts `<resource-group>` and `<app-config-store-name>` with the name of the resource group and the name of the store.
+
+```azurecli-interactive
+az network private-endpoint-connection list --resource-group <resource-group> --name <app-config-store-name> --type Microsoft.AppConfiguration/configurationStores
+```
+
+Optionally, to get the details of a specific private endpoint, use the [az network private-endpoint-connection show](/cli/azure/network/private-endpoint-connection#az-network-private-endpoint-connection-show) command. Replace the placeholder texts `<resource-group>` and `<app-config-store-name>` with the name of the resource group and the name of the store.
+
+```azurecli-interactive
+az network private-endpoint-connection show --resource-group <resource-group> --name <app-config-store-name> --type Microsoft.AppConfiguration/configurationStores
+```
+
+#### Get connection approval
+
+When you create a private endpoint, the connection must be approved. If the resource for which you're creating a private endpoint is in your directory and you have [sufficient permissions](/azure/private-link/rbac-permissions), the connection request will be auto-approved. Otherwise, you must wait for the owner of that resource to approve your connection request.
+
+To approve a private endpoint connection, use the [az network private-endpoint-connection approve](/cli/azure/network/private-endpoint-connection#az-network-private-endpoint-connection-approve) command. Replace the placeholder texts `<resource-group>`, `<private-endpoint>`, and `<app-config-store-name>` with the name of the resource group, the name of the private endpoint, and the name of the store.
+
+```azurecli-interactive
+az network private-endpoint-connection approve --resource-group <resource-group> --name <private-endpoint> --type Microsoft.AppConfiguration/configurationStores --resource-name <app-config-store-name>
+```
+
+For more information about the connection approval models, go to [Manage Azure Private Endpoints](/azure/private-link/manage-private-endpoint#private-endpoint-connections).
+
+#### Delete a private endpoint connection
+
+To delete a private endpoint connection, use the [az network private-endpoint-connection delete](/cli/azure/network/private-endpoint-connection#az-network-private-endpoint-connection-delete) command. Replace the placeholder texts `<resource-group>` and `<private-endpoint>` with the name of the resource group and the name of the private endpoint.
+
+```azurecli-interactive
+az network private-endpoint-connection delete --resource-group <resource-group> --name <private-endpoint>
+```
+
+For more CLI commands, go to [az network private-endpoint-connection](/cli/azure/network/private-endpoint-connection#az-network-private-endpoint-connection).
+++
+If you have issues with a private endpoint, check the following guide: [Troubleshoot Azure Private Endpoint connectivity problems](/azure/private-link/troubleshoot-private-endpoint-connectivity).
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Use private endpoints for Azure App Configuration](concept-private-endpoint.md)
+
+> [!div class="nextstepaction"]
+> [Disable public access in Azure App Configuration](howto-disable-public-access.md)
azure-arc Tutorial Akv Secrets Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-akv-secrets-provider.md
You should see output similar to the example below. Note that it may take severa
"type": "Microsoft.KubernetesConfiguration/extensions", "apiVersion": "2021-09-01", "name": "[parameters('ExtensionInstanceName')]",
+ "identity": {
+ "type": "SystemAssigned"
+ },
"properties": { "extensionType": "[parameters('ExtensionType')]", "releaseTrain": "[parameters('ReleaseTrain')]",
azure-cache-for-redis Cache Aspnet Output Cache Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-aspnet-output-cache-provider.md
In `web.config`, use the above key as the parameter value instead of the actual value.
```xml
<sessionState mode="Custom" customProvider="MySessionStateStore">
  <providers>
-    <add type = "Microsoft.Web.Redis.RedisSessionStateProvide"
+    <add type = "Microsoft.Web.Redis.RedisSessionStateProvider"
         name = "MySessionStateStore"
         connectionString = "MyRedisConnectionString"/>
  </providers>
In `web.config`, use the above key as the parameter value instead of the actual value.
```xml
<sessionState mode="Custom" customProvider="MySessionStateStore">
  <providers>
-    <add type = "Microsoft.Web.Redis.RedisSessionStateProvide"
+    <add type = "Microsoft.Web.Redis.RedisSessionStateProvider"
         name = "MySessionStateStore"
         connectionString = "MyRedisConnectionString"/>
  </providers>
In `web.config`, use the above key as the parameter value instead of the actual value.
```xml
<sessionState mode="Custom" customProvider="MySessionStateStore">
  <providers>
-    <add type = "Microsoft.Web.Redis.RedisSessionStateProvide"
+    <add type = "Microsoft.Web.Redis.RedisSessionStateProvider"
         name = "MySessionStateStore"
         connectionString = "mycache.redis.cache.windows.net:6380,password=actual access key,ssl=True,abortConnect=False"/>
  </providers>
azure-functions Security Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/security-concepts.md
For more information, see [How to use managed identities for App Service and Azu
[Cross-origin resource sharing (CORS)](https://en.wikipedia.org/wiki/Cross-origin_resource_sharing) is a way to allow web apps running in another domain to make requests to your HTTP trigger endpoints. App Service provides built-in support for handling the required CORS headers in HTTP requests. CORS rules are defined at the function app level.
-While it's tempting to use a wildcard that allows all sites to access your endpoint. But, this defeats the purpose of CORS, which is to help prevent cross-site scripting attacks. Instead, add a separate CORS entry for the domain of each web app that must access your endpoint.
+While it's tempting to use a wildcard that allows all sites to access your endpoint, this defeats the purpose of CORS, which is to help prevent cross-site scripting attacks. Instead, add a separate CORS entry for the domain of each web app that must access your endpoint.
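As a sketch, an explicit CORS entry can be added with the Azure CLI (the function app name, resource group, and origin below are placeholders):

```azurecli
az functionapp cors add --name <function-app-name> --resource-group <resource-group> --allowed-origins https://contoso.com
```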
### Managing secrets
azure-monitor Agent Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agent-manage.md
Run the following command to upgrade the agent.
`sudo sh ./omsagent-*.universal.x64.sh --upgrade`
+### Enable Auto-Update for the Linux Agent
+
+We recommend enabling automatic upgrade of the agent by using the [Automatic Extension Upgrade](../../virtual-machines/automatic-extension-upgrade.md) feature, with the following commands.
+# [PowerShell](#tab/PowerShellLinux)
+```powershell
+Set-AzVMExtension `
+    -ResourceGroupName myResourceGroup `
+    -VMName myVM `
+    -ExtensionName OmsAgentForLinux `
+    -ExtensionType OmsAgentForLinux `
+    -Publisher Microsoft.EnterpriseCloud.Monitoring `
+    -TypeHandlerVersion latestVersion `
+    -ProtectedSettingString '{"workspaceKey":"myWorkspaceKey"}' `
+    -SettingString '{"workspaceId":"myWorkspaceId","skipDockerProviderInstall": true}' `
+    -EnableAutomaticUpgrade $true
+```
+# [Azure CLI](#tab/CLILinux)
+```azurecli
+az vm extension set \
+  --resource-group myResourceGroup \
+  --vm-name myVM \
+  --name OmsAgentForLinux \
+  --publisher Microsoft.EnterpriseCloud.Monitoring \
+  --protected-settings '{"workspaceKey":"myWorkspaceKey"}' \
+  --settings '{"workspaceId":"myWorkspaceId","skipDockerProviderInstall": true}' \
+  --version latestVersion \
+  --enable-auto-upgrade true
+```
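To confirm the setting took effect, you can query the extension afterward (a sketch; it assumes the extension resource exposes the `enableAutomaticUpgrade` property):

```azurecli
az vm extension show --resource-group myResourceGroup --vm-name myVM --name OmsAgentForLinux --query enableAutomaticUpgrade
```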
+
## Adding or removing a workspace

### Windows agent
azure-monitor Azure Monitor Agent Data Collection Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-data-collection-endpoint.md
Title: Using data collection endpoints with Azure Monitor agent
+ Title: Enable network isolation for the Azure Monitor agent
description: Use data collection endpoints to uniquely configure ingestion settings for your machines.
-# Enable network isolation for the Azure Monitor Agent
+# Enable network isolation for the Azure Monitor agent
By default, Azure Monitor agent will connect to a public endpoint to connect to your Azure Monitor environment. You can enable network isolation for your agents by creating [data collection endpoints](../essentials/data-collection-endpoint-overview.md) and adding them to your [Azure Monitor Private Link Scopes (AMPLS)](../logs/private-link-configure.md#connect-azure-monitor-resources).
azure-monitor Metrics Supported https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-supported.md
This latest update adds a new column and reorders the metrics to be alphabetical
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
-|ByteCount|Yes|Bytes|Bytes|Total|Total number of Bytes transmitted within time period|Protocol, Direction|
-|DatapathAvailability|Yes|Datapath Availability (Preview)|Count|PortalAverage|NAT Gateway Datapath Availability|No Dimensions|
-|PacketCount|Yes|Packets|Count|Total|Total number of Packets transmitted within time period|Protocol, Direction|
-|PacketDropCount|Yes|Dropped Packets|Count|Total|Count of dropped packets|No Dimensions|
-|SNATConnectionCount|Yes|SNAT Connection Count|Count|Total|Total concurrent active connections|Protocol, ConnectionState|
-|TotalConnectionCount|Yes|Total SNAT Connection Count|Count|Total|Total number of active SNAT connections|Protocol|
+|ByteCount|No|Bytes|Bytes|Total|Total number of Bytes transmitted within time period|Protocol, Direction|
+|DatapathAvailability|No|Datapath Availability (Preview)|Count|PortalAverage|NAT Gateway Datapath Availability|No Dimensions|
+|PacketCount|No|Packets|Count|Total|Total number of Packets transmitted within time period|Protocol, Direction|
+|PacketDropCount|No|Dropped Packets|Count|Total|Count of dropped packets|No Dimensions|
+|SNATConnectionCount|No|SNAT Connection Count|Count|Total|Total concurrent active connections|Protocol, ConnectionState|
+|TotalConnectionCount|No|Total SNAT Connection Count|Count|Total|Total number of active SNAT connections|Protocol|
## Microsoft.Network/networkInterfaces
azure-monitor Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/customer-managed-keys.md
Customer-managed key configuration isn't supported in Azure portal currently and
## Storing encryption key ("KEK")
+A [portfolio of Azure Key Management products](../../key-vault/managed-hsm/mhsm-control-data.md#portfolio-of-azure-key-management-products) lists the vaults and managed HSMs that can be used.
+ Create or use an existing Azure Key Vault in the region where the cluster is planned, and generate or import a key to be used for logs encryption. The Azure Key Vault must be configured as recoverable to protect your key and the access to your data in Azure Monitor. You can verify this configuration under properties in your Key Vault; both *Soft delete* and *Purge protection* should be enabled. [![Soft delete and purge protection settings](media/customer-managed-keys/soft-purge-protection.png "Screenshot of Key Vault soft delete and purge protection properties")](media/customer-managed-keys/soft-purge-protection.png#lightbox)
Follow the procedure illustrated in [Dedicated Clusters article](./logs-dedicate
## Grant Key Vault permissions
-There are two permission models in Key Vault to grants permissions to your cluster and underlay storage, Vault access policy and Azure role-based access control.
+There are two permission models in Key Vault to grant permissions to your cluster and underlying storage: Vault access policy and Azure role-based access control.
1. Vault access policy
azure-monitor Data Collector Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/data-collector-api.md
To use the HTTP Data Collector API, you create a POST request that includes the
| Log-Type |Specify the record type of the data that's being submitted. It can contain only letters, numbers, and the underscore (_) character, and it can't exceed 100 characters. | | x-ms-date |The date that the request was processed, in RFC 7234 format. | | x-ms-AzureResourceId | The resource ID of the Azure resource that the data should be associated with. It populates the [_ResourceId](./log-standard-columns.md#_resourceid) property and allows the data to be included in [resource-context](manage-access.md#access-mode) queries. If this field isn't specified, the data won't be included in resource-context queries. |
-| time-generated-field | The name of a field in the data that contains the timestamp of the data item. If you specify a field, its contents are used for **TimeGenerated**. If you don't specify this field, the default for **TimeGenerated** is the time that the message is ingested. The contents of the message field should follow the ISO 8601 format YYYY-MM-DDThh:mm:ssZ. Note: the Time Generated value cannot be older than 3 days before received time or the row will be dropped.|
+| time-generated-field | The name of a field in the data that contains the timestamp of the data item. If you specify a field, its contents are used for **TimeGenerated**. If you don't specify this field, the default for **TimeGenerated** is the time that the message is ingested. The contents of the message field should follow the ISO 8601 format YYYY-MM-DDThh:mm:ssZ. Note: the Time Generated value cannot be older than 2 days before received time or the row will be dropped.|
| | | ## Authorization
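The Authorization header for this API is a `SharedKey` value: an HMAC-SHA256 over a canonical string built from the method, content length, content type, the `x-ms-date` header, and the resource path, keyed with the Base64-decoded workspace key. A minimal Python sketch of assembling the request headers follows; the workspace ID, key, and log type shown are hypothetical placeholders, not real values.

```python
import base64
import hashlib
import hmac
from datetime import datetime, timezone

def build_signature(workspace_id, shared_key, content_length, rfc1123_date,
                    method="POST", content_type="application/json",
                    resource="/api/logs"):
    """Return the SharedKey Authorization header value for the Data Collector API."""
    # Canonical string signed with the Base64-decoded workspace key
    string_to_sign = (f"{method}\n{content_length}\n{content_type}\n"
                      f"x-ms-date:{rfc1123_date}\n{resource}")
    decoded_key = base64.b64decode(shared_key)
    digest = hmac.new(decoded_key, string_to_sign.encode("utf-8"),
                      hashlib.sha256).digest()
    return f"SharedKey {workspace_id}:{base64.b64encode(digest).decode()}"

# Hypothetical workspace credentials, for illustration only
workspace_id = "11111111-2222-3333-4444-555555555555"
shared_key = base64.b64encode(b"not-a-real-key").decode()

body = '[{"Computer":"server01","TimeGenerated":"2022-08-03T12:00:00Z"}]'
rfc1123_date = datetime.now(timezone.utc).strftime("%a, %d %b %Y %H:%M:%S GMT")

headers = {
    "Content-Type": "application/json",
    "Log-Type": "MyCustomLog",                # letters, numbers, underscore only
    "x-ms-date": rfc1123_date,
    "time-generated-field": "TimeGenerated",  # timestamps older than ~2 days are dropped
    "Authorization": build_signature(workspace_id, shared_key,
                                     str(len(body)), rfc1123_date),
}
```

The headers would then be sent with the JSON body in a POST to the workspace's `api/logs` endpoint.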
azure-netapp-files Azacsnap Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-preview.md
Feedback on AzAcSnap, including this preview, can be provided [online](https://a
## Getting the AzAcSnap Preview snapshot tools
-Refer to [Get started with Azure Application COnsistent Snapshot tool](azacsnap-get-started.md)
+Refer to [Get started with Azure Application Consistent Snapshot tool](azacsnap-get-started.md)
Return to this document for details on using the preview features.
azure-netapp-files Configure Network Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-network-features.md
na Previously updated : 12/09/2021 Last updated : 08/03/2021 # Configure network features for an Azure NetApp Files volume
-The **Network Features** functionality is available for public preview. This functionality enables you to indicate whether you want to use VNet features for an Azure NetApp Files volume. With this functionality, you can set the option to ***Standard*** or ***Basic***. You can specify the setting when you create a new NFS, SMB, or dual-protocol volume. See [Guidelines for Azure NetApp Files network planning](azure-netapp-files-network-topologies.md) for details about network features.
+The **Network Features** functionality enables you to indicate whether you want to use VNet features for an Azure NetApp Files volume. With this functionality, you can set the option to ***Standard*** or ***Basic***. You can specify the setting when you create a new NFS, SMB, or dual-protocol volume. See [Guidelines for Azure NetApp Files network planning](azure-netapp-files-network-topologies.md) for details about network features.
This article helps you understand the options and shows you how to configure network features.
+>[!IMPORTANT]
+>The **Network Features** functionality is currently in public preview. It is not available in Azure Government regions. See [supported regions](azure-netapp-files-network-topologies.md#supported-regions) for a full list.
+ ## Options for network features Two settings are available for network features:
azure-netapp-files Faq Nfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/faq-nfs.md
Previously updated : 10/19/2021 Last updated : 08/03/2022 # NFS FAQs for Azure NetApp Files
For example, if a client mounting a volume becomes unresponsive or crashes beyon
A grace period defines a period of special processing in which clients can try to reclaim their locking state during a server recovery. The default timeout for the leases is 30 seconds with a grace period of 45 seconds. After that time, the client's lease will be released.
+## Oracle dNFS
+
+### Are there any Oracle patches required with dNFS?
+
+Customers using Oracle 19c and higher must ensure they **are patched for Oracle bug 32931941**. Most of the patch bundles currently in use by Oracle customers do **not** include this patch. The patch has only been included in a subset of recent patch bundles.
+
+If a database is exposed to this bug, network interruptions are highly likely to result in fractured block corruption. Network interruptions include events such as storage endpoint relocation, volume relocation, and storage service maintenance events. The corruption may not necessarily be detected immediately.
+
+This is not a bug in ONTAP or the Azure NetApp Files service itself. It is the result of an Oracle dNFS bug: the response to NFS I/O during certain network interruption or reconfiguration events is mishandled. The database will erroneously write a block that was being updated as it was written. In some cases, the corrupted block will be silently corrected by a later overwrite of that same block. If not, it will eventually be detected by Oracle database processes. An error should be logged in the alert log, and the Oracle instance is likely to terminate. In addition, dbv and RMAN operations can detect the corruption.
+
+Oracle publishes [document 1495104.1](https://support.oracle.com/knowledge/Oracle%20Cloud/1495104_1.html), which is continually updated with recommended dNFS patches. If your database uses dNFS, ensure the DBA team is checking for updates in this document.
+
+### Are there any patches required for use of Oracle dNFS with NFSv4.1?
+
+If your databases are using Oracle dNFS with NFSv4.1, they **need to be patched for Oracle bugs 33132050 and 33676296**. You may have to request a backport for other versions of Oracle. For example, at the time of writing, these patches are available for 19.11, but not yet 19.3. If you cite these bug numbers in the support case, Oracle's support engineers will know what to do.
+
+This requirement applies to ONTAP-based systems and services in general, which includes both on-premises ONTAP and Azure NetApp Files.
+
+Examples of the potential problems if these patches are not applied:
+
+1. Database hangs on backend storage endpoint moves.
+1. Database hangs on Azure NetApp Files service maintenance events.
+1. Brief Oracle hangs during normal operation that may or may not be noticeable.
+1. Slow Oracle shutdowns: if you monitor the shutdown process, you'll see pauses that could add up to minutes of delays as dNFS I/O times out.
+1. Incorrect dNFS reply caching behavior on reads that will hang a database.
+
+The patches include a change in dNFS session management and NFS reply caching that resolves these problems.
+
+**If you cannot patch for these two bugs**, you _must not_ use dNFS with NFSv4.1. You can either disable dNFS or switch to NFSv3.
+
+### Can I use multipathing with Oracle dNFS and NFSv4.1?
+
+dNFS will not work with multiple paths when using NFSv4.1. If you need multiple paths, you will have to use NFSv3. dNFS requires cluster-wide `clientID` and `sessionID` trunking for NFSv4.1 to work with multiple paths, and this is not supported by Azure NetApp Files. The result of trying to use this configuration is a hang during dNFS startup.
+ ## Next steps - [Microsoft Azure ExpressRoute FAQs](../expressroute/expressroute-faqs.md)
azure-resource-manager Linter Rule Use Stable Resource Identifier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter-rule-use-stable-resource-identifier.md
Title: Linter rule - use stable resource identifier description: Linter rule - use stable resource identifier Previously updated : 07/21/2022 Last updated : 08/03/2022 # Linter rule - use stable resource identifier
The following example fails this test because `utcNow()` is used in the resource
```bicep param location string = resourceGroup().location
+param time string = utcNow()
resource sa 'Microsoft.Storage/storageAccounts@2021-09-01' = {
- name: 'store${toLower(utcNow())}'
+ name: 'store${toLower(time)}'
location: location sku: { name: 'Standard_LRS'
azure-resource-manager Linter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter.md
The following screenshot shows the linter in the command line. The output from t
You can integrate these checks as a part of your CI/CD pipelines. You can use a GitHub action to attempt a bicep build. Errors will fail the pipelines.
+## Silencing false positives
+
+Sometimes a rule can have false positives. For example, you may need to link directly to a blob storage location without using the [environment()](./bicep-functions-deployment.md#environment) function.
+In this case, you can disable the warning for a single line, rather than the entire document, by adding `#disable-next-line <rule name>` before the line with the warning.
+
+```bicep
+#disable-next-line no-hardcoded-env-urls //Direct download link to my toolset
+scriptDownloadUrl: 'https://mytools.blob.core.windows.net/...'
+```
+
+It is good practice to add a comment explaining why the rule does not apply to this line.
+ ## Next steps - For more information about customizing the linter rules, see [Add custom settings in the Bicep config file](bicep-config-linter.md).
azure-video-indexer Limited Access Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/limited-access-features.md
# Limited Access features of Azure Video Indexer + Our vision is to empower developers and organizations to leverage AI to transform society in positive ways. We encourage responsible AI practices to protect the rights and safety of individuals. Microsoft facial recognition services are Limited Access in order to help prevent the misuse of the services in accordance with our [AI Principles](https://www.microsoft.com/ai/responsible-ai?SilentAuth=1&wa=wsignin1.0&activetab=pivot1%3aprimaryr6) and [facial recognition](https://blogs.microsoft.com/on-the-issues/2018/12/17/six-principles-to-guide-microsofts-facial-recognition-work/) principles. The Face Identify and Celebrity Recognition operations in Azure Video Indexer are Limited Access features that require registration. Since the announcement on June 11th, 2020, customers may not use, or allow use of, any Azure facial recognition service by or for a police department in the United States.
azure-video-indexer Video Indexer Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-overview.md
[!INCLUDE [regulation](./includes/regulation.md)] + Azure Video Indexer is a cloud application, part of Azure Applied AI Services, built on Azure Media Services and Azure Cognitive Services (such as the Face, Translator, Computer Vision, and Speech). It enables you to extract the insights from your videos using Azure Video Indexer video and audio models. Azure Video Indexer analyzes the video and audio content by running 30+ AI models, generating rich insights. Below is an illustration of the audio and video analysis performed by Azure Video Indexer in the background.
azure-vmware Deploy Disaster Recovery Using Jetstream https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-disaster-recovery-using-jetstream.md
Once JetStream DR MSA and JetStream VIB are installed on the Azure VMware Soluti
1. [Select the VMs](https://www.jetstreamsoft.com/portal/jetstream-knowledge-base/select-vms-for-protection/) you want to protect and then [start VM protection](https://www.jetstreamsoft.com/portal/jetstream-knowledge-base/start-vm-protection/).
-For remaining configuration steps for JetStream DR, such as creating a failover runbook, invoking failover to the DR site, and invoking failback to the primary site, see the [JetStream Admin Guide documentation](https://docs.delphix.com/docs51/delphix-jet-stream/jet-stream-admin-guide).
+For remaining configuration steps for JetStream DR, such as creating a failover runbook, invoking failover to the DR site, and invoking failback to the primary site, see the [JetStream Admin Guide documentation](https://www.jetstreamsoft.com/portal/jetstream-knowledge-base/disaster-recovery-with-azure-netapp-files-jetstream-dr-and-avs-azure-vmware-solution/).
## Disable JetStream DR on an Azure VMware Solution cluster
backup Sap Hana Backup Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/sap-hana-backup-support-matrix.md
Title: SAP HANA Backup support matrix description: In this article, learn about the supported scenarios and limitations when you use Azure Backup to back up SAP HANA databases on Azure VMs. Previously updated : 03/28/2022 Last updated : 08/03/2022
Azure Backup supports the backup of SAP HANA databases to Azure. This article su
| -- | | | | **Topology** | SAP HANA running in Azure Linux VMs only | HANA Large Instances (HLI) | | **Regions** | **Americas** - Central US, East US 2, East US, North Central US, South Central US, West US 2, West US 3, West Central US, West US, Canada Central, Canada East, Brazil South <br> **Asia Pacific** - Australia Central, Australia Central 2, Australia East, Australia Southeast, Japan East, Japan West, Korea Central, Korea South, East Asia, Southeast Asia, Central India, South India, West India, China East, China East 2, China East 3, China North, China North 2, China North 3 <br> **Europe** - West Europe, North Europe, France Central, UK South, UK West, Germany North, Germany West Central, Switzerland North, Switzerland West, Central Switzerland North, Norway East, Norway West <br> **Africa / ME** - South Africa North, South Africa West, UAE North, UAE Central <BR> **Azure Government regions** | France South, Germany Central, Germany Northeast, US Gov IOWA |
-| **OS versions** | SLES 12 with SP2, SP3, SP4 and SP5; SLES 15 with SP0, SP1, SP2 and SP3 <br><br> RHEL 7.4, 7.6, 7.7, 7.9, 8.1, 8.2 and 8.4 | |
+| **OS versions** | SLES 12 with SP2, SP3, SP4 and SP5; SLES 15 with SP0, SP1, SP2 and SP3 <br><br> RHEL 7.4, 7.6, 7.7, 7.9, 8.1, 8.2, 8.4, and 8.6 | |
| **HANA versions** | SDC on HANA 1.x, MDC on HANA 2.x SPS04, SPS05 Rev <= 56, SPS 06 (validated for encryption enabled scenarios as well) | | | **Encryption** | SSLEnforce, HANA data encryption | | | **HANA deployments** | SAP HANA on a single Azure VM - Scale up only. <br><br> For high availability deployments, both the nodes on the two different machines are treated as individual nodes with separate data chains. | Scale-out <br><br> In high availability deployments, backup doesn't fail over to the secondary node automatically. Configuring backup should be done separately for each node. |
bastion Configuration Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/configuration-settings.md
Title: 'About Azure Bastion configuration settings' description: Learn about the available configuration settings for Azure Bastion.- Previously updated : 01/14/2022 Last updated : 08/03/2022 -+ # About Bastion configuration settings
The sections in this article discuss the resources and settings for Azure Bastio
A SKU is also known as a Tier. Azure Bastion supports two SKU types: Basic and Standard. The SKU is configured in the Azure portal during the workflow when you configure Bastion. You can [upgrade a Basic SKU to a Standard SKU](#upgradesku). * The **Basic SKU** provides base functionality, enabling Azure Bastion to manage RDP/SSH connectivity to virtual machines (VMs) without exposing public IP addresses on the target application VMs.
-* The **Standard SKU** enables premium features that allow Azure Bastion to manage remote connectivity at a larger scale.
+* The **Standard SKU** enables premium features.
-The following table shows features and corresponding SKUs.
+The following table shows the availability of features per corresponding SKU.
[!INCLUDE [Azure Bastion SKUs](../../includes/bastion-sku.md)]
+### Specify SKU
+ Currently, you must use the Azure portal if you want to specify the Standard SKU. If you use the Azure CLI or Azure PowerShell to configure Bastion, the SKU can't be specified and defaults to the Basic SKU.
-| Method | Value | Links |
+| Method | SKU Value | Links |
| | | |
-| Azure portal | Tier - Basic or <br>Standard | [Quickstart - Configure Bastion from VM settings](quickstart-host-portal.md)<br>[Tutorial - Configure Bastion](tutorial-create-host-portal.md) |
-| Azure PowerShell | Basic only - no settings |[Configure Bastion - PowerShell](bastion-create-host-powershell.md) |
-| Azure CLI | Basic only - no settings | [Configure Bastion - CLI](create-host-cli.md) |
+| Azure portal | Tier - Basic or Standard | [Tutorial](tutorial-create-host-portal.md) |
+| Azure portal | Tier - Basic| [Quickstart](quickstart-host-portal.md) |
+| Azure PowerShell | Basic |[How-to](bastion-create-host-powershell.md) |
+| Azure CLI | Basic| [How-to](create-host-cli.md) |
### <a name="upgradesku"></a>Upgrade a SKU
You can configure this setting using the following method:
| Method | Value | Links | | | | |
-| Azure portal |Tier | [Upgrade a SKU](upgrade-sku.md)|
+| Azure portal |Tier | [How-to](upgrade-sku.md)|
## <a name="subnet"></a>Azure Bastion subnet
You can configure this setting using the following methods:
| Method | Value | Links | | | | |
-| Azure portal | Subnet |[Quickstart - Configure Bastion from VM settings](quickstart-host-portal.md)<br>[Tutorial - Configure Bastion](tutorial-create-host-portal.md)|
+| Azure portal | Subnet |[Quickstart](quickstart-host-portal.md)<br>[Tutorial](tutorial-create-host-portal.md)|
| Azure PowerShell | -subnetName|[cmdlet](/powershell/module/az.network/new-azbastion#parameters) | | Azure CLI | --subnet-name | [command](/cli/azure/network/vnet#az-network-vnet-create) |
Azure Bastion requires a Public IP address. The Public IP must have the followin
You can configure this setting using the following methods:
-| Method | Value | Links |
-| | | |
-| Azure portal | Public IP address |[Azure portal](https://portal.azure.com)|
-| Azure PowerShell | -PublicIpAddress| [cmdlet](/powershell/module/az.network/new-azbastion#parameters) |
-| Azure CLI | --public-ip create |[command](/cli/azure/network/public-ip) |
+| Method | Value | Links | Requires Standard SKU|
+| | | | -- |
+| Azure portal | Public IP address |[Azure portal](https://portal.azure.com)| Yes |
+| Azure PowerShell | -PublicIpAddress| [cmdlet](/powershell/module/az.network/new-azbastion#parameters) | Yes |
+| Azure CLI | --public-ip create |[command](/cli/azure/network/public-ip) | Yes |
## <a name="instance"></a>Instances and host scaling
Instances are created in the AzureBastionSubnet. To allow for host scaling, the
You can configure this setting using the following methods:
-| Method | Value | Links |
-| | | |
-| Azure portal |Instance count | [Azure portal steps](configure-host-scaling.md)|
-| Azure PowerShell | ScaleUnit | [PowerShell steps](configure-host-scaling-powershell.md) |
+| Method | Value | Links | Requires Standard SKU |
+| | | | |
+| Azure portal |Instance count | [How-to](configure-host-scaling.md)| Yes |
+| Azure PowerShell | ScaleUnit | [How-to](configure-host-scaling-powershell.md) | Yes |
## <a name="ports"></a>Custom ports
-You can specify the port that you want to use to connect to your VMs. By default, the inbound ports used to connect are 3389 for RDP and 22 for SSH. If you configure a custom port value, you need to specify that value when you connect to the VM.
+You can specify the port that you want to use to connect to your VMs. By default, the inbound ports used to connect are 3389 for RDP and 22 for SSH. If you configure a custom port value, specify that value when you connect to the VM.
-Custom port values are supported for the Standard SKU only. If your Bastion deployment uses the Basic SKU, you can easily [upgrade a Basic SKU to a Standard SKU](#upgradesku).
+Custom port values are supported for the Standard SKU only.
## Next steps
bastion Configure Host Scaling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/configure-host-scaling.md
Title: 'Add scale units for host scaling: Azure portal' description: Learn how to add additional instances (scale units) to Azure Bastion.- Previously updated : 11/29/2021 Last updated : 08/03/2022 # Customer intent: As someone with a networking background, I want to configure host scaling using the Azure portal.-+ # Configure host scaling using the Azure portal
This article helps you add additional scale units (instances) to Azure Bastion t
1. Sign in to the [Azure portal](https://portal.azure.com). 1. In the Azure portal, go to your Bastion host.
-1. Host scaling instance count requires Standard tier. On the **Configuration** page, for **Tier**, verify the tier is **Standard**. If the tier is Basic, select **Standard** from the dropdown.
+1. Host scaling instance count requires Standard tier. On the **Configuration** page, for **Tier**, verify the tier is **Standard**. If the tier is Basic, select **Standard**. To configure scaling, adjust the instance count. Each instance is a scale unit.
- :::image type="content" source="./media/configure-host-scaling/select-sku.png" alt-text="Screenshot of Select Tier." lightbox="./media/configure-host-scaling/select-sku.png":::
-1. To configure scaling, adjust the target instance count. Each instance is a scale unit.
+ :::image type="content" source="./media/configure-host-scaling/select-sku.png" alt-text="Screenshot of Select Tier and Instance count." lightbox="./media/configure-host-scaling/select-sku.png":::
- :::image type="content" source="./media/configure-host-scaling/instance-count.png" alt-text="Screenshot of Instance count slider." lightbox="./media/configure-host-scaling/instance-count.png":::
1. Click **Apply** to apply changes. >[!NOTE]
bastion Upgrade Sku https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/upgrade-sku.md
Title: 'Upgrade a SKU' description: Learn how to change Tiers from the Basic to the Standard SKU.- Previously updated : 08/30/2021 Last updated : 08/02/2022
-# Customer intent: As someone with a networking background, I want to upgrade to the Standard SKU.
-+ # Upgrade a SKU
-This article helps you upgrade from the Basic Tier (SKU) to Standard. Once you upgrade, you can't revert back to the Basic SKU without deleting and reconfiguring Bastion. Currently, this setting can be configured in the Azure portal only. For more information about host scaling, see [Configuration settings- SKUs](configuration-settings.md#skus).
+This article helps you upgrade from the Basic Tier (SKU) to Standard. Once you upgrade, you can't revert to the Basic SKU without deleting and reconfiguring Bastion. Currently, this setting can be configured in the Azure portal only. For more information about features and SKUs, see [Configuration settings](configuration-settings.md).
## Configuration steps 1. Sign in to the [Azure portal](https://portal.azure.com). 1. In the Azure portal, go to your Bastion host.
-1. On the **Configuration** page, for **Tier**, select **Standard** from the dropdown.
+1. On the **Configuration** page, for **Tier**, select **Standard**.
+
+ :::image type="content" source="./media/upgrade-sku/select-sku.png" alt-text="Screenshot of tier select dropdown with Standard selected." lightbox="./media/upgrade-sku/select-sku.png":::
- :::image type="content" source="./media/upgrade-sku/select-sku.png" alt-text="Screenshot of tier select dropdown with Standard selected." lightbox="./media/upgrade-sku/select-sku-expand.png":::
+1. You can add features at the same time you upgrade the SKU. You don't need to upgrade the SKU and then go back to add the features as a separate step.
-1. Click **Apply** to apply changes.
+1. Click **Apply** to apply changes. The bastion host will update. This takes about 10 minutes to complete.
## Next steps
-* See [Configuration settings](configuration-settings.md) for additional configuration information.
+* See [Configuration settings](configuration-settings.md) for more configuration information.
* Read the [Bastion FAQ](bastion-faq.md).
cloud-services Cloud Services Guestos Msrc Releases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-msrc-releases.md
na Previously updated : 7/29/2022 Last updated : 8/3/2022 # Azure Guest OS The following tables show the Microsoft Security Response Center (MSRC) updates applied to the Azure Guest OS. Search this article to determine if a particular update applies to the Guest OS you are using. Updates always carry forward for the particular [family][family-explain] they were introduced in.
->[!NOTE]
->
->The July Guest OS is currently being rolled out to Cloud Service VMs that are configured for automatic updates. When the rollout is complete, this version will be made available for manual updates through the Azure portal and configuration files. The following patches are included in the July Guest OS. This list is subject to change.
## July 2022 Guest OS | Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced | | | | | | |
-| Rel 22-07 | [5015811] | Latest Cumulative Update(LCU) | 6.45 | Jul 12, 2022 |
-| Rel 22-07 | [5015827] | Latest Cumulative Update(LCU) | 7.13 | Jul 12, 2022 |
-| Rel 22-07 | [5015808] | Latest Cumulative Update(LCU) | 5.69 | Jul 12, 2022 |
-| Rel 22-07 | [5015805] | IE Cumulative Updates | 2.124, 3.111, 4.104 | Jul 12, 2022 |
-| Rel 22-07 | [5013641] | . NET Framework 3.5 and 4.7.2 Cumulative Update | 6.46 | May 10, 2022 |
-| Rel 22-07 | [5013630] | .NET Framework 4.8 Security and Quality Rollup | 7.14 | May 10, 2022 |
-| Rel 22-07 | [5016058] | Servicing Stack update | 5.70 | Jul 12, 2022 |
-| Rel 22-07 | [4494175] | Microcode | 5.70 | Sep 1, 2020 |
-| Rel 22-07 | [4494174] | Microcode | 6.46 | Sep 1, 2020 |
-| Rel 22-07 | [5013637] | .NET Framework 3.5 Security and Quality Rollup LKG  | 2.126 | Jun 14, 2022 |
-| Rel 22-07 | [5013644] | .NET Framework 4.6.2 Security and Quality Rollup LKG  | 2.126 | May 10, 2022 |
-| Rel 22-07 | [5013638] | .NET Framework 3.5 Security and Quality Rollup LKG 6B is a Non-Sec Release  | 4.106 | Jun 14, 2020 |
-| Rel 22-07 | [5013643] | .NET Framework 4.6.2 Security and Quality Rollup LKG 6B is a Non-Sec Release  | 4.106 | May 10, 2022 |
-| Rel 22-07 | [5013635] | .NET Framework 3.5 Security  and Quality Rollup LKG  | 3.113 | Jun 14, 2022 |
-| Rel 22-07 | [5013642] | .NET Framework 4.6.2 Security and Quality Rollup LKG  | 3.113 | May 10, 2022 |
-| Rel 22-07 | [5015861] | Monthly Rollup  | 2.126 | Jul 12, 2022 |
-| Rel 22-07 | [5015863] | Monthly Rollup  | 3.113 | Jul 12, 2022 |
-| Rel 22-07 | [5015874] | Monthly Rollup  | 4.106 | Jul 12, 2022 |
-| Rel 22-07 | [5016263] | Servicing Stack update  | 3.113 | Jul 12, 2022 |
-| Rel 22-07 | [5016264] | Servicing Stack update  | 4.106 | Jul 12, 2022 |
-| Rel 22-07 | [4578013] | OOB Standalone Security Update  | 4.106 | Aug 19, 2020 |
-| Rel 22-07 | [5016057] | Servicing Stack update  | 2.126 | Jul 12, 2022 |
+| Rel 22-07 | [5015811] | Latest Cumulative Update(LCU) | [6.45] | Jul 12, 2022 |
+| Rel 22-07 | [5015827] | Latest Cumulative Update(LCU) | [7.13] | Jul 12, 2022 |
+| Rel 22-07 | [5015808] | Latest Cumulative Update(LCU) | [5.69] | Jul 12, 2022 |
+| Rel 22-07 | [5015805] | IE Cumulative Updates | [2.124], [3.111], [4.104] | Jul 12, 2022 |
+| Rel 22-07 | [5013641] | .NET Framework 3.5 and 4.7.2 Cumulative Update | [6.46] | May 10, 2022 |
+| Rel 22-07 | [5013630] | .NET Framework 4.8 Security and Quality Rollup | [7.14] | May 10, 2022 |
+| Rel 22-07 | [5016058] | Servicing Stack update | [5.70] | Jul 12, 2022 |
+| Rel 22-07 | [4494175] | Microcode | [5.70] | Sep 1, 2020 |
+| Rel 22-07 | [4494174] | Microcode | [6.46] | Sep 1, 2020 |
+| Rel 22-07 | [5013637] | .NET Framework 3.5 Security and Quality Rollup LKG  | [2.126] | Jun 14, 2022 |
+| Rel 22-07 | [5013644] | .NET Framework 4.6.2 Security and Quality Rollup LKG  | [2.126] | May 10, 2022 |
+| Rel 22-07 | [5013638] | .NET Framework 3.5 Security and Quality Rollup LKG 6B is a Non-Sec Release  | [4.106] | Jun 14, 2020 |
+| Rel 22-07 | [5013643] | .NET Framework 4.6.2 Security and Quality Rollup LKG 6B is a Non-Sec Release  | [4.106] | May 10, 2022 |
+| Rel 22-07 | [5013635] | .NET Framework 3.5 Security and Quality Rollup LKG  | [3.113] | Jun 14, 2022 |
+| Rel 22-07 | [5013642] | .NET Framework 4.6.2 Security and Quality Rollup LKG  | [3.113] | May 10, 2022 |
+| Rel 22-07 | [5015861] | Monthly Rollup  | [2.126] | Jul 12, 2022 |
+| Rel 22-07 | [5015863] | Monthly Rollup  | [3.113] | Jul 12, 2022 |
+| Rel 22-07 | [5015874] | Monthly Rollup  | [4.106] | Jul 12, 2022 |
+| Rel 22-07 | [5016263] | Servicing Stack update  | [3.113] | Jul 12, 2022 |
+| Rel 22-07 | [5016264] | Servicing Stack update  | [4.106] | Jul 12, 2022 |
+| Rel 22-07 | [4578013] | OOB Standalone Security Update  | [4.106] | Aug 19, 2020 |
+| Rel 22-07 | [5016057] | Servicing Stack update  | [2.126] | Jul 12, 2022 |
[5015811]: https://support.microsoft.com/kb/5015811 [5015827]: https://support.microsoft.com/kb/5015827
The following tables show the Microsoft Security Response Center (MSRC) updates
[5016264]: https://support.microsoft.com/kb/5016264 [4578013]: https://support.microsoft.com/kb/4578013 [5016057]: https://support.microsoft.com/kb/5016057
+[2.126]: ./cloud-services-guestos-update-matrix.md#family-2-releases
+[3.113]: ./cloud-services-guestos-update-matrix.md#family-3-releases
+[4.106]: ./cloud-services-guestos-update-matrix.md#family-4-releases
+[5.70]: ./cloud-services-guestos-update-matrix.md#family-5-releases
+[6.46]: ./cloud-services-guestos-update-matrix.md#family-6-releases
+[7.14]: ./cloud-services-guestos-update-matrix.md#family-7-releases
## June 2022 Guest OS
cloud-services Cloud Services Guestos Update Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-update-matrix.md
na Previously updated : 7/11/2022 Last updated : 8/03/2022 # Azure Guest OS releases and SDK compatibility matrix
Unsure about how to update your Guest OS? Check [this][cloud updates] out.
## News updates
+###### **August 3, 2022**
+The July Guest OS has released.
+ ###### **July 11, 2022** The June Guest OS has released.
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-7.14_202207-01 | August 3, 2022 | Post 7.16 |
| WA-GUEST-OS-7.13_202206-01 | July 11, 2022 | Post 7.15 |
-| WA-GUEST-OS-7.12_202205-01 | May 26, 2022 | Post 7.14 |
+|~~WA-GUEST-OS-7.12_202205-01~~| May 26, 2022 | August 3, 2022 |
|~~WA-GUEST-OS-7.11_202204-01~~| April 30, 2022 | July 11, 2022 | |~~WA-GUEST-OS-7.10_202203-01~~| March 19, 2022 | May 26, 2022 | |~~WA-GUEST-OS-7.9_202202-01~~| March 2, 2022 | April 30, 2022 |
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-6.46_202207-01 | August 3, 2022 | Post 6.48 |
| WA-GUEST-OS-6.45_202206-01 | July 11, 2022 | Post 6.47 |
-| WA-GUEST-OS-6.44_202205-01 | May 26, 2022 | Post 6.46 |
+|~~WA-GUEST-OS-6.44_202205-01~~| May 26, 2022 | August 3, 2022 |
|~~WA-GUEST-OS-6.43_202204-01~~| April 30, 2022 | July 11, 2022 | |~~WA-GUEST-OS-6.42_202203-01~~| March 19, 2022 | May 26, 2022 | |~~WA-GUEST-OS-6.41_202202-01~~| March 2, 2022 | April 30, 2022 |
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-5.70_202207-01 | August 3, 2022 | Post 5.72 |
| WA-GUEST-OS-5.69_202206-01 | July 11, 2022 | Post 5.71 |
-| WA-GUEST-OS-5.68_202205-01 | May 26, 2022 | Post 5.70 |
+|~~WA-GUEST-OS-5.68_202205-01~~| May 26, 2022 | August 3, 2022 |
|~~WA-GUEST-OS-5.67_202204-01~~| April 30, 2022 | July 11, 2022 | |~~WA-GUEST-OS-5.66_202203-01~~| March 19, 2022 | May 26, 2022 | |~~WA-GUEST-OS-5.65_202202-01~~| March 2, 2022 | April 30, 2022 |
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-4.106_202207-02 | August 3, 2022 | Post 4.108 |
| WA-GUEST-OS-4.105_202206-02 | July 11, 2022 | Post 4.107 |
-| WA-GUEST-OS-4.103_202205-01 | May 26, 2022 | Post 4.106 |
+|~~WA-GUEST-OS-4.103_202205-01~~| May 26, 2022 | August 2, 2022 |
|~~WA-GUEST-OS-4.102_202204-01~~| April 30, 2022 | July 11, 2022 | |~~WA-GUEST-OS-4.101_202203-01~~| March 19, 2022 | May 26, 2022 | |~~WA-GUEST-OS-4.100_202202-01~~| March 2, 2022 | April 30, 2022 |
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-3.113_202207-02 | August 3, 2022 | Post 3.115 |
| WA-GUEST-OS-3.112_202206-02 | July 11, 2022 | Post 3.114 |
-| WA-GUEST-OS-3.110_202205-01 | May 26, 2022 | Post 3.113 |
+|~~WA-GUEST-OS-3.110_202205-01~~| May 26, 2022 | August 3, 2022 |
|~~WA-GUEST-OS-3.109_202204-01~~| April 30, 2022 | July 11, 2022 | |~~WA-GUEST-OS-3.108_202203-01~~| March 19, 2022 | May 26, 2022 | |~~WA-GUEST-OS-3.107_202202-01~~| March 2, 2022 | April 30, 2022 |
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-2.126_202207-02 | August 3, 2022 | Post 2.128 |
| WA-GUEST-OS-2.125_202206-02 | July 11, 2022 | Post 2.127 |
-| WA-GUEST-OS-2.123_202205-01 | May 26, 2022 | Post 2.126 |
+|~~WA-GUEST-OS-2.123_202205-01~~| May 26, 2022 | August 3, 2022 |
|~~WA-GUEST-OS-2.122_202204-01~~| April 30, 2022 | July 11, 2022 | |~~WA-GUEST-OS-2.121_202203-01~~| March 19, 2022 | May 26, 2022 | |~~WA-GUEST-OS-2.120_202202-01~~| March 2, 2022 | April 30, 2022 |
cognitive-services Quickstart Translator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/quickstart-translator.md
Previously updated : 07/11/2022 Last updated : 07/28/2022 ms.devlang: csharp, golang, java, javascript, python <!-- markdownlint-disable MD033 --> <!-- markdownlint-disable MD001 -->
+<!-- markdownlint-disable MD024 -->
+<!-- markdownlint-disable MD036 -->
# Quickstart: Azure Cognitive Services Translator
To get started, you'll need an active Azure subscription. If you don't have an A
:::image type="content" source="media/quickstarts/keys-and-endpoint-portal.png" alt-text="Screenshot: Azure portal keys and endpoint page."::: * Use the free pricing tier (F0) to try the service and upgrade later to a paid tier for production.
+<!-- checked -->
+> [!div class="nextstepaction"]
+> [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Csharp&Product=Translator&Page=quickstart-translator&Section=prerequisites)
## Headers To call the Translator service via the [REST API](reference/rest-api-guide.md), you'll need to include the following headers with each request. Don't worry, we'll include the headers for you in the sample code for each programming language.
-For more information on Translator authentication options, *see* the [Translator v3 reference](./reference/v3-0-reference.md#authentication) guide.
+For more information on Translator authentication options, _see_ the [Translator v3 reference](./reference/v3-0-reference.md#authentication) guide.
|Header|Value| Condition | | |: |:|
For more information on Translator authentication options, *see* the [Translator
||| > [!IMPORTANT]
-> Remember to remove the key from your code when you're done, and never post it publicly. For production, use a secure way of storing and accessing your credentials like [Azure Key Vault](../../key-vault/general/overview.md). See the Cognitive Services [security](../cognitive-services-security.md) article for more information.
+>
+> Remember to remove the key from your code when you're done, and never post it publicly. For production, use a secure way of storing and accessing your credentials like [Azure Key Vault](../../key-vault/general/overview.md). For more information, _see_ the Cognitive Services [security](../cognitive-services-security.md) article.
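As an illustration, the headers above can be assembled into a single request to the v3 `translate` endpoint. The following is a minimal Python sketch; the key and region values are placeholders, and the request is only constructed, not sent:

```python
import json
import uuid

# Placeholder values -- replace with your own Translator resource key and region.
key = "<your-translator-key>"
region = "<your-resource-region>"
endpoint = "https://api.cognitive.microsofttranslator.com"

# The required headers described in the table above.
headers = {
    "Ocp-Apim-Subscription-Key": key,
    "Ocp-Apim-Subscription-Region": region,
    "Content-Type": "application/json",
    "X-ClientTraceId": str(uuid.uuid4()),  # optional, aids troubleshooting
}

# Translate English text to French and Zulu.
path = "/translate?api-version=3.0&from=en&to=fr&to=zu"
url = endpoint + path
body = [{"text": "I would really like to drive your car around the block a few times!"}]
payload = json.dumps(body)

# A real call would be: requests.post(url, headers=headers, data=payload)
print(url)
```

Note that the request body is always a JSON array of objects with a `text` property, even for a single string.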
## Translate text
The core operation of the Translator service is translating text. In this quicks
### [C#: Visual Studio](#tab/csharp)
-### Set up
+### Set up your Visual Studio project
1. Make sure you have the current version of [Visual Studio IDE](https://visualstudio.microsoft.com/vs/).
The core operation of the Translator service is translating text. In this quicks
1. Select install from the right package manager window to add the package to your project. :::image type="content" source="media/quickstarts/install-newtonsoft.png" alt-text="Screenshot of the NuGet package install button.":::
+<!-- checked -->
+> [!div class="nextstepaction"]
+> [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Csharp&Product=Translator&Page=quickstart-translator&Section=set-up-your-visual-studio-project)
-### Build your application
+### Build your C# application
> [!NOTE] > > * Starting with .NET 6, new projects using the `console` template generate a new program style that differs from previous versions. > * The new output uses recent C# features that simplify the code you need to write. > * When you use the newer version, you only need to write the body of the `Main` method. You don't need to include top-level statements, global using directives, or implicit using directives.
-> * For more information, *see* [**New C# templates generate top-level statements**](/dotnet/core/tutorials/top-level-templates).
+> * For more information, _see_ [**New C# templates generate top-level statements**](/dotnet/core/tutorials/top-level-templates).
1. Open the **Program.cs** file.
class Program
} ```
+<!-- checked -->
+> [!div class="nextstepaction"]
+> [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Csharp&Product=Translator&Page=quickstart-translator&Section=build-your-c#-application)
### Run your C# application Once you've added a code sample to your application, choose the green **start button** next to formRecognizer_quickstart to build and run your program, or press **F5**. +
+**Translation output:**
+
+After a successful call, you should see the following response:
+
+```json
+[
+ {
+ "detectedLanguage": {
+ "language": "en",
+ "score": 1.0
+ },
+ "translations": [
+ {
+ "text": "J'aimerais vraiment conduire votre voiture autour du pâté de maisons plusieurs fois!",
+ "to": "fr"
+ },
+ {
+ "text": "Ngingathanda ngempela ukushayela imoto yakho endaweni evimbelayo izikhathi ezimbalwa!",
+ "to": "zu"
+ }
+ ]
+ }
+]
+
+```
+<!-- checked -->
+> [!div class="nextstepaction"]
+> [My REST API call was successful](#next-steps) [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Csharp&Product=Translator&Page=quickstart-translator&Section=run-your-c#-application)
### [Go](#tab/go)
You can use any text editor to write Go applications. We recommend using the lat
```console go version ```
+<!-- checked -->
+> [!div class="nextstepaction"]
+> [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Go&Product=Translator&Page=quickstart-translator&Section=set-up-your-go-environment)
+
+### Build your Go application
1. In a console window (such as cmd, PowerShell, or Bash), create a new directory for your app called **translator-app**, and navigate to it.
func main() {
fmt.Printf("%s\n", prettyJSON) } ```
+<!-- checked -->
+> [!div class="nextstepaction"]
+> [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Go&Product=Translator&Page=quickstart-translator&Section=build-your-go-application)
### Run your Go application
Once you've added a code sample to your application, your Go program can be exec
go run translation.go ```
-### [Java](#tab/java)
+**Translation output:**
+
+After a successful call, you should see the following response:
+
+```json
+[
+ {
+ "detectedLanguage": {
+ "language": "en",
+ "score": 1.0
+ },
+ "translations": [
+ {
+ "text": "J'aimerais vraiment conduire votre voiture autour du pâté de maisons plusieurs fois!",
+ "to": "fr"
+ },
+ {
+ "text": "Ngingathanda ngempela ukushayela imoto yakho endaweni evimbelayo izikhathi ezimbalwa!",
+ "to": "zu"
+ }
+ ]
+ }
+]
+
+```
+<!-- checked -->
+> [!div class="nextstepaction"]
+> [My REST API call was successful](#next-steps) [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Go&Product=Translator&Page=quickstart-translator&Section=run-your-go-application)
+
+### [Java: Gradle](#tab/java)
### Set up your Java environment
-* You should have the latest version of [Visual Studio Code](https://code.visualstudio.com/) or your preferred IDE. *See* [Java in Visual Studio Code](https://code.visualstudio.com/docs/languages/java).
+* You should have the latest version of [Visual Studio Code](https://code.visualstudio.com/) or your preferred IDE. _See_ [Java in Visual Studio Code](https://code.visualstudio.com/docs/languages/java).
>[!TIP] >
Once you've added a code sample to your application, your Go program can be exec
* A [**Java Development Kit** (OpenJDK)](/java/openjdk/download#openjdk-17) version 8 or later. * [**Gradle**](https://docs.gradle.org/current/userguide/installation.html), version 6.8 or later.
+<!-- checked -->
+> [!div class="nextstepaction"]
+> [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Java&Product=Translator&Page=quickstart-translator&Section=set-up-your-java-environment)
### Create a new Gradle project
Once you've added a code sample to your application, your Go program can be exec
mkdir translator-text-app; cd translator-text-app ```
-1. Run the `gradle init` command from the translator-text-app directory. This command will create essential build files for Gradle, including *build.gradle.kts*, which is used at runtime to create and configure your application.
+1. Run the `gradle init` command from the translator-text-app directory. This command will create essential build files for Gradle, including _build.gradle.kts_, which is used at runtime to create and configure your application.
```console gradle init --type basic
Once you've added a code sample to your application, your Go program can be exec
implementation("com.google.code.gson:gson:2.9.0") } ```
+<!-- checked -->
+> [!div class="nextstepaction"]
+> [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Java&Product=Translator&Page=quickstart-translator&Section=create-a-gradle-project)
-### Create a Java Application
+### Create your Java Application
1. From the translator-text-app directory, run the following command:
public class TranslatorText {
} } ```
+<!-- checked -->
+> [!div class="nextstepaction"]
+> [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Java&Product=Translator&Page=quickstart-translator&Section=create-your-java-application)
-### Build and run your application
+### Build and run your Java application
Once you've added a code sample to your application, navigate back to your main project directory (**translator-text-app**), open a console window, and enter the following commands:
Once you've added a code sample to your application, navigate back to your main
gradle run ```
-### [Node.js](#tab/nodejs)
+**Translation output:**
+
+After a successful call, you should see the following response:
+
+```json
+[
+ {
+ "detectedLanguage": {
+ "language": "en",
+ "score": 1.0
+ },
+ "translations": [
+ {
+ "text": "J'aimerais vraiment conduire votre voiture autour du pâté de maisons plusieurs fois!",
+ "to": "fr"
+ },
+ {
+ "text": "Ngingathanda ngempela ukushayela imoto yakho endaweni evimbelayo izikhathi ezimbalwa!",
+ "to": "zu"
+ }
+ ]
+ }
+]
-### Create a Node.js Express application
+```
+<!-- checked -->
+> [!div class="nextstepaction"]
+> [My REST API call was successful](#next-steps) [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Java&Product=Translator&Page=quickstart-translator&Section=build-and-run-your-java-application)
+
+### [JavaScript: Node.js](#tab/nodejs)
+
+### Set up your Node.js Express project
1. If you haven't done so already, install the latest version of [Node.js](https://nodejs.org/en/download/). Node Package Manager (npm) is included with the Node.js installation.
Once you've added a code sample to your application, navigate back to your main
> * Type the following command **New-Item index.js**. > > * You can also create a new file named `index.js` in your IDE and save it to the `translator-app` directory.
+<!-- checked -->
+> [!div class="nextstepaction"]
+> [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Python&Product=Translator&Page=quickstart-translator&Section=set-up-your-nodejs-express-project)
+
+### Build your JavaScript application
1. Add the following code sample to your `index.js` file. **Make sure you update the key variable with the value from your Azure portal Translator instance**:
Once you've added a code sample to your application, navigate back to your main
}) ```
+<!-- checked -->
+> [!div class="nextstepaction"]
+> [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Python&Product=Translator&Page=quickstart-translator&Section=build-your-javascript-application)
-### Run your application
+### Run your JavaScript application
Once you've added the code sample to your application, run your program:
Once you've added the code sample to your application, run your program:
node index.js ```
+**Translation output:**
+
+After a successful call, you should see the following response:
+
+```json
+[
+ {
+ "detectedLanguage": {
+ "language": "en",
+ "score": 1.0
+ },
+ "translations": [
+ {
+ "text": "J'aimerais vraiment conduire votre voiture autour du pâté de maisons plusieurs fois!",
+ "to": "fr"
+ },
+ {
+ "text": "Ngingathanda ngempela ukushayela imoto yakho endaweni evimbelayo izikhathi ezimbalwa!",
+ "to": "zu"
+ }
+ ]
+ }
+]
+
+```
+<!-- checked -->
+> [!div class="nextstepaction"]
+> [My REST API call was successful](#next-steps) [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Java&Product=Translator&Page=quickstart-translator&Section=run-your-javascript-application)
+ ### [Python](#tab/python)
-### Create a Python application
+### Set up your Python project
1. If you haven't done so already, install the latest version of [Python 3.x](https://www.python.org/downloads/). The Python installer package (pip) is included with the Python installation.
Once you've added the code sample to your application, run your program:
> [!NOTE] > We will also use a Python built-in package called json. It's used to work with JSON data.
+<!-- checked -->
+> [!div class="nextstepaction"]
+> [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Python&Product=Translator&Page=quickstart-translator&Section=set-up-your-python-project)
+
+### Build your Python application
1. Create a new Python file called **translator-app.py** in your preferred editor or IDE.
response = request.json()
print(json.dumps(response, sort_keys=True, ensure_ascii=False, indent=4, separators=(',', ': '))) ```
+<!-- checked -->
+> [!div class="nextstepaction"]
+> [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Python&Product=Translator&Page=quickstart-translator&Section=build-your-python-application)
-### Run your python application
+### Run your Python application
Once you've added a code sample to your application, build and run your program:
Once you've added a code sample to your application, build and run your program:
python translator-app.py ``` --
-### Translation output
+**Translation output:**
-After a successful call, you should see the following response:
+After a successful call, you should see the following response:
```json [
After a successful call, you should see the following response:
] ```
+<!-- checked -->
+> [!div class="nextstepaction"]
+> [My REST API call was successful](#next-steps) [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Python&Product=Translator&Page=quickstart-translator&Section=run-your-python-application)
++
-That's it, congratulations! You have learned to use the Translator service to translate text.
+## Next steps
-## Next step
+That's it, congratulations! You've learned to use the Translator service to translate text.
Explore our how-to documentation and take a deeper dive into Translation service capabilities:
That's it, congratulations! You have learned to use the Translator service to tr
* [**Get sentence length**](translator-text-apis.md#get-sentence-length)
-* [**Dictionary lookup and alternate translations**](translator-text-apis.md#dictionary-examples-translations-in-context)
+* [**Dictionary lookup and alternate translations**](translator-text-apis.md#dictionary-examples-translations-in-context)
cognitive-services Cognitive Services Virtual Networks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/cognitive-services-virtual-networks.md
Network rules are enforced on all network protocols to Azure Cognitive Services,
## Supported regions and service offerings
-Virtual networks (VNETs) are supported in [regions where Cognitive Services are available](https://azure.microsoft.com/global-infrastructure/services/). Currently multi-service resource does not support VNET. Cognitive Services supports service tags for network rules configuration. The services listed below are included in the **CognitiveServicesManagement** service tag.
+Virtual networks (VNETs) are supported in [regions where Cognitive Services are available](https://azure.microsoft.com/global-infrastructure/services/). Cognitive Services supports service tags for network rules configuration. The services listed below are included in the **CognitiveServicesManagement** service tag.
> [!div class="checklist"] > * Anomaly Detector
cognitive-services Use Asynchronously https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/concepts/use-asynchronously.md
Previously updated : 06/28/2022 Last updated : 08/02/2022
Currently, the following features are available to be used asynchronously:
* Customer content detection * Sentiment analysis and opinion mining * Text Analytics for health
+* Personally Identifiable Information (PII)
When you send asynchronous requests, you will incur charges based on number of text records you include in your request, for each feature use. For example, if you send a text record for sentiment analysis and NER, it will be counted as sending two text records, and you will be charged for both according to your [pricing tier](https://azure.microsoft.com/pricing/details/cognitive-services/language-service/).
cognitive-services Entity Categories https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/personally-identifiable-information/concepts/entity-categories.md
The PII feature includes the ability to detect personal (`PII`) and health (`PHI
> [!NOTE] > To detect protected health information (PHI), use the `domain=phi` parameter and model version `2020-04-01` or later.
->
-> For example: `https://<your-custom-subdomain>.cognitiveservices.azure.com/text/analytics/v3.1/entities/recognition/pii?domain=phi&model-version=2021-01-15`
-
++ The following entity categories are returned when you're sending API requests to the PII feature. ## Category: Person
The entity in this category can have the following subcategories.
### Azure information
-These entity categories includes identifiable Azure information, including authentication information and connection strings. Not returned as PHI.
+These entity categories include identifiable Azure information like authentication information and connection strings. Not returned as PHI.
:::row::: :::column span="":::
cognitive-services How To Call For Conversations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/personally-identifiable-information/how-to-call-for-conversations.md
You can submit the input to the API as list of conversation items. Analysis is p
When using the async feature, the API results are available for 24 hours from the time the request was ingested, and is indicated in the response. After this time period, the results are purged and are no longer available for retrieval.
-When you submit data to conversational PII, we can send one conversation (chat or spoken) per request.
+When you submit data to conversational PII, you can send one conversation (chat or spoken) per request.
The API will attempt to detect all the [defined entity categories](concepts/conversations-entity-categories.md) for a given conversation input. If you want to specify which entities will be detected and returned, use the optional `piiCategories` parameter with the appropriate entity categories.
When you get results from PII detection, you can stream the results to an applic
|Language |Package version | |||
- |.NET | [5.2.0-beta.2](https://www.nuget.org/packages/Azure.AI.TextAnalytics/5.2.0-beta.2) |
- |Python | [5.2.0b2](https://pypi.org/project/azure-ai-textanalytics/5.2.0b2/) |
+ |.NET | [1.0.0](https://www.nuget.org/packages/Azure.AI.Language.Conversations/1.0.0) |
+ |Python | [1.0.0](https://pypi.org/project/azure-ai-language-conversations/1.0.0) |
4. After you've installed the client library, use the following samples on GitHub to start calling the API.
- * [C#](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/textanalytics/Azure.AI.TextAnalytics/samples/Sample9_RecognizeCustomEntities.md)
- * [Java](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/textanalytics/azure-ai-textanalytics/src/samples/java/com/azure/ai/textanalytics/lro/RecognizeCustomEntities.java)
- * [JavaScript](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/textanalytics/ai-text-analytics/samples/v5/javascript/customText.js)
- * [Python](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/sample_recognize_custom_entities.py)
+ * [C#](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/cognitivelanguage/Azure.AI.Language.Conversations/samples/Sample8_AnalyzeConversation_ConversationPII_Transcript.md)
+ * [Python](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/cognitivelanguage/azure-ai-language-conversations/samples/sample_conv_pii_transcript_input.py)
5. See the following reference documentation for more information on the client, and return object:
- * [C#](/dotnet/api/azure.ai.textanalytics?view=azure-dotnet-preview&preserve-view=true)
- * [Java](/java/api/overview/azure/ai-textanalytics-readme?view=azure-java-preview&preserve-view=true)
- * [JavaScript](/javascript/api/overview/azure/ai-text-analytics-readme?view=azure-node-preview&preserve-view=true)
- * [Python](/python/api/azure-ai-textanalytics/azure.ai.textanalytics?view=azure-python-preview&preserve-view=true)
+ * [C#](/dotnet/api/azure.ai.language.conversations)
+ * [Python](/python/api/azure-ai-language-conversations/azure.ai.language.conversations.aio)
# [REST API](#tab/rest-api)
cognitive-services How To Call https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/personally-identifiable-information/how-to-call.md
# How to detect and redact Personally Identifying Information (PII)
-The PII feature can evaluate unstructured text, extract sensitive information (PII) and health information (PHI) in text across several pre-defined categories.
+The PII feature can evaluate unstructured text, extract and redact sensitive information (PII) and health information (PHI) in text across several [pre-defined categories](concepts/entity-categories.md).
## Determine how to process the data (optional)
By default, this feature will use the latest available AI model on your text. Yo
### Input languages
-When you submit documents to be processed, you can specify which of [the supported languages](language-support.md) they're written in. if you don't specify a language, key phrase extraction will default to English. The API may return offsets in the response to support different [multilingual and emoji encodings](../concepts/multilingual-emoji-support.md).
+When you submit documents to be processed, you can specify which of [the supported languages](language-support.md) they're written in. If you don't specify a language, extraction will default to English. The API may return offsets in the response to support different [multilingual and emoji encodings](../concepts/multilingual-emoji-support.md).
## Submitting data
Analysis is performed upon receipt of the request. Using the PII detection featu
[!INCLUDE [asynchronous-result-availability](../includes/async-result-availability.md)]
-The API will attempt to detect the [defined entity categories](concepts/entity-categories.md) for a given document language. If you want to specify which entities will be detected and returned, use the optional `piiCategories` parameter with the appropriate entity categories. This parameter can also let you detect entities that aren't enabled by default for your document language. The following URL example would detect a French driver's license number that might occur in English text, along with the default English entities.
+## Select which entities are returned
+
+The API will attempt to detect the [defined entity categories](concepts/entity-categories.md) for a given document language. If you want to specify which entities will be detected and returned, use the optional `piiCategories` parameter with the appropriate entity categories. This parameter can also let you detect entities that aren't enabled by default for your document language. The following example would detect only `Person`. You can specify one or more [entity types](concepts/entity-categories.md) to be returned.
> [!TIP] > If you don't include `default` when specifying entity categories, The API will only return the entity categories you specify.
-`https://<your-custom-subdomain>.cognitiveservices.azure.com/text/analytics/v3.1/entities/recognition/pii?piiCategories=default,FRDriversLicenseNumber`
+**Input:**
+
+> [!NOTE]
+> In this example, the request returns only the **person** entity type:
+
+`https://<your-language-resource-endpoint>/language/:analyze-text?api-version=2022-05-01`
+
+```json
+{
+    "kind": "PiiEntityRecognition",
+    "parameters":
+    {
+        "modelVersion": "latest",
+        "piiCategories": [
+            "Person"
+        ]
+    },
+    "analysisInput":
+    {
+        "documents":
+        [
+            {
+                "id": "1",
+                "language": "en",
+                "text": "We went to Contoso foodplace located at downtown Seattle last week for a dinner party, and we adore the spot! They provide marvelous food and they have a great menu. The chief cook happens to be the owner (I think his name is John Doe) and he is super nice, coming out of the kitchen and greeted us all. We enjoyed very much dining in the place! The pasta I ordered was tender and juicy, and the place was impeccably clean. You can even pre-order from their online menu at www.contosofoodplace.com, call 112-555-0176 or send email to order@contosofoodplace.com! The only complaint I have is the food didn't come fast enough. Overall I highly recommend it!"
+            }
+        ]
+    }
+}
+```
+
+**Output:**
+
+```json
+
+{
+ "kind": "PiiEntityRecognitionResults",
+ "results": {
+ "documents": [
+ {
+ "redactedText": "We went to Contoso foodplace located at downtown Seattle last week for a dinner party, and we adore the spot! They provide marvelous food and they have a great menu. The chief cook happens to be the owner (I think his name is ********) and he is super nice, coming out of the kitchen and greeted us all. We enjoyed very much dining in the place! The pasta I ordered was tender and juicy, and the place was impeccably clean. You can even pre-order from their online menu at www.contosofoodplace.com, call 112-555-0176 or send email to order@contosofoodplace.com! The only complaint I have is the food didn't come fast enough. Overall I highly recommend it!",
+ "id": "1",
+ "entities": [
+ {
+ "text": "John Doe",
+ "category": "Person",
+ "offset": 226,
+ "length": 8,
+ "confidenceScore": 0.98
+ }
+ ],
+ "warnings": []
+ }
+ ],
+ "errors": [],
+ "modelVersion": "2021-01-15"
+ }
+}
+```
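As a hedged sketch of assembling the request body shown above (the `build_pii_request` helper is illustrative, not part of the API, and the endpoint is a placeholder to POST the payload to):

```python
# Sketch: assemble the PiiEntityRecognition request body shown above.
# build_pii_request is a hypothetical helper; the endpoint is a placeholder.
ENDPOINT = "https://<your-language-resource-endpoint>/language/:analyze-text?api-version=2022-05-01"

def build_pii_request(text, categories=None, model_version="latest"):
    """Return the analyze-text payload; piiCategories is optional and
    restricts detection to the listed entity types (e.g. "Person")."""
    parameters = {"modelVersion": model_version}
    if categories:
        parameters["piiCategories"] = list(categories)
    return {
        "kind": "PiiEntityRecognition",
        "parameters": parameters,
        "analysisInput": {
            "documents": [{"id": "1", "language": "en", "text": text}]
        },
    }

payload = build_pii_request("I think his name is John Doe", categories=["Person"])
```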
## Getting PII results
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/personally-identifiable-information/language-support.md
Previously updated : 11/02/2021 Last updated : 08/02/2022 # Personally Identifiable Information (PII) detection language support
-Use this article to learn which natural languages are supported by the PII feature of Azure Cognitive Service for Language.
+Use this article to learn which natural languages are supported by the PII and conversation PII (preview) features of Azure Cognitive Service for Language.
> [!NOTE] > * Languages are added as new [model versions](how-to-call.md#specify-the-pii-detection-model) are released.
Use this article to learn which natural languages are supported by the PII featu
## PII language support
-| Language | Language code | Starting with v3 model version: | Notes |
+| Language | Language code | Starting with model version | Notes |
|:-|:-:|:-:|::| | Chinese-Simplified | `zh-hans` | 2021-01-15 | `zh` also accepted | | English | `en` | 2020-07-01 | |
Use this article to learn which natural languages are supported by the PII featu
## PII language support
-| Language | Language code | Starting with v3 model version: | Notes |
+| Language | Language code | Starting with model version | Notes |
|:-|:-:|:-:|::| | English | `en` | 2022-05-15-preview | |
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/personally-identifiable-information/overview.md
Previously updated : 06/15/2022 Last updated : 08/02/2022 # What is Personally Identifiable Information (PII) detection in Azure Cognitive Service for Language?
-PII detection is one of the features offered by [Azure Cognitive Service for Language](../overview.md), a collection of machine learning and AI algorithms in the cloud for developing intelligent applications that involve written language. The PII detection feature can identify, categorize, and redact sensitive information in unstructured text. For example: phone numbers, email addresses, and forms of identification. The method for utilizing PII in conversations is different than other use cases, and articles for this use have been separated.
+PII detection is one of the features offered by [Azure Cognitive Service for Language](../overview.md), a collection of machine learning and AI algorithms in the cloud for developing intelligent applications that involve written language. The PII detection feature can **identify, categorize, and redact** sensitive information in unstructured text. For example: phone numbers, email addresses, and forms of identification. The method for utilizing PII in conversations is different than other use cases, and articles for this use have been separated.
* [**Quickstarts**](quickstart.md) are getting-started instructions to guide you through making requests to the service. * [**How-to guides**](how-to-call.md) contain instructions for using the service in more specific or customized ways. * The [**conceptual articles**](concepts/entity-categories.md) provide in-depth explanations of the service's functionality and features.
+PII detection comes in two forms:
+* [PII](how-to-call.md) - works on unstructured text.
+* [Conversation PII (preview)](how-to-call-for-conversations.md) - a model tailored to conversation transcripts.
+ [!INCLUDE [Typical workflow for pre-configured language features](../includes/overview-typical-workflow.md)] [!INCLUDE [Developer reference](../includes/reference-samples-text-analytics.md)] ++ ## Responsible AI
-An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Read the [transparency note for PII](/legal/cognitive-services/language-service/transparency-note-personally-identifiable-information?context=/azure/cognitive-services/language-service/context/context) to learn about responsible AI use and deployment in your systems. You can also see the following articles for more information:
+An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it's deployed. Read the [transparency note for PII](/legal/cognitive-services/language-service/transparency-note-personally-identifiable-information?context=/azure/cognitive-services/language-service/context/context) to learn about responsible AI use and deployment in your systems. You can also see the following articles for more information:
[!INCLUDE [Responsible AI links](../includes/overview-responsible-ai-links.md)]
+## Example scenarios
+
+* **Apply sensitivity labels** - For example, based on the results from the PII service, a public sensitivity label might be applied to documents where no PII entities are detected. For documents where US addresses and phone numbers are recognized, a confidential label might be applied. A highly confidential label might be used for documents where bank routing numbers are recognized.
+* **Redact some categories of personal information from documents that get wider circulation** - For example, if customer contact records are accessible to first-line support representatives, the company may want to redact all of the customer's personal information except their name from the version of the customer history, to preserve the customer's privacy.
+* **Redact personal information in order to reduce unconscious bias** - For example, during a company's resume review process, they may want to block name, address and phone number to help reduce unconscious gender or other biases.
+* **Replace personal information in source data for machine learning to reduce unfairness** - For example, if you want to remove names that might reveal gender when training a machine learning model, you could use the service to identify them and replace them with generic placeholders for model training.
+* **Remove personal information from call center transcription** - For example, if you want to remove names or other PII data exchanged between the agent and the customer in a call center scenario, you could use the service to identify and remove them.
+* **Data cleaning for data science** - PII detection can be used to prepare data for data scientists and engineers to train their machine learning models, redacting it to make sure that customer data isn't exposed.
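The redaction these scenarios rely on can be approximated from the `offset`/`length` fields the service returns for each entity (as in the `redactedText` field of the sample response earlier). A minimal sketch with made-up input; the `redact` helper is not part of the API:

```python
def redact(text, entities, mask="*"):
    """Replace each detected entity span with mask characters,
    using the offset/length fields returned by the service."""
    chars = list(text)
    for entity in entities:
        start, length = entity["offset"], entity["length"]
        chars[start:start + length] = mask * length
    return "".join(chars)

# Span covering "John Doe" (offset counted from the start of the string).
sample = "I think his name is John Doe and he is super nice."
redacted = redact(sample, [{"offset": 20, "length": 8}])
```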
+++ ## Next steps There are two ways to get started using the PII detection feature:
-* [Language Studio](../language-studio.md), which is a web-based platform that enables you to try several Azure Cognitive Service for Language features without needing to write code.
+* [Language Studio](../language-studio.md), which is a web-based platform that enables you to try several Language service features without needing to write code.
* The [quickstart article](quickstart.md) for instructions on making requests to the service using the REST API and client library SDK.
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/overview.md
The Azure OpenAI service provides REST API access to OpenAI's powerful language
| Regional availability | South Central US, <br> West Europe | | Content filtering | Prompts and completions are evaluated against our content policy with automated systems. High severity content will be filtered. | -- ## Responsible AI At Microsoft, we're committed to the advancement of AI driven by principles that put people first. Generative models such as the ones available in the Azure OpenAI service have significant potential benefits, but without careful design and thoughtful mitigations, such models have the potential to generate incorrect or even harmful content. Microsoft has made significant investments to help guard against abuse and unintended harm, which includes requiring applicants to show well-defined use cases, incorporating Microsoft's <a href="https://www.microsoft.com/ai/responsible-ai?activetab=pivot1:primaryr6" target="_blank">principles for responsible AI use</a>, building content filters to support customers, and providing responsible AI implementation guidance to onboarded customers.
All solutions using the Azure OpenAI service are also required to go through a u
## Key concepts ### Prompts & Completions+ The completions endpoint is the core component of the API service. This API provides access to the model's text-in, text-out interface. Users simply need to provide an input **prompt** containing the English text command, and the model will generate a text **completion**. Here's an example of a simple prompt and completion:
->**Prompt**:
+>**Prompt**:
``` """ count to 5 in a for loop """ ``` >
->**Completion**:
+>**Completion**:
``` for i in range(1, 6): print(i) ``` ### Tokens+ OpenAI Enterprise processes text by breaking it down into tokens. Tokens can be words or just chunks of characters. For example, the word "hamburger" gets broken up into the tokens "ham", "bur" and "ger", while a short and common word like "pear" is a single token. Many tokens start with a whitespace, for example " hello" and " bye". The total number of tokens processed in a given request depends on the length of your input, output and request parameters. The quantity of tokens being processed will also affect your response latency and throughput for the models.
The total number of tokens processed in a given request depends on the length of
The Azure OpenAI service is a new product offering on Azure. You can get started with the Azure OpenAI service the same way as any other Azure product where you [create a resource](how-to/create-resource.md), or instance of the service, in your Azure Subscription. You can read more about Azure's [resource management design](../../azure-resource-manager/management/overview.md). - ### Deployments
-Once you create an Azure OpenAI Resource, you must deploy a model before you can start making API calls and generating text. This action can be done using the Deployment APIs. These APIs allow you to specify the model you wish to use.
+
+Once you create an Azure OpenAI Resource, you must deploy a model before you can start making API calls and generating text. This action can be done using the Deployment APIs. These APIs allow you to specify the model you wish to use.
### In-context learning
The models used by the Azure OpenAI service use natural language instructions an
There are three main approaches for in-context learning: Few-shot, one-shot and zero-shot. These approaches vary based on the amount of task-specific data that is given to the model: **Few-shot**: In this case, a user includes several examples in the call prompt that demonstrate the expected answer format and content. The following example shows a few-shot prompt where we provide multiple examples:+ ``` Convert the questions to a command: Q: Ask Constance if we need some bread
There are three main approaches for in-context learning: Few-shot, one-shot and
A: ```
-The number of examples typically range from 0 to 100 depending on how many can fit in the maximum input length for a single prompt. Maximum input length can vary depending on the specific models you use. Few-shot learning enables a major reduction in the amount of task-specific data required for accurate predictions. This approach will typically perform less accurately than a fine-tuned model.
+The number of examples typically ranges from 0 to 100 depending on how many can fit in the maximum input length for a single prompt. Maximum input length can vary depending on the specific models you use. Few-shot learning enables a major reduction in the amount of task-specific data required for accurate predictions. This approach will typically perform less accurately than a fine-tuned model.
-
**One-shot**: This case is the same as the few-shot approach except only one example is provided. **Zero-shot**: In this case, no examples are provided to the model and only the task request is provided.
The service provides users access to several different models. Each model provid
The Codex series of models are a descendant of GPT-3 and have been trained on both natural language and code to power natural language to code use cases. Learn more about each model on our [models concept page](./concepts/models.md).
-## Terms of use
-
-The use of Azure OpenAI service is governed by the terms of service that were agreed to upon onboarding. You may only use this service for the use case provided. You must complete another review before using the Azure OpenAI service in a "live" or production scenario, within your company, or with your customers (as compared to use solely for internal evaluation).
- ## Next steps Learn more about the [underlying models that power Azure OpenAI](./concepts/models.md).
communication-services Logic App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/chat/logic-app.md
+
+ Title: Quickstart - Send chat message in Power Automate with Azure Communication Services
+
+description: In this quickstart, learn how to send a chat message in Azure Logic Apps workflows by using the Azure Communication Services Chat connector.
++++ Last updated : 07/20/2022+++++
+# Quickstart: Send chat message in Power Automate with Azure Communication Services
+
+You can create automated workflows that send chat messages using the Azure Communication Services Chat connector. This quickstart shows how to create a chat thread, add a participant, send a message, and list messages in an existing workflow.
+
+## Prerequisites
+
+- An Azure account with an active subscription, or [create an Azure account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+- An active Azure Communication Services resource, or [create a Communication Services resource](../create-communication-resource.md).
+
+- An active Logic Apps resource (logic app), or [create a blank logic app but with the trigger that you want to use](../../../logic-apps/quickstart-create-first-logic-app-workflow.md). Currently, the Azure Communication Services Chat connector provides only actions, so your logic app requires a trigger, at minimum.
++
+## Create user
+
+To add a new step in your workflow by using the Azure Communication Services Identity connector, follow these steps in Power Automate with your Power Automate flow open in edit mode.
+
+1. On the designer, under the step where you want to add the new action, select New step. Alternatively, to add the new action between steps, move your pointer over the arrow between those steps, select the plus sign (+), and select Add an action.
+
+1. In the Choose an operation search box, enter Communication Services Identity. From the actions list, select Create a user.
+
+ :::image type="content" source="./media/logic-app/azure-communications-services-connector-create-user.png" alt-text="Screenshot that shows the Azure Communication Services Identity connector Create user action.":::
+
+1. Provide the Connection String. You can find it in the [Azure portal](https://portal.azure.com/), within your Azure Communication Services resource, under Keys > Connection String in the left menu.
+
+ :::image type="content" source="./media/logic-app/azure-portal-connection-string.png" alt-text="Screenshot that shows the Keys page within an Azure Communication Services Resource." lightbox="./media/logic-app/azure-portal-connection-string.png":::
+
+1. Provide a Connection Name
+
+1. Click "Show advanced options" and select a Token Scope.
+
+ This action will output a User ID, which is a Communication Services user identity.
+ Additionally, because you selected a Token Scope under "Show advanced options", the action will also output an access token and its expiration time with the specified scope.
+
+ :::image type="content" source="./media/logic-app/azure-communications-services-connector-create-user-action.png" alt-text="Screenshot that shows the Azure Communication Services Identity connector Create user action options.":::
+
+1. Select "chat"
+
+ :::image type="content" source="./media/logic-app/azure-communications-services-connector-create-user-action-advanced.png" alt-text="Screenshot that shows the Azure Communication Services Chat connector advanced options.":::
+
+1. Click Create. This will output the User ID and an Access Token.
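For reference, a Communication Services connection string pairs the resource endpoint with an access key, separated by a semicolon. A minimal parsing sketch with placeholder values (the endpoint and key shown are not real):

```python
# Placeholder connection string; copy the real one from the Keys blade.
SAMPLE = "endpoint=https://contoso.communication.azure.com/;accesskey=placeholderKey123"

def parse_connection_string(value):
    """Split ';'-separated 'name=value' pairs into a dict, splitting on
    the first '=' only because the access key itself may contain '='."""
    parts = {}
    for segment in value.rstrip(";").split(";"):
        name, _, val = segment.partition("=")
        parts[name] = val
    return parts

conn = parse_connection_string(SAMPLE)
```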
+
+## Create a chat thread
+
+1. Add a new action
+
+1. In the Choose an operation search box, enter Communication Services Chat. From the actions list, select Create chat thread.
+
+ :::image type="content" source="./media/logic-app/azure-communications-services-connector-create-chat-thread.png" alt-text="Screenshot that shows the Azure Communication Services Chat connector Create a chat thread action.":::
+
+1. Provide the Azure Communication Services endpoint URL. You can find it in the [Azure portal](https://portal.azure.com/), within your Azure Communication Services resource, under Keys > Endpoint in the left menu.
+
+1. Provide a Connection Name
+
+1. Select the Access Token from the previous step and add a chat thread topic description. Then add the created user and a Name for the participant.
+
+ :::image type="content" source="./media/logic-app/azure-communications-services-connector-create-chat-thread-input.png" alt-text="Screenshot that shows the Azure Communication Services Chat connector Create chat thread action input fields.":::
+
+## Send a message
+
+1. Add a new action
+
+1. In the Choose an operation search box, enter Communication Services Chat. From the actions list, select Send a Chat message to chat thread.
+
+ :::image type="content" source="./media/logic-app/azure-communications-services-connector-send-chat-message.png" alt-text="Screenshot that shows the Azure Communication Services Chat connector Send chat message action.":::
+
+1. Provide the Access Token, Thread ID, Content, and Name information as shown below.
+
+ :::image type="content" source="./media/logic-app/azure-communications-services-connector-send-chat-message-input.png" alt-text="Screenshot that shows the Azure Communication Services Chat connector Send chat message action input fields.":::
+
+## List chat thread messages
+
+To verify you have correctly sent a message, we'll add one more action to list the chat thread messages.
+
+1. Add a new action
+
+1. In the Choose an operation search box, enter Communication Services Chat. From the actions list, select List chat thread messages.
+
+ :::image type="content" source="./media/logic-app/azure-communications-services-connector-list-chat-messages.png" alt-text="Screenshot that shows the Azure Communication Services Chat connector List chat messages action.":::
+
+1. Provide the Access token and Thread ID as follows
+
+ :::image type="content" source="./media/logic-app/azure-communications-services-connector-list-chat-messages-input.png" alt-text="Screenshot that shows the Azure Communication Services Chat connector Send chat message action input.":::
+
+## Test your logic app
+
+To manually start your workflow, on the designer toolbar, select **Run**. The workflow should create a user, create a chat thread, send a message, and list the thread messages. For more information, review [how to run your workflow](../../../logic-apps/quickstart-create-first-logic-app-workflow.md#run-workflow).
+
+Now click on the List chat thread messages action and check the output; the message you sent will appear in the action outputs.
++
+## Clean up resources
+
+To remove a Communication Services subscription, delete the Communication Services resource or resource group. Deleting the resource group also deletes any other resources in that group. For more information, review [how to clean up Communication Services resources](../create-communication-resource.md#clean-up-resources).
+
+To clean up your logic app workflow and related resources, review [how to clean up Logic Apps resources](../../../logic-apps/quickstart-create-first-logic-app-workflow.md#clean-up-resources).
+
+## Next steps
+
+In this quickstart, you learned how to create a user, create a chat thread, and send a message using the Azure Communication Services Identity and Azure Communication Services Chat connectors. To learn more, check the [Azure Communication Services Chat Connector](/connectors/acschat/) documentation.
+
+To learn more about access tokens, check [Create and Manage Azure Communication Services users and access tokens](../identity/logic-app.md).
+
+To learn how to send an email, check [Send email message in Power Automate with Azure Communication Services](../email/logic-app.md).
+
communication-services Logic App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/email/logic-app.md
+
+ Title: Quickstart - Send email message in Power Automate with Azure Communication Services
+
+description: In this quickstart, learn how to send an email in Azure Logic Apps workflows by using the Azure Communication Services Email connector.
++++ Last updated : 07/20/2022+++++
+# Quickstart: Send email message in Power Automate with Azure Communication Services
+
+This quickstart shows how to send emails using the Azure Communication Services Email connector in your Power Automate workflows.
++
+## Prerequisites
+
+- An Azure account with an active subscription, or [create an Azure account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+- An active Azure Communication Services resource, or [create a Communication Services resource](../create-communication-resource.md).
+
+- An active Logic Apps resource (logic app), or [create a blank logic app but with the trigger that you want to use](../../../logic-apps/quickstart-create-first-logic-app-workflow.md). Currently, the Azure Communication Services Email connector provides only actions, so your logic app requires a trigger, at minimum.
+
+- An Azure Communication Services Email resource with a [configured domain](../email/create-email-communication-resource.md) or [custom domain](../email/add-custom-verified-domains.md).
+
+- An Azure Communication Services resource [connected with an Azure Email domain](../email/connect-email-communication-resource.md).
+++
+## Send email
+
+To add a new step in your workflow by using the Azure Communication Services Email connector, follow these steps in Power Automate with your Power Automate flow open in edit mode.
+
+1. On the designer, under the step where you want to add the new action, select New step. Alternatively, to add the new action between steps, move your pointer over the arrow between those steps, select the plus sign (+), and select Add an action.
+
+1. In the Choose an operation search box, enter Communication Services Email. From the actions list, select Send email.
+
+ :::image type="content" source="./media/logic-app/azure-communications-services-connector-send-email.png" alt-text="Screenshot that shows the Azure Communication Services Email connector Send email action.":::
+
+1. Provide the Connection String. You can find it in the [Azure portal](https://portal.azure.com/), within your Azure Communication Services resource, under Keys > Connection String in the left menu.
+
+ :::image type="content" source="./media/logic-app/azure-communications-services-connection-string.png" alt-text="Screenshot that shows the Azure Communication Services Connection String." lightbox="./media/logic-app/azure-communications-services-connection-string.png":::
+
+1. Provide a Connection Name
+
+1. Select Send email
+
+ :::image type="content" source="./media/logic-app/azure-communications-services-connector-send-email.png" alt-text="Screenshot that shows the Azure Communication Services Email connector Send email action.":::
+
+1. Fill in the **From** field using an email domain configured in the [prerequisites](#prerequisites). Also fill in the To, Subject, and Body fields as shown below.
+
+ :::image type="content" source="./media/logic-app/azure-communications-services-connector-send-email-input.png" alt-text="Screenshot that shows the Azure Communication Services Email connector Send email action input.":::
+++
+## Test your logic app
+
+To manually start your workflow, on the designer toolbar, select **Run**. The workflow should send the email message. For more information, review [how to run your workflow](../../../logic-apps/quickstart-create-first-logic-app-workflow.md#run-workflow). You can check the outputs of these actions after the workflow runs successfully.
+
+You should receive an email at the address specified. Additionally, you can use the Get email message status action to check the status of emails sent through the Send email action. To learn about more actions, check the [Azure Communication Services Email connector](/connectors/acsemail/) documentation.
+
+## Clean up resources
+
+To remove a Communication Services subscription, delete the Communication Services resource or resource group. Deleting the resource group also deletes any other resources in that group. For more information, review [how to clean up Communication Services resources](../create-communication-resource.md#clean-up-resources).
+
+To clean up your logic app workflow and related resources, review [how to clean up Logic Apps resources](../../../logic-apps/quickstart-create-first-logic-app-workflow.md#clean-up-resources).
+
+## Next steps
+
+Learn more about [how to send a chat message](../chat/logic-app.md) from Power Automate using Azure Communication Services.
+
+To learn more about access tokens, check [Create and Manage Azure Communication Services users and access tokens](../identity/logic-app.md).
communication-services Logic App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/identity/logic-app.md
+
+ Title: Quickstart - Create and Manage Azure Communication Services users and access tokens in Microsoft Power Automate
+
+description: In this quickstart, learn how to manage users and access tokens in Azure Logic Apps workflows by using the Azure Communication Services Identity connector.
++++ Last updated : 07/20/2022+++++
+# Quickstart: Create and Manage Azure Communication Services users and access tokens in Microsoft Power Automate
+
+Access tokens let Azure Communication Services connectors authenticate directly against Azure Communication Services as a particular identity. You'll need to create access tokens if you want to perform actions like send a message in a chat using the [Azure Communication Services Chat](../chat/logic-app.md) connector.
+This quickstart shows how to [create a user](#create-user), [delete a user](#delete-a-user), [issue a user an access token](#issue-a-user-access-token), and [revoke user access tokens](#revoke-user-access-tokens) using the [Azure Communication Services Identity](https://powerautomate.microsoft.com/connectors/details/shared_acsidentity/azure-communication-services-identity/) connector.
++
+## Prerequisites
+
+- An Azure account with an active subscription, or [create an Azure account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+- An active Azure Communication Services resource, or [create a Communication Services resource](../create-communication-resource.md).
+
+- An active Logic Apps resource (logic app), or [create a blank logic app but with the trigger that you want to use](../../../logic-apps/quickstart-create-first-logic-app-workflow.md). Currently, the Azure Communication Services Identity connector provides only actions, so your logic app requires a trigger, at minimum.
++
+## Create user
+
+To add a new step in your workflow by using the Azure Communication Services Identity connector, follow these steps in Power Automate with your Power Automate flow open in edit mode.
+1. On the designer, under the step where you want to add the new action, select New step. Alternatively, to add the new action between steps, move your pointer over the arrow between those steps, select the plus sign (+), and select Add an action.
+
+1. In the Choose an operation search box, enter Communication Services Identity. From the actions list, select Create a user.
+
+ :::image type="content" source="./media/logic-app/azure-communications-services-connector-create-user.png" alt-text="Screenshot that shows the Azure Communication Services Identity connector Create user action.":::
+
+1. Provide the Connection String. You can find it in the [Azure portal](https://portal.azure.com/), within your Azure Communication Services resource, under Keys > Connection String in the left menu.
+
+ :::image type="content" source="./media/logic-app/azure-portal-connection-string.png" alt-text="Screenshot that shows the Keys page within an Azure Communication Services Resource." lightbox="./media/logic-app/azure-portal-connection-string.png":::
+
+1. Provide a Connection Name
+
+1. Click **Create**
+
+ This action will output a User ID, which is a Communication Services user identity.
+ Additionally, if you click "Show advanced options" and select a Token Scope, the action will also output an access token and its expiration time with the specified scope.
+
+ :::image type="content" source="./media/logic-app/azure-communications-services-connector-create-user-action.png" alt-text="Screenshot that shows the Azure Communication Services connector Create user action.":::
+
+ :::image type="content" source="./media/logic-app/azure-communications-services-connector-create-user-action-advanced.png" alt-text="Screenshot that shows the Azure Communication Services connector Create user action advanced options.":::
++
+## Issue a user access token
+
+After you have a Communication Services identity, you can use the Issue a user access token action to issue an access token. The following steps will show you how:
+1. Add a new action and enter Communication Services Identity in the search box. From the actions list, select Issue a user access token.
+
+ :::image type="content" source="./media/logic-app/azure-communications-services-connector-issue-access-token-action.png" alt-text="Screenshot that shows the Azure Communication Services Identity connector Issue access token action.":::
+
+
+1. Then, you can use the User ID output from the previous [Create a user](#create-user) step.
+
+1. Specify the token scope: VoIP or chat. [Learn more about tokens and authentication](../../concepts/authentication.md).
+
+ :::image type="content" source="./media/logic-app/azure-communications-services-connector-issue-access-token-action-token-scope.png" alt-text="Screenshot that shows the Azure Communication Services Identity connector Issue access token action, specifying the token scope.":::
+
+This will output an access token and its expiration time with the specified scope.
+
+## Revoke user access tokens
+
+After you have a Communication Services identity, you can use the Revoke user access tokens action to revoke a user's access tokens. The following steps will show you how:
+1. Add a new action and enter Communication Services Identity in the search box. From the actions list, select Revoke user access tokens.
+
+ :::image type="content" source="./media/logic-app/azure-communications-services-connector-revoke-access-token-action.png" alt-text="Screenshot that shows the Azure Communication Services Identity connector Revoke access token action.":::
+
+1. Specify the User ID.
+
+ :::image type="content" source="./media/logic-app/azure-communications-services-connector-revoke-access-token-action-user-id.png" alt-text="Screenshot that shows the Azure Communication Services Identity connector Revoke access token action input.":::
+
+This revokes all access tokens for the specified user. There are no outputs for this action.
++
+## Delete a user
+
+After you have a Communication Services identity, you can use the Delete a user action to delete that user. The following steps show you how:
+1. Add a new action and enter Communication Services Identity in the search box. From the actions list, select Delete a user.
+
+ :::image type="content" source="./media/logic-app/azure-communications-services-connector-delete-user.png" alt-text="Screenshot that shows the Azure Communication Services Identity connector Delete user action.":::
+
+1. Specify the User ID.
+
+ :::image type="content" source="./media/logic-app/azure-communications-services-connector-delete-user-id.png" alt-text="Screenshot that shows the Azure Communication Services Identity connector Delete user action input.":::
+
+ This removes the user and revokes all of that user's access tokens. There are no outputs for this action.
++
+## Test your logic app
+
+To manually start your workflow, on the designer toolbar, select **Run**. The workflow should create a user, issue an access token for that user, then revoke the token and delete the user. For more information, review [how to run your workflow](../../../logic-apps/quickstart-create-first-logic-app-workflow.md#run-workflow). You can check the outputs of these actions after the workflow runs successfully.
+
+## Clean up resources
+
+To remove a Communication Services subscription, delete the Communication Services resource or resource group. Deleting the resource group also deletes any other resources in that group. For more information, review [how to clean up Communication Services resources](../create-communication-resource.md#clean-up-resources).
+
+To clean up your logic app workflow and related resources, review [how to clean up Logic Apps resources](../../../logic-apps/quickstart-create-first-logic-app-workflow.md#clean-up-resources).
+
+## Next steps
+
+In this quickstart, you learned how to create a user, delete a user, issue a user access token, and revoke user access tokens using the Azure Communication Services Identity connector. To learn more, check the [Azure Communication Services Identity Connector](/connectors/acsidentity/) documentation.
+
+To see how tokens are used by other connectors, check out [how to send a chat message](../chat/logic-app.md) from Power Automate using Azure Communication Services.
+
+To learn more about how to send an email using the Azure Communication Services Email connector, see [Send email message in Power Automate with Azure Communication Services](../email/logic-app.md).
communication-services Virtual Visits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/virtual-visits.md
Title: Virtual visits with Azure Communication Services
-description: Learn concepts for virtual visit apps
+ Title: Virtual appointments with Azure Communication Services
+description: Learn concepts for virtual appointment apps
-# Virtual visits
+# Virtual appointments
-This tutorial describes concepts for virtual visit applications. After completing this tutorial and the associated [Sample Builder](https://aka.ms/acs-sample-builder), you will understand common use cases that a virtual visits application delivers, the Microsoft technologies that can help you build those uses cases, and have built a sample application integrating Microsoft 365 and Azure that you can use to demo and explore further.
+This tutorial describes concepts for virtual appointment applications. After completing this tutorial and the associated [Sample Builder](https://aka.ms/acs-sample-builder), you will understand common use cases that a virtual appointments application delivers, the Microsoft technologies that can help you build those use cases, and have built a sample application integrating Microsoft 365 and Azure that you can use to demo and explore further.
-Virtual visits are a communication pattern where a **consumer** and a **business** assemble for a scheduled appointment. The **organizational boundary** between consumer and business, and **scheduled** nature of the interaction, are key attributes of most virtual visits. Many industries operate virtual visits: meetings with a healthcare provider, a loan officer, or a product support technician.
+Virtual appointments are a communication pattern where a **consumer** and a **business** assemble for a scheduled appointment. The **organizational boundary** between consumer and business, and **scheduled** nature of the interaction, are key attributes of most virtual appointments. Many industries operate virtual appointments: meetings with a healthcare provider, a loan officer, or a product support technician.
No matter the industry, there are at least three personas involved in a virtual visit and certain tasks they accomplish:
- **Office Manager.** The office manager configures the business' availability and booking rules for providers and consumers.
-- **Provider.** The provider gets on the call with the consumer. They must be able to view upcoming virtual visits and join the virtual visit and engage in communication.
+- **Provider.** The provider gets on the call with the consumer. They must be able to view upcoming virtual appointments, join the virtual appointment, and engage in communication.
- **Consumer**. The consumer who schedules and motivates the visit. They must schedule a visit, enjoy reminders of the visit, typically through SMS or email, and join the virtual visit and engage in communication.
-Azure and Teams are interoperable. This interoperability gives organizations choice in how they deliver virtual visits using Microsoft's cloud. Three examples include:
+Azure and Teams are interoperable. This interoperability gives organizations choice in how they deliver virtual appointments using Microsoft's cloud. Three examples include:
-- **Microsoft 365** provides a zero-code suite for virtual visits using Microsoft [Teams](https://www.microsoft.com/microsoft-teams/group-chat-software/) and [Bookings](https://www.microsoft.com/microsoft-365/business/scheduling-and-booking-app). This is the easiest option but customization is limited. [Check out this video for an introduction.](https://www.youtube.com/watch?v=zqfGrwW2lEw)
+- **Microsoft 365** provides a zero-code suite for virtual appointments using Microsoft [Teams](https://www.microsoft.com/microsoft-teams/group-chat-software/) and [Bookings](https://www.microsoft.com/microsoft-365/business/scheduling-and-booking-app). This is the easiest option but customization is limited. [Check out this video for an introduction.](https://www.youtube.com/watch?v=zqfGrwW2lEw)
- **Microsoft 365 + Azure hybrid.** Combine Microsoft 365 Teams and Bookings with a custom Azure application for the consumer experience. Organizations take advantage of Microsoft 365's employee familiarity but customize and embed the consumer visit experience in their own application. - **Azure custom.** Build the entire solution on Azure primitives: the business experience, the consumer experience, and scheduling systems.
These three **implementation options** are columns in the table below, while eac
| *Provider* | Join the visit | Teams | Teams | ACS Calling & Chat |
| *Consumer* | Schedule a visit | Bookings | Bookings | ACS Rooms |
| *Consumer* | Be reminded of a visit | Bookings | Bookings | ACS SMS |
-| *Consumer*| Join the visit | Teams or Virtual Visits | ACS Calling & Chat | ACS Calling & Chat |
+| *Consumer*| Join the visit | Teams or virtual appointments | ACS Calling & Chat | ACS Calling & Chat |
-There are other ways to customize and combine Microsoft tools to deliver a virtual visits experience:
+There are other ways to customize and combine Microsoft tools to deliver a virtual appointments experience:
- **Replace Bookings with a custom scheduling experience with Graph.** You can build your own consumer-facing scheduling experience that controls Microsoft 365 meetings with Graph APIs.
- **Replace Teams' provider experience with Azure.** You can still use Microsoft 365 and Bookings to manage meetings but have the business user launch a custom Azure application to join the Teams meeting. This might be useful where you want to split or customize virtual visit interactions from day-to-day employee Teams activity.

## Extend Microsoft 365 with Azure

The rest of this tutorial focuses on Microsoft 365 and Azure hybrid solutions. These hybrid configurations are popular because they combine employee familiarity of Microsoft 365 with the ability to customize the consumer experience. They're also a good launching point to understanding more complex and customized architectures. The diagram below shows user steps for a virtual visit:
-![High-level architecture of a hybrid virtual visits solution](./media/virtual-visits/virtual-visit-arch.svg)
+![High-level architecture of a hybrid virtual appointments solution](./media/virtual-visits/virtual-visit-arch.svg)
1. Consumer schedules the visit using Microsoft 365 Bookings.
2. Consumer gets a visit reminder through SMS and Email.
3. Provider joins the visit using Microsoft Teams.
The rest of this tutorial focuses on Microsoft 365 and Azure hybrid solutions. T
5. The users communicate with each other using voice, video, and text chat in a meeting.

## Building a virtual visit sample
-In this section weΓÇÖre going to use a Sample Builder tool to deploy a Microsoft 365 + Azure hybrid virtual visits application to an Azure subscription. This application will be a desktop and mobile friendly browser experience, with code that you can use to explore and productionize.
+In this section we're going to use a Sample Builder tool to deploy a Microsoft 365 + Azure hybrid virtual appointments application to an Azure subscription. This application will be a desktop- and mobile-friendly browser experience, with code that you can use to explore and productionize.
### Step 1 - Configure bookings

This sample takes advantage of the Microsoft 365 Bookings app to power the consumer scheduling experience and create meetings for providers. Thus the first step is creating a Bookings calendar and getting the Booking page URL from https://outlook.office.com/bookings/calendar.
-![Screenshot of Booking configuration experience](./media/virtual-visits/bookings-url.png)
+![Screenshot of Booking configuration experience.](./media/virtual-visits/bookings-url.png)
Make sure online meeting is enabled for the calendar by going to https://outlook.office.com/bookings/services.
-![Screenshot of Booking services configuration experience](./media/virtual-visits/bookings-services.png)
+![Screenshot of Booking services configuration experience.](./media/virtual-visits/bookings-services.png)
-And then make sure "Add online meeting" is enable.
+And then make sure "Add online meeting" is enabled.
-![Screenshot of Booking services online meeting configuration experience](./media/virtual-visits/bookings-services-online-meeting.png)
+![Screenshot of Booking services online meeting configuration experience.](./media/virtual-visits/bookings-services-online-meeting.png)
### Step 2 - Sample Builder
-Use the Sample Builder to customize the consumer experience. You can reach the Sampler Builder using this [link](https://aka.ms/acs-sample-builder), or navigating to the page within the Azure Communication Services resource in the Azure portal. Step through the Sample Builder wizard and configure if Chat or Screen Sharing should be enabled. Change themes and text to you match your application. You can preview your configuration live from the page in both Desktop and Mobile browser form-factors.
+Use the Sample Builder to customize the consumer experience. You can reach the Sample Builder using this [link](https://aka.ms/acs-sample-builder), or by navigating to the page within the Azure Communication Services resource in the Azure portal. Step through the Sample Builder wizard, select an industry template, and then configure whether Chat or Screen Sharing should be enabled. Change themes and text to match your application. You can preview your configuration live from the page in both Desktop and Mobile browser form-factors.
+
+[ ![Screenshot of Sample Builder themes page.](./media/virtual-visits/sample-builder-themes.png)](./media/virtual-visits/sample-builder-themes.png#lightbox)
-[ ![Screenshot of Sample builder start page](./media/virtual-visits/sample-builder-start.png)](./media/virtual-visits/sample-builder-start.png#lightbox)
### Step 3 - Deploy

At the end of the Sample Builder wizard, you can **Deploy to Azure** or download the code as a zip. The sample builder code is publicly available on [GitHub](https://github.com/Azure-Samples/communication-services-virtual-visits-js).
-[ ![Screenshot of Sample builder deployment page](./media/virtual-visits/sample-builder-landing.png)](./media/virtual-visits/sample-builder-landing.png#lightbox)
+[ ![Screenshot of Sample builder deployment page.](./media/virtual-visits/sample-builder-landing.png)](./media/virtual-visits/sample-builder-landing.png#lightbox)
The deployment launches an Azure Resource Manager (ARM) template that deploys the themed application you configured.
-![Screenshot of Sample builder arm template](./media/virtual-visits/sample-builder-arm.png)
+![Screenshot of Sample builder arm template.](./media/virtual-visits/sample-builder-arm.png)
-After walking through the ARM template you can **Go to resource group**
+After walking through the ARM template you can **Go to resource group**.
-![Screenshot of a completed Azure Resource Manager Template](./media/virtual-visits/azure-complete-deployment.png)
+![Screenshot of a completed Azure Resource Manager Template.](./media/virtual-visits/azure-complete-deployment.png)
### Step 4 - Test

The Sample Builder creates three resources in the selected Azure subscription. The **App Service** is the consumer front end, powered by Azure Communication Services.
-![Screenshot of produced azure resources in azure portal](./media/virtual-visits/azure-resources.png)
+![Screenshot of produced Azure resources in the Azure portal.](./media/virtual-visits/azure-resources.png)
Opening the App Service's URL and navigating to `https://<YOUR URL>/VISIT` allows you to try out the consumer experience and join a Teams meeting. `https://<YOUR URL>/BOOK` embeds the Booking experience for consumer scheduling.
-![Screenshot of final view of azure app service](./media/virtual-visits/azure-resource-final.png)
+![Screenshot of final view of the Azure App Service.](./media/virtual-visits/azure-resource-final.png)
### Step 5 - Set deployed app URL in Bookings

Copy your application URL into your calendar Business information setting by going to https://outlook.office.com/bookings/businessinformation.
-![Screenshot of final view of bookings business information](./media/virtual-visits/bookings-acs-app-integration-url.png)
+![Screenshot of final view of bookings business information.](./media/virtual-visits/bookings-acs-app-integration-url.png)
## Going to production

The Sample Builder gives you the basics of a Microsoft 365 and Azure virtual visit: consumer scheduling via Bookings, consumer joins via custom app, and the provider joins via Teams. However, there are several things to consider as you take this scenario to production.
container-apps Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/overview.md
With Azure Container Apps, you can:
- [**Run containers from any registry**](containers.md), public or private, including Docker Hub and Azure Container Registry (ACR).
-- [**Use the Azure CLI extension or ARM templates**](get-started.md) to manage your applications.
+- [**Use the Azure CLI extension, Azure portal, or ARM templates**](get-started.md) to manage your applications.
- [**Provide an existing virtual network**](vnet-custom.md) when creating an environment for your container apps.
- [**Securely manage secrets**](manage-secrets.md) directly in your application.
-- [**View application logs**](monitor.md) using Azure Log Analytics.
+- [**Monitor your apps**](monitor.md) using Azure Log Analytics.
<sup>1</sup> Applications that [scale on CPU or memory load](scale-app.md) can't scale to zero.
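As a hedged sketch of the CLI management option listed above, the following commands deploy a minimal container app with the `containerapp` Azure CLI extension. The resource names and location are placeholder assumptions, and the image is Microsoft's public hello-world sample:

```azurecli
az extension add --name containerapp --upgrade
az group create --name my-rg --location eastus
az containerapp env create --name my-environment --resource-group my-rg --location eastus
az containerapp create --name my-app --resource-group my-rg \
  --environment my-environment \
  --image mcr.microsoft.com/azuredocs/containerapps-helloworld:latest \
  --ingress external --target-port 80
```

With `--ingress external`, the command output includes the app's public fully qualified domain name.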
cosmos-db Analytical Store Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/analytical-store-introduction.md
Analytical store relies on Azure Storage and offers the following protection aga
Although analytical store has built-in protection against physical failures, backup can be necessary for accidental deletes or updates in transactional store. In those cases, you can restore a container and use the restored container to backfill the data in the original container, or fully rebuild analytical store if necessary. > [!NOTE]
-> Currently analytical store isn't backuped and can't be restored, and your backup policy can't be planned relying on that.
+> Currently analytical store data isn't backed up and can't be restored, so your backup policy can't rely on it.
Synapse Link, and consequently analytical store, has different compatibility levels with Azure Cosmos DB backup modes:
-* Periodic backup mode is fully compatible with Synapse Link and these 2 features can be used in the same database account without any restriction.
+* Periodic backup mode is fully compatible with Synapse Link, and these two features can be used in the same database account.
* Continuous backup mode isn't fully supported yet:
- * Currently continuous backup mode can't be used in database accounts with Synapse Link enabled.
- * Currently database accounts with continuous backup mode enabled can enable Synapse Link through a support case.
- * Currently new database accounts can be created with continous backup mode and Synapse Link enabled, using Azure CLI or PowerShell. Those two features must be turned on at the same time, in the exact same command that creates the database account.
+ * Database accounts with Synapse Link enabled currently can't use continuous backup mode.
+ * Database accounts with continuous backup mode enabled can enable Synapse Link through a support case. This capability is currently in preview.
+ * Database accounts that have neither continuous backup nor Synapse Link enabled can use these two features together through a support case. This capability is currently in preview.
### Backup Policies
cosmos-db Continuous Backup Restore Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/continuous-backup-restore-introduction.md
Currently the point in time restore functionality has the following limitations:
* Multi-regions write accounts aren't supported.
-* Azure Synapse Link and periodic backup mode can coexist in the same database account. However, analytical store data isn't included in backups and restores. When Azure Synapse Link is enabled, Azure Cosmos DB will continue to automatically take backups of your data in the transactional store at a scheduled backup interval.
-
-* Azure Synapse Link and continuous backup mode can't coexist in the same database account. Currently database accounts with Azure Synapse Link enabled can't use continuous backup mode and vice-versa.
+* Currently Synapse Link isn't fully compatible with continuous backup mode. For more information, see [analytical store backup](analytical-store-introduction.md#backup).
* The restored account is created in the same region where your source account exists. You can't restore an account into a region where the source account didn't exist.
cosmos-db Synapse Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/synapse-link.md
Synapse Link isn't recommended if you're looking for traditional data warehouse
* Enabling Synapse Link on existing Cosmos DB containers is only supported for SQL API accounts. Synapse Link can be enabled on new containers for both SQL API and MongoDB API accounts.
-* Backup and restore:
- * You can recreate your analytical store data in some scenarios as below. In this mode, your transactional store data will be automatically backed up. If `transactional TTL` is equal or greater than your `analytical TTL` on your container, you can fully recreate your analytical store data by enabling analytical store on the restored container:
- - Azure Synapse Link can be enabled on accounts configured with periodic backups.
- - If continuous backup (point-in-time restore) is enabled on your account, you can now restore your analytical data. To enable Synapse Link for such accounts, please reach out to cosmosdbsynapselink@microsoft.com. This is applicable only for SQL API.
- * Restoring analytical data is not supported in following scenarios, for SQL API and API for Mongo DB:
- - If you already enabled Synapse Link on your database account, you cannot enable point-in-time restore on such accounts.
- - If `analytical TTL` is greater than `transactional TTL`, data that only exists in analytical store cannot be restored. You can continue to access full data from analytical store in the parent region.
+* Although analytical store data isn't backed up, and therefore can't be restored, you can rebuild your analytical store by re-enabling Synapse Link on the restored container. For more information, see [analytical store backup](analytical-store-introduction.md#backup).
+
+* Currently Synapse Link isn't fully compatible with continuous backup mode. For more information, see [analytical store backup](analytical-store-introduction.md#backup).
* Granular role-based access control (RBAC) isn't supported when querying from Synapse. Users that have access to your Synapse workspace and to the Cosmos DB account can access all containers within that account. We currently don't support more granular access to the containers.
cost-management-billing Direct Ea Administration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/direct-ea-administration.md
Title: Azure portal administration for direct Enterprise Agreements
description: This article explains the common tasks that a direct enterprise administrator accomplishes in the Azure portal. Previously updated : 07/08/2022 Last updated : 08/03/2022
You can delete an enrollment account only when there are no active subscriptions
1. In the account row that you want to delete, select the ellipsis (**…**) symbol, and then select **Delete**.
1. On the Delete account page, select the **Yes, I want to delete this account** confirmation, and then select **Delete**.
+## Manage notification contacts
+
+Notifications allow enterprise administrators to enroll their team members to receive usage, invoice, and user management notifications without giving them billing account access in the Azure portal.
+
+Notification contacts are shown in the Azure portal under **Settings** > **Notifications**. Managing your notification contacts helps make sure that the right people in your organization get Azure EA notifications.
+
+To view current notifications settings and add contacts:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_GTM/ModernBillingMenuBlade/AllBillingScopes).
+1. Navigate to **Cost Management + Billing**.
+1. In the left menu, select **Billing scopes** and then select a billing account scope.
+1. In the left menu, under **Settings**, select **Notifications**.
+ Notification contacts are shown on the page.
+1. To add a contact, select **+ Add**.
+1. In the **Add Contact** area, enter the contact's email address.
+1. Under **Frequency**, select a notification interval. Weekly is the default value.
+1. Under **Categories**, select **Lifecycle Management** to receive notifications when the enrollment end date is approaching or has passed.
+1. Select **Add** to save the changes.
+ :::image type="content" source="./media/direct-ea-administration/add-contact.png" alt-text="Screenshot showing the Add Contact window where you add a contact." :::
+
+The new notification contact is shown in the Notification list.
+
+An EA admin can manage notification access for a contact by selecting the ellipsis (…) symbol to the right of each contact. They can edit and remove existing notification contacts.
+
+By default, notification contacts are subscribed to the lifecycle notification that the coverage period end date is approaching. Unsubscribing from lifecycle management notifications suppresses notifications for both the coverage period end date and the agreement end date.
+ ## Azure sponsorship offer The Azure sponsorship offer is a limited sponsored Microsoft Azure account. It's available by e-mail invitation only to limited customers selected by Microsoft. If you're entitled to the Microsoft Azure sponsorship offer, you'll receive an e-mail invitation to your account ID.
data-factory Data Access Strategies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-access-strategies.md
Previously updated : 01/26/2022 Last updated : 08/03/2022 # Data access strategies
data-factory Data Factory Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-factory-private-link.md
Previously updated : 03/18/2022 Last updated : 08/03/2022 # Azure Private Link for Azure Data Factory
data-factory Data Factory Service Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-factory-service-identity.md
Previously updated : 01/27/2022 Last updated : 08/03/2022
data-factory Data Factory Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-factory-troubleshoot-guide.md
Previously updated : 05/12/2022 Last updated : 08/03/2022
data-factory Data Factory Tutorials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-factory-tutorials.md
Previously updated : 03/16/2021 Last updated : 08/03/2022 # Azure Data Factory tutorials
data-factory Data Factory Ux Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-factory-ux-troubleshoot-guide.md
Title: Troubleshoot Azure Data Factory Studio description: Learn how to troubleshoot Azure Data Factory Studio issues.-+ Previously updated : 06/01/2021- Last updated : 08/03/2022+
data-factory Data Flow Aggregate Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-aggregate-functions.md
Previously updated : 02/02/2022 Last updated : 08/03/2022 # Aggregate functions in mapping data flow
data-factory Data Flow Aggregate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-aggregate.md
Previously updated : 09/09/2021 Last updated : 08/03/2022 # Aggregate transformation in mapping data flow
data-factory Data Flow Alter Row https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-alter-row.md
Previously updated : 09/09/2021 Last updated : 08/03/2022 # Alter row transformation in mapping data flow
data-factory Data Flow Array Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-array-functions.md
Previously updated : 02/02/2022 Last updated : 08/03/2022 # Array functions in mapping data flow
data-factory Data Flow Assert https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-assert.md
Previously updated : 06/23/2022 Last updated : 08/03/2022 # Assert transformation in mapping data flow
data-factory Data Flow Cached Lookup Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-cached-lookup-functions.md
Previously updated : 02/02/2022 Last updated : 08/03/2022 # Cached lookup functions in mapping data flow
data-factory Data Flow Cast https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-cast.md
Previously updated : 07/13/2022 Last updated : 08/03/2022 # Cast transformation in mapping data flow
To modify the data type for columns in your data flow, add columns to "Cast sett
**Type:** Choose the data type to cast your column to. If you pick "complex", you can then select "Define complex type" and define structures, arrays, and maps inside the expression builder.
+> [!NOTE]
+> Support for complex data type casting from the Cast transformation is currently unavailable. Use a Derived Column transformation instead.
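As a hedged sketch of that workaround (column names here are hypothetical): in a Derived Column transformation, add a new column (for example, `address`) and build the complex value with the `@()` structure syntax, casting leaf values as you go:

```
@(street = streetStr,
  zip = toInteger(zipStr))
```

This produces a struct column with a string `street` subcolumn and an integer `zip` subcolumn, which the Cast transformation alone can't express today.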
+ **Format:** Some data types, like decimal and dates, will allow for additional formatting options. **Assert type check:** The cast transformation allows for type checking. If the casting fails, the row will be marked as an assertion error that you can trap later in the stream.
data-factory Data Flow Conditional Split https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-conditional-split.md
Previously updated : 05/12/2022 Last updated : 08/03/2022 # Conditional split transformation in mapping data flow
data-factory Data Flow Conversion Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-conversion-functions.md
Previously updated : 02/02/2022 Last updated : 08/03/2022 # Conversion functions in mapping data flow
data-factory Data Flow Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-create.md
Previously updated : 07/05/2021 Last updated : 08/03/2022 # Create Azure Data Factory data flows
data-factory Data Flow Tutorials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-tutorials.md
As updates are constantly made to the product, some features have added or diffe
[Assert transformation](https://www.youtube.com/watch?v=8K7flL7JWMo)
+[Log assert error rows](https://www.youtube.com/watch?v=VFRx0wjlA4s)
+
+[Fuzzy join](https://www.youtube.com/watch?v=ouMdM4yL78s)
+ ## Source and sink [Reading and writing JSONs](https://www.youtube.com/watch?v=yY5aB7Kdhjg)
As updates are constantly made to the product, some features have added or diffe
[Dynamic expressions as parameters](https://www.youtube.com/watch?v=q7W6J-DUuJY)
+[User-defined functions](https://www.youtube.com/watch?v=ZFTVoe8eeOc)
+ ## Metadata [Metadata validation rules](https://www.youtube.com/watch?v=E_UD3R-VpYE)
data-factory Ssis Integration Runtime Ssis Activity Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/ssis-integration-runtime-ssis-activity-faq.md
Check if security policies are correctly assigned to the account running self-ho
Make sure Visual C++ runtime is installed on Self-Hosted integration runtime machine. More detail can be found at [Configure Self-Hosted IR as a proxy for Azure-SSIS IR in ADF](self-hosted-integration-runtime-proxy-ssis.md#prepare-the-self-hosted-ir)
+### Error message: "Timeout when reading from staging"
+
+This error occurs when the Azure-SSIS IR, using the self-hosted IR (SHIR) as a data proxy, can't read data from the staging blob. It usually means that SHIR failed to transfer the on-premises data to the staging blob, so the Azure-SSIS IR's attempt to read the staging data fails with a timeout error. To investigate why SHIR didn't upload the data to the staging blob, check the SHIR runtime logs in the C:\ProgramData\SSISTelemetry folder and the execution logs in the C:\ProgramData\SSISTelemetry\ExecutionLog folder.
+ ### Multiple Package executions are triggered unexpectedly * Potential cause & recommended action:
data-factory Tutorial Deploy Ssis Packages Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-deploy-ssis-packages-azure.md
In this tutorial, you complete the following steps:
If you use an Azure SQL Database server with IP firewall rules/virtual network service endpoints or a managed instance with private endpoint to host SSISDB, or if you require access to on-premises data without configuring a self-hosted IR, you need to join your Azure-SSIS IR to a virtual network. For more information, see [Create an Azure-SSIS IR in a virtual network](./create-azure-ssis-integration-runtime.md).
- - Confirm that the **Allow access to Azure services** setting is enabled for the database server. This setting is not applicable when you use an Azure SQL Database server with IP firewall rules/virtual network service endpoints or a managed instance with private endpoint to host SSISDB. For more information, see [Secure Azure SQL Database](/azure/azure-sql/database/secure-database-tutorial#create-firewall-rules). To enable this setting by using PowerShell, see [New-AzSqlServerFirewallRule](/powershell/module/az.sql/new-azsqlserverfirewallrule).
+ - Confirm that the **Allow access to Azure services** setting is enabled for the database server. This setting isn't applicable when you use an Azure SQL Database server with IP firewall rules/virtual network service endpoints or a managed instance with private endpoint to host SSISDB. For more information, see [Secure Azure SQL Database](/azure/azure-sql/database/secure-database-tutorial#create-firewall-rules). To enable this setting by using PowerShell, see [New-AzSqlServerFirewallRule](/powershell/module/az.sql/new-azsqlserverfirewallrule).
- Add the IP address of the client machine, or a range of IP addresses that includes the IP address of the client machine, to the client IP address list in the firewall settings for the database server. For more information, see [Azure SQL Database server-level and database-level firewall rules](/azure/azure-sql/database/firewall-configure). - You can connect to the database server by using SQL authentication with your server admin credentials, or by using Azure Active Directory (Azure AD) authentication with the specified system/user-assigned managed identity for your data factory. For the latter, you need to add the specified system/user-assigned managed identity for your data factory into an Azure AD group with access permissions to the database server. For more information, see [Create an Azure-SSIS IR with Azure AD authentication](./create-azure-ssis-integration-runtime.md).
- - Confirm that your database server does not have an SSISDB instance already. The provisioning of an Azure-SSIS IR does not support using an existing SSISDB instance.
+ - Confirm that your database server doesn't have an SSISDB instance already. The provisioning of an Azure-SSIS IR doesn't support using an existing SSISDB instance.
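As a local sanity check for the client-IP requirement above, the hypothetical helper below (not part of any Azure SDK) tests whether an address falls inside a server-level firewall rule's inclusive start/end range:

```python
import ipaddress

def ip_in_rule(client_ip: str, start_ip: str, end_ip: str) -> bool:
    """True if client_ip lies within the inclusive [start_ip, end_ip] range,
    mirroring how Azure SQL server-level firewall rules are evaluated."""
    client = ipaddress.IPv4Address(client_ip)
    return ipaddress.IPv4Address(start_ip) <= client <= ipaddress.IPv4Address(end_ip)

# Note: the special rule 0.0.0.0-0.0.0.0 represents "Allow access to
# Azure services"; it doesn't match ordinary client addresses.
```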
> [!NOTE] > For a list of Azure regions in which Data Factory and an Azure-SSIS IR are currently available, see [Data Factory and SSIS IR availability by region](https://azure.microsoft.com/global-infrastructure/services/?products=data-factory&regions=all).
On the **Advanced settings** page of **Integration runtime setup** pane, complet
On the **Summary** page of **Integration runtime setup** pane, review all provisioning settings, bookmark the recommended documentation links, and select **Create** to start the creation of your integration runtime. > [!NOTE]
- > Excluding any custom setup time, this process should finish within 5 minutes.
+ > Excluding any custom setup time, and provided the SSIS IR isn't using standard VNet injection, this process will finish within 5 minutes in most cases.
> > If you use SSISDB, the Data Factory service will connect to your database server to prepare SSISDB. >
defender-for-cloud Alerts Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-reference.md
Defender for Cloud's supported kill chain intents are based on [version 9 of the
| **Persistence** | V7, V9 | Persistence is any access, action, or configuration change to a system that gives a threat actor a persistent presence on that system. Threat actors will often need to maintain access to systems through interruptions such as system restarts, loss of credentials, or other failures that would require a remote access tool to restart or provide an alternate backdoor for them to regain access. | | **Privilege Escalation** | V7, V9 | Privilege escalation is the result of actions that allow an adversary to obtain a higher level of permissions on a system or network. Certain tools or actions require a higher level of privilege to work and are likely necessary at many points throughout an operation. User accounts with permissions to access specific systems or perform specific functions necessary for adversaries to achieve their objective may also be considered an escalation of privilege. | | **Defense Evasion** | V7, V9 | Defense evasion consists of techniques an adversary may use to evade detection or avoid other defenses. Sometimes these actions are the same as (or variations of) techniques in other categories that have the added benefit of subverting a particular defense or mitigation. |
-| **Credential Access** | Credential access represents techniques resulting in access to or control over system, domain, or service credentials that are used within an enterprise environment. Adversaries will likely attempt to obtain legitimate credentials from users or administrator accounts (local system administrator or domain users with administrator access) to use within the network. With sufficient access within a network, an adversary can create accounts for later use within the environment. |
+| **Credential Access** | V7, V9 | Credential access represents techniques resulting in access to or control over system, domain, or service credentials that are used within an enterprise environment. Adversaries will likely attempt to obtain legitimate credentials from users or administrator accounts (local system administrator or domain users with administrator access) to use within the network. With sufficient access within a network, an adversary can create accounts for later use within the environment. |
| **Discovery** | V7, V9 | Discovery consists of techniques that allow the adversary to gain knowledge about the system and internal network. When adversaries gain access to a new system, they must orient themselves to what they now have control of and what benefits operating from that system give to their current objective or overall goals during the intrusion. The operating system provides many native tools that aid in this post-compromise information-gathering phase. | | **LateralMovement** | V7, V9 | Lateral movement consists of techniques that enable an adversary to access and control remote systems on a network and could, but does not necessarily, include execution of tools on remote systems. The lateral movement techniques could allow an adversary to gather information from a system without needing additional tools, such as a remote access tool. An adversary can use lateral movement for many purposes, including remote Execution of tools, pivoting to additional systems, access to specific information or files, access to additional credentials, or to cause an effect. | | **Execution** | V7, V9 | The execution tactic represents techniques that result in execution of adversary-controlled code on a local or remote system. This tactic is often used in conjunction with lateral movement to expand access to remote systems on a network. |
defender-for-cloud Defender For Servers Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-servers-introduction.md
Title: Overview of Microsoft Defender for Servers description: Learn about the benefits and features of Microsoft Defender for Servers.++ Last updated 06/22/2022
Microsoft Defender for Servers is one of the plans provided by Microsoft Defende
- Watch a [Defender for Servers introduction](episode-five.md) in our Defender for Cloud in the Field series. - Get pricing details for [Defender for Servers](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
+- [Enable Defender for Servers on your subscriptions](enable-enhanced-security.md).
## Defender for Servers plans
defender-for-cloud Defender For Sql Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-sql-usage.md
Microsoft Defender for SQL servers on machines extends the protections for your
- [Connect your GCP project to Microsoft Defender for Cloud](quickstart-onboard-gcp.md) > [!NOTE]
- > Enable database protection for your multicloud SQL servers through the AWS connector](quickstart-onboard-aws.md?pivots=env-settings#connect-your-aws-account) or the [GCP connector](quickstart-onboard-gcp.md?pivots=env-settings#configure-the-databases-plan).
+ > Enable database protection for your multicloud SQL servers through the [AWS connector](quickstart-onboard-aws.md?pivots=env-settings#connect-your-aws-account) or the [GCP connector](quickstart-onboard-gcp.md?pivots=env-settings#configure-the-databases-plan).
This plan includes functionality for identifying and mitigating potential database vulnerabilities and detecting anomalous activities that could indicate threats to your databases.
Learn more about [vulnerability assessment for Azure SQL servers on machines](de
|-|:-| |Release state:|General availability (GA)| |Pricing:|**Microsoft Defender for SQL servers on machines** is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/)|
-|Protected SQL versions:|[SQL Server versions currently supported by Microsoft](/mem/configmgr/core/plan-design/configs/support-for-sql-server-versions) in:
-<br>- [SQL on Azure virtual machines](/azure/azure-sql/virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview)<br>- [SQL Server on Azure Arc-enabled servers](/sql/sql-server/azure-arc/overview)<br>- On-premises SQL servers on Windows machines without Azure Arc<br>|
+|Protected SQL versions:|[SQL Server versions currently supported by Microsoft](/mem/configmgr/core/plan-design/configs/support-for-sql-server-versions) in: <br>- [SQL on Azure virtual machines](/azure/azure-sql/virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview)<br>- [SQL Server on Azure Arc-enabled servers](/sql/sql-server/azure-arc/overview)<br>- On-premises SQL servers on Windows machines without Azure Arc<br>|
|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Government<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure China 21Vianet| ## Set up Microsoft Defender for SQL servers on machines
defender-for-iot How To Manage Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-subscriptions.md
This procedure describes how to add a Defender for IoT plan for OT networks to a
- **Number of sites** (for annual commitment only). Enter the number of committed sites.
- - **Committed devices**. If you selected a monthly or annual commitment, enter the number of assets you'll want to monitor. If you selected a trial, this section doesn't appear as you have a default of 1000 devices.
+ - **Committed devices**. If you selected a monthly or annual commitment, enter the number of assets you'll want to monitor. If you selected a trial, this section doesn't appear as you have a default of 100 devices.
For example:
For example, you may have more devices that require monitoring if you're increas
1. Select the **I accept the terms** option, and then select **Save**.
-Changes to your plan will take effect one hour after confirming the change. Billing for these changes will be reflected at the beginning of the month following confirmation of the change.
+Changes to your plan will take effect one hour after confirming the change. This change will appear on your next monthly statement, and you will be charged based on the length of time each plan was in effect.
> [!NOTE] > **For an on-premises management console:**
Changes to your plan will take effect one hour after confirming the change. Bill
## Cancel a Defender for IoT plan from a subscription
-You may need to cancel a Defender for IoT plan from your Azure subscription, for example, if you need to work with a new payment entity. Your changes take effect one hour after confirmation. Your upcoming monthly bill will reflect this change.
+You may need to cancel a Defender for IoT plan from your Azure subscription, for example, if you need to work with a new payment entity. Your changes take effect one hour after confirmation. This change will be reflected in your upcoming monthly statement, and you will only be charged for the time that the subscription was active.
This option removes all Defender for IoT services from the subscription, including both OT and Enterprise IOT services. Delete all sensors that are associated with the subscription prior to removing the plan. For more information, see [Sensor management options from the Azure portal](how-to-manage-sensors-on-the-cloud.md#sensor-management-options-from-the-azure-portal).
Business considerations may require that you apply your existing IoT sensors to
- [Create an additional Azure subscription](../../cost-management-billing/manage/create-subscription.md) -- [Upgrade your Azure subscription](../../cost-management-billing/manage/upgrade-azure-subscription.md)
+- [Upgrade your Azure subscription](../../cost-management-billing/manage/upgrade-azure-subscription.md)
defender-for-iot Integrate Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/integrate-overview.md
The following table lists available integrations for Microsoft Defender for IoT,
|Partner service |Description | Learn more | ||||
+| **ArcSight** | Forward Defender for IoT alerts to ArcSight. | [Integrate ArcSight with Microsoft Defender for IoT](integrations/arcsight.md) |
|**Aruba ClearPass** | Share Defender for IoT data with ClearPass Security Exchange and update the ClearPass Policy Manager Endpoint Database with Defender for IoT data. | [Integrate ClearPass with Microsoft Defender for IoT](tutorial-clearpass.md) | |**CyberArk** | Send CyberArk PSM syslog data on remote sessions and verification failures to Defender for IoT for data correlation. | [Integrate CyberArk with Microsoft Defender for IoT](tutorial-cyberark.md) | |**Forescout** | Automate actions in Forescout based on activity detected by Defender for IoT, and correlate Defender for IoT data with other *Forescout eyeExtended* modules that oversee monitoring, incident management, and device control. | [Integrate Forescout with Microsoft Defender for IoT](tutorial-forescout.md) | |**Fortinet** | Send Defender for IoT data to Fortinet services for: <br><br>- Enhanced network visibility in FortiSIEM<br>- Extra abilities in FortiGate to stop anomalous behavior | [Integrate Fortinet with Microsoft Defender for IoT](tutorial-fortinet.md) |
+| **LogRhythm** | Forward Defender for IoT alerts to LogRhythm. | [Integrate LogRhythm with Microsoft Defender for IoT](integrations/logrhythm.md) |
+| **RSA NetWitness** | Forward Defender for IoT alerts to RSA NetWitness | [Integrate RSA NetWitness with Microsoft Defender for IoT](integrations/netwitness.md) <br>[CyberX Platform - RSA NetWitness CEF Parser Implementation Guide](https://community.netwitness.com//t5/netwitness-platform-integrations/cyberx-platform-rsa-netwitness-cef-parser-implementation-guide/ta-p/554364) |
|**Palo Alto** |Use Defender for IoT data to block critical threats with Palo Alto firewalls, either with automatic blocking or with blocking recommendations. | [Integrate Palo-Alto with Microsoft Defender for IoT](tutorial-palo-alto.md) | |**QRadar** |Forward Defender for IoT alerts to IBM QRadar. | [Integrate Qradar with Microsoft Defender for IoT](tutorial-qradar.md) | |**ServiceNow** | View Defender for IoT device detections, attributes, and connections in ServiceNow. | [Integrate ServiceNow with Microsoft Defender for IoT](tutorial-servicenow.md) |
defender-for-iot Arcsight https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/integrations/arcsight.md
+
+ Title: Integrate ArcSight with Microsoft Defender for IoT
+description: Learn how to send Microsoft Defender for IoT alerts to ArcSight.
+ Last updated : 08/02/2022++
+# Integrate ArcSight with Microsoft Defender for IoT
+
+This article describes how to send Microsoft Defender for IoT alerts to ArcSight. Integrating Defender for IoT with ArcSight provides visibility into the security and resiliency of OT networks and a unified approach to IT and OT security.
+
+## Prerequisites
+
+Before you begin, make sure that you have the following prerequisites:
+
+- Access to a Defender for IoT OT sensor as an Admin user.
+
+## Configure the ArcSight receiver type
+
+To configure your ArcSight server settings so that it can receive Defender for IoT alert information:
+
+1. Sign in to your ArcSight server.
+1. Configure your receiver type as a **CEF UDP Receiver**.
+
+For more information, see the [ArcSight SmartConnectors Documentation](https://www.microfocus.com/documentation/arcsight/arcsight-smartconnectors/#gsc.tab=0).
+
+## Create a Defender for IoT forwarding rule
+
+This procedure describes how to create a forwarding rule from your OT sensor to send Defender for IoT alerts from that sensor to ArcSight.
+
+For more information, see [Forward alert information](../how-to-forward-alert-information-to-partners.md).
+
+1. Sign in to your OT sensor console and select **Forwarding** on the left.
+
+1. Enter a meaningful name for your rule, and then define your rule details, including:
+
+ - The minimal alert level. For example, if you select Minor, you are notified about all minor, major and critical incidents.
+ - The protocols you want to include in the rule.
+ - The traffic you want to include in the rule.
+
+1. In the **Actions** area, define the following values:
+
+ - **Server**: Select **ArcSight**
+ - **Host**: The ArcSight server address
+ - **Port**: The ArcSight server port
+ - **Timezone**: The timezone of the ArcSight server
+
+1. Select **Save** to save your forwarding rule.
+
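With the forwarding rule saved, the sensor emits alerts as CEF over UDP. As a quick way to confirm the ArcSight receiver is reachable, a sketch like the following builds and sends a sample CEF-style message; the vendor, product, and extension field names here are placeholders, not the exact schema Defender for IoT emits.

```python
import socket

def build_cef(name, severity, extensions):
    """Compose a CEF:0 header plus space-separated key=value extensions
    (illustrative field names only)."""
    header = f"CEF:0|PlaceholderVendor|OTSensor|1.0|{name.replace(' ', '_')}|{name}|{severity}"
    ext = " ".join(f"{k}={v}" for k, v in extensions.items())
    return f"{header}|{ext}"

def send_udp(message, host, port=514):
    """Fire the message at a CEF UDP receiver."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(message.encode(), (host, port))

msg = build_cef("PLC Stop Command", 7, {"src": "10.1.0.4", "dst": "10.1.0.9"})
```

Calling `send_udp(msg, "<arcsight-host>")` should make the test event appear on the receiver if the network path is open.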
+## Next steps
+
+For more information, see:
+
+- [Integrations with partner services](../integrate-overview.md)
+- [Forward alert information](../how-to-forward-alert-information-to-partners.md)
+- [Manage individual sensors](../how-to-manage-individual-sensors.md)
+
defender-for-iot Logrhythm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/integrations/logrhythm.md
+
+ Title: Integrate LogRhythm with Microsoft Defender for IoT
+description: Learn how to send Microsoft Defender for IoT alerts to LogRhythm.
+ Last updated : 08/02/2022++
+# Integrate LogRhythm with Microsoft Defender for IoT
+
+This article describes how to send Microsoft Defender for IoT alerts to LogRhythm. Integrating Defender for IoT with LogRhythm provides visibility into the security and resiliency of OT networks and a unified approach to IT and OT security.
+
+## Prerequisites
+
+Before you begin, make sure that you have the following prerequisites:
+
+- Access to a Defender for IoT OT sensor as an Admin user.
+
+## Create a Defender for IoT forwarding rule
+
+This procedure describes how to create a forwarding rule from your OT sensor to send Defender for IoT alerts from that sensor to LogRhythm.
+
+For more information, see [Forward alert information](../how-to-forward-alert-information-to-partners.md).
+
+1. Sign in to your OT sensor console and select **Forwarding** on the left.
+
+1. Enter a meaningful name for your rule, and then define your rule details, including:
+
+ - The minimal alert level. For example, if you select Minor, you are notified about all minor, major and critical incidents.
+ - The protocols you want to include in the rule.
+ - The traffic you want to include in the rule.
+
+1. In the **Actions** area, define the following values:
+
+   - **Server**: Select a SYSLOG server option, such as **SYSLOG Server (LEEF format)**
+ - **Host**: The IP or hostname of your LogRhythm collector
+ - **Port**: Enter **514**
+ - **Timezone**: Enter your timezone
+
+1. Select **Save** to save your forwarding rule.
+
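LEEF records follow a pipe-delimited header with tab-separated attributes. As an illustration only (the vendor, product, and attribute names are assumptions, not the sensor's exact schema), a LEEF:1.0 record can be composed like this:

```python
def build_leef(event_id, attributes):
    """Compose a LEEF:1.0 header plus tab-separated key=value attributes
    (illustrative field names only)."""
    header = f"LEEF:1.0|PlaceholderVendor|OTSensor|1.0|{event_id}"
    attrs = "\t".join(f"{k}={v}" for k, v in attributes.items())
    return f"{header}|{attrs}"

msg = build_leef("OTAlert", {"sev": 7, "src": "10.1.0.4", "dst": "10.1.0.9"})
```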
+## Configure LogRhythm to collect logs
+
+After configuring a forwarding rule from your OT sensor console, configure LogRhythm to collect your Defender for IoT logs.
+
+For more information, see the [LogRhythm documentation](https://docs.logrhythm.com/docs/devices/syslog-log-sources).
+
+## Next steps
+
+For more information, see:
+
+- [Integrations with partner services](../integrate-overview.md)
+- [Forward alert information](../how-to-forward-alert-information-to-partners.md)
defender-for-iot Netwitness https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/integrations/netwitness.md
+
+ Title: Integrate RSA NetWitness with Microsoft Defender for IoT
+description: Learn how to send Microsoft Defender for IoT alerts to RSA NetWitness.
+ Last updated : 08/02/2022++
+# Integrate RSA NetWitness with Microsoft Defender for IoT
+
+This article describes how to send Microsoft Defender for IoT alerts to RSA NetWitness. Integrating Defender for IoT with NetWitness provides visibility into the security and resiliency of OT networks and a unified approach to IT and OT security.
+
+## Prerequisites
+
+Before you begin, make sure that you have the following prerequisites:
+
+- Access to a Defender for IoT OT sensor as an Admin user.
+
+- NetWitness configuration to collect events from sources that support Common Event Format (CEF). For more information, see the [CyberX Platform - RSA NetWitness CEF Parser Implementation Guide](https://community.netwitness.com//t5/netwitness-platform-integrations/cyberx-platform-rsa-netwitness-cef-parser-implementation-guide/ta-p/554364).
+
+## Create a Defender for IoT forwarding rule
+
+This procedure describes how to create a forwarding rule from your OT sensor to send Defender for IoT alerts from that sensor to NetWitness.
+
+For more information, see [Forward alert information](../how-to-forward-alert-information-to-partners.md).
+
+1. Sign in to your OT sensor console and select **Forwarding** on the left.
+
+1. Enter a meaningful name for your rule, and then define your rule details, including:
+
+ - The minimal alert level. For example, if you select Minor, you are notified about all minor, major and critical incidents.
+ - The protocols you want to include in the rule.
+ - The traffic you want to include in the rule.
+
+1. In the **Actions** area, define the following values:
+
+ - **Server**: Select **NetWitness**
+ - **Host**: The NetWitness hostname
+ - **Port**: The NetWitness port
+ - **Timezone**: Enter your NetWitness timezone
+
+1. Select **Save** to save your forwarding rule.
+
+## Next steps
+
+For more information, see:
+
+- [CyberX Platform - RSA NetWitness CEF Parser Implementation Guide](https://community.netwitness.com//t5/netwitness-platform-integrations/cyberx-platform-rsa-netwitness-cef-parser-implementation-guide/ta-p/554364)
+- [Integrations with partner services](../integrate-overview.md)
+- [Forward alert information](../how-to-forward-alert-information-to-partners.md)
defender-for-iot Pre Deployment Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/pre-deployment-checklist.md
Review your industrial network architecture to define the proper location for th
> [!NOTE] > The Defender for IoT appliance should be connected to a lower-level switch that sees the traffic between the ports on the switch.
-1. **Committed devices** - Provide the approximate number of network devices that will be monitored. You'll need this information when onboarding your subscription to Defender for IoT in the Azure portal. During the onboarding process, you'll be prompted to enter the number of devices in increments of 1000. For more information, see [What is a Defender for IoT committed device?](architecture.md#what-is-a-defender-for-iot-committed-device)
+1. **Committed devices** - Provide the approximate number of network devices that will be monitored. You'll need this information when onboarding your subscription to Defender for IoT in the Azure portal. During the onboarding process, you'll be prompted to enter the number of devices in increments of 100. For more information, see [What is a Defender for IoT committed device?](architecture.md#what-is-a-defender-for-iot-committed-device)
1. **(Optional) Subnet list** - Provide a subnet list for the production networks and a description (optional).
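Since the portal prompts for committed devices in increments of 100, the estimate from this step needs rounding up. A trivial sketch of that arithmetic:

```python
import math

def committed_devices(approx_count: int, increment: int = 100) -> int:
    """Round the monitored-device estimate up to the next commitment increment."""
    return math.ceil(approx_count / increment) * increment
```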
defender-for-iot References Work With Defender For Iot Cli Commands https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/references-work-with-defender-for-iot-cli-commands.md
This article describes CLI commands for sensors and on-premises management consoles. The commands are accessible to the following users:

- Administrator
- CyberX
- Support
- cyberx_host
dms Migration Using Azure Data Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/migration-using-azure-data-studio.md
When you migrate database(s) using the Azure SQL migration extension for Azure D
- When migrating to SQL Server on Azure Virtual Machines, SQL Server 2008 and below as target versions are not supported currently. - If you are using SQL Server 2012 or SQL Server 2014 you need to store your source database backup files on an Azure Storage Blob Container instead of using the network share option. Store the backup files as page blobs since block blobs are only supported in SQL 2016 and after. - Migrating to Azure SQL Database isn't supported.-- Azure storage accounts secured by specific firewall rules or configured with a private endpoint are not supported for migrations. - You can't use an existing self-hosted integration runtime created from Azure Data Factory for database migrations with DMS. Initially, the self-hosted integration runtime should be created using the Azure SQL migration extension in Azure Data Studio and can be reused for further database migrations. ## Pricing
dms Tutorial Sql Server Managed Instance Offline Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-managed-instance-offline-ads.md
To complete this tutorial, you need to:
* Provide an SMB network share, Azure storage account file share, or Azure storage account blob container that contains your full database backup files and subsequent transaction log backup files, which Azure Database Migration Service can use for database migration. > [!IMPORTANT] > - If your database backup files are provided in an SMB network share, [Create an Azure storage account](../storage/common/storage-account-create.md) that allows DMS service to upload the database backup files to and use for migrating databases. Make sure to create the Azure Storage Account in the same region as the Azure Database Migration Service instance is created.
- > - You can't use an Azure Storage account that has a private endpoint with Azure Database Migration Service.
> - Azure Database Migration Service does not initiate any backups, and instead uses existing backups, which you may already have as part of your disaster recovery plan, for the migration. > - You need to take [backups using the `WITH CHECKSUM` option](/sql/relational-databases/backup-restore/enable-or-disable-backup-checksums-during-backup-or-restore-sql-server?preserve-view=true&view=sql-server-2017). > - Each backup can be written to either a separate backup file or multiple backup files. However, appending multiple backups (i.e. full and t-log) into a single backup media is not supported.
dms Tutorial Sql Server Managed Instance Online Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-managed-instance-online-ads.md
To complete this tutorial, you need to:
> [!IMPORTANT] > - If your database backup files are provided in an SMB network share, [Create an Azure storage account](../storage/common/storage-account-create.md) that allows the DMS service to upload the database backup files. Make sure to create the Azure Storage Account in the same region as the Azure Database Migration Service instance is created.
- > - You can't use an Azure Storage account that has a private endpoint with Azure Database Migration Service.
> - Azure Database Migration Service does not initiate any backups, and instead uses existing backups, which you may already have as part of your disaster recovery plan, for the migration. > - You need to take [backups using the `WITH CHECKSUM` option](/sql/relational-databases/backup-restore/enable-or-disable-backup-checksums-during-backup-or-restore-sql-server). > - Each backup can be written to either a separate backup file or multiple backup files. However, appending multiple backups (i.e. full and t-log) into a single backup media is not supported.
firewall Firewall Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/firewall-performance.md
Previously updated : 07/08/2022 Last updated : 08/03/2022
For more information about Azure Firewall, see [What is Azure Firewall?](overvie
## Performance testing
-Before deploying Azure Firewall, the performance needs to be tested and evaluated to ensure it meets your expectations. Not only should Azure Firewall handle the current traffic on a network, but it should also be ready for potential traffic growth. It is recommended to evaluate on a test network and not in a production environment. The testing should attempt to replicate the production environment as close as possible. This includes the network topology, and emulating the actual characteristics of the expected traffic through the firewall.
+Before you deploy Azure Firewall, test and evaluate its performance to ensure it meets your expectations. Not only should Azure Firewall handle the current traffic on a network, but it should also be ready for potential traffic growth. We recommend evaluating on a test network, not in a production environment. The testing should replicate the production environment as closely as possible, including the network topology and the actual characteristics of the expected traffic through the firewall.
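To make the measurement side of such testing concrete, throughput tooling ultimately times a bulk transfer and divides by elapsed time. The loopback sketch below shows only that shape; real firewall evaluation uses dedicated traffic generators across the actual topology, and loopback numbers say nothing about a firewall.

```python
import socket, threading, time

def measure_loopback_throughput(total_bytes=4_000_000, chunk=65536):
    """Send total_bytes over a local TCP connection; return (bytes, Mbit/s)."""
    server = socket.socket()
    server.bind(("127.0.0.1", 0))
    server.listen(1)
    port = server.getsockname()[1]
    received = []

    def sink():
        # Drain the connection until the expected volume has arrived.
        conn, _ = server.accept()
        got = 0
        while got < total_bytes:
            data = conn.recv(chunk)
            if not data:
                break
            got += len(data)
        received.append(got)
        conn.close()

    t = threading.Thread(target=sink)
    t.start()
    client = socket.create_connection(("127.0.0.1", port))
    start = time.perf_counter()
    sent = 0
    payload = b"x" * chunk
    while sent < total_bytes:
        client.sendall(payload)
        sent += len(payload)
    client.close()
    t.join()
    elapsed = time.perf_counter() - start
    server.close()
    return received[0], (received[0] * 8) / (elapsed * 1e6)
```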
## Performance data
The following set of performance results demonstrates the maximal Azure Firewall
|Firewall type and use case |TCP/UDP bandwidth (Gbps) |HTTP/S bandwidth (Gbps) | |||| |Standard |30|30|
-|Premium (no TLS/IDPS) |30|100|
+|Premium (no TLS/IDPS) |100|100|
|Premium with TLS |-|100| |Premium with IDS |100|100| |Premium with IPS |10|10|
Azure Firewall also supports the following throughput for single connections:
|Standard<br>Max bandwidth for single TCP connection |1.3| |Premium<br>Max bandwidth for single TCP connection |9.5| |Premium max bandwidth with TLS/IDS|100|
+|Premium single TCP connection with IDPS on *Alert and Deny* mode|up to 300 Mbps|
Performance values are calculated with Azure Firewall at full scale. Actual performance may vary depending on your rule complexity and network configuration. These metrics are updated periodically as performance continuously evolves with each release.
governance Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/management-groups/manage.md
Title: How to work with your management groups - Azure Governance
+ Title: Manage your Azure subscriptions at scale with management groups - Azure Governance
description: Learn how to view, maintain, update, and delete your management group hierarchy. Previously updated : 01/07/2022 Last updated : 08/02/2022 ++
-# Manage your resources with management groups
+# Manage your Azure subscriptions at scale with management groups
If your organization has many subscriptions, you may need a way to efficiently manage access, policies, and compliance for those subscriptions. Azure management groups provide a level of scope
subscriptions you might have. To learn more about management groups, see
> refreshing the browser, signing in and out, or requesting a new token. > [!IMPORTANT]
-> AzManagementGroup related Az PowerShell cmdlets mention that the **-GroupId** is alias of **-GroupName** parameter
-> so we can use either of it to provide Management Group Id as a string value.
+> AzManagementGroup-related Az PowerShell cmdlets mention that **-GroupId** is an alias of the **-GroupName** parameter,
+> so you can use either of them to provide the Management Group ID as a string value.
## Change the name of a management group
healthcare-apis Deploy Iot Connector In Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-iot-connector-in-azure.md
If you already have an active Azure account, you can use this [![Deploy to Azure
* **Subscription** - Choose the Azure subscription you would like to use for the deployment. * **Resource Group** - Choose an existing Resource Group or create a new Resource Group. * **Region** - The Azure region of the Resource Group used for the deployment. This field will auto-fill based on the Resource Group region.
- * **Basename** - Will be used to append the name the Azure services to be deployed.
+ * **Basename** - Will be appended to the names of the Azure resources and services to be deployed.
* **Location** - Use the drop-down list to select a supported Azure region for the Azure Health Data Services (could be the same or different region than your Resource Group). 2. Leave the **Device Mapping** and **Destination Mapping** fields with their default values.
If you already have an active Azure account, you can use this [![Deploy to Azure
:::image type="content" source="media\iot-deploy-quickstart-in-portal\iot-deploy-quickstart-create.png" alt-text="Screenshot of Azure portal page displaying validation box and Create button for the Azure Health Data Service MedTech service." lightbox="media\iot-deploy-quickstart-in-portal\iot-deploy-quickstart-create.png"::: 5. After a successful deployment, there will be remaining configurations that will need to be completed by you for a fully functional MedTech service:
- * Provide a working device mapping file. For more information, see [How to use device mappings](how-to-use-device-mappings.md).
- * Provide a working destination mapping file. For more information, see [How to use FHIR destination mappings](how-to-use-fhir-mappings.md).
+ * Provide a working device mapping. For more information, see [How to use device mappings](how-to-use-device-mappings.md).
+ * Provide a working FHIR destination mapping. For more information, see [How to use FHIR destination mappings](how-to-use-fhir-mappings.md).
* Use the Shared access policies (SAS) key (**devicedatasender**) for connecting your device or application to the MedTech service device message event hub (**devicedata**). For more information, see [Connection string for a specific event hub in a namespace](../../event-hubs/event-hubs-get-connection-string.md#connection-string-for-a-specific-event-hub-in-a-namespace). > [!IMPORTANT]
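Connecting a device or application with the **devicedatasender** policy ultimately means presenting a SAS token signed with that policy's key. As a minimal sketch using only Python's standard library (the namespace, event hub path, and key below are placeholders, not values from this deployment), an Event Hubs SAS token can be generated like this:

```python
import base64
import hashlib
import hmac
import time
import urllib.parse


def generate_sas_token(resource_uri: str, policy_name: str, policy_key: str,
                       ttl_seconds: int = 3600) -> str:
    """Build an Azure Event Hubs SAS token (HMAC-SHA256 over URI + expiry)."""
    expiry = int(time.time()) + ttl_seconds
    encoded_uri = urllib.parse.quote_plus(resource_uri)
    # The string to sign is the URL-encoded resource URI, a newline, and the expiry
    to_sign = f"{encoded_uri}\n{expiry}".encode("utf-8")
    signature = base64.b64encode(
        hmac.new(policy_key.encode("utf-8"), to_sign, hashlib.sha256).digest()
    ).decode("utf-8")
    return (
        f"SharedAccessSignature sr={encoded_uri}"
        f"&sig={urllib.parse.quote_plus(signature)}"
        f"&se={expiry}&skn={policy_name}"
    )


# Placeholder values for illustration only
token = generate_sas_token(
    "example-namespace.servicebus.windows.net/devicedata",
    "devicedatasender",
    "not-a-real-key",
)
print(token)
```

In practice the Azure SDKs build this token for you; the sketch only shows what the connection-string credentials are used for.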
iot-central Howto Manage Iot Central From Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-iot-central-from-portal.md
After filling out all fields, select **Create**. To learn more, see [Create an I
If you already have an Azure IoT Central application, you can delete it, or move it to a different subscription or resource group in the Azure portal.
-> [!NOTE]
-> Applications created using the *free* plan do not require an Azure subscriptions, and therefore you won't find them listed in your Azure subscription on the Azure portal. You can only see and manage free apps from the IoT Central portal.
- To get started, search for your application in the search bar at the top of the Azure portal. You can also view all your applications by searching for _IoT Central Applications_ and selecting the service: ![Screenshot that shows the search results for "IoT Central Applications" with the first service selected.](media/howto-manage-iot-central-from-portal/search-iot-central.png)
When you select an application in the search results, the Azure portal shows you
> [!NOTE] > Use the **IoT Central Application URL** to access the application for the first time.
-To move the application to a different resource group, select **change** beside the resource group. On the **Move resources** page, choose the resource group you'd like to move this application to:
+To move the application to a different resource group, select **move** beside **Resource group**. On the **Move resources** page, choose the resource group you'd like to move this application to.
+
+To move the application to a different subscription, select **move** beside **Subscription**. On the **Move resources** page, choose the subscription you'd like to move this application to:
-![Screenshot that shows the "Overview" page with the "Resource group (change)" highlighted.](media/howto-manage-iot-central-from-portal/highlight-resource-group.png)
+![Screenshot that shows the "Overview" page with the "Resource group (move)" highlighted.](media/howto-manage-iot-central-from-portal/highlight-resource-group-subscription.png)
-To move the application to a different subscription, select **change** beside the subscription. On the **Move resources** page, choose the subscription you'd like to move this application to:
+## Manage networking
-![Management portal: resource management](media/howto-manage-iot-central-from-portal/highlight-subscription.png)
+You can use private IP addresses from a virtual network address space to manage your devices in your IoT Central application, eliminating exposure on the public internet. To learn more, see [Create and configure a private endpoint for IoT Central](../core/howto-create-private-endpoint.md).
## Configure a managed identity
You can use the set of metrics provided by IoT Central to assess the health of d
Metrics are enabled by default for your IoT Central application and you access them from the [Azure portal](https://portal.azure.com/). The [Azure Monitor data platform exposes these metrics](../../azure-monitor/essentials/data-platform-metrics.md) and provides several ways for you to interact with them. For example, you can use charts in the Azure portal, a REST API, or queries in PowerShell or the Azure CLI.
-> [!TIP]
-> Applications that use the free trial plan don't have an associated Azure subscription and so don't support Azure Monitor metrics. You can [convert an application to a standard pricing plan](./howto-faq.yml#how-do-i-move-from-a-free-to-a-standard-pricing-plan-) and get access to these metrics.
+Access to metrics in the Azure portal is managed by [Azure role-based access control](../../role-based-access-control/overview.md). Use the Azure portal to add users to the IoT Central application/resource group/subscription to grant them access. You must add a user in the portal even if they're already added to the IoT Central application. Use [Azure built-in roles](../../role-based-access-control/built-in-roles.md) for finer-grained access control.
### View metrics in the Azure portal
-The following steps assume you have an [IoT Central application](./howto-create-iot-central-application.md) with some [connected devices](./tutorial-connect-device.md) or a running [data export](howto-export-to-blob-storage.md).
-
+The following example **Metrics** page shows a plot of the number of devices connected to your IoT Central application. For a list of the metrics that are currently available for IoT Central, see [Supported metrics with Azure Monitor](../../azure-monitor/essentials/metrics-supported.md#microsoftiotcentraliotapps).
To view IoT Central metrics in the portal:
To view IoT Central metrics in the portal:
![Azure Metrics](media/howto-manage-iot-central-from-portal/metrics.png)
-### Azure portal permissions
+### Export logs and metrics
-Access to metrics in the Azure portal is managed by [Azure role based access control](../../role-based-access-control/overview.md). Use the Azure portal to add users to the IoT Central application/resource group/subscription to grant them access. You must add a user in the portal even they're already added to the IoT Central application. Use [Azure built-in roles](../../role-based-access-control/built-in-roles.md) for finer grained access control.
+Use the **Diagnostics settings** page to configure exporting metrics and logs to different destinations. To learn more, see [Diagnostic settings in Azure Monitor](../../azure-monitor/essentials/diagnostic-settings.md).
-### IoT Central metrics
+### Analyze logs and metrics
-For a list of the metrics that are currently available for IoT Central, see [Supported metrics with Azure Monitor](../../azure-monitor/essentials/metrics-supported.md#microsoftiotcentraliotapps).
+Use the **Workbooks** page to analyze logs and create visual reports. To learn more, see [Azure Workbooks](../../azure-monitor/visualize/workbooks-overview.md).
### Metrics and invoices
iot-dps Iot Dps Mqtt Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/iot-dps-mqtt-support.md
All device communication with DPS must be secured using TLS/SSL. Therefore, DPS
A device can use the MQTT protocol to connect to a DPS instance using any of the following options.
-* Libraries in the [Azure IoT Provisioning SDKs](../iot-hub/iot-hub-devguide-sdks.md#microsoft-azure-provisioning-sdks).
+* Libraries in the [Azure IoT Provisioning SDKs](libraries-sdks.md).
* The MQTT protocol directly. ## Using the MQTT protocol directly (as a device)
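The registration flow over raw MQTT comes down to a handful of well-known strings: the CONNECT client ID is the registration ID, the username embeds the ID scope, the device subscribes to the response topic, and then publishes the registration request. A minimal sketch of constructing them (the ID scope and registration ID below are placeholder values):

```python
ID_SCOPE = "0ne00000000"          # placeholder DPS ID scope
REGISTRATION_ID = "my-device-01"  # placeholder registration ID
API_VERSION = "2019-03-31"

# CONNECT packet: client ID is the registration ID; the username embeds the scope
client_id = REGISTRATION_ID
username = f"{ID_SCOPE}/registrations/{REGISTRATION_ID}/api-version={API_VERSION}"

# Subscribe here to receive registration responses
response_topic = "$dps/registrations/res/#"

# Publish the registration request here; $rid correlates the response
register_topic = "$dps/registrations/PUT/iotdps-register/?$rid=1"

# Poll the operation status using the operationId returned in the first response
poll_topic_template = (
    "$dps/registrations/GET/iotdps-get-operationstatus/?$rid={rid}"
    "&operationId={operation_id}"
)

print(username)
print(register_topic)
```

A real device would pass `client_id` and `username` to an MQTT client (for example, Eclipse Paho) over a TLS connection on port 8883; the sketch only assembles the protocol strings.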
iot-dps Libraries Sdks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/libraries-sdks.md
Title: IoT Hub Device Provisioning Service libraries and SDKs description: Information about the device and service libraries available for developing solutions with Device Provisioning Service (DPS).-- Previously updated : 06/30/2022++ Last updated : 08/03/2022
# Microsoft SDKs for IoT Hub Device Provisioning Service
-Azure IoT Hub Device Provisioning Service (DPS) SDKs help you build backend and device applications that leverage DPS to provide zero-touch, just-in-time provisioning to one or more IoT hubs. The SDKs are published in a variety of popular languages and handle the underlying transport and security protocols between your devices or backend apps and DPS, freeing developers to focus on application development. Additionally, using the SDKs provides you with support for future updates to DPS, including security updates.
+The Azure IoT Hub Device Provisioning Service (DPS) is a helper service for IoT Hub. The DPS package provides SDKs to help you build backend and device applications that leverage DPS to provide zero-touch, just-in-time provisioning to one or more IoT hubs. The SDKs are published in a variety of popular languages and handle the underlying transport and security protocols between your devices or backend apps and DPS, freeing developers to focus on application development. Additionally, using the SDKs provides you with support for future updates to DPS, including security updates.
There are three categories of software development kits (SDKs) for working with DPS:
-- [DPS service SDKs](#service-sdks) provide data plane operations for backend apps. You can use the service SDKs to create and manage individual enrollments and enrollment groups, and to query and manage device registration records.
-
-- [DPS management SDKs](#management-sdks) provide control plane operations for backend apps. You can use the management SDKs to create and manage DPS instances and metadata. For example, to create and manage DPS instances in your subscription, to upload and verify certificates with a DPS instance, or to create and manage authorization policies or allocation policies in a DPS instance.
-
- [DPS device SDKs](#device-sdks) provide data plane operations for devices. You use the device SDK to provision a device through DPS.
-Azure IoT SDKs are also available for the following
-
-- [IoT Hub SDKs](../iot-hub/iot-hub-devguide-sdks.md): To help you build devices and backend apps that communicate with Azure IoT Hub.
+- [DPS service SDKs](#service-sdks) provide data plane operations for backend apps. You can use the service SDKs to create and manage individual enrollments and enrollment groups, and to query and manage device registration records.
-- [Device Update for IoT Hub SDKs](../iot-hub-device-update/understand-device-update.md): To help you deploy over-the-air (OTA) updates for IoT devices.
+- [DPS management SDKs](#management-sdks) provide control plane operations for backend apps. You can use the management SDKs to create and manage DPS instances and metadata. For example, to create and manage DPS instances in your subscription, to upload and verify certificates with a DPS instance, or to create and manage authorization policies or allocation policies in a DPS instance.
-- [IoT Plug and Play SDKs](../iot-develop/libraries-sdks.md): To help you build IoT Plug and Play solutions.
+The DPS SDKs help you provision devices to your IoT hubs. Microsoft also provides a set of SDKs to help you build device apps and backend apps that communicate directly with Azure IoT Hub. For example, to help your provisioned devices send telemetry to your IoT hub, and, optionally, to receive messages and job, method, or twin updates from your IoT hub. To learn more, see [Azure IoT Hub SDKs](../iot-hub/iot-hub-devguide-sdks.md).
## Device SDKs
-The DPS device SDKs provide code that runs on your IoT devices and simplifies provisioning with DPS.
+The DPS device SDKs provide implementations of the [Register](/rest/api/iot-dps/device/runtime-registration/register-device) API and others that devices call to provision through DPS. The device SDKs can run on general MPU-based computing devices such as a PC, tablet, smartphone, or Raspberry Pi. The SDKs support development in C and in modern managed languages including C#, Node.js, Python, and Java.
| Platform | Package | Code repository | Samples | Quickstart | Reference | | --|--|--|--|--|--|
The DPS device SDKs provide code that runs on your IoT devices and simplifies pr
| Node.js|[npm](https://www.npmjs.com/package/azure-iot-provisioning-device) |[GitHub](https://github.com/Azure/azure-iot-sdk-node/tree/main/provisioning)|[Samples](https://github.com/Azure/azure-iot-sdk-node/tree/main/provisioning/device/samples)|[Quickstart](./quick-create-simulated-device-x509.md?pivots=programming-language-nodejs&tabs=windows)|[Reference](/javascript/api/azure-iot-provisioning-device) | | Python|[pip](https://pypi.org/project/azure-iot-device/) |[GitHub](https://github.com/Azure/azure-iot-sdk-python)|[Samples](https://github.com/Azure/azure-iot-sdk-python/tree/main/azure-iot-device/samples/async-hub-scenarios)|[Quickstart](./quick-create-simulated-device-x509.md?pivots=programming-language-python&tabs=windows)|[Reference](/python/api/azure-iot-device/azure.iot.device.provisioningdeviceclient) |
-Microsoft also provides embedded device SDKs to facilitate development on resource-constrained devices. To learn more, see the [IoT Device Development Documentation](../iot-develop/about-iot-sdks.md).
+> [!WARNING]
+> The **C SDK** listed above is **not** suitable for embedded applications due to its memory management and threading model. For embedded devices, refer to the [Embedded device SDKs](#embedded-device-sdks).
+
+### Embedded device SDKs
+
+These SDKs were designed and created to run on devices with limited compute and memory resources and are implemented using the C language.
+
+| RTOS | SDK | Source | Samples | Reference |
+| :-- | :-- | :-- | :-- | :-- |
+| **Azure RTOS** | Azure RTOS Middleware | [GitHub](https://github.com/azure-rtos/netxduo) | [Quickstarts](../iot-develop/quickstart-devkit-mxchip-az3166.md) | [Reference](https://github.com/azure-rtos/netxduo/tree/master/addons/azure_iot) |
+| **FreeRTOS** | FreeRTOS Middleware | [GitHub](https://github.com/Azure/azure-iot-middleware-freertos) | [Samples](https://github.com/Azure-Samples/iot-middleware-freertos-samples) | [Reference](https://azure.github.io/azure-iot-middleware-freertos) |
+| **Bare Metal** | Azure SDK for Embedded C | [GitHub](https://github.com/Azure/azure-sdk-for-c/tree/master/sdk/docs/iot) | [Samples](https://github.com/Azure/azure-sdk-for-c/blob/master/sdk/samples/iot/README.md) | [Reference](https://azure.github.io/azure-sdk-for-c) |
+
+Learn more about the device and embedded device SDKs in the [IoT Device Development documentation](../iot-develop/about-iot-sdks.md).
## Service SDKs
iot-edge Configure Connect Verify Gpu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/configure-connect-verify-gpu.md
We'll use the Azure portal, the Azure Cloud Shell, and your VM's command line to
## Prerequisites
-* Azure account - [create a free account](https://azure.microsoft.com/account/free)
+* Azure account - [create a free account](https://azure.microsoft.com/free/search/)
-* Azure IoT Hub - [create an IoT Hub](https://azure.microsoft.com/services/iot_hub)
+* Azure IoT Hub - [create an IoT Hub](https://azure.microsoft.com/services/iot-hub/#overview)
* Azure IoT Edge device
We'll use the Azure portal, the Azure Cloud Shell, and your VM's command line to
az iot hub device-identity create --device-id <YOUR-DEVICE-NAME> --edge-enabled --hub-name <YOUR-IOT-HUB-NAME> ```
- For more information on creating an IoT Edge device, see [Quickstart: Deploy your first IoT Edge module to a virtual Linux device](/quickstart-linux). Later in this tutorial, we'll add an NVIDIA module to our IoT Edge device.
+ For more information on creating an IoT Edge device, see [Quickstart: Deploy your first IoT Edge module to a virtual Linux device](quickstart-linux.md). Later in this tutorial, we'll add an NVIDIA module to our IoT Edge device.
## Create a GPU-optimized virtual machine
iot-edge Troubleshoot Iot Edge For Linux On Windows Common Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/troubleshoot-iot-edge-for-linux-on-windows-common-errors.md
+
+ Title: Common issues and resolutions for Azure IoT Edge for Linux on Windows | Microsoft Docs
+description: Use this article to resolve common issues encountered when deploying an IoT Edge for Linux on Windows (EFLOW) solution
++ Last updated : 07/25/2022+++++
+# Common issues and resolutions for Azure IoT Edge for Linux on Windows
+
+Use this article to help resolve common issues that can occur when deploying IoT Edge for Linux on Windows solutions.
+
+## Installation and Deployment
+
+The following section addresses the common errors when installing the EFLOW MSI and deploying the EFLOW virtual machine. Ensure you have an understanding of the following EFLOW concepts:
+- [Azure IoT Edge for Linux on Windows prerequisites](https://aka.ms/AzEFLOW-Requirements)
+- [Nested virtualization for Azure IoT Edge for Linux on Windows](./nested-virtualization.md)
+- [Networking configuration for Azure IoT Edge for Linux on Windows](./how-to-configure-iot-edge-for-linux-on-windows-networking.md)
+- [Azure IoT Edge for Linux on Windows virtual switch creation](./how-to-create-virtual-switch.md)
+- [PowerShell functions for IoT Edge for Linux on Windows](./reference-iot-edge-for-linux-on-windows-functions.md)
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Error | Error Description | Solution |
+> | -- | -- | -- |
+> | HNS API version X doesn't meet minimum version | EFLOW uses HCS/HNS to create the virtual machine on client SKUs. The minimum HNS version is 9.2. | If you're using Windows version 20H1 or later, the HCS/HNS API should meet the requirement. If you're using Windows Client RS5 (17763), verify you have the latest Windows update. |
+> | Can't find WSSDAGENT service! <br/> WssdAgent is unreachable, update has failed. | The *WSSDAgent* is the EFLOW component that creates and manages the lifecycle of the VM. *WSSDAgent* runs a service on the Windows host OS. If the service isn't running, the VM communication and lifecycle fails. | Verify the *WSSDAgent* service is running on the Windows host OS by opening an elevated PowerShell session and running the command `Get-Service -Name wssdagent`. If WSSDAgent isn't running, try starting the service manually using the cmdlet: `Start-Service -name WSSDAgent`. If it doesn't start, share the content under _C:\ProgramData\wssdagent_. |
+> | Expected image '$vhdPathBackup' is missing | When installing EFLOW current release (CR), the user can provide the data partition size using the *vmDataSize*. If specified, EFLOW resizes the VHDX. The error occurs if the VHDX file isn't found during resizing. | Verify the VHDX file wasn't deleted or moved from the original location. |
+> | Installation of Hyper-V, Hyper-V Management PowerShell or OpenSSH failed. Please install manually and restart deployment. Aborting... | EFLOW installation has required prerequisites to deploy the virtual machine. If one of the prerequisites isn't met, the `Deploy-Eflow` cmdlet fails. | Ensure that all required prerequisites are met. _PowerShell_: Close the PowerShell session and open a new one. If you have multiple installations, make sure to have the correct module imported. Try a different version of PowerShell.<br/>_Hyper-V_: For more information about EFLOW Hyper-V support, see [Azure IoT Edge for Linux on Windows Nested Virtualization](./nested-virtualization.md). <br/> _OpenSSH:_ If you're using your own custom installation, check the `customSsh` parameter during deployment. |
+> | $feature isn't available in this Windows edition. <br/> $featureversion could not be enabled. Please add '$featureversion' optional feature in settings and restart the deployment. | When deploying EFLOW, if one of the prerequisites isn't met, the installer tries to install it. If the feature isn't available or the feature installation fails, the EFLOW deployment fails. | Ensure that all required prerequisites are met. _PowerShell_: Close the PowerShell session and open a new one. If you have multiple installations, make sure to have the correct module imported. Try a different version of PowerShell.<br/>_Hyper-V_: For more information about EFLOW Hyper-V support, see [Azure IoT Edge for Linux on Windows Nested Virtualization](./nested-virtualization.md). <br/> _OpenSSH:_ If you're using your own custom installation, check the `customSsh` parameter during deployment. |
+> | Hyper-V properties indicate the Hyper-V component is not functional (property HyperVRequirementVirtualizationFirmwareEnabled is false) <br/> Hyper-V properties indicate the Hyper-V component is not functional (property HyperVisorPresent is false). <br/>Hyper-V core services not running (vmms, vmcompute, hvhost). Ensure Hyper-V is configured properly and capable of starting virtual machines. | These errors are related to the Hyper-V virtualization technology stack and services. The EFLOW virtual machine requires several Hyper-V services in order to create and run the virtual machine. If one of these services isn't available, the installation fails. | For more information about EFLOW Hyper-V support, see [Azure IoT Edge for Linux on Windows Nested Virtualization](./nested-virtualization.md).|
+> | wssdagent unreachable, please retry... | The *WSSDAgent* is the EFLOW component that creates and manages the lifecycle of the VM. *WSSDAgent* runs a service on the Windows host OS. |Verify the *WSSDAgent* service is running on the Windows host OS by opening an elevated PowerShell session and running the command `Get-Service -Name wssdagent`. If WSSDAgent isn't running, try starting the service manually using the cmdlet: `Start-Service -name WSSDAgent`. If it doesn't start, share the content under _C:\ProgramData\wssdagent_. |
+> | Virtual machine configuration could not be retrieved from wssdagent | Find the EFLOW configuration yaml files under the EFLOW root installation folder. For example, if the default installation directory was used, the configuration files should be in the *C:\Program Files\Azure IoT Edge\yaml* directory. | Check if the directory was deleted or moved. If the directory isn't available, the VM can't be created. EFLOW reinstallation is needed. |
+> | An existing virtual machine was detected. To redeploy the virtual machine, please uninstall and re-install $eflowProductName. <br/> Virtual machine '$name' already exists. You must remove the '$name' virtual machine first. | During EFLOW deployment, the installer checks if there's an EFLOW VM created from previous installations. In some cases, if the installation fails in its final steps, the VM was created and it's still running in the Windows host OS. | Make sure to completely uninstall EFLOW before starting a new installation. If you want to remove the Azure IoT Edge for Linux on Windows installation from your device, use the following steps. <br/> 1. In Windows, open **Settings** <br/> 2. Select **Add or Remove Programs** <br/> 3. Select **Azure IoT Edge LTS** app <br/> 4. Select **Uninstall**|
+> | Creating storage vhd (file: $($config["imageNameEflowVm"])) failed | Error when creating or resizing the EFLOW virtual machine VHDX. | Check EFLOW installation logs _C:\Program Files\Azure IoT Edge_ and WSSDAgent logs _C:\ProgramData\wssdagent_ for more information. |
+> | Error: Virtual machine creation failed! <br/> Failed to retrieve virtual machine name. | Error related to virtual machine creation by *WSSDAgent*. Installer will try removing the VM and mark the installation as failed. | Verify the *WSSDAgent* service is running on the Windows host OS by opening an elevated PowerShell session and running the command `Get-Service -Name wssdagent`. If WSSDAgent isn't running, try starting the service manually using the cmdlet: `Start-Service -name WSSDAgent`. If it doesn't start, share the content under _C:\ProgramData\wssdagent_. |
+> | This Windows device does not meet the minimum requirements for Azure EFLOW. Please refer to https://aka.ms/AzEFLOW-Requirements for system requirements | During EFLOW deployment, the *Deploy-EFlow* PowerShell cmdlet checks that all the prerequisites are met. Specifically, Windows SKUs are checked (Windows Server 2019, 2022, Windows Client Professional, or Client Enterprise) and the Windows version is at least 17763. | Verify you're using a supported Windows SKU and version. If using Windows version 17763, ensure to have all the updates applied. |
+> | Not enough memory available. | There isn't enough RAM to create the EFLOW VM with the allocated VM memory. By default, the virtual machine has 1024 MB assigned. The Windows host OS needs to have at least X MB free to assign that memory to the VM. | Check the memory available on the Windows host OS and use the _memoryInMb_ parameter during `Deploy-Eflow` for custom memory assignment. For more information about the `Deploy-EFlow` PowerShell cmdlet, see [PowerShell functions for IoT Edge for Linux on Windows](./reference-iot-edge-for-linux-on-windows-functions.md). |
+> | Not enough CPU cores available. | There aren't enough CPU cores available to create the EFLOW VM with the allocated cores. By default, the virtual machine has one core assigned. The Windows host OS needs to have at least one free core to assign to the EFLOW VM. | Check the CPU cores available on the Windows host OS and use the _cpuCore_ parameter during `Deploy-Eflow` for custom core assignment. For more information about the `Deploy-EFlow` PowerShell cmdlet, see [PowerShell functions for IoT Edge for Linux on Windows](./reference-iot-edge-for-linux-on-windows-functions.md). |
+> | Not enough disk space is available on drive '$drive'. | There isn't enough storage available to create the EFLOW VM with the allocated storage size. By default, the VM will use ~18 GB of storage. The Windows host OS needs to have at least 18 GB of free storage to assign that storage to the EFLOW VM VHDX. | Check the Windows host free storage available and use the _vmDiskSize_ parameter during `Deploy-Eflow` for custom storage size. For more information about `Deploy-EFlow` PowerShell cmdlet, see [PowerShell functions for IoT Edge for Linux on Windows](./reference-iot-edge-for-linux-on-windows-functions.md). |
+> | Signature is not valid (file: $filePath, signature status: ${signature.Status}) <br/> Signature is missing (file: $filePath)! <br/> could not track signature to the microsoft root certificate | The signature of the file can't be found or it's invalid. EFLOW PSM and EFLOW updates are all signed using Microsoft certificates. If the Microsoft code or Microsoft CA certificates are not available in the Windows host OS, the validation fails. | Verify all contents were downloaded from official Microsoft sites. Also, if the necessary certificates aren't part of the Windows host, check [Install EFLOW necessary certificates](https://github.com/Azure/iotedge-eflow/issues/158). |
+
+## Provisioning and IoT Edge runtime
+
+The following section addresses the common errors when provisioning the EFLOW virtual machine and interacting with the IoT Edge runtime. Ensure you have an understanding of the following EFLOW concepts:
+- [What is Azure IoT Hub Device Provisioning Service?](/azure/iot-dps/about-iot-dps)
+- [Understand the Azure IoT Edge runtime and its architecture](./iot-edge-runtime.md)
+- [Troubleshoot your IoT Edge device](./troubleshoot.md)
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Error | Error Description | Solution |
+> | -- | -- | -- |
+> | Action aborted by user | For some of the EFLOW PowerShell cmdlets, there's user interaction and confirmation needed. | - |
+> | Error, device connection string not provided. <br/> Only the device connection string for *ManualConnectionString* provisioning may be specified. | Incorrect parameters used when using *ManualConnectionString* device provisioning. | For more information about the `Provision-EflowVm` PowerShell cmdlet, see [PowerShell functions for IoT Edge for Linux on Windows](./reference-iot-edge-for-linux-on-windows-functions.md). |
+> | IoT Hub Hostname, Device ID, and/or identity cert/pk parameters for ManualX509 provisioning not specified <br/> Device connection string, scope ID, registration ID, and symmetric key may not be specified for DpsX509 provisioning <br/> Certificate and private key file for ManualX509 provisioning not found (expected at $identityCertPath and $identityPrivKeyPath) | Incorrect parameters used when using _ManualX509_ device provisioning. | For more information about `Provision-EflowVm` PowerShell cmdlet, see [PowerShell functions for IoT Edge for Linux on Windows](./reference-iot-edge-for-linux-on-windows-functions.md). |
+> | Scope ID for DpsTpm provisioning not specified <br/> Only scope ID and registration ID (optional) for DpsTpm provisioning may be specified | Incorrect parameters used when using _DpsTpm_ device provisioning. | For more information about `Provision-EflowVm` PowerShell cmdlet, see [PowerShell functions for IoT Edge for Linux on Windows](./reference-iot-edge-for-linux-on-windows-functions.md). |
+> | Scope ID, registration ID or symmetric key missing for DpsSymmetricKey provisioning <br/> globalEndpoint not specified <br> Only scope ID, registration ID or symmetric key for DpsSymmetricKey provisioning may be specified | Incorrect parameters used when using _DpsSymmetricKey_ device provisioning. | For more information about `Provision-EflowVm` PowerShell cmdlet, see [PowerShell functions for IoT Edge for Linux on Windows](./reference-iot-edge-for-linux-on-windows-functions.md). |
+> | Virtual machine does not exist, deploy it first | The EFLOW MSI was installed, however the EFLOW virtual machine was never deployed. | Deploy the EFLOW VM using the `Deploy-Eflow` PowerShell cmdlet. |
+> | Aborting, iotedge was previously provisioned (headless mode)! | The EFLOW VM was previously provisioned and _headless_ mode isn't supported when reprovisioning. | The issue is fixed and now *-headless* mode is supported with reprovisioning |
+> | Provisioning aborted by user | The EFLOW VM was previously provisioned and the user needs to confirm they want to reprovision. | User must accept reprovisioning to continue with the provisioning process.|
+> | Failed to provision <br/> Failed to provision config.toml. Please provision manually. <br/> iotedge service not running after provisioning, please investigate manually | The EFLOW provisioning information was correctly configured inside the EFLOW VM, but the IoT Edge daemon was not able to provision the device with the cloud provisioning service. | Check the Azure IoT Edge runtime logs. First, connect to the EFLOW virtual machine using the `Connect-EflowVm` PowerShell cmdlet then follow [Troubleshoot your IoT Edge device](./troubleshoot.md) to retrieve the IoT Edge logs. |
+> | Unexpected return from 'sudo iotedge list' <br/> Retrieving iotedge check output from: "$vmName" | The execution of the `sudo iotedge list` command inside the EFLOW VM returned an unexpected payload. Generally, this is related to the IoT Edge service not running correctly inside the EFLOW VM. | Check the Azure IoT Edge runtime logs. First, connect to the EFLOW virtual machine using the `Connect-EflowVm` PowerShell cmdlet, then follow [Troubleshoot your IoT Edge device](./troubleshoot.md) to get the IoT Edge logs. |
+> | TPM 2.0 is required to enable DpsTpm | TPM passthrough only works with TPM 2.0 compatible hardware. The TPM can be a physical TPM or, if the EFLOW VM is running under nested virtualization, a vTPM on the Windows host OS. | Make sure the Windows host OS has a valid TPM 2.0. For more information, see [Enable TPM 2.0 on your PC](https://support.microsoft.com/windows/enable-tpm-2-0-on-your-pc-1fd5a332-360d-4f46-a1e7-ae6b0c90645c). |
+> | TPM provisioning information not available! | The TPM passthrough binary inside the EFLOW VM could not get the TPM information from the Windows host OS. This error is probably related to a communication error with the EFLOWProxy. | Ensure that the _EFLOW Proxy Service_ service is running using the PowerShell cmdlet `Get-Service -Name "EFLOW Proxy Service"`. If it isn't running, check the event logs: open **Event Viewer** > **Applications and Services Logs** > **Azure IoT Edge for Linux on Windows**. |
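Several of the provisioning errors above come down to missing or malformed connection-string parameters. A device connection string is a semicolon-separated list of `key=value` pairs. The following is a minimal, illustrative sketch in Python of that kind of check (the `Provision-EflowVm` cmdlet performs its own validation; the function names and sample key here are hypothetical):

```python
def parse_connection_string(cs: str) -> dict:
    """Split an IoT Hub device connection string into its key=value parts."""
    parts = {}
    for segment in cs.split(";"):
        if not segment:
            continue
        key, _, value = segment.partition("=")  # value may itself contain '=' (base64 keys)
        parts[key] = value
    return parts

def missing_manual_connection_string_fields(cs: str) -> list:
    """Return the field names required for ManualConnectionString provisioning
    that are missing from the connection string."""
    required = ("HostName", "DeviceId", "SharedAccessKey")
    parts = parse_connection_string(cs)
    return [name for name in required if name not in parts]

cs = "HostName=myhub.azure-devices.net;DeviceId=myDevice;SharedAccessKey=bXlrZXk="
print(missing_manual_connection_string_fields(cs))  # []
```

If the returned list is non-empty, the connection string would trigger the "device connection string not provided" class of errors above.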
+
+## Interaction with the VM
+
+The following section addresses the common errors when interacting with the EFLOW virtual machine and configuring the EFLOW device passthrough options. Ensure you have an understanding of the following EFLOW concepts:
+- [PowerShell functions for IoT Edge for Linux on Windows](./reference-iot-edge-for-linux-on-windows-functions.md)
+- [GPU acceleration for Azure IoT Edge for Linux on Windows](./gpu-acceleration.md)
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Error | Error Description | Solution |
+> | -- | -- | -- |
+> | Can't process request, EFLOW VM is OFF! | When trying to apply a configuration to the EFLOW virtual machine, the VM must be turned on. If the EFLOW VM is off, then the SSH channel will fail, and no communication is possible with the VM.| Start the EFLOW VM using the *Start-EflowVm* PowerShell cmdlet. For more information about the *Start-EflowVm* cmdlet, see [PowerShell functions for IoT Edge for Linux on Windows](./reference-iot-edge-for-linux-on-windows-functions.md). |
+> | Virtual machine name could not be retrieved from wssdagent <br/> Error: More than one virtual machine found | The WSSDAgent service couldn't find the EFLOW virtual machine. | Verify the EFLOW VM is started and running by using the PowerShell cmdlet `Start-EflowVm`. If using a client SKU, use the `hcsdiag list` command, find the line that has _wssdagent_ after the GUID of the VM, and check the state. If using a server SKU, go to Hyper-V Manager, verify there's a VM with the name _Windows-hostname_-EFLOW, and check the state. |
+> | Unable to connect virtual machine with SSH.  Aborting.. | The Windows host OS could not establish an SSH connection with the EFLOW VM to execute the necessary commands or copy files. Generally, this issue is related to a networking problem between Windows and the virtual machine. | Try the PowerShell cmdlet `Get-EflowVmAddr` and check if the *IP4Address* assigned to the VM is correct. For more information about networking configurations, see [Azure IoT Edge for Linux on Windows networking](./iot-edge-for-linux-on-windows-networking.md).|
+> | Unexpected return stats from virtual machine | The `Get-EflowVm` PowerShell cmdlet checks the status of the virtual machine. If the communication with the virtual machine fails, or some of the Linux bash commands inside the VM fail, the cmdlet fails. | Check the EFLOW VM connection using the `Connect-EflowVm` PowerShell cmdlet and try manually running the VM stats bash commands inside the VM. |
+> | TPM provisioning was not enabled! | To get the TPM provisioning information for the TPM, the EFLOW TPM passthrough must be enabled. If TPM passthrough isn't enabled, the cmdlet fails. | Enable TPM passthrough before getting the TPM information. Use the `Set-EflowVmFeature` PowerShell cmdlet to enable the TPM passthrough. For more information about `Set-EflowVmFeature` PowerShell cmdlet, see [PowerShell functions for IoT Edge for Linux on Windows](./reference-iot-edge-for-linux-on-windows-functions.md). |
+> | Unknown feature '$feature' is provided. | The *Set-EflowVmFeature* cmdlet supports _DpsTpm_ and _Defender_ as the two features that can be enabled or disabled. | For more information about `Set-EflowVmFeature` PowerShell cmdlet, see [PowerShell functions for IoT Edge for Linux on Windows](./reference-iot-edge-for-linux-on-windows-functions.md). |
+> | Unsupported DDA Type: $gpuName | Currently, GPU DDA is only supported for _NVIDIA Tesla T4_ and _NVIDIA A2_. If the user provides another GPU name, the GPU passthrough fails. | Verify all the GPU prerequisites are met. For more information about EFLOW GPU support, see [Azure IoT Edge for Linux on Windows GPU passthrough](https://aka.ms/azeflow-gpu). |
+> | Invalid GPU configuration requested, <br/> Passthrough enabled but requested gpuCount == $gpuCount <br/> GPU PassthroughType '$gpuPassthroughType' not supported by '$script:WssdProviderVmms' WssdProvider <br/> Requested GPU configuration cannot be supported by Host, <br/> GPU '$gpuName' not available <br/> Requested GPU configuration cannot be supported by Host, <br/> not enough GPUs available - Requested($gpuCount), Available($($selectedGpuDevice.Count)) <br/> Requested GPU configuration cannot be supported by Host, <br/> GPU PassthroughType '$gpuPassthroughType' not supported <br/> Invalid GPU configuration requested, Passthrough disabled but gpuCount > 0" | These errors are generally related to one or more of the GPU dependencies not being met. | Make sure all the GPU prerequisites are met. For more information about EFLOW GPU support, see [Azure IoT Edge for Linux on Windows GPU passthrough](https://aka.ms/azeflow-gpu). |
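The GPU errors in the last row are all consistency checks between the requested passthrough state, GPU name, GPU count, and what the host actually has. As a hedged illustration of that logic (the supported-GPU list comes from the table above; the function, parameter names, and message strings are otherwise hypothetical, not EFLOW's actual implementation):

```python
SUPPORTED_DDA_GPUS = {"NVIDIA Tesla T4", "NVIDIA A2"}  # per the table above

def validate_gpu_config(passthrough_enabled: bool, gpu_name: str,
                        gpu_count: int, available: dict) -> list:
    """Return a list of human-readable problems with a requested GPU
    configuration. `available` maps GPU name -> count present on the host."""
    problems = []
    if not passthrough_enabled:
        # Requesting GPUs while passthrough is off is contradictory.
        if gpu_count > 0:
            problems.append("Passthrough disabled but gpuCount > 0")
        return problems
    if gpu_count == 0:
        problems.append("Passthrough enabled but requested gpuCount == 0")
    if gpu_name not in SUPPORTED_DDA_GPUS:
        problems.append(f"Unsupported DDA Type: {gpu_name}")
    elif available.get(gpu_name, 0) < gpu_count:
        problems.append("not enough GPUs available")
    return problems

print(validate_gpu_config(True, "NVIDIA Tesla T4", 1, {"NVIDIA Tesla T4": 2}))  # []
```

An empty result means the configuration is internally consistent; each string corresponds to one of the error messages listed in the table.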
+
+## Networking
+
+The following section addresses the common errors related to EFLOW networking and communication between the EFLOW virtual machine and the Windows host OS. Ensure you have an understanding of the following EFLOW concepts:
+
+- [Azure IoT Edge for Linux on Windows networking](./iot-edge-for-linux-on-windows-networking.md)
+- [Networking configuration for Azure IoT Edge for Linux on Windows](./how-to-configure-iot-edge-for-linux-on-windows-networking.md)
+- [Azure IoT Edge for Linux on Windows virtual switch creation](./how-to-create-virtual-switch.md)
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Error | Error Description | Solution |
+> | -- | -- | -- |
+> | Installation of virtual switch failed <br/> The virtual switch '$switchName' of type '$switchType' was not found | When creating the EFLOW VM, there's a check that the virtual switch provided exists and has the correct type. If using no parameter, the installation uses the default switch provided by the Windows client. | Check that the virtual switch being used is part of the Windows host OS. You can check the virtual switches using the PowerShell cmdlet `Get-VmSwitch`. For more information about networking configurations, see [Azure IoT Edge for Linux on Windows networking](./iot-edge-for-linux-on-windows-networking.md). |
+> | The virtual switch '$switchName' of type '$switchType' <br/> is not supported on current host OS | When using Windows client SKUs, external/default switches are supported. However, when using Windows Server SKUs, external/internal switches are supported. | For more information about networking configurations, see [Azure IoT Edge for Linux on Windows networking](./iot-edge-for-linux-on-windows-networking.md). |
+> | Cannot set Static IP on ICS type virtual switch (Default Switch) | The _default switch_ is a virtual switch that's provided in the Windows client SKUs after installing Hyper-V. This switch already has a DHCP server for *IP4Address* assignment and for security reasons doesn't support a static IP. | If using the _default switch_, you can either use the `Get-EflowVmAddr` cmdlet or use the hostname of the EFLOW VM to get the VM *IP4Address*. If using the hostname, try using _Windows-hostname_-EFLOW.mshome.net. For more information about networking configurations, see [Azure IoT Edge for Linux on Windows networking](./iot-edge-for-linux-on-windows-networking.md). |
+> | $dnsServer is not a valid IP4 address | The *Set-EflowVmDnsServers* cmdlet expects a list of valid *IP4Addresses*. | Verify you provided a valid list of addresses. You can check the Windows host OS DNS servers by using the command `ipconfig /all` and then looking for the entry _DNS Servers_. For example, if you wanted to set two DNS servers with IPs 10.0.1.2 and 10.0.1.3, use the `Set-EflowVmDnsServers -dnsServers @("10.0.1.2", "10.0.1.3")` cmdlet. |
+> | Could not retrieve IP address for virtual machine <br/> Virtual machine name could not be retrieved, failed to get IP/MAC address <br/> Failed to acquire MAC address for virtual machine <br/> Failed to acquire IP address for virtual machine <br/> Unable to obtain host routing. Connectivity test to $computerName failed. <br/> wssdagent does not have the expected vnet resource provisioned. <br/> Missing EFLOW-VM guest interface for ($vnicName) | Caused by connectivity issues with the EFLOW virtual machine. The errors are generally related to an IP address change (if using static IP) or failure to assign an IP if using a DHCP server. | Make sure to use the appropriate networking configuration. If there's a valid DHCP server, you can use DHCP assignment. If using static IP, make sure the IP configuration is correct (all three parameters: _ip4Address_, _ip4GatewayAddress_, and _ip4PrefixLength_) and the address isn't being used by another device in the network. For more information about networking configurations, see [Azure IoT Edge for Linux on Windows networking](./iot-edge-for-linux-on-windows-networking.md). |
+> | No adapters associated with the switch '$vnetName' are found. <br/> No adapters associated with the device ID '$adapterGuid' are found <br/> No adapters associated with the adapter name '$name' are found. <br/> Network '$vswitchName' does not exist | Caused by a network communication error between the Windows host OS and the EFLOW virtual machine. | Ensure you can reach the EFLOW VM and establish an SSH channel. Use the `Connect-EflowVm` PowerShell cmdlet to connect to the virtual machine. If connectivity fails, reboot the EFLOW VM and check again. |
+> | ip4Address & ip4PrefixLength are required for StaticIP! | During EFLOW VM deployment or when adding multiple NICs, if using static IP, the three static ip parameters are needed: _ip4Address_, _ip4GatewayAddress_, _ip4PrefixLength_. | For more information about `Deploy-EFlow` PowerShell cmdlet, see [PowerShell functions for IoT Edge for Linux on Windows](./reference-iot-edge-for-linux-on-windows-functions.md). |
+> | Found multiple VMMS switches <br/> with name '$switchName' of type '$switchType' | There are two or more virtual switches with the same name and type. This environment conflicts with the EFLOW VM installation and lifecycle of the VM. | Use `Get-VmSwitch` PowerShell cmdlet to check the virtual switches available in the Windows host and make sure that each {name,type} is unique. |
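Several of the networking errors above reduce to two simple checks: "is this a valid IPv4 address?" and "are all three static-IP parameters present?". Python's standard `ipaddress` module expresses both concisely; this is an illustrative sketch of the checks described in the table, not EFLOW's actual validation code:

```python
import ipaddress

def is_valid_ip4(addr: str) -> bool:
    """True if `addr` parses as an IPv4 address — the check behind
    '$dnsServer is not a valid IP4 address'."""
    try:
        ipaddress.IPv4Address(addr)
        return True
    except ipaddress.AddressValueError:
        return False

def validate_static_ip(ip4_address, ip4_gateway, ip4_prefix_length) -> list:
    """Return the problems with a static-IP configuration; all three
    parameters are required, mirroring the table's static IP rows."""
    problems = []
    if ip4_address is None or ip4_gateway is None or ip4_prefix_length is None:
        problems.append("ip4Address, ip4GatewayAddress, and ip4PrefixLength are all required")
        return problems
    for addr in (ip4_address, ip4_gateway):
        if not is_valid_ip4(addr):
            problems.append(f"{addr} is not a valid IP4 address")
    if not 0 < int(ip4_prefix_length) <= 32:
        problems.append("ip4PrefixLength must be between 1 and 32")
    return problems

print(is_valid_ip4("10.0.1.2"))  # True
```

Running the DNS-server or static-IP values you plan to pass to the cmdlets through a check like this can save a failed deployment round-trip.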
+
+## Next steps
+
+Do you think that you found a bug in the IoT Edge for Linux on Windows? [Submit an issue](https://github.com/Azure/iotedge-eflow/issues) so that we can continue to improve.
+
+If you have more questions, create a [Support request](https://portal.azure.com/#create/Microsoft.Support) for help.
iot-hub Iot Hub Csharp Csharp Device Management Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-csharp-csharp-device-management-get-started.md
In this section, you create a .NET console app, using C#, that initiates a remot
1. In **Create a new project**, find and select the **Console App (.NET Framework)** project template, and then select **Next**.
-1. In **Configure your new project**, name the project *TriggerReboot*, and select .NET Framework version 4.5.1 or later. Select **Create**.
+1. In **Configure your new project**, name the project *TriggerReboot*, then select **Next**.
- ![New Visual C# Windows Classic Desktop project](./media/iot-hub-csharp-csharp-device-management-get-started/create-trigger-reboot-configure.png)
+ :::image type="content" source="./media/iot-hub-csharp-csharp-device-management-get-started/create-trigger-reboot-configure.png" alt-text="Screenshot that shows how to configure a new Visual Studio project." lightbox="./media/iot-hub-csharp-csharp-device-management-get-started/create-trigger-reboot-configure.png":::
+
+1. Accept the default version of the .NET Framework, then select **Create** to create the project.
1. In **Solution Explorer**, right-click the **TriggerReboot** project, and then select **Manage NuGet Packages**.
1. Select **Browse**, then search for and select **Microsoft.Azure.Devices**. Select **Install** to install the **Microsoft.Azure.Devices** package.
- ![NuGet Package Manager window](./media/iot-hub-csharp-csharp-device-management-get-started/create-trigger-reboot-nuget-devices.png)
+ :::image type="content" source="./media/iot-hub-csharp-csharp-device-management-get-started/create-trigger-reboot-nuget-devices.png" alt-text="Screenshot that shows how to install the Microsoft.Azure.Devices package." lightbox="./media/iot-hub-csharp-csharp-device-management-get-started/create-trigger-reboot-nuget-devices.png":::
This step downloads, installs, and adds a reference to the [Azure IoT service SDK](https://www.nuget.org/packages/Microsoft.Azure.Devices/) NuGet package and its dependencies.
To create the simulated device app, follow these steps:
1. In **Configure your new project**, name the project *SimulateManagedDevice*, and for **Solution**, select **Add to solution**. Select **Create**.
- ![Name and add your project to the solution](./media/iot-hub-csharp-csharp-device-management-get-started/configure-device-app.png)
+ :::image type="content" source="./media/iot-hub-csharp-csharp-device-management-get-started/configure-device-app.png" alt-text="Screenshot that shows how to name a new Visual Studio project." lightbox="./media/iot-hub-csharp-csharp-device-management-get-started/configure-device-app.png":::
1. In Solution Explorer, right-click the new **SimulateManagedDevice** project, and then select **Manage NuGet Packages**.
1. Select **Browse**, then search for and select **Microsoft.Azure.Devices.Client**. Select **Install**.
- ![NuGet Package Manager window Client app](./media/iot-hub-csharp-csharp-device-management-get-started/create-device-nuget-devices-client.png)
+ :::image type="content" source="./media/iot-hub-csharp-csharp-device-management-get-started/create-device-nuget-devices-client.png" alt-text="Screenshot that shows how to install the Microsoft.Azure.Devices.Client package." lightbox="./media/iot-hub-csharp-csharp-device-management-get-started/create-device-nuget-devices-client.png":::
This step downloads, installs, and adds a reference to the [Azure IoT device SDK](https://www.nuget.org/packages/Microsoft.Azure.Devices.Client/) NuGet package and its dependencies.
iot-hub Iot Hub Csharp Csharp Module Twin Getstarted https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-csharp-csharp-module-twin-getstarted.md
At the end of this article, you have two .NET console apps:
In this section, you create a .NET console app on your simulated device that updates the module twin reported properties.
-Before you begin, get your module connection string. Sign in to the [Azure portal](https://portal.azure.com/). Navigate to your hub and select **IoT Devices**. Find **myFirstDevice**. Select **myFirstDevice** to open it, and then select **myFirstModule** to open it. In **Module Identity Details**, copy the **Connection string (primary key)** when needed in the following procedure.
+Here's how to get your module connection string from the Azure portal. Sign in to the [Azure portal](https://portal.azure.com/). Navigate to your hub and select **Devices**. Find **myFirstDevice**. Select **myFirstDevice** to open it, and then select **myFirstModule** to open it. In **Module Identity Details**, copy the **Connection string (primary key)** to save it for the console app.
- ![Azure portal module detail](./media/iot-hub-csharp-csharp-module-twin-getstarted/module-identity-detail.png)
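The module connection string copied from the portal has the same `key=value` shape as a device connection string, plus a `ModuleId` field. A quick way to confirm you copied the module-level string rather than the device-level one — an illustrative Python check, not part of the tutorial itself (which uses C#):

```python
def is_module_connection_string(cs: str) -> bool:
    """A module connection string carries HostName, DeviceId, ModuleId,
    and SharedAccessKey; a device connection string lacks ModuleId."""
    keys = {segment.partition("=")[0] for segment in cs.split(";") if segment}
    return {"HostName", "DeviceId", "ModuleId", "SharedAccessKey"} <= keys

example = ("HostName=myhub.azure-devices.net;DeviceId=myFirstDevice;"
           "ModuleId=myFirstModule;SharedAccessKey=bXlrZXk=")
print(is_module_connection_string(example))  # True
```

If `ModuleId` is missing, the client would authenticate as the device identity rather than the module identity.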
-1. In Visual Studio, add a new project to your solution by selecting **File** > **New** > **Project**. In Create a new project, select **Console App (.NET Framework)**, and select **Next**.
+1. In Visual Studio, add a new project to your solution by selecting **File** > **New** > **Project**. In **Create a new project**, select **Console App (.NET Framework)**, and select **Next**.
-1. Name the project *UpdateModuleTwinReportedProperties*. For **Solution**, select **Add to solution**. Make sure the .NET Framework version is 4.6.1 or later.
+1. In **Configure your new project**, name the project *UpdateModuleTwinReportedProperties*, then select **Next**.
- ![Create a Visual Studio project](./media/iot-hub-csharp-csharp-module-twin-getstarted/configure-update-twins-csharp1.png)
+ :::image type="content" source="./media/iot-hub-csharp-csharp-module-twin-getstarted/configure-update-twins-csharp1.png" alt-text="Screenshot that shows the 'Configure your new project' popup." lightbox="./media/iot-hub-csharp-csharp-module-twin-getstarted/configure-update-twins-csharp1.png":::
-1. Select **Create** to create your project.
+1. Keep the default .NET Framework option and select **Create** to create your project.
1. In Visual Studio, open **Tools** > **NuGet Package Manager** > **Manage NuGet Packages for Solution**. Select the **Browse** tab.
1. Search for and select **Microsoft.Azure.Devices.Client**, and then select **Install**.
- ![Screenshot that shows the "Microsoft.Azure.Devices.Client" selected and the "Install" button highlighted.](./media/iot-hub-csharp-csharp-module-twin-getstarted/install-client-sdk.png)
+ :::image type="content" source="./media/iot-hub-csharp-csharp-module-twin-getstarted/install-client-sdk.png" alt-text="Screenshot that shows the 'Microsoft.Azure.Devices.Client' selected and the 'Install' button highlighted." lightbox="./media/iot-hub-csharp-csharp-module-twin-getstarted/install-client-sdk.png":::
1. Add the following `using` statements at the top of the **Program.cs** file:
iot-hub Iot Hub Csharp Csharp Twin Getstarted https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-csharp-csharp-twin-getstarted.md
In this article, you create these .NET console apps:
In this section, you create a .NET console app, using C#, that adds location metadata to the device twin associated with **myDeviceId**. It then queries the device twins stored in the IoT hub, selecting the devices located in the US, and then the ones that reported a cellular connection.
-1. In Visual Studio, select **Create a new project**. In **Create new project**, select **Console App (.NET Framework)**, and then select **Next**.
+1. In Visual Studio, select **File > New > Project**. In **Create a new project**, select **Console App (.NET Framework)**, and then select **Next**.
-1. In **Configure your new project**, name the project **AddTagsAndQuery**.
+1. In **Configure your new project**, name the project **AddTagsAndQuery**, then select **Next**.
- ![Configure your AddTagsAndQuery project](./media/iot-hub-csharp-csharp-twin-getstarted/config-addtagsandquery-app.png)
+ :::image type="content" source="./media/iot-hub-csharp-csharp-twin-getstarted/config-addtagsandquery-app.png" alt-text="Screenshot of how to create a new Visual Studio project." lightbox="./media/iot-hub-csharp-csharp-twin-getstarted/config-addtagsandquery-app.png":::
+
+1. Accept the default version of the .NET Framework, then select **Create** to create the project.
1. In Solution Explorer, right-click the **AddTagsAndQuery** project, and then select **Manage NuGet Packages**.
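The app described above produces two plain-data artifacts: a JSON patch for the twin's `tags` and SQL-like twin query strings. A hedged, language-neutral sketch of their shapes in Python (the specific tag names and the `connectivity` property are illustrative; the real tutorial's property names may differ):

```python
import json

# A twin patch only needs to carry the properties being changed —
# here, a hypothetical solution-defined location tag.
patch = {
    "tags": {
        "location": {"region": "US", "plant": "Redmond43"}
    }
}

# IoT Hub twin queries use a SQL-like syntax over the `devices` collection.
query_by_tag = "SELECT * FROM devices WHERE tags.location.region = 'US'"
query_by_reported = (
    "SELECT * FROM devices WHERE tags.location.region = 'US'"
    " AND properties.reported.connectivity = 'cellular'"
)

print(json.dumps(patch))
```

The C# service SDK sends the patch via the registry manager and runs the query strings verbatim; tags are never visible to the device app, while reported properties are set by it.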
iot-hub Iot Hub Devguide Sdks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-sdks.md
Title: Azure IoT Hub SDKs | Microsoft Docs description: Links to the Azure IoT Hub SDKs which you can use to build device apps and back-end apps. -
# Azure IoT Hub SDKs
-There are two categories of software development kits (SDKs) for working with IoT Hub:
+There are three categories of software development kits (SDKs) for working with IoT Hub:
+
+* [**IoT Hub device SDKs**](#azure-iot-hub-device-sdks) enable you to build apps that run on your IoT devices using device client or module client. These apps send telemetry to your IoT hub, and optionally receive messages, job, method, or twin updates from your IoT hub. You can use these SDKs to build device apps that use [Azure IoT Plug and Play](../iot-develop/overview-iot-plug-and-play.md) conventions and models to advertise their capabilities to IoT Plug and Play-enabled applications. You can also use module client to author [modules](../iot-edge/iot-edge-modules.md) for [Azure IoT Edge runtime](../iot-edge/about-iot-edge.md).
* [**IoT Hub service SDKs**](#azure-iot-hub-service-sdks) enable you to build backend applications to manage your IoT hub, and optionally send messages, schedule jobs, invoke direct methods, or send desired property updates to your IoT devices or modules.
-* [**IoT Hub device SDKs**](../iot-develop/about-iot-sdks.md) enable you to build apps that run on your IoT devices using device client or module client. These apps send telemetry to your IoT hub, and optionally receive messages, job, method, or twin updates from your IoT hub. You can use these SDKs to build device apps that use [Azure IoT Plug and Play](../iot-develop/overview-iot-plug-and-play.md) conventions and models to advertise their capabilities to IoT Plug and Play-enabled applications. You can also use module client to author [modules](../iot-edge/iot-edge-modules.md) for [Azure IoT Edge runtime](../iot-edge/about-iot-edge.md).
+* [**IoT Hub management SDKs**](#azure-iot-hub-management-sdks) help you build backend applications that manage the IoT hubs in your Azure subscription.
+
+Microsoft also provides a set of SDKs for provisioning devices through the [Device Provisioning Service](../iot-dps/about-iot-dps.md) and for building its backend services. To learn more, see [Microsoft SDKs for IoT Hub Device Provisioning Service](../iot-dps/libraries-sdks.md).
-In addition, we also provide a set of SDKs for working with the [Device Provisioning Service](../iot-dps/about-iot-dps.md).
+Learn about the [benefits of developing using Azure IoT SDKs](https://azure.microsoft.com/blog/benefits-of-using-the-azure-iot-sdks-in-your-azure-iot-solution/).
-* **Provisioning device SDKs** enable you to build apps that run on your IoT devices to communicate with the Device Provisioning Service.
+## Azure IoT Hub device SDKs
-* **Provisioning service SDKs** enable you to build backend applications to manage your enrollments in the Device Provisioning Service.
+The Microsoft Azure IoT device SDKs contain code that facilitates building applications that connect to and are managed by Azure IoT Hub services. These SDKs can run on a general MPU-based computing device such as a PC, tablet, smartphone, or Raspberry Pi. The SDKs support development in C and in modern managed languages including C#, Node.js, Python, and Java.
-Learn about the [benefits of developing using Azure IoT SDKs](https://azure.microsoft.com/blog/benefits-of-using-the-azure-iot-sdks-in-your-azure-iot-solution/).
+The SDKs are available in **multiple languages** providing the flexibility to choose which best suits your team and scenario.
+
+| Language | Package | Source | Quickstarts | Samples | Reference |
+| :-- | :-- | :-- | :-- | :-- | :-- |
+| **.NET** | [NuGet](https://www.nuget.org/packages/Microsoft.Azure.Devices.Client) | [GitHub](https://github.com/Azure/azure-iot-sdk-csharp) | [Quickstart](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-csharp) | [Samples](https://github.com/Azure-Samples/azure-iot-samples-csharp) | [Reference](/dotnet/api/microsoft.azure.devices.client) |
+| **Python** | [pip](https://pypi.org/project/azure-iot-device/) | [GitHub](https://github.com/Azure/azure-iot-sdk-python) | [Quickstart](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-python) | [Samples](https://github.com/Azure/azure-iot-sdk-python/tree/main/azure-iot-device/samples) | [Reference](/python/api/azure-iot-device) |
+| **Node.js** | [npm](https://www.npmjs.com/package/azure-iot-device) | [GitHub](https://github.com/Azure/azure-iot-sdk-node) | [Quickstart](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-nodejs) | [Samples](https://github.com/Azure/azure-iot-sdk-node/tree/main/device/samples) | [Reference](/javascript/api/azure-iot-device/) |
+| **Java** | [Maven](https://mvnrepository.com/artifact/com.microsoft.azure.sdk.iot/iot-device-client) | [GitHub](https://github.com/Azure/azure-iot-sdk-java) | [Quickstart](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-java) | [Samples](https://github.com/Azure/azure-iot-sdk-java/tree/master/device/iot-device-samples) | [Reference](/java/api/com.microsoft.azure.sdk.iot.device) |
+| **C** | [packages](https://github.com/Azure/azure-iot-sdk-c/blob/master/readme.md#getting-the-sdk) | [GitHub](https://github.com/Azure/azure-iot-sdk-c) | [Quickstart](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-ansi-c) | [Samples](https://github.com/Azure/azure-iot-sdk-c/tree/master/iothub_client/samples) | [Reference](/azure/iot-hub/iot-c-sdk-ref/) |
+
+> [!WARNING]
+> The **C SDK** listed above is **not** suitable for embedded applications due to its memory management and threading model. For embedded devices, refer to the [Embedded device SDKs](#embedded-device-sdks).
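Whichever language you pick from the table, the device SDKs authenticate to IoT Hub the same way under the hood — commonly with a shared access signature (SAS) token derived from the device key. A minimal sketch of that token construction in Python (illustrative only; the SDKs generate and renew tokens for you, so you rarely build one by hand):

```python
import base64
import hashlib
import hmac
import time
import urllib.parse

def generate_sas_token(resource_uri: str, device_key_b64: str, ttl_seconds: int = 3600) -> str:
    """Build an IoT Hub SAS token: HMAC-SHA256 over '<url-encoded-uri>\n<expiry>'
    keyed with the base64-decoded device key."""
    expiry = int(time.time()) + ttl_seconds
    encoded_uri = urllib.parse.quote(resource_uri, safe="")
    to_sign = f"{encoded_uri}\n{expiry}".encode()
    key = base64.b64decode(device_key_b64)
    signature = base64.b64encode(hmac.new(key, to_sign, hashlib.sha256).digest())
    sig = urllib.parse.quote(signature, safe="")
    return f"SharedAccessSignature sr={encoded_uri}&sig={sig}&se={expiry}"

demo_key = base64.b64encode(b"device key").decode()  # hypothetical key for the sketch
token = generate_sas_token("myhub.azure-devices.net/devices/myDevice", demo_key)
print(token.startswith("SharedAccessSignature sr="))  # True
```

The resulting token is passed as the password in the MQTT/AMQP/HTTP connection; letting the SDK manage this is strongly preferable in production.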
+
+### Embedded device SDKs
+
+These SDKs were designed and created to run on devices with limited compute and memory resources and are implemented using the C language.
+
+The embedded device SDKs are available for **multiple operating systems** providing the flexibility to choose which best suits your team and scenario.
+
+| RTOS | SDK | Source | Samples | Reference |
+| :-- | :-- | :-- | :-- | :-- |
+| **Azure RTOS** | Azure RTOS Middleware | [GitHub](https://github.com/azure-rtos/netxduo) | [Quickstarts](../iot-develop/quickstart-devkit-mxchip-az3166.md) | [Reference](https://github.com/azure-rtos/netxduo/tree/master/addons/azure_iot) |
+| **FreeRTOS** | FreeRTOS Middleware | [GitHub](https://github.com/Azure/azure-iot-middleware-freertos) | [Samples](https://github.com/Azure-Samples/iot-middleware-freertos-samples) | [Reference](https://azure.github.io/azure-iot-middleware-freertos) |
+| **Bare Metal** | Azure SDK for Embedded C | [GitHub](https://github.com/Azure/azure-sdk-for-c/tree/master/sdk/docs/iot) | [Samples](https://github.com/Azure/azure-sdk-for-c/blob/master/sdk/samples/iot/README.md) | [Reference](https://azure.github.io/azure-sdk-for-c) |
+
+Learn more about the IoT Hub device SDKs in the [IoT Device Development Documentation](../iot-develop/about-iot-sdks.md).
## Azure IoT Hub service SDKs
The Azure IoT service SDKs contain code to facilitate building applications that
| Node | [npm](https://www.npmjs.com/package/azure-iothub) | [GitHub](https://github.com/Azure/azure-iot-sdk-node) | [Samples](https://github.com/Azure/azure-iot-sdk-node/tree/main/service/samples) | [Reference](/javascript/api/azure-iothub/) |
| Python | [pip](https://pypi.org/project/azure-iot-hub) | [GitHub](https://github.com/Azure/azure-iot-sdk-python) | [Samples](https://github.com/Azure/azure-iot-sdk-python/tree/main/azure-iot-hub/samples) | [Reference](/python/api/azure-iot-hub) |
| Node | [npm](https://www.npmjs.com/package/azure-iothub) | [GitHub](https://github.com/Azure/azure-iot-sdk-node) | [Samples](https://github.com/Azure/azure-iot-sdk-node/tree/main/service/samples) | [Reference](/javascript/api/azure-iothub/) | | Python | [pip](https://pypi.org/project/azure-iot-hub) | [GitHub](https://github.com/Azure/azure-iot-sdk-python) | [Samples](https://github.com/Azure/azure-iot-sdk-python/tree/main/azure-iot-hub/samples) | [Reference](/python/api/azure-iot-hub) |
-## Microsoft Azure provisioning SDKs
-
-The **Microsoft Azure provisioning SDKs** enable you to provision devices to your IoT Hub using the [Device Provisioning Service](../iot-dps/about-iot-dps.md). To learn more about the provisioning SDKs, see [Microsoft SDKs for Device Provisioning Service](../iot-dps/libraries-sdks.md).
+## Azure IoT Hub management SDKs
-## Azure IoT Hub device SDKs
-
-The Microsoft Azure IoT device SDKs contain code that facilitates building applications that connect to and are managed by Azure IoT Hub services.
+The IoT Hub management SDKs help you build backend applications that manage the IoT hubs in your Azure subscription.
-Learn more about the IoT Hub device SDKS in the [IoT Device Development Documentation](../iot-develop/about-iot-sdks.md).
+| Platform | Package | Code repository | Reference |
+| --|--|--|--|
+| .NET|[NuGet](https://www.nuget.org/packages/Microsoft.Azure.Management.IotHub) |[GitHub](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/iothub)| [Reference](/dotnet/api/microsoft.azure.management.iothub) |
+| Java|[Maven](https://mvnrepository.com/artifact/com.azure.resourcemanager/azure-resourcemanager-iothub) |[GitHub](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/iothub/azure-resourcemanager-iothub)| [Reference](/java/api/overview/azure/resourcemanager-iothub-readme) |
+| Node.js|[npm](https://www.npmjs.com/package/@azure/arm-iothub)|[GitHub](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/iothub/arm-iothub)|[Reference](/javascript/api/overview/azure/arm-iothub-readme) |
+| Python|[pip](https://pypi.org/project/azure-mgmt-iothub/) |[GitHub](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/iothub/azure-mgmt-iothub)|[Reference](/python/api/azure-mgmt-iothub) |
## SDK and hardware compatibility
-For more information about SDK compatibility with specific hardware devices, see the [Azure Certified for IoT device catalog](https://devicecatalog.azure.com/) or individual repository.
+For more information about device SDK compatibility with specific hardware devices, see the [Azure Certified for IoT device catalog](https://devicecatalog.azure.com/) or individual repository.
[!INCLUDE [iot-hub-basic](../../includes/iot-hub-basic-partial.md)]
+## SDKs for related Azure IoT services
+
+Azure IoT SDKs are also available for the following services:
+
+* [Microsoft SDKs for IoT Hub Device Provisioning Service](../iot-dps/libraries-sdks.md): To help you provision devices through the Device Provisioning Service and build backend services for it.
+
+* [Device Update for IoT Hub SDKs](../iot-hub-device-update/understand-device-update.md): To help you deploy over-the-air (OTA) updates for IoT devices.
+
+* [IoT Plug and Play SDKs](../iot-develop/libraries-sdks.md): To help you build IoT Plug and Play solutions.
+
## Next steps
* Learn how to [manage connectivity and reliable messaging](iot-hub-reliability-features-in-sdks.md) using the IoT Hub SDKs.
iot-hub Iot Hub Portal Csharp Module Twin Getstarted https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-portal-csharp-module-twin-getstarted.md
In this article, you will learn:
* Visual Studio.
+* An active Azure account. If you don't have an account, you can create a [free account](https://azure.microsoft.com/pricing/free-trial/) in just a couple of minutes.
+ * An IoT Hub. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md).
-* An active Azure account. If you don't have an account, you can create a [free account](https://azure.microsoft.com/pricing/free-trial/) in just a couple of minutes.
+* A registered device. Register one in the [Azure portal](iot-hub-create-through-portal.md#register-a-new-device-in-the-iot-hub).
## Create a module identity in the portal Within one device identity, you can create up to 20 module identities. To add an identity, follow these steps:
-1. For the device you created in the previous section, choose **Add Module Identity** to create your first module identity.
+1. From your existing device in the Azure portal, choose **Add Module Identity** to create your first module identity.
1. Enter the name *myFirstModule*. Save your module identity.
- ![Add module identity](./media/iot-hub-portal-csharp-module-twin-getstarted/add-module-identity.png)
+ :::image type="content" source="./media/iot-hub-portal-csharp-module-twin-getstarted/add-module-identity.png" alt-text="Screenshot that shows the 'Module Identity Details' page." lightbox="./media/iot-hub-portal-csharp-module-twin-getstarted/add-module-identity.png":::
Your new module identity appears at the bottom of the screen. Select it to see module identity details.
- ![See module identity details](./media/iot-hub-portal-csharp-module-twin-getstarted/module-identity-details.png)
+ :::image type="content" source="./media/iot-hub-portal-csharp-module-twin-getstarted/module-identity-details.png" alt-text="Screenshot that shows the Module Identity Details menu.":::
-Save the **Connect string - primary key**. You use it in the next section to you set up your module on the device.
+Save the **Connection string (primary key)**. You use it in the next section to set up your module on the device in a console app.
## Update the module twin using .NET device SDK
To create an app that updates the module twin reported properties, follow these
1. In Visual Studio, select **Create a new project**, then choose **Console App (.NET Framework)**, and select **Next**.
-1. In **Configure your new project**, enter *UpdateModuleTwinReportedProperties* as the **Project name**. Select **Create** to continue.
+1. In **Configure your new project**, enter *UpdateModuleTwinReportedProperties* as the **Project name**. Select **Next** to continue.
- ![Configure your a visual studio project](./media/iot-hub-portal-csharp-module-twin-getstarted/configure-twins-project.png)
+ :::image type="content" source="./media/iot-hub-portal-csharp-module-twin-getstarted/configure-twins-project.png" alt-text="Screenshot showing the 'Configure your new project' popup.":::
+
+1. Keep the default .NET framework, then select **Create**.
### Install the latest Azure IoT Hub .NET device SDK
Module identity and module twin is in public preview. It's only available in the
1. Select **Browse**, and then select **Include prerelease**. Search for *Microsoft.Azure.Devices.Client*. Select the latest version and install.
- ![Install Azure IoT Hub .NET service SDK preview](./media/iot-hub-csharp-csharp-module-twin-getstarted/install-sdk.png)
-
- Now you have access to all the module features.
-
-### Get your module connection string
-
-You need the module connection string for your console app. Follow these steps:
-
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-
-1. Navigate to your IoT hub and select **IoT Devices**. Open **myFirstDevice** and you see that **myFirstModule** was successfully created.
-
-1. Select **myFirstModule** under **Module Identities**. In **Module Identity Details**, copy the **Connection string (primary key)**.
+ :::image type="content" source="./media/iot-hub-csharp-csharp-module-twin-getstarted/install-client-sdk.png" alt-text="Screenshot showing how to install the Microsoft.Azure.Devices.Client.":::
- ![Azure portal module detail](./media/iot-hub-portal-csharp-module-twin-getstarted/module-identity-details.png)
+ Now you have access to all the module features.
### Create UpdateModuleTwinReportedProperties console app
To create your app, follow these steps:
using Newtonsoft.Json; ```
-2. Add the following fields to the **Program** class. Replace the placeholder value with the module connection string.
+2. Add the following fields to the **Program** class. Replace the placeholder value with the module connection string you saved previously.
```csharp private const string ModuleConnectionString = "<Your module connection string>";
iot-hub Iot Hub Python Python Module Twin Getstarted https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-python-python-module-twin-getstarted.md
In this section, you create a Python service app that updates the module twin de
In this section, you create a Python app to get the module twin desired properties update on your device.
-1. Get your module connection string. In [Azure portal](https://portal.azure.com/), navigate to your IoT Hub and select **IoT devices** in the left pane. Select **myFirstDevice** from the list of devices and open it. Under **Module identities**, select **myFirstModule**. Copy the module connection string. You need it in a following step.
+1. Get your module connection string. In [Azure portal](https://portal.azure.com/), navigate to your IoT Hub and select **Devices** in the left pane. Select **myFirstDevice** from the list of devices and open it. Under **Module identities**, select **myFirstModule**. Select the copy icon for **Connection string (primary key)**. You need this connection string in a following step.
- ![Azure portal module detail](./media/iot-hub-python-python-module-twin-getstarted/module-detail.png)
+ :::image type="content" source="./media/iot-hub-python-python-module-twin-getstarted/module-detail.png" alt-text="Screenshot of the Module Identity Details page in the Azure portal.":::
1. At your command prompt, run the following command to install the **azure-iot-device** package:
key-vault Security Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/security-features.md
Azure Private Link Service enables you to access Azure Key Vault and Azure hoste
> [!NOTE] > For Azure Key Vault, ensure that the application accessing the Keyvault service should be running on a platform that supports TLS 1.2 or recent version. If the application is dependent on .Net framework, it should be updated as well. You can also make the registry changes mentioned in [this article](/troubleshoot/azure/active-directory/enable-support-tls-environment) to explicitly enable the use of TLS 1.2 at OS level and for .Net framework. To meet with compliance obligations and to improve security posture, Key Vault connections via TLS 1.0 & 1.1 are considered a security risk, and any connections using old TLS protocols will be disallowed in 2023.
+> [!WARNING]
+> TLS 1.0 and 1.1 are deprecated by Azure Active Directory, and tokens to access key vaults may no longer be issued for users or services requesting them with deprecated protocols. This may lead to loss of access to key vaults. For more information on Azure AD TLS support, see [Azure AD TLS 1.1 and 1.0 deprecation](/troubleshoot/azure/active-directory/enable-support-tls-environment/#why-this-change-is-being-made).
+ ## Key Vault authentication options When you create a key vault in an Azure subscription, it's automatically associated with the Azure AD tenant of the subscription. All callers in both planes must register in this tenant and authenticate to access the key vault. In both cases, applications can access Key Vault in three ways:
logic-apps Logic Apps Using Sap Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-using-sap-connector.md
Previously updated : 03/18/2022 Last updated : 08/02/2022 tags: connectors
The following example is an RFC call with a table parameter. This example call a
<TCPICDAT> <ABAPTEXT xmlns="http://Microsoft.LobServices.Sap/2007/03/Rfc/"> <LINE>exampleFieldInput1</LINE>
+ </ABAPTEXT>
<ABAPTEXT xmlns="http://Microsoft.LobServices.Sap/2007/03/Rfc/"> <LINE>exampleFieldInput2</LINE>
+ </ABAPTEXT>
<ABAPTEXT xmlns="http://Microsoft.LobServices.Sap/2007/03/Rfc/"> <LINE>exampleFieldInput3</LINE> </ABAPTEXT>
machine-learning How To Deploy Managed Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-managed-online-endpoints.md
To view metrics and set alerts based on your SLA, complete the steps that are de
### (Optional) Integrate with Log Analytics
-The `get-logs` command provides only the last few hundred lines of logs from an automatically selected instance. However, Log Analytics provides a way to durably store and analyze logs.
+The `get-logs` command provides only the last few hundred lines of logs from an automatically selected instance. However, Log Analytics provides a way to durably store and analyze logs. For more information on using logging, see [Monitor online endpoints](how-to-monitor-online-endpoints.md#logs).
-First, create a Log Analytics workspace by completing the steps in [Create a Log Analytics workspace in the Azure portal](../azure-monitor/logs/quick-create-workspace.md#create-a-workspace).
-
-Then, in the Azure portal:
-
-1. Go to the resource group.
-1. Select your endpoint.
-1. Select the **ARM resource page**.
-1. Select **Diagnostic settings**.
-1. Select **Add settings**.
-1. Select to enable sending console logs to the Log Analytics workspace.
-
-The logs might take up to an hour to connect. After an hour, send some scoring requests, and then check the logs by using the following steps:
-
-1. Open the Log Analytics workspace.
-1. In the left menu, select **Logs**.
-1. Close the **Queries** dialog that automatically opens.
-1. Double-click **AmlOnlineEndpointConsoleLog**.
-1. Select **Run**.
-
- [!INCLUDE [Email Notification Include](../../includes/machine-learning-email-notifications.md)]
## Delete the endpoint and the deployment
machine-learning How To Manage Quotas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-quotas.md
Azure Machine Learning managed online endpoints have limits described in the fol
<sup>3</sup> If you request a limit increase, be sure to calculate related limit increases you might need. For example, if you request a limit increase for requests per second, you might also want to compute the required connections and bandwidth limits and include these limit increases in the same request.
-To determine the current usage for an endpoint, [view the metrics](how-to-monitor-online-endpoints.md#view-metrics).
+To determine the current usage for an endpoint, [view the metrics](how-to-monitor-online-endpoints.md#metrics).
To request an exception from the Azure Machine Learning product team, use the steps in the [Request quota increases](#request-quota-increases) section and provide the following information:
machine-learning How To Monitor Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-monitor-online-endpoints.md
Previously updated : 06/01/2022 Last updated : 06/27/2022
In this article you learn how to:
- Deploy an Azure Machine Learning online endpoint. - You must have at least [Reader access](../role-based-access-control/role-assignments-portal.md) on the endpoint.
-## View metrics
+## Metrics
Use the following steps to view metrics for an online endpoint or deployment: 1. Go to the [Azure portal](https://portal.azure.com).
Use the following steps to view metrics for an online endpoint or deployment:
1. In the left-hand column, select **Metrics**.
-## Available metrics
+### Available metrics
Depending on the resource that you select, the metrics that you see will be different. Metrics are scoped differently for online endpoints and online deployments.
-### Metrics at endpoint scope
+#### Metrics at endpoint scope
- Request Latency - Request Latency P50 (Request latency at the 50th percentile)
Split on the following dimensions:
- Status Code - Status Code Class
-#### Bandwidth throttling
+**Bandwidth throttling**
Bandwidth will be throttled if the limits are exceeded for _managed_ online endpoints (see managed online endpoints section in [Manage and increase quotas for resources with Azure Machine Learning](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints)). To determine if requests are throttled: - Monitor the "Network bytes" metric - The response trailers will have the fields: `ms-azureml-bandwidth-request-delay-ms` and `ms-azureml-bandwidth-response-delay-ms`. The values of the fields are the delays, in milliseconds, of the bandwidth throttling.
-### Metrics at deployment scope
+#### Metrics at deployment scope
- CPU Utilization Percentage - Deployment Capacity (the number of instances of the requested instance type)
Split on the following dimension:
- InstanceId
-## Create a dashboard
+### Create a dashboard
You can create custom dashboards to visualize data from multiple sources in the Azure portal, including the metrics for your online endpoint. For more information, see [Create custom KPI dashboards using Application Insights](../azure-monitor/app/tutorial-app-dashboards.md#add-custom-metric-chart).
-## Create an alert
+### Create an alert
You can also create custom alerts to notify you of important status updates to your online endpoint:
You can also create custom alerts to notify you of important status updates to y
1. Select **Add action groups** > **Create action groups** to specify what should happen when your alert is triggered. 1. Choose **Create alert rule** to finish creating your alert.
+## Logs
+
+There are three logs that can be enabled for online endpoints:
+
+* **AMLOnlineEndpointTrafficLog**: Enable traffic logs if you want to check the details of requests made to the endpoint. Here are some cases:
+
+ * If the response isn't 200, check the value of the column "ResponseCodeReason" to see what happened. Also check the reason in the "HTTPS status codes" section of the [Troubleshoot online endpoints](how-to-troubleshoot-online-endpoints.md#http-status-codes) article.
+
+ * You can check the response code and response reason of your model from the columns "ModelStatusCode" and "ModelStatusReason".
+
+ * To check the duration of the request, such as the total duration, the request/response duration, and the delay caused by network throttling, review the latency breakdown in the logs.
+
+ * To check how many requests, or how many failed requests, occurred recently, also enable these logs.
+
+* **AMLOnlineEndpointConsoleLog**: Contains logs that the containers output to the console. Below are some cases:
+
+ * If the container fails to start, the console log may be useful for debugging.
+
+ * Monitor container behavior and make sure that all requests are correctly handled.
+
+ * Write request IDs in the console log. Joining the request ID, the AMLOnlineEndpointConsoleLog, and AMLOnlineEndpointTrafficLog in the Log Analytics workspace, you can trace a request from the network entry point of an online endpoint to the container.
+
+ * You may also use this log for performance analysis in determining the time required by the model to process each request.
+
+* **AMLOnlineEndpointEventLog**: Contains event information regarding the container's life cycle. Currently, we provide information on the following types of events:
+
+ | Name | Message |
+ | -- | -- |
+ | BackOff | Back-off restarting failed container
+ | Pulled | Container image "\<IMAGE\_NAME\>" already present on machine
+ | Killing | Container inference-server failed liveness probe, will be restarted
+ | Created | Created container image-fetcher
+ | Created | Created container inference-server
+ | Created | Created container model-mount
+ | Unhealthy | Liveness probe failed: \<FAILURE\_CONTENT\>
+ | Unhealthy | Readiness probe failed: \<FAILURE\_CONTENT\>
+ | Started | Started container image-fetcher
+ | Started | Started container inference-server
+ | Started | Started container model-mount
+ | Killing | Stopping container inference-server
+ | Killing | Stopping container model-mount
+
+### How to enable/disable logs
+
+> [!IMPORTANT]
+> Logging uses Azure Log Analytics. If you do not currently have a Log Analytics workspace, you can create one using the steps in [Create a Log Analytics workspace in the Azure portal](../azure-monitor/logs/quick-create-workspace.md#create-a-workspace).
+
+1. In the [Azure portal](https://portal.azure.com), go to the resource group that contains your endpoint and then select the endpoint.
+1. From the **Monitoring** section on the left of the page, select **Diagnostic settings** and then **Add settings**.
+1. Select the log categories to enable, select **Send to Log Analytics workspace**, and then select the Log Analytics workspace to use. Finally, enter a **Diagnostic setting name** and select **Save**.
+
+ :::image type="content" source="./media/how-to-monitor-online-endpoints/diagnostic-settings.png" alt-text="Screenshot of the diagnostic settings dialog.":::
+
+ > [!IMPORTANT]
+ > It may take up to an hour for the connection to the Log Analytics workspace to be enabled. Wait an hour before continuing with the next steps.
+
+1. Submit scoring requests to the endpoint. This activity should create entries in the logs.
+1. From either the online endpoint properties or the Log Analytics workspace, select **Logs** from the left of the screen.
+1. Close the **Queries** dialog that automatically opens, and then double-click the **AmlOnlineEndpointConsoleLog**. If you don't see it, use the **Search** field.
+
+ :::image type="content" source="./media/how-to-monitor-online-endpoints/online-endpoints-log-queries.png" alt-text="Screenshot showing the log queries.":::
+
+1. Select **Run**.
+
+ :::image type="content" source="./media/how-to-monitor-online-endpoints/query-results.png" alt-text="Screenshots of the results after running a query.":::
+
+### Example queries
+
+You can find example queries on the __Queries__ tab while viewing logs. Search for __Online endpoint__ to find example queries.
++
+### Log column details
+
+The following tables provide details on the data stored in each log:
+
+**AMLOnlineEndpointTrafficLog**
+
+| Field name | Description |
+| - | - |
+| Method | The requested method from client.
+| Path | The requested path from client.
+| SubscriptionId | The machine learning subscription ID of the online endpoint.
+| WorkspaceId | The machine learning workspace ID of the online endpoint.
+| EndpointName | The name of the online endpoint.
+| DeploymentName | The name of the online deployment.
+| Protocol | The protocol of the request.
+| ResponseCode | The final response code returned to the client.
+| ResponseCodeReason | The final response code reason returned to the client.
+| ModelStatusCode | The response status code from model.
+| ModelStatusReason | The response status reason from model.
+| RequestPayloadSize | The total bytes received from the client.
+| ResponsePayloadSize | The total bytes sent back to the client.
+| UserAgent | The user-agent header of the request.
+| XRequestId | The request ID generated by Azure Machine Learning for internal tracing.
+| XMSClientRequestId | The tracking ID generated by the client.
+| TotalDurationMs | Duration in milliseconds from the request start time to the last response byte sent back to the client. If the client disconnected, it measures from the start time to client disconnect time.
+| RequestDurationMs | Duration in milliseconds from the request start time to the last byte of the request received from the client.
+| ResponseDurationMs | Duration in milliseconds from the request start time to the first response byte read from the model.
+| RequestThrottlingDelayMs | Delay in milliseconds in request data transfer due to network throttling.
+| ResponseThrottlingDelayMs | Delay in milliseconds in response data transfer due to network throttling.
+
+**AMLOnlineEndpointConsoleLog**
+
+| Field Name | Description |
+| -- | -- |
+| TimeGenerated | The timestamp (UTC) of when the log was generated.
+| OperationName | The operation associated with log record.
+| InstanceId | The ID of the instance that generated this log record.
+| DeploymentName | The name of the deployment associated with the log record.
+| ContainerName | The name of the container where the log was generated.
+| Message | The content of the log.
+
+**AMLOnlineEndpointEventLog**
++
+| Field Name | Description |
+| -- | -- |
+| TimeGenerated | The timestamp (UTC) of when the log was generated.
+| OperationName | The operation associated with log record.
+| InstanceId | The ID of the instance that generated this log record.
+| DeploymentName | The name of the deployment associated with the log record.
+| Name | The name of the event.
+| Message | The content of the event.
+ ## Next steps
machine-learning How To Safely Rollout Managed Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-safely-rollout-managed-endpoints.md
If you want to use a REST client to invoke the deployment directly without going
Once you've tested your `green` deployment, you can copy (or 'mirror') a percentage of the live traffic to it. Mirroring traffic doesn't change results returned to clients. Requests still flow 100% to the blue deployment. The mirrored percentage of the traffic is copied and submitted to the `green` deployment so you can gather metrics and logging without impacting your clients. Mirroring is useful when you want to validate a new deployment without impacting clients. For example, to check if latency is within acceptable bounds and that there are no HTTP errors. > [!WARNING]
-> Mirroring traffic uses your [endpoint bandwidth quota](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints) (default 5 MBPS). Your endpoint bandwidth will be throttled if you exceed the allocated quota. For information on monitoring bandwidth throttling, see [Monitor managed online endpoints](how-to-monitor-online-endpoints.md#bandwidth-throttling).
+> Mirroring traffic uses your [endpoint bandwidth quota](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints) (default 5 MBPS). Your endpoint bandwidth will be throttled if you exceed the allocated quota. For information on monitoring bandwidth throttling, see [Monitor managed online endpoints](how-to-monitor-online-endpoints.md#metrics-at-endpoint-scope).
The following command mirrors 10% of the traffic to the `green` deployment:
marketplace What Is New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/what-is-new.md
Previously updated : 04/18/2022 Last updated : 08/01/2022 # What's new in the Microsoft commercial marketplace
Learn about important updates in the commercial marketplace program of Partner C
| Category | Description | Date | | | | |
+| Offers | Software as a service (SaaS) plans now support 2-year and 3-year billing terms with upfront, monthly, or annual payment options. To learn more, see [Plan a SaaS offer for the commercial marketplace](plan-saas-offer.md#saas-billing-terms-and-payment-options). | 2022-08-01 |
| Offers | ISVs can now offer custom prices, terms, conditions, and pricing for a specific customer through private offers. See [ISV to customer private offers](isv-customer.md) and the [FAQ](isv-customer-faq.yml). | 2022-04-06 | | Offers | Publishers can now [change transactable offer and plan pricing](price-changes.md) without having to discontinue an offer and recreate it with new pricing (also see [this FAQ](price-changes-faq.yml)). | 2022-03-30 | | Offers | An ISV can now specify time-bound margins for CSP partners to incentivize them to sell it to their customers. When their partner makes a sale to a customer, Microsoft will pay the ISV the wholesale price. See [ISV to CSP Partner private offers](./isv-csp-reseller.md) and [the FAQs](./isv-csp-faq.yml). | 2022-02-15 |
mysql Concepts Service Tiers Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-service-tiers-storage.md
While the service attempts to make the server read-only, all new write transacti
To get the server out of read-only mode, you should increase the provisioned storage on the server. This can be done using the Azure portal or Azure CLI. Once increased, the server will be ready to accept write transactions again.
-We recommend that you set up an alert to notify you when your server storage is approaching the threshold so you can avoid getting into the read-only state. Refer to the [monitoring article](./concepts-monitoring.md) to learn about metrics available.
- We recommend that you <!--turn on storage auto-grow or to--> set up an alert to notify you when your server storage is approaching the threshold so you can avoid getting into the read-only state. For more information, see the documentation on alert documentation [how to set up an alert](how-to-alert-on-metric.md). ### Storage auto-grow
postgresql How To Autovacuum Tuning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-autovacuum-tuning.md
+
+ Title: Autovacuum Tuning
+description: Troubleshooting guide for autovacuum in Azure Database for PostgreSQL - Flexible Server
+++++ Last updated : 08/03/2022++
+# Autovacuum Tuning in Azure Database for PostgreSQL - Flexible Server
+
+This article provides an overview of the autovacuum feature for [Azure Database for PostgreSQL - Flexible Server](overview.md).
+
+## What is autovacuum
+
+Internal data consistency in PostgreSQL is based on the Multi-Version Concurrency Control (MVCC) mechanism, which allows the database engine to maintain multiple versions of a row and provides greater concurrency with minimal blocking between the different processes.
+
+PostgreSQL databases need appropriate maintenance. For example, when a row is deleted, it is not removed physically. Instead, the row is marked as "dead". Similarly for updates, the row is marked as "dead" and a new version of the row is inserted. These operations leave behind dead records, called dead tuples, even after all the transactions that might see those versions finish. Unless cleaned up, dead tuples remain, consuming disk space and bloating tables and indexes, which results in slow query performance.
+
+PostgreSQL uses a process called autovacuum to automatically clean up dead tuples.
++
+## Autovacuum internals
+
+Autovacuum reads pages looking for dead tuples, and if none are found, autovacuum discards the page. When autovacuum finds dead tuples, it removes them. The cost of each operation is based on:
+
+- `vacuum_cost_page_hit`: Cost of reading a page that is already in shared buffers and does not need a disk read. The default value is set to 1.
+- `vacuum_cost_page_miss`: Cost of fetching a page that is not in shared buffers. The default value is set to 10.
+- `vacuum_cost_page_dirty`: Cost of writing to a page when dead tuples are found in it. The default value is set to 20.
+
+The amount of work autovacuum does depends on two parameters:
+
+- `autovacuum_vacuum_cost_limit`: the amount of work autovacuum does in one go before pausing.
+- `autovacuum_vacuum_cost_delay`: the number of milliseconds autovacuum sleeps once the cost limit is reached.
++
+In Postgres versions 9.6, 10 and 11 the default for `autovacuum_vacuum_cost_limit` is 200 and `autovacuum_vacuum_cost_delay` is 20 milliseconds.
+In Postgres versions 12 and above the default `autovacuum_vacuum_cost_limit` is 200 and `autovacuum_vacuum_cost_delay` is 2 milliseconds.
+
+Autovacuum wakes up 50 times (50*20 ms=1000 ms) every second. Every time it wakes up, autovacuum reads 200 pages.
+
+That means in one-second autovacuum can do:
+
+- ~80 MB/Sec [ (200 pages/`vacuum_cost_page_hit`) * 50 * 8 KB per page] if all pages with dead tuples are found in shared buffers.
+- ~8 MB/Sec [ (200 pages/`vacuum_cost_page_miss`) * 50 * 8 KB per page] if all pages with dead tuples are read from disk.
+- ~4 MB/Sec [ (200 pages/`vacuum_cost_page_dirty`) * 50 * 8 KB per page] autovacuum can write up to 4 MB/sec.
+++
+## Monitoring autovacuum
+
+Use the following queries to monitor autovacuum:
+
+```postgresql
+SELECT schemaname,
+       relname,
+       n_dead_tup,
+       n_live_tup,
+       round(n_dead_tup::float / n_live_tup::float * 100) AS dead_pct,
+       autovacuum_count,
+       last_vacuum,
+       last_autovacuum,
+       last_autoanalyze,
+       last_analyze
+FROM pg_stat_all_tables
+WHERE n_live_tup > 0;
+```
+
+The following columns help determine if autovacuum is catching up to table activity:
++
+- **dead_pct**: Percentage of dead tuples compared to live tuples.
+- **last_autovacuum**: The date of the last time the table was autovacuumed.
+- **last_autoanalyze**: The date of the last time the table was automatically analyzed.
++
+## When does PostgreSQL trigger autovacuum
+
+An autovacuum action (either *ANALYZE* or *VACUUM*) triggers when the number of dead tuples exceeds a particular number that is dependent on two factors: the total count of rows in a table, plus a fixed threshold. *ANALYZE*, by default, triggers when 10% of the table plus 50 rows changes, while *VACUUM* triggers when 20% of the table plus 50 rows changes. Since the *VACUUM* threshold is twice as high as the *ANALYZE* threshold, *ANALYZE* gets triggered much earlier than *VACUUM*.
+
+The exact equations for each action are:
+
+- **Autoanalyze** = autovacuum_analyze_scale_factor * tuples + autovacuum_analyze_threshold
+- **Autovacuum** = autovacuum_vacuum_scale_factor * tuples + autovacuum_vacuum_threshold
++
+For example, analyze triggers after 60 rows change on a table that contains 100 rows, and vacuum triggers when 70 rows change on the table, using the following equations:
+
+`Autoanalyze = 0.1 * 100 + 50 = 60`
+`Autovacuum = 0.2 * 100 + 50 = 70`
++
+Use the following query to list the tables in a database and identify the tables that qualify for the autovacuum process:
++
+```postgresql
+ SELECT *
+ ,n_dead_tup > av_threshold AS av_needed
+ ,CASE
+ WHEN reltuples > 0
+ THEN round(100.0 * n_dead_tup / (reltuples))
+ ELSE 0
+ END AS pct_dead
+ FROM (
+ SELECT N.nspname
+ ,C.relname
+ ,pg_stat_get_tuples_inserted(C.oid) AS n_tup_ins
+ ,pg_stat_get_tuples_updated(C.oid) AS n_tup_upd
+ ,pg_stat_get_tuples_deleted(C.oid) AS n_tup_del
+ ,pg_stat_get_live_tuples(C.oid) AS n_live_tup
+ ,pg_stat_get_dead_tuples(C.oid) AS n_dead_tup
+ ,C.reltuples AS reltuples
+ ,round(current_setting('autovacuum_vacuum_threshold')::INTEGER + current_setting('autovacuum_vacuum_scale_factor')::NUMERIC * C.reltuples) AS av_threshold
+ ,date_trunc('minute', greatest(pg_stat_get_last_vacuum_time(C.oid), pg_stat_get_last_autovacuum_time(C.oid))) AS last_vacuum
+ ,date_trunc('minute', greatest(pg_stat_get_last_analyze_time(C.oid), pg_stat_get_last_autoanalyze_time(C.oid))) AS last_analyze
+ FROM pg_class C
+ LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)
+ WHERE C.relkind IN (
+ 'r'
+ ,'t'
+ )
+ AND N.nspname NOT IN (
+ 'pg_catalog'
+ ,'information_schema'
+ )
+ AND N.nspname !~ '^pg_toast'
+ ) AS av
+ ORDER BY av_needed DESC ,n_dead_tup DESC;
+```
+
+> [!NOTE]
+> The query does not take into consideration that autovacuum can be configured on a per-table basis using the `ALTER TABLE` DDL command.
++
+## Common autovacuum problems
+
+Review the possible common problems with the autovacuum process.
+
+### Not keeping up with busy server
+
+The autovacuum process estimates the cost of every I/O operation, accumulates a total for each operation it performs and pauses once the upper limit of the cost is reached. `autovacuum_vacuum_cost_delay` and `autovacuum_vacuum_cost_limit` are the two server parameters that are used in the process.
++
+By default, `autovacuum_vacuum_cost_limit` is set to `-1`, meaning the autovacuum cost limit is the same value as the `vacuum_cost_limit` parameter, which defaults to 200. `vacuum_cost_limit` is the cost of a manual vacuum.
+
+If `autovacuum_vacuum_cost_limit` is set to a value greater than `-1`, that value is used instead of `vacuum_cost_limit`.
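+
+To verify the values currently in effect on the server, you can inspect `pg_settings` (a read-only check):
+
+```postgresql
+SELECT name, setting, unit
+FROM pg_settings
+WHERE name IN ('autovacuum_vacuum_cost_limit', 'autovacuum_vacuum_cost_delay', 'vacuum_cost_limit');
+```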
+
+If autovacuum isn't keeping up, consider changing the following parameters:
+
+|Parameter |Description |
+|||
+|`autovacuum_vacuum_scale_factor`| Default: `0.2`. Consider a value in the range `0.05 - 0.1` for large, frequently updated tables. The scale factor is workload-specific and should be set depending on the amount of data in the tables. Before changing the value, investigate the workload and individual table volumes. |
+|`autovacuum_vacuum_cost_limit`|Default: `200`. Cost limit may be increased. CPU and I/O utilization on the database should be monitored before and after making changes. |
+|`autovacuum_vacuum_cost_delay` | **Postgres Versions 9.6, 10, 11** - Default: `20 ms`. The parameter may be decreased to `2-10 ms`. <br /> **Postgres Versions 12 and above** - Default: `2 ms`. |
+
+> [!NOTE]
+> The `autovacuum_vacuum_cost_limit` value is distributed proportionally among the running autovacuum workers, so that if there is more than one, the sum of the limits for each worker does not exceed the value of the `autovacuum_vacuum_cost_limit` parameter.
+
+### Autovacuum constantly running
+
+Continuously running autovacuum may affect CPU and IO utilization on the server. The following might be possible reasons:
+
+#### `maintenance_work_mem`
+
+The autovacuum daemon uses `autovacuum_work_mem`, which is set to `-1` by default, meaning `autovacuum_work_mem` has the same value as the `maintenance_work_mem` parameter. This document assumes `autovacuum_work_mem` is set to `-1` and that `maintenance_work_mem` is used by the autovacuum daemon.
+
+If `maintenance_work_mem` is low, it may be increased to up to 2 GB on Flexible Server. A general rule of thumb is to allocate 50 MB to `maintenance_work_mem` for every 1 GB of RAM. 
++
+#### Large number of databases
+
+Autovacuum tries to start a worker on each database every `autovacuum_naptime` seconds.
+
+For example, if a server has 60 databases and `autovacuum_naptime` is set to 60 seconds, then the autovacuum worker starts every second [autovacuum_naptime/Number of DBs].
+
+It's a good idea to increase `autovacuum_naptime` if there are many databases in a cluster. At the same time, the autovacuum process can be made more aggressive by increasing `autovacuum_vacuum_cost_limit`, decreasing `autovacuum_vacuum_cost_delay`, and increasing `autovacuum_max_workers` from the default of 3 to 4 or 5.
++
+### Out of memory errors
+
+Overly aggressive `maintenance_work_mem` values could periodically cause out-of-memory errors in the system. It is important to understand available RAM on the server before any change to the `maintenance_work_mem` parameter is made.
++
+### Autovacuum is too disruptive
+
+If autovacuum is consuming a lot of resources, the following can be done:
+
+#### Autovacuum parameters
+
+Evaluate the parameters `autovacuum_vacuum_cost_delay`, `autovacuum_vacuum_cost_limit`, `autovacuum_max_workers`. Improperly setting autovacuum parameters may lead to scenarios where autovacuum becomes too disruptive.
+
+If autovacuum is too disruptive, consider the following:
+
+- Increase `autovacuum_vacuum_cost_delay` and reduce `autovacuum_vacuum_cost_limit` if it's set higher than the default of 200.
+- Reduce the number of `autovacuum_max_workers` if it's set higher than the default of 3.
+
+#### Too many autovacuum workers
+
+Increasing the number of autovacuum workers will not necessarily increase the speed of vacuum. Having a high number of autovacuum workers is not recommended.
+
+Increasing the number of autovacuum workers will result in more memory consumption, and depending on the value of `maintenance_work_mem` , could cause performance degradation.
+
+Each autovacuum worker process only gets (1/autovacuum_max_workers) of the total `autovacuum_cost_limit`, so having a high number of workers causes each one to go slower.
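+
+For example, with the default `autovacuum_vacuum_cost_limit` of 200 and the default `autovacuum_max_workers` of 3, each worker's share of the budget is roughly:
+
+`Per-worker cost limit = 200 / 3 ≈ 66`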
+
+If the number of workers is increased, `autovacuum_vacuum_cost_limit` should also be increased and/or `autovacuum_vacuum_cost_delay` should be decreased to make the vacuum process faster.
+
+However, if the table-level `autovacuum_vacuum_cost_delay` or `autovacuum_vacuum_cost_limit` parameters have been changed, the workers running on those tables are exempted from the balancing algorithm [autovacuum_vacuum_cost_limit/autovacuum_max_workers].
+
+### Autovacuum transaction ID (TXID) wraparound protection
+
+When a database runs into transaction ID wraparound protection, an error message like the following can be observed:
+
+```
+Database is not accepting commands to avoid wraparound data loss in database 'xx'
+Stop the postmaster and vacuum that database in single-user mode.
+```
+
+> [!NOTE]
+> This error message is a long-standing oversight. Usually, you do not need to switch to single-user mode. Instead, you can run the required VACUUM commands and perform tuning for VACUUM to run fast. While you cannot run any data manipulation language (DML), you can still run VACUUM.
++
+The wraparound problem occurs when the database is either not vacuumed or there are too many dead tuples that could not be removed by autovacuum. The reasons for this might be:
+
+#### Heavy workload
+
+The workload could cause too many dead tuples in a brief period, making it difficult for autovacuum to catch up. The dead tuples in the system add up over time, degrading query performance and eventually leading to a wraparound situation. One reason for this situation might be that the autovacuum parameters aren't set adequately and autovacuum isn't keeping up with a busy server.
++
+#### Long-running transactions
+
+Any long-running transactions in the system won't allow dead tuples to be removed while autovacuum is running. They're a blocker to the vacuum process. Removing the long-running transactions frees up dead tuples for deletion when autovacuum runs.
+
+Long-running transactions can be detected using the following query:
+
+```postgresql
+ SELECT pid, age(backend_xid) AS age_in_xids,
+ now () - xact_start AS xact_age,
+ now () - query_start AS query_age,
+ state,
+ query
+ FROM pg_stat_activity
+ WHERE state != 'idle'
+ ORDER BY 2 DESC
+ LIMIT 10;
+```
+
+#### Prepared transactions
+
+Prepared transactions that aren't committed or rolled back prevent dead tuples from being removed.
+The following query helps find pending prepared transactions:
+
+```postgresql
+ SELECT gid, prepared, owner, database, transaction
+ FROM pg_prepared_xacts
+ ORDER BY age(transaction) DESC;
+```
+
+Use COMMIT PREPARED or ROLLBACK PREPARED to commit or roll back these transactions.
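+
+For example, to roll back one of these transactions using the `gid` value returned by the query above (`'my_gid'` is a placeholder):
+
+```postgresql
+ROLLBACK PREPARED 'my_gid';
+```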
+
+#### Unused replication slots
+
+Unused replication slots prevent autovacuum from removing dead tuples. The following query helps identify unused replication slots:
+
+```postgresql
+ SELECT slot_name, slot_type, database, xmin
+ FROM pg_replication_slots
+ ORDER BY age(xmin) DESC;
+```
+
+Use `pg_drop_replication_slot()` to delete unused replication slots.
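+
+For example, using a `slot_name` value returned by the previous query:
+
+```postgresql
+SELECT pg_drop_replication_slot('slot_name');
+```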
+
+When the database runs into transaction ID wraparound protection, check for any blockers as mentioned previously, and remove those manually for autovacuum to continue and complete. You can also increase the speed of autovacuum by setting `autovacuum_cost_delay` to 0 and increasing the `autovacuum_cost_limit` to a value much greater than 200. However, changes to these parameters will not be applied to existing autovacuum workers. Either restart the database or kill existing workers manually to apply parameter changes.
++
+### Table-specific requirements
+
+Autovacuum parameters may be set for individual tables, which is especially important for very small and very large tables. For example, for a small table that contains only 100 rows, autovacuum triggers a VACUUM operation when 70 rows change (as calculated previously). If this table is frequently updated, you might see hundreds of autovacuum operations a day, preventing autovacuum from maintaining other tables on which the percentage of changes isn't as large. Conversely, a table containing a billion rows needs roughly 200 million rows to change before autovacuum triggers. Setting autovacuum parameters appropriately prevents such scenarios.
+
+To set autovacuum setting per table, change the server parameters as the following examples:
+
+```postgresql
+ ALTER TABLE <table name> SET (autovacuum_analyze_scale_factor = xx);
+ ALTER TABLE <table name> SET (autovacuum_analyze_threshold = xx);
+ ALTER TABLE <table name> SET (autovacuum_vacuum_scale_factor = xx);
+ ALTER TABLE <table name> SET (autovacuum_vacuum_threshold = xx);
+ ALTER TABLE <table name> SET (autovacuum_vacuum_cost_delay = xx);
+ ALTER TABLE <table name> SET (autovacuum_vacuum_cost_limit = xx);
+```
+
+### Insert-only workloads
+
+In versions of PostgreSQL prior to 13, autovacuum will not run on tables with an insert-only workload, because if there are no updates or deletes, there are no dead tuples and no free space that needs to be reclaimed. However, autoanalyze will run for insert-only workloads since there is new data. The disadvantages of this are:
+
+- The visibility map of the tables isn't updated, so query performance, especially where there are index-only scans, starts to suffer over time.
+- The database can run into transaction ID wraparound protection.
+- Hint bits will not be set.
+
+#### Solutions
+
+##### Postgres versions prior to 13
+
+Using the **pg_cron** extension, a cron job can be set up to schedule a periodic vacuum analyze on the table. The frequency of the cron job depends on the workload.  
+
+For step-by-step guidance using pg_cron, review [Extensions](./concepts-extensions.md).
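+
+As an illustration, assuming the pg_cron extension is already enabled and `events` is a hypothetical table name, a nightly vacuum analyze could be scheduled like this:
+
+```postgresql
+SELECT cron.schedule('0 2 * * *', 'VACUUM ANALYZE events');
+```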
++
+##### Postgres 13 and higher versions
+
+Autovacuum will run on tables with an insert-only workload. Two new server parameters `autovacuum_vacuum_insert_threshold` and  `autovacuum_vacuum_insert_scale_factor` help control when autovacuum can be triggered on insert-only tables. 
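+
+These parameters work like the update/delete thresholds described earlier. With the default values (`autovacuum_vacuum_insert_threshold = 1000`, `autovacuum_vacuum_insert_scale_factor = 0.2`), autovacuum triggers on an insert-only table approximately when:
+
+`Inserted tuples > 1000 + 0.2 * reltuples`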
+
+## Next steps
+
+- Troubleshoot high CPU utilization [High CPU Utilization](./how-to-high-cpu-utilization.md).
+- Troubleshoot high memory utilization [High Memory Utilization](./how-to-high-memory-utilization.md).
+- Configure server parameters [Server Parameters](./howto-configure-server-parameters-using-portal.md).
postgresql How To High Cpu Utilization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-high-cpu-utilization.md
+
+ Title: High CPU Utilization
+description: Troubleshooting guide for high cpu utilization in Azure Database for PostgreSQL - Flexible Server
+++++ Last updated : 08/03/2022++
+# Troubleshoot high CPU utilization in Azure Database for PostgreSQL - Flexible Server
+
+This article shows you how to quickly identify the root cause of high CPU utilization, and possible remedial actions to control CPU utilization when using [Azure Database for PostgreSQL - Flexible Server](overview.md).
+
+In this article, you will learn:
+
+- About tools to identify high CPU utilization such as Azure Metrics, Query Store, and pg_stat_statements.
+- How to identify root causes, such as long running queries and total connections.
+- How to resolve high CPU utilization by using Explain Analyze, Connection Pooling, and Vacuuming tables.
++
+## Tools to identify high CPU utilization
+
+Consider these tools to identify high CPU utilization.
+
+### Azure Metrics
+
+Azure Metrics is a good starting point to check CPU utilization for a given date and period. Metrics give information about the time duration during which CPU utilization is high. Compare the graphs of Write IOPs, Read IOPs, Read Throughput, and Write Throughput with CPU utilization to find times when the workload caused high CPU. For proactive monitoring, you can configure alerts on the metrics. For step-by-step guidance, see [Azure Metrics](./howto-alert-on-metrics.md).
++
+### Query Store
+
+Query Store automatically captures the history of queries and runtime statistics, and it retains them for your review. It slices the data by time so that you can see temporal usage patterns. Data for all users, databases and queries is stored in a database named azure_sys in the Azure Database for PostgreSQL instance. For step-by-step guidance, see [Query Store](./concepts-query-store.md).
+
+### pg_stat_statements
+
+The pg_stat_statements extension helps identify queries that consume time on the server.
+
+#### Mean or average execution time
+
+##### [Postgres v13 & above](#tab/postgres-13)
++
+For Postgres versions 13 and above, use the following statement to view the top five SQL statements by mean or average execution time:
+
+```postgresql
+SELECT userid::regrole, dbid, query, mean_exec_time
+FROM pg_stat_statements
+ORDER BY mean_exec_time
+DESC LIMIT 5;
+```
++
+##### [Postgres v9.6-12](#tab/postgres9-12)
+
+For Postgres versions 9.6, 10, 11, and 12, use the following statement to view the top five SQL statements by mean or average execution time:
++
+```postgresql
+SELECT userid::regrole, dbid, query, mean_time
+FROM pg_stat_statements
+ORDER BY mean_time
+DESC LIMIT 5;
+```
+++
+#### Total execution time
+
+Execute the following statements to view the top five SQL statements by total execution time.
+
+##### [Postgres v13 & above](#tab/postgres-13)
+
+For Postgres versions 13 and above, use the following statement to view the top five SQL statements by total execution time:
+
+```postgresql
+SELECT userid::regrole, dbid, query, total_exec_time
+FROM pg_stat_statements
+ORDER BY total_exec_time
+DESC LIMIT 5;
+```
+
+##### [Postgres v9.6-12](#tab/postgres9-12)
+
+For Postgres versions 9.6, 10, 11, and 12, use the following statement to view the top five SQL statements by total execution time:
+
+```postgresql
+SELECT userid::regrole, dbid, query, total_time
+FROM pg_stat_statements
+ORDER BY total_time
+DESC LIMIT 5;
+```
++++
+## Identify root causes
+
+If CPU consumption levels are high in general, the following could be possible root causes:
++
+### Long-running transactions
+
+Long-running transactions can consume CPU resources, leading to high CPU utilization.
+
+The following query helps identify connections running for the longest time:
+
+```postgresql
+SELECT pid, usename, datname, query, now() - xact_start as duration
+FROM pg_stat_activity
+WHERE pid <> pg_backend_pid() and state IN ('idle in transaction', 'active')
+ORDER BY duration DESC;
+```
+
+### Total number of connections and number of connections by state
+
+A large number of connections to the database might also lead to increased CPU and memory utilization.
++
+The following query gives information about the number of connections by state:
+
+```postgresql
+SELECT state, count(*)
+FROM pg_stat_activity
+WHERE pid <> pg_backend_pid()
+GROUP BY 1 ORDER BY 1;
+```
+
+## Resolve high CPU utilization
+
+To resolve high CPU utilization, use EXPLAIN ANALYZE, a connection pooler such as PgBouncer, and terminate long-running transactions.
+
+### Using Explain Analyze
+
+Once you know the query that's running for a long time, use **EXPLAIN** to further investigate the query and tune it.
+For more information about the **EXPLAIN** command, review [Explain Plan](https://www.postgresql.org/docs/current/sql-explain.html).
++
+### PGBouncer and connection pooling
+
+In situations where there are many idle connections, or many connections that are consuming CPU, consider using a connection pooler like PgBouncer.
+
+For more details about PgBouncer, review:
+
+[Connection Pooler](https://techcommunity.microsoft.com/t5/azure-database-for-postgresql/not-all-postgres-connection-pooling-is-equal/ba-p/825717)
+
+[Best Practices](https://techcommunity.microsoft.com/t5/azure-database-for-postgresql/connection-handling-best-practice-with-postgresql/ba-p/790883)
+
+Azure Database for PostgreSQL - Flexible Server offers PgBouncer as a built-in connection pooling solution. For more information, see [PgBouncer](./concepts-pgbouncer.md)
++
+### Terminating long running transactions
+
+One option is to terminate the long-running transaction.
+
+To terminate a session, first find its PID using the following query:
+
+```postgresql
+SELECT pid, usename, datname, query, now() - xact_start as duration
+FROM pg_stat_activity
+WHERE pid <> pg_backend_pid() and state IN ('idle in transaction', 'active')
+ORDER BY duration DESC;
+```
+
+You can also filter by other properties like `usename` (username), `datname` (database name) etc.
+
+Once you have the session's PID you can terminate using the following query:
+
+```postgresql
+SELECT pg_terminate_backend(pid);
+```
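+
+As a sketch, the two steps can be combined: the following terminates all `idle in transaction` sessions older than 30 minutes (adjust the interval to your own threshold before running it):
+
+```postgresql
+SELECT pg_terminate_backend(pid)
+FROM pg_stat_activity
+WHERE pid <> pg_backend_pid()
+  AND state = 'idle in transaction'
+  AND now() - xact_start > interval '30 minutes';
+```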
+
+### Monitoring vacuum and table stats
+
+Keeping table statistics up to date helps improve query performance. Monitor whether regular autovacuuming is being carried out.
++
+The following query helps to identify the tables that need vacuuming:
+
+```postgresql
+SELECT schemaname, relname, n_dead_tup, n_live_tup, last_vacuum, last_analyze, last_autovacuum, last_autoanalyze
+FROM pg_stat_all_tables
+WHERE n_live_tup > 0;
+```
+
+`last_autovacuum` and `last_autoanalyze` columns give the date and time when the table was last autovacuumed or analyzed. If the tables are not being vacuumed regularly, take steps to tune autovacuum. For more information about autovacuum troubleshooting and tuning, see [Autovacuum Troubleshooting](./how-to-autovacuum-tuning.md).
++
+A short-term solution would be to do a manual vacuum analyze of the tables where slow queries are seen:
+
+```postgresql
+VACUUM ANALYZE <table_name>;
+```
+
+## Next steps
+
+- Troubleshoot and tune autovacuum [Autovacuum Tuning](./how-to-autovacuum-tuning.md).
+- Troubleshoot High Memory Utilization [High Memory Utilization](./how-to-high-memory-utilization.md).
postgresql How To High Memory Utilization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-high-memory-utilization.md
+
+ Title: High Memory Utilization
+description: Troubleshooting guide for high memory utilization in Azure Database for PostgreSQL - Flexible Server
+++++ Last updated : 08/03/2022++
+# High memory utilization in Azure Database for PostgreSQL - Flexible Server
+
+This article introduces common scenarios and root causes that might lead to high memory utilization in [Azure Database for PostgreSQL - Flexible Server](overview.md).
+
+In this article, you will learn:
+
+- Tools to identify high memory utilization.
+- Reasons for high memory & remedial actions.
++
+## Tools to identify high memory utilization
+
+Consider the following tools to identify high memory utilization.
+
+### Azure Metrics
+
+Use Azure Metrics to monitor the percentage of memory in use for a given date and time frame.
+For proactive monitoring, configure alerts on the metrics. For step-by-step guidance, see [Azure Metrics](./howto-alert-on-metrics.md).
++
+### Query Store
+
+Query Store automatically captures the history of queries and their runtime statistics, and it retains them for your review.
+Query Store can correlate wait event information with query run time statistics. Use Query Store to identify queries that have high memory consumption during the period of interest.
+
+For more information on setting up and using Query Store, review [Query Store](./concepts-query-store.md).
++
+## Reasons and remedial actions
+
+Consider the following reasons and remedial actions for resolving high memory utilization.
+
+### Server parameters
+
+The following server parameters impact memory consumption and should be reviewed:
+
+#### Work_Mem
+
+The `work_mem` parameter specifies the amount of memory used by internal sort operations and hash tables before writing to temporary disk files. It isn't allocated per query; rather, each sort or hash operation in a query can use up to `work_mem`.
++
+If the workload has many short-running queries with simple joins and minimal sort operations, keep `work_mem` low. If there are a few active queries with complex joins and sorts, set a higher value for `work_mem`.
+
+It is tough to get the value of `work_mem` right. If you notice high memory utilization or out-of-memory issues, consider decreasing `work_mem`.
+
+A safer setting for `work_mem` is `work_mem = Total RAM / Max_Connections / 16`
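+
+For example, on a hypothetical server with 32 GB of RAM and `max_connections` set to 200, this rule of thumb gives:
+
+`work_mem = 32768 MB / 200 / 16 ≈ 10 MB`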
+
+The default value of `work_mem` = 4 MB. You can set the `work_mem` value on multiple levels including at the server level via the parameters page in the Azure portal.
+
+A good strategy is to monitor memory consumption during peak times.
+
+If disk sorts are happening during this time and there is plenty of unused memory, increase `work_mem` gradually until you reach a good balance between available and used memory.
+Similarly, if memory use looks high, reduce `work_mem`.
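+
+One way to spot disk sorts is to watch temporary file activity per database; `temp_files` and `temp_bytes` grow when queries spill sorts or hashes to disk:
+
+```postgresql
+SELECT datname, temp_files, temp_bytes
+FROM pg_stat_database
+ORDER BY temp_bytes DESC;
+```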
++
+#### Maintenance_Work_Mem
+
+`maintenance_work_mem` is for maintenance tasks like vacuuming, adding indexes or foreign keys. The usage of memory in this scenario is per session.
+
+For example, consider a scenario where there are three autovacuum workers running.
+
+If `maintenance_work_mem` is set to 1 GB, then all sessions combined will use 3 GB of memory.
+
+A high `maintenance_work_mem` value along with multiple running sessions for vacuuming/index creation/adding foreign keys can cause high memory utilization. The maximum allowed value for the `maintenance_work_mem` server parameter in Azure Database for PostgreSQL - Flexible Server is 2 GB.
++
+#### Shared buffers
+
+The `shared_buffers` parameter determines how much memory is dedicated to the server for caching data. The objective of shared buffers is to reduce disk I/O.
+
+A reasonable setting for shared buffers is 25% of RAM. Setting a value greater than 40% of RAM isn't recommended for most common workloads.
+
+### Max connections
+
+All new and idle connections on a Postgres database consume up to 2 MB of memory. One way to monitor connections is by using the following query:
+
+```postgresql
+select count(*) from pg_stat_activity;
+```
+
+When the number of connections to a database is high, memory consumption also increases.
+
+In situations where there are a lot of database connections, consider using a connection pooler like PgBouncer.
+
+For more details on PgBouncer, review:
+
+[Connection Pooler](https://techcommunity.microsoft.com/t5/azure-database-for-postgresql/not-all-postgres-connection-pooling-is-equal/ba-p/825717).
+
+[Best Practices](https://techcommunity.microsoft.com/t5/azure-database-for-postgresql/connection-handling-best-practice-with-postgresql/ba-p/790883).
++
+Azure Database for PostgreSQL - Flexible Server offers PgBouncer as a built-in connection pooling solution. For more information, see [PgBouncer](./concepts-pgbouncer.md).
++
+### Explain Analyze
+
+Once high memory-consuming queries have been identified from Query Store, use **EXPLAIN** and **EXPLAIN ANALYZE** to investigate further and tune them.
++
+For more information on the **EXPLAIN** command, review [Explain Plan](https://www.postgresql.org/docs/current/sql-explain.html).
+
+## Next steps
+
+- Troubleshoot and tune autovacuum [Autovacuum Tuning](./how-to-autovacuum-tuning.md).
+- Troubleshoot High CPU Utilization [High CPU Utilization](./how-to-high-cpu-utilization.md).
+- Configure server parameters [Server Parameters](./howto-configure-server-parameters-using-portal.md).
postgresql Concepts Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-upgrade.md
+
+ Title: Server group upgrades - Hyperscale (Citus) - Azure Database for PostgreSQL
+description: Types of upgrades, and their precautions
+++++ Last updated : 08/02/2022++
+# Hyperscale (Citus) server group upgrades
++
+The Hyperscale (Citus) managed service can handle upgrades of both the
+PostgreSQL server, and the Citus extension. You can choose these versions
+mostly independently of one another, except Citus 11 requires PostgreSQL 13 or
+higher.
+
+## Upgrade precautions
+
+Upgrading a major version of Citus can introduce changes in behavior.
+It's best to familiarize yourself with new product features and changes
+to avoid surprises.
+
+Noteworthy Citus 11 changes:
+
+* Table shards may disappear in your SQL client. Their visibility
+ is now controlled by
+ [citus.show_shards_for_app_name_prefixes](reference-parameters.md#citusshow_shards_for_app_name_prefixes-text).
+* There are several [deprecated
+ features](https://www.citusdata.com/updates/v11-0/#deprecated-features).
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [How to perform upgrades](howto-upgrade.md)
postgresql Howto Modify Distributed Tables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-modify-distributed-tables.md
Previously updated : 8/10/2020 Last updated : 08/02/2022 # Distribute and modify tables
method is useful for adding new indexes in a production environment.
CREATE INDEX CONCURRENTLY clicked_at_idx ON clicks USING BRIN (clicked_at); ```
+### Types and Functions
+
+Creating custom SQL types and user-defined functions propagates them to worker
+nodes. However, creating such database objects in a transaction with
+distributed operations involves tradeoffs.
+
+Hyperscale (Citus) parallelizes operations such as `create_distributed_table()`
+across shards using multiple connections per worker. In contrast, when creating
+a database object, Citus propagates it to worker nodes over a single connection
+per worker. Combining the two operations in a single transaction may cause
+issues, because the parallel connections will not be able to see the object
+that was created over a single connection but not yet committed.
+
+Consider a transaction block that creates a type, a table, loads data, and
+distributes the table:
+
+```postgresql
+BEGIN;
+
+-- type creation over a single connection:
+CREATE TYPE coordinates AS (x int, y int);
+CREATE TABLE positions (object_id text primary key, position coordinates);
+
+-- data loading thus goes over a single connection:
+SELECT create_distributed_table('positions', 'object_id');
+\COPY positions FROM 'positions.csv'
+
+COMMIT;
+```
+
+Prior to Citus 11.0, Citus would defer creating the type on the worker nodes,
+and commit it separately when creating the distributed table. This enabled the
+data copying in `create_distributed_table()` to happen in parallel. However, it
+also meant that the type was not always present on the Citus worker nodes, or,
+if the transaction rolled back, the type would remain on the worker nodes.
+
+With Citus 11.0, the default behavior changes to prioritize schema consistency
+between coordinator and worker nodes. The new behavior has a downside: if
+object propagation happens after a parallel command in the same transaction,
+then the transaction can no longer be completed, as highlighted by the ERROR in
+the code block below:
+
+```postgresql
+BEGIN;
+CREATE TABLE items (key text, value text);
+-- parallel data loading:
+SELECT create_distributed_table('items', 'key');
+\COPY items FROM 'items.csv'
+CREATE TYPE coordinates AS (x int, y int);
+
+ERROR: cannot run type command because there was a parallel operation on a distributed table in the transaction
+```
+
+If you run into this issue, there are two simple workarounds:
+
+1. Set `citus.create_object_propagation` to `automatic` to defer creation
+   of the type in this situation, in which case there may be some inconsistency
+   between which database objects exist on different nodes.
+1. Set `citus.multi_shard_modify_mode` to `sequential` to disable per-node
+   parallelism. Data loading in the same transaction might be slower.
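+
+For instance, the failing transaction above can be made to work by applying the second workaround at the start of the block (a sketch; see the Citus documentation for details):
+
+```postgresql
+BEGIN;
+SET LOCAL citus.multi_shard_modify_mode TO 'sequential';
+CREATE TABLE items (key text, value text);
+SELECT create_distributed_table('items', 'key');
+\COPY items FROM 'items.csv'
+CREATE TYPE coordinates AS (x int, y int);
+COMMIT;
+```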
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Useful diagnostic queries](howto-useful-diagnostic-queries.md)
postgresql Howto Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-upgrade.md
Previously updated : 4/5/2021 Last updated : 08/02/2022 # Upgrade Hyperscale (Citus) server group
on all server group nodes.
Upgrading PostgreSQL causes more changes than you might imagine, because Hyperscale (Citus) will also upgrade the [database extensions](reference-extensions.md), including the Citus extension.+ We strongly recommend you to test your application with the new PostgreSQL and
-Citus version before you upgrade your production environment.
+Citus version before you upgrade your production environment. Also, please see
+our list of [upgrade precautions](concepts-upgrade.md).
A convenient way to test is to make a copy of your server group using [point-in-time restore](concepts-backup.md#restore). Upgrade the
works properly, upgrade the original server group.
* Learn about [supported PostgreSQL versions](reference-versions.md). * See [which extensions](reference-extensions.md) are packaged with each PostgreSQL version in a Hyperscale (Citus) server group.
+* Learn more about [upgrades](concepts-upgrade.md)
postgresql Reference Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/reference-extensions.md
Previously updated : 07/05/2022 Last updated : 08/02/2022 # PostgreSQL extensions in Azure Database for PostgreSQL ΓÇô Hyperscale (Citus)
The versions of each extension installed in a server group sometimes differ base
> [!div class="mx-tableFixed"] > | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** | > ||||||
-> | [citus](https://github.com/citusdata/citus) | Citus distributed database. | 9.5.11 | 10.0.7 | 10.2.6 | 10.2.6 |
+> | [citus](https://github.com/citusdata/citus) | Citus distributed database. | 9.5.11 | 10.0.7 | 10.2.6 | 11.0.4 |
### Data types extensions
The versions of each extension installed in a server group sometimes differ base
> | [lo](https://www.postgresql.org/docs/current/lo.html) | Large Object maintenance. | 1.1 | 1.1 | 1.1 | 1.1 | > | [ltree](https://www.postgresql.org/docs/current/static/ltree.html) | Provides a data type for hierarchical tree-like structures. | 1.1 | 1.1 | 1.2 | 1.2 | > | [seg](https://www.postgresql.org/docs/current/seg.html) | Data type for representing line segments or floating-point intervals. | 1.3 | 1.3 | 1.3 | 1.4 |
-> | [tdigest](https://github.com/tvondra/tdigest) | Data type for on-line accumulation of rank-based statistics such as quantiles and trimmed means. | 1.2.0 | 1.2.0 | 1.2.0 | 1.2.0 |
+> | [tdigest](https://github.com/tvondra/tdigest) | Data type for on-line accumulation of rank-based statistics such as quantiles and trimmed means. | 1.2.0 | 1.2.0 | 1.2.0 | 1.4.0 |
> | [topn](https://github.com/citusdata/postgresql-topn/) | Type for top-n JSONB. | 2.4.0 | 2.4.0 | 2.4.0 | 2.4.0 | ### Full-text search extensions
The versions of each extension installed in a server group sometimes differ base
> | [intagg](https://www.postgresql.org/docs/current/intagg.html) | Integer aggregator and enumerator (obsolete). | 1.1 | 1.1 | 1.1 | 1.1 | > | [intarray](https://www.postgresql.org/docs/current/static/intarray.html) | Provides functions and operators for manipulating null-free arrays of integers. | 1.2 | 1.2 | 1.3 | 1.5 | > | [moddatetime](https://www.postgresql.org/docs/current/contrib-spi.html#id-1.11.7.45.9) | Functions for tracking last modification time. | 1.0 | 1.0 | 1.0 | 1.0 |
-> | [pg\_partman](https://pgxn.org/dist/pg_partman/doc/pg_partman.html) | Manages partitioned tables by time or ID. | 4.6.0 | 4.6.0 | 4.6.0 | 4.6.0 |
+> | [pg\_partman](https://pgxn.org/dist/pg_partman/doc/pg_partman.html) | Manages partitioned tables by time or ID. | 4.6.0 | 4.6.0 | 4.6.0 | 4.6.2 |
> | [pg\_surgery](https://www.postgresql.org/docs/current/pgsurgery.html) | Functions to perform surgery on a damaged relation. | | | | 1.0 | > | [pg\_trgm](https://www.postgresql.org/docs/current/static/pgtrgm.html) | Provides functions and operators for determining the similarity of alphanumeric text based on trigram matching. | 1.4 | 1.4 | 1.5 | 1.6 | > | [pgcrypto](https://www.postgresql.org/docs/current/static/pgcrypto.html) | Provides cryptographic functions. | 1.3 | 1.3 | 1.3 | 1.3 |
postgresql Reference Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/reference-overview.md
Previously updated : 02/24/2022 Last updated : 08/02/2022 # The Hyperscale (Citus) SQL API
configuration options for:
| [citus.local_table_join_policy](reference-parameters.md#cituslocal_table_join_policy-enum) | Where data moves when doing a join between local and distributed tables | | [citus.multi_shard_commit_protocol](reference-parameters.md#citusmulti_shard_commit_protocol-enum) | The commit protocol to use when performing COPY on a hash distributed table | | [citus.propagate_set_commands](reference-parameters.md#cituspropagate_set_commands-enum) | Which SET commands are propagated from the coordinator to workers |
+| [citus.create_object_propagation](reference-parameters.md#cituscreate_object_propagation-enum) | Behavior of CREATE statements in transactions for supported objects |
+| [citus.use_citus_managed_tables](reference-parameters.md#citususe_citus_managed_tables-boolean) | Allow local tables to be accessed in worker node queries |
### Informational
configuration options for:
| [citus.stat_statements_max](reference-parameters.md#citusstat_statements_max-integer) | Max number of rows to store in `citus_stat_statements` | | [citus.stat_statements_purge_interval](reference-parameters.md#citusstat_statements_purge_interval-integer) | Frequency at which the maintenance daemon removes records from `citus_stat_statements` that are unmatched in `pg_stat_statements` | | [citus.stat_statements_track](reference-parameters.md#citusstat_statements_track-enum) | Enable/disable statement tracking |
+| [citus.show_shards_for_app_name_prefixes](reference-parameters.md#citusshow_shards_for_app_name_prefixes-text) | Allows shards to be displayed for selected clients that want to see them |
+| [citus.override_table_visibility](reference-parameters.md#citusoverride_table_visibility-boolean) | Enable/disable shard hiding |
### Inter-node connection management
postgresql Reference Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/reference-parameters.md
Previously updated : 02/18/2022 Last updated : 08/02/2022 # Server parameters
DETAIL: on server citus@private-c.demo.postgres.database.azure.com:5432 connect
... etc, one for each of the 32 shards ```
+#### citus.show\_shards\_for\_app\_name\_prefixes (text)
+
+By default, Citus hides shards from the list of tables PostgreSQL gives to SQL
+clients. It does this because there are multiple shards per distributed table,
+and the shards can be distracting to the SQL client.
+
+The `citus.show_shards_for_app_name_prefixes` GUC allows shards to be displayed
+for selected clients that want to see them. Its default value is ''.
+
+```postgresql
+-- show shards to psql only (hide in other clients, like pgAdmin)
+
+SET citus.show_shards_for_app_name_prefixes TO 'psql';
+
+-- also accepts a comma separated list
+
+SET citus.show_shards_for_app_name_prefixes TO 'psql,pg_dump';
+```
+
+Shard hiding can be disabled entirely using
+[citus.override_table_visibility](#citusoverride_table_visibility-boolean).
+
+#### citus.override\_table\_visibility (boolean)
+
+Determines whether
+[citus.show_shards_for_app_name_prefixes](#citusshow_shards_for_app_name_prefixes-text)
+is active. The default value is 'true'. When set to 'false', shards are visible
+to all client applications.
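+A minimal sketch of toggling shard visibility in the current session, using the
+parameter described above:
+
+```postgresql
+-- make shards visible to every client application in this session
+SET citus.override_table_visibility TO 'false';
+
+-- restore the default shard-hiding behavior
+SET citus.override_table_visibility TO 'true';
+```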
+
+#### citus.use\_citus\_managed\_tables (boolean)
+
+Allows new [local tables](concepts-nodes.md#type-3-local-tables) to be accessed
+by queries on worker nodes. When enabled, all newly created tables are added to
+Citus metadata. The default value is 'false'.
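+As a sketch, enabling the setting before creating a local table (the table name
+and columns are illustrative):
+
+```postgresql
+-- add subsequently created local tables to Citus metadata
+SET citus.use_citus_managed_tables TO 'true';
+
+-- this table can now be accessed by queries on worker nodes
+CREATE TABLE local_events (id bigint, payload text);
+```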
+ ### Query Statistics #### citus.stat\_statements\_purge\_interval (integer)
The supported values are:
* **none:** no SET commands are propagated. * **local:** only SET LOCAL commands are propagated.
+##### citus.create\_object\_propagation (enum)
+
+Controls the behavior of CREATE statements in transactions for supported
+objects.
+
+When objects are created in a multi-statement transaction block, Citus switches
+to sequential mode to ensure that created objects are visible to later
+statements on shards. However, the switch to sequential mode is not always
+desirable. By overriding this behavior, the user can trade off full
+transactional consistency for performance in the creation of new objects.
+
+The default value for this parameter is 'immediate'.
+
+The supported values are:
+
+* **immediate:** raises an error in transactions where parallel operations like
+  create\_distributed\_table happen before an attempted CREATE TYPE.
+* **automatic:** defer creation of types when sharing a transaction with a
+ parallel operation on distributed tables. There may be some inconsistency
+ between which database objects exist on different nodes.
+* **deferred:** return to pre-11.0 behavior, which is like automatic but with
+ other subtle corner cases. We recommend the automatic setting over deferred,
+ unless you require true backward compatibility.
+
+For an example of this GUC in action, see [type
+propagation](howto-modify-distributed-tables.md#types-and-functions).
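+As a sketch of the trade-off described above (the type and table names are
+illustrative, not taken from the linked example):
+
+```postgresql
+BEGIN;
+-- avoid the error raised in 'immediate' mode when a parallel operation
+-- precedes CREATE TYPE in the same transaction
+SET LOCAL citus.create_object_propagation TO 'automatic';
+CREATE TABLE points (id bigint, x int, y int);
+SELECT create_distributed_table('points', 'id');
+CREATE TYPE coord AS (x int, y int);
+COMMIT;
+```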
+ ##### citus.enable\_repartition\_joins (boolean) Ordinarily, attempting to perform repartition joins with the adaptive executor
purview Abap Functions Deployment Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/abap-functions-deployment-guide.md
Previously updated : 03/05/2022 Last updated : 08/03/2022 # SAP ABAP function module deployment guide
This step is optional, and an existing package can be used.
1. Select and hold (or right-click) the function group name in the repository browser. Select **Create** and then select **Function Module**.
-1. In the **Function module** box, enter **Z_MITI_DOWNLOAD**. Enter a description in the **Short Text** box.
+1. In the **Function module** box, enter **Z_MITI_DOWNLOAD** for SAP ECC or S/4HANA, or **Z_MITI_BW_DOWNLOAD** for SAP BW. Enter a description in the **Short Text** box.
After the module is created, specify the following information:
After the module is created, specify the following information:
After you finish the previous steps, test the function:
-1. Open the **Z\_MITI\_DOWNLOAD** function module.
+1. Open the **Z_MITI_DOWNLOAD** or **Z_MITI_BW_DOWNLOAD** function module you created.
1. On the main menu, select **Function Module** > **Test** > **Test Function Module**. You can also select **F8**.
purview How To Monitor Scan Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-monitor-scan-runs.md
Previously updated : 04/04/2022 Last updated : 08/03/2022 # Monitor scan runs in Microsoft Purview
In Microsoft Purview, you can register and scan various types of data sources, a
## Monitor scan runs
-1. Go to your Microsoft Purview account -> open **Microsoft Purview governance portal** -> **Data map** -> **Monitoring**.
+1. Go to your Microsoft Purview account -> open **Microsoft Purview governance portal** -> **Data map** -> **Monitoring**. You must have the **Data source admin** role on at least one collection to access this page. You'll see the scan runs that belong to the collections on which you have data source admin privileges.
1. The high-level KPIs show total scan runs within a period. The time period is defaulted at last 30 days, you can also choose to select last seven days. Based on the time filter selected, you can see the distribution of successful, failed, and canceled scan runs by week or by the day in the graph.
purview Register Scan Sap Bw https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-sap-bw.md
Previously updated : 05/04/2022 Last updated : 08/03/2022
This article outlines how to register SAP Business Warehouse (BW), and how to au
||||||||| | [Yes](#register)| [Yes](#scan)| No | No | No | No| No|No|
-The supported SAP BW versions are 7.3 to 7.5. SAP BW4/HANA isn't supported.
+The supported SAP BW versions are 7.3 to 7.5. SAP BW/4HANA isn't supported.
When scanning SAP BW source, Microsoft Purview supports extracting technical metadata including:
search Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/whats-new.md
Previously updated : 06/17/2022 Last updated : 08/03/2022 # What's new in Azure Cognitive Search
-Learn what's new in the service. Bookmark this page to keep up to date with service updates.
+Learn about the latest updates to Azure Cognitive Search. The following links supplement this article:
-* [**Preview features**](search-api-preview.md) is a list of current features that haven't been approved for production workloads.
-* [**Previous versions**](/previous-versions/azure/search/) is an archive of earlier feature announcements.
+* [**Previous versions**](/previous-versions/azure/search/) is an archive of feature announcements from 2019 and 2020.
+* [**Preview features**](search-api-preview.md) are announced here in "What's New", mixed in with announcements about general availability or feature retirement. If you need to quickly determine the status of a particular feature, visit the preview features page to see if it's listed.
## June 2022
sentinel Connect Threat Intelligence Taxii https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-threat-intelligence-taxii.md
Learn more about [Threat Intelligence](understand-threat-intelligence.md) in Mic
TAXII 2.x servers advertise API Roots, which are URLs that host Collections of threat intelligence. You can usually find the API Root and the Collection ID in the documentation pages of the threat intelligence provider hosting the TAXII server. > [!NOTE]
-> In some cases, the provider will only advertise a URL called a Discovery Endpoint. You can use the cURL utility to browse the discovery endpoint and request the API Root, as [detailed below](#find-the-api-root-using-curl).
-
-### Find the API Root using cURL
-
-Here's an example of how to use the [cURL](https://en.wikipedia.org/wiki/CURL) command line utility, which is provided in Windows and most Linux distributions, to discover the API Root and browse the Collections of a TAXII server, given only the discovery endpoint. Using the discovery endpoint of the [Anomali Limo](https://www.anomali.com/community/limo) ThreatStream TAXII 2.0 server, you can request the API Root URI and then the Collections.
-
-1. From a browser, navigate to the ThreatStream TAXII 2.0 server discovery endpoint at https://limo.anomali.com/taxii to retrieve the API Root. Authenticate with the username and password `guest`.
-
- You will receive the following response:
-
- ```json
- {
- "api_roots":
- [
- "https://limo.anomali.com/api/v1/taxii2/feeds/",
- "https://limo.anomali.com/api/v1/taxii2/trusted_circles/",
- "https://limo.anomali.com/api/v1/taxii2/search_filters/"
- ],
- "contact": "info@anomali.com",
- "default": "https://limo.anomali.com/api/v1/taxii2/feeds/",
- "description": "TAXII 2.0 Server (guest)",
- "title": "ThreatStream Taxii 2.0 Server"
- }
- ```
-
-2. Use the cURL utility and the API Root (https://limo.anomali.com/api/v1/taxii2/feeds/) from the previous response, appending "`collections/`" to the API Root to browse the list of Collection IDs hosted on the API Root:
-
- ```json
- curl -u guest https://limo.anomali.com/api/v1/taxii2/feeds/collections/
- ```
- After authenticating again with the password "guest", you will receive the following response:
-
- ```json
- {
- "collections":
- [
- {
- "can_read": true,
- "can_write": false,
- "description": "",
- "id": "107",
- "title": "Phish Tank"
- },
- {
- "can_read": true,
- "can_write": false,
- "description": "",
- "id": "135",
- "title": "Abuse.ch Ransomware IPs"
- },
- {
- "can_read": true,
- "can_write": false,
- "description": "",
- "id": "136",
- "title": "Abuse.ch Ransomware Domains"
- },
- {
- "can_read": true,
- "can_write": false,
- "description": "",
- "id": "150",
- "title": "DShield Scanning IPs"
- },
- {
- "can_read": true,
- "can_write": false,
- "description": "",
- "id": "200",
- "title": "Malware Domain List - Hotlist"
- },
- {
- "can_read": true,
- "can_write": false,
- "description": "",
- "id": "209",
- "title": "Blutmagie TOR Nodes"
- },
- {
- "can_read": true,
- "can_write": false,
- "description": "",
- "id": "31",
- "title": "Emerging Threats C&C Server"
- },
- {
- "can_read": true,
- "can_write": false,
- "description": "",
- "id": "33",
- "title": "Lehigh Malwaredomains"
- },
- {
- "can_read": true,
- "can_write": false,
- "description": "",
- "id": "41",
- "title": "CyberCrime"
- },
- {
- "can_read": true,
- "can_write": false,
- "description": "",
- "id": "68",
- "title": "Emerging Threats - Compromised"
- }
- ]
- }
- ```
-
-You now have all the information you need to connect Microsoft Sentinel to one or more TAXII server Collections provided by Anomali Limo.
-
-| **API Root** (https://limo.anomali.com/api/v1/taxii2/feeds/) | Collection ID |
-| | : |
-| **Phish Tank** | 107 |
-| **Abuse.ch Ransomware IPs** | 135 |
-| **Abuse.ch Ransomware Domains** | 136 |
-| **DShield Scanning IPs** | 150 |
-| **Malware Domain List - Hotlist** | 200 |
-| **Blutmagie TOR Nodes** | 209 |
-| **Emerging Threats C&C Server** | 31 |
-| **Lehigh Malwaredomains** | 33 |
-| **CyberCrime** | 41 |
-| **Emerging Threats - Compromised** | 68 |
-|
+> In some cases, the provider will only advertise a URL called a Discovery Endpoint. You can use the [cURL](https://en.wikipedia.org/wiki/CURL) utility to browse the discovery endpoint and request the API Root.
## Enable the Threat Intelligence - TAXII data connector in Microsoft Sentinel
sentinel Threat Intelligence Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/threat-intelligence-integration.md
To connect to TAXII threat intelligence feeds, follow the instructions to [conne
### Anomali - [Learn how to import threat intelligence from Anomali ThreatStream into Microsoft Sentinel](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/import-anomali-threatstream-feed-into-microsoft-sentinel/ba-p/3561742#M3787)-- [See what you need to connect to Anomali's Limo feed](https://www.anomali.com/resources/limo). ### Cybersixgill Darkfeed
sentinel Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/whats-new.md
If you're looking for items older than six months, you'll find them in the [Arch
## July 2022
+- [Sync user entities from your on-premises Active Directory with Microsoft Sentinel](#sync-user-entities-from-your-on-premises-active-directory-with-microsoft-sentinel)
- [Automation rules for alerts](#automation-rules-for-alerts)
+### Sync user entities from your on-premises Active Directory with Microsoft Sentinel
+
+Until now, you've been able to bring your user account entities from Azure Active Directory (Azure AD) into the IdentityInfo table in Microsoft Sentinel, so that User and Entity Behavior Analytics (UEBA) can use that information to provide context, give insight into user activities, and enrich your investigations.
+
+Now you can do the same with your on-premises (non-Azure) Active Directory as well.
+
+If you have Microsoft Defender for Identity, [enable and configure User and Entity Behavior Analytics (UEBA)](enable-entity-behavior-analytics.md#how-to-enable-user-and-entity-behavior-analytics) to collect and sync your Active Directory user account information into Microsoft Sentinel's IdentityInfo table, so you can get the same insight value from your on-premises users as you do from your cloud users.
+
+Learn more about the [requirements for using Microsoft Defender for Identity](/defender-for-identity/prerequisites) this way.
+ ### Automation rules for alerts In addition to their incident-management duties, [automation rules](automate-incident-handling-with-automation-rules.md) have a new, added function: they are the preferred mechanism for running playbooks built on the **alert trigger**.
site-recovery Vmware Azure Manage Configuration Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-manage-configuration-server.md
Previously updated : 05/27/2021 Last updated : 08/03/2022 # Manage the configuration server for VMware VM/physical server disaster recovery
The expiry date appears under **Configuration Server health**. For configuration
### If certificates have already expired 1. Post expiry, certificates **cannot be renewed from Azure portal**. Before proceeding, ensure all components scale-out process servers, master target servers and mobility agents on all protected machines are on latest versions and are in connected state.
-2. **Follow this procedure only if certificates have already expired.** Login to configuration server, navigate to C drive > Program Data > Site Recovery > home > svsystems > bin and execute "RenewCerts" executor tool as administrator.
+2. **Follow this procedure only if certificates have already expired.** Log in to the configuration server, navigate to *C:\ProgramData\ASR\home\svsystems\bin*, and run the **RenewCerts** tool as administrator.
3. A PowerShell execution window pops-up and triggers renewal of certificates. This can take up to 15 minutes. Do not close the window until completion of renewal. :::image type="content" source="media/vmware-azure-manage-configuration-server/renew-certificates.png" alt-text="RenewCertificates":::
static-web-apps Get Started Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/get-started-cli.md
Previously updated : 11/17/2021 Last updated : 08/03/2022 ms.devlang: azurecli
ms.devlang: azurecli
# Quickstart: Building your first static site using the Azure CLI
-Azure Static Web Apps publishes a website to a production environment by building apps from a GitHub repository. In this quickstart, you deploy a web application to Azure Static Web apps using the Azure CLI.
+Azure Static Web Apps publishes websites to production by building apps from a code repository.
-If you don't have an Azure subscription, [create a free trial account](https://azure.microsoft.com/free).
+In this quickstart, you deploy a web application to Azure Static Web Apps using the Azure CLI.
## Prerequisites - [GitHub](https://github.com) account-- [Azure](https://portal.azure.com) account
+- [Azure](https://portal.azure.com) account.
+ - If you don't have an Azure subscription, you can [create a free trial account](https://azure.microsoft.com/free).
- [Azure CLI](/cli/azure/install-azure-cli) installed (version 2.29.0 or higher) [!INCLUDE [create repository from template](../../includes/static-web-apps-get-started-create-repo.md)]
-## Create a static web app
+## Deploy a static web app
-Now that the repository is created, you can create a static web app from the Azure CLI.
+Now that the repository is generated from the template, you can deploy the app as a static web app from the Azure CLI.
1. Sign in to the Azure CLI by using the following command.
Now that the repository is created, you can create a static web app from the Azu
1. Create a variable to hold your GitHub user name.
+ Before you execute the following command, replace the placeholder `<YOUR_GITHUB_USER_NAME>` with your GitHub user name.
+ ```bash GITHUB_USER_NAME=<YOUR_GITHUB_USER_NAME> ```
- Replace the placeholder `<YOUR_GITHUB_USER_NAME>` with your GitHub user name.
-
-1. Create a new static web app from your repository.
+1. Deploy a new static web app from your repository.
# [No Framework](#tab/vanilla-javascript)
Now that the repository is created, you can create a static web app from the Azu
> [!IMPORTANT] > The URL passed to the `--source` parameter must not include the `.git` suffix.
- As you execute this command, the CLI starts GitHub interactive login experience. Look for a line in your console that resembles the following message.
+ As you execute this command, the CLI starts the GitHub interactive login experience. Look for a line in your console that resembles the following message.
> Please navigate to `https://github.com/login/device` and enter the user code 329B-3945 to activate and retrieve your GitHub personal access token.
Now that the repository is created, you can create a static web app from the Azu
## View the website
-There are two aspects to deploying a static app. The first operation creates the underlying Azure resources that make up your app. The second is a GitHub Actions workflow that builds and publishes your application.
+There are two aspects to deploying a static app. The first operation creates the underlying Azure resources that make up your app. The second is a workflow that builds and publishes your application.
Before you can navigate to your new static site, the deployment build must first finish running.
Before you can navigate to your new static site, the deployment build must first
The output of this command returns the URL to your GitHub repository.
-1. Copy the **repository URL** and paste it into the browser.
+1. Copy the **repository URL** and paste it into your browser.
1. Select the **Actions** tab.
Before you can navigate to your new static site, the deployment build must first
--query "defaultHostname" ```
- Copy the URL into the browser and navigate to your website.
+ Copy the URL into your browser to navigate to your website.
## Clean up resources
storage Storage Files Quick Create Use Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-quick-create-use-linux.md
Title: Tutorial - Create an NFS Azure file share and mount it on a Linux VM using the Azure Portal
+ Title: Tutorial - Create an NFS Azure file share and mount it on a Linux VM using the Azure portal
description: This tutorial covers how to use the Azure portal to deploy a Linux virtual machine, create an Azure file share using the NFS protocol, and mount the file share so that it's ready to store files. Previously updated : 05/24/2022 Last updated : 08/03/2022 #Customer intent: As an IT admin new to Azure Files, I want to try out Azure file share using NFS and Linux so I can determine whether I want to subscribe to the service.
-# Tutorial: Create an NFS Azure file share and mount it on a Linux VM using the Azure Portal
+# Tutorial: Create an NFS Azure file share and mount it on a Linux VM using the Azure portal
Azure Files offers fully managed file shares in the cloud that are accessible via the industry standard [Server Message Block (SMB) protocol](/windows/win32/fileio/microsoft-smb-protocol-and-cifs-protocol-overview) or [Network File System (NFS) protocol](https://en.wikipedia.org/wiki/Network_File_System). Both NFS and SMB protocols are supported on Azure virtual machines (VMs) running Linux. This tutorial shows you how to create an Azure file share using the NFS protocol and connect it to a Linux VM.
Before you can work with an NFS 4.1 Azure file share, you have to create an Azur
1. Enter a name for your storage account. The name you choose must be unique across Azure. The name also must be between 3 and 24 characters in length, and may include only numbers and lowercase letters. 1. Select a region for your storage account, or use the default region. Azure supports NFS file shares in all the same regions that support premium file storage. 1. Select the *Premium* performance tier to store your data on solid-state drives (SSD). Under **Premium account type**, select *File shares*.
-1. Leave replication set to its default value of *Locally-redundant storage (LRS)*.
+1. Leave replication set to its default value of *Locally redundant storage (LRS)*.
1. Select **Review + Create** to review your storage account settings and create the account. 1. When you see the **Validation passed** notification appear, select **Create**. You should see a notification that deployment is in progress.
Next, create an Azure VM running Linux to represent the on-premises server. When
1. Select **Home**, and then select **Virtual machines** under **Azure services**.
-1. Select **+ Create** and then **+ Virtual machine**.
+1. Select **+ Create** and then **+ Azure virtual machine**.
-1. In the **Basics** tab, under **Project details**, make sure the correct subscription and resource group are selected. Under **Instance details**, type *myVM* for the **Virtual machine name**, and select the same region as your storage account. Choose the default Ubuntu Server version for your **Image**. Leave the other defaults. The default size and pricing is only shown as an example. Size availability and pricing is dependent on your region and subscription.
+1. In the **Basics** tab, under **Project details**, make sure the correct subscription and resource group are selected. Under **Instance details**, type *myVM* for the **Virtual machine name**, and select the same region as your storage account. Choose the default Ubuntu Server version for your **Image**. Leave the other defaults. The default size and pricing are only shown as an example. Size availability and pricing are dependent on your region and subscription.
:::image type="content" source="media/storage-files-quick-create-use-linux/create-vm-project-instance-details.png" alt-text="Screenshot showing how to enter the project and instance details to create a new V M." lightbox="media/storage-files-quick-create-use-linux/create-vm-project-instance-details.png" border="true":::
Next, create an Azure VM running Linux to represent the on-premises server. When
:::image type="content" source="media/storage-files-quick-create-use-linux/create-vm-admin-account.png" alt-text="Screenshot showing how to configure the administrator account and create an S S H key pair for a new V M." lightbox="media/storage-files-quick-create-use-linux/create-vm-admin-account.png" border="true":::
-1. Under **Inbound port rules > Public inbound ports**, choose **Allow selected ports** and then select **SSH (22) and HTTP (80)** from the drop-down.
+1. Under **Inbound port rules > Public inbound ports**, choose **Allow selected ports** and then select **SSH (22)** and **HTTP (80)** from the drop-down.
:::image type="content" source="media/storage-files-quick-create-use-linux/create-vm-inbound-port-rules.png" alt-text="Screenshot showing how to configure the inbound port rules for a new V M." lightbox="media/storage-files-quick-create-use-linux/create-vm-inbound-port-rules.png" border="true":::
Next, create an Azure VM running Linux to represent the on-premises server. When
1. On the **Create a virtual machine** page, you can see the details about the VM you are about to create. Note the name of the virtual network. When you are ready, select **Create**.
-1. When the **Generate new key pair** window opens, select **Download private key and create resource**. Your key file will be download as **myKey.pem**. Make sure you know where the .pem file was downloaded, because you'll need the path to it to connect to your VM.
+1. When the **Generate new key pair** window opens, select **Download private key and create resource**. Your key file will be downloaded as **myVM_key.pem**. Make sure you know where the .pem file was downloaded, because you'll need the path to it to connect to your VM.
You'll see a message that deployment is in progress. Wait a few minutes for deployment to complete.
Next, you'll need to set up a private endpoint for your storage account. This gi
:::image type="content" source="media/storage-files-quick-create-use-linux/create-private-endpoint.png" alt-text="Screenshot showing how to select + private endpoint to create a new private endpoint.":::
-1. Leave **Subscription** and **Resource group** the same. Under **Instance**, provide a name and select a region for the new private endpoint. Your private endpoint must be in the same region as your virtual network, so use the same region as you specified when creating the V M. When all the fields are complete, select **Next: Resource**.
+1. Leave **Subscription** and **Resource group** the same. Under **Instance**, provide a name and select a region for the new private endpoint. Your private endpoint must be in the same region as your virtual network, so use the same region as you specified when creating the VM. When all the fields are complete, select **Next: Resource**.
:::image type="content" source="media/storage-files-quick-create-use-linux/private-endpoint-basics.png" alt-text="Screenshot showing how to provide the project and instance details for a new private endpoint." lightbox="media/storage-files-quick-create-use-linux/private-endpoint-basics.png" border="true":::
Next, you'll need to set up a private endpoint for your storage account. This gi
:::image type="content" source="media/storage-files-quick-create-use-linux/private-endpoint-resource.png" alt-text="Screenshot showing how to select the resources that a new private endpoint should connect to." lightbox="media/storage-files-quick-create-use-linux/private-endpoint-resource.png" border="true":::
-1. Under **Networking**, select the virtual network associated with your VM and leave the default subnet. Select **Yes** for **Integrate with private DNS zone**. Select the correct subscription and resource group, and then select **Next: Tags**.
+1. Under **Networking**, select the virtual network associated with your VM and leave the default subnet. Under **Private IP configuration**, leave **Dynamically allocate IP address** selected. Select **Next: DNS**.
- :::image type="content" source="media/storage-files-quick-create-use-linux/private-endpoint-networking.png" alt-text="Screenshot showing how to add virtual networking and D N S integration to a new private endpoint." lightbox="media/storage-files-quick-create-use-linux/private-endpoint-networking.png" border="true":::
+ :::image type="content" source="media/storage-files-quick-create-use-linux/private-endpoint-virtual-network.png" alt-text="Screenshot showing how to add virtual networking and private IP configuration to a new private endpoint." lightbox="media/storage-files-quick-create-use-linux/private-endpoint-virtual-network.png" border="true":::
+
+1. Select **Yes** for **Integrate with private DNS zone**. Make sure the correct subscription and resource group are selected, and then select **Next: Tags**.
+
+ :::image type="content" source="media/storage-files-quick-create-use-linux/private-endpoint-dns.png" alt-text="Screenshot showing how to integrate your private endpoint with a private DNS zone." lightbox="media/storage-files-quick-create-use-linux/private-endpoint-dns.png" border="true":::
1. You can optionally apply tags to categorize your resources, such as applying the name **Environment** and the value **Test** to all testing resources. Enter name/value pairs if desired, and then select **Next: Review + create**.
synapse-analytics Performance Tuning Result Set Caching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/performance-tuning-result-set-caching.md
When result set caching is enabled, dedicated SQL pool automatically caches query results in the user database for repetitive use. This allows subsequent query executions to get results directly from the persisted cache so recomputation is not needed. Result set caching improves query performance and reduces compute resource usage. In addition, queries using cached results set do not use any concurrency slots and thus do not count against existing concurrency limits. For security, users can only access the cached results if they have the same data access permissions as the users creating the cached results. Result set caching is OFF by default at the database and session levels.
+>[!NOTE]
+> Result set caching should not be used in conjunction with [DECRYPTBYKEY](/sql/t-sql/functions/decryptbykey-transact-sql). If this cryptographic function must be used, ensure you have result set caching disabled (either at [session-level](/sql/t-sql/statements/set-result-set-caching-transact-sql) or [database-level](/sql/t-sql/statements/alter-database-transact-sql-set-options)) at the time of execution.
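+For instance, a sketch of disabling the cache at session level before decrypting (the key, certificate, table, and column names are illustrative):
+
+```sql
+-- turn off result set caching for this session only
+SET RESULT_SET_CACHING OFF;
+
+-- decrypt with DECRYPTBYKEY while the cache is disabled
+OPEN SYMMETRIC KEY MySymKey DECRYPTION BY CERTIFICATE MyCert;
+SELECT CONVERT(varchar(200), DECRYPTBYKEY(EncryptedCol)) FROM dbo.SecureTable;
+CLOSE SYMMETRIC KEY MySymKey;
+```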
+ ## Key commands [Turn ON/OFF result set caching for a user database](/sql/t-sql/statements/alter-database-transact-sql-set-options?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true)
virtual-desktop Create Profile Container Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/create-profile-container-azure-ad.md
Previously updated : 08/02/2022 Last updated : 08/03/2022 # Create a profile container with Azure Files and Azure Active Directory (preview)
If the service principal was already created for you in the last section, follow
1. Open **Azure Active Directory**. 2. Select **App registrations** on the left pane. 3. Select **All Applications**.
-4. Select the application with the name matching your storage account.
+4. Select the application with the name matching **[Storage Account] $storageAccountName.file.core.windows.net**.
5. Select **API permissions** in the left pane. 6. Select **Add permissions** at the bottom of the page. 7. Select **Grant admin consent for "DirectoryName"**.
virtual-machines Dedicated Hosts How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dedicated-hosts-how-to.md
Tags : {}
+## Restart a host (Preview)
+
+You can restart the entire host; the restart doesn't **completely** power off the host. Because the host is restarted, the underlying VMs are also restarted. The host remains on the same underlying physical hardware as it restarts, and the host ID, asset ID, and host SKU all remain the same after the restart.
+
+> [!NOTE]
+> Host restart is in preview.
+
+### [Portal](#tab/portal)
+1. Search for and select the host.
+1. In the top menu bar, select the **Restart** button. Note, this feature is in Preview.
+1. In the **Essentials** section of the host resource pane, the **Host Status** will switch to **Host undergoing restart** during the restart.
+1. Once the restart has completed, the Host Status will return to **Host available**.
+
+### [CLI](#tab/cli)
+
+Restart the host using [az vm host restart](/cli/azure/vm#az-vm-host-restart) (Preview).
+
+```azurecli-interactive
+az vm host restart --resource-group myResourceGroup --host-group myHostGroup --name myDedicatedHost
+```
+
+To view the status of the restart, you can use the [az vm host get-instance-view](/cli/azure/vm#az-vm-host-get-instance-view) command. The **displayStatus** will be set to **Host undergoing restart** during the restart. Once the restart has completed, the displayStatus will return to **Host available**.
+
+```azurecli-interactive
+az vm host get-instance-view --resource-group myResourceGroup --host-group myHostGroup --name myDedicatedHost
+```
+
+### [PowerShell](#tab/powershell)
+
+Restart the host using the [Restart-AzHost](/powershell/module/az.compute/restart-azhost) (Preview) command.
+
+```azurepowershell-interactive
+Restart-AzHost -ResourceGroupName myResourceGroup -HostGroupName myHostGroup -Name myDedicatedHost
+```
+
+To view the status of the restart, you can use the [Get-AzHost](/powershell/module/az.compute/get-azhost) cmdlet with the **InstanceView** parameter. The **displayStatus** will be set to **Host undergoing restart** during the restart. Once the restart has completed, the displayStatus will return to **Host available**.
++
+```azurepowershell-interactive
+$hostRestartStatus = Get-AzHost -ResourceGroupName myResourceGroup -HostGroupName myHostGroup -Name myDedicatedHost -InstanceView;
+$hostRestartStatus.InstanceView.Statuses[1].DisplayStatus;
+```
+++ ## Deleting a host
virtual-machines Disks Deploy Premium V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-deploy-premium-v2.md
Title: Deploy a Premium SSD v2 (preview) managed disk
description: Learn how to deploy a Premium SSD v2 (preview). Previously updated : 07/18/2022 Last updated : 08/03/2022
Update-AzVM -VM $vm -ResourceGroupName $resourceGroupName
# [Azure portal](#tab/portal)
-1. Sign in to the [Azure portal](https://portal.azure.com/).
+> [!IMPORTANT]
+> Premium SSD v2 managed disks can only be deployed and managed in the Azure portal from the following link: [https://portal.azure.com/?feature.premiumv2=true#home](https://portal.azure.com/?feature.premiumv2=true#home).
+
+1. Sign in to the Azure portal with the following link: [https://portal.azure.com/?feature.premiumv2=true#home](https://portal.azure.com/?feature.premiumv2=true#home).
1. Navigate to **Virtual machines** and follow the normal VM creation process. 1. On the **Basics** page, select a [supported region](#regional-availability) and set **Availability options** to **Availability zone**. 1. Select one of the zones.
Update-AzDisk -ResourceGroupName $resourceGroup -DiskName $diskName -DiskUpdate
# [Azure portal](#tab/portal)
+> [!IMPORTANT]
+> Premium SSD v2 managed disks can only be deployed and managed in the Azure portal from the following link: [https://portal.azure.com/?feature.premiumv2=true#home](https://portal.azure.com/?feature.premiumv2=true#home).
+
+1. Sign in to the Azure portal with the following link: [https://portal.azure.com/?feature.premiumv2=true#home](https://portal.azure.com/?feature.premiumv2=true#home).
1. Navigate to your disk and select **Size + Performance**. 1. Change the values as desired. 1. Select **Resize**.
virtual-machines Disks Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-types.md
With Premium SSD v2 disks, you can individually set the capacity, throughput, an
#### Premium SSD v2 capacities
-Premium SSD v2 capacities range from 1 GiB to 64 TiB, in 1-GiB increments. You're billed on a per GiB ratio, see the [pricing page](https://azure.microsoft.com/pricing/details/managed-disks/) for details.
+Premium SSD v2 capacities range from 1 GiB to 64 TiBs, in 1-GiB increments. You're billed on a per GiB ratio, see the [pricing page](https://azure.microsoft.com/pricing/details/managed-disks/) for details.
-Premium SSD v2 offers up to 32 TiB per region per subscription by default in the public preview, but supports higher capacity by request. To request an increase in capacity, request a quota increase or contact Azure Support.
+Premium SSD v2 offers up to 32 TiBs per region per subscription by default in the public preview, but supports higher capacity by request. To request an increase in capacity, request a quota increase or contact Azure Support.
#### Premium SSD v2 IOPS
-All Premium SSD v2 disks have a baseline IOPS of 3000 that is free of charge. After 6 GiB, the maximum IOPS a disk can have increases at a rate of 500 per GiB, up to 80,000 IOPS. So an 8 GiB disk can have up to 4,000 IOPS, and a 10 GiB can have up to 5,000 IOPS. To be able to set 80,000 IOPS on a disk, that disk must have at least 160 GiB. Increasing your IOPS beyond 3000 increases the price of your disk.
+All Premium SSD v2 disks have a baseline of 3,000 IOPS that is free of charge. After 6 GiB, the maximum IOPS a disk can have increases at a rate of 500 per GiB, up to 80,000 IOPS. So an 8 GiB disk can have up to 4,000 IOPS, and a 10 GiB disk can have up to 5,000 IOPS. To be able to set 80,000 IOPS on a disk, that disk must have at least 160 GiBs. Increasing your IOPS beyond 3,000 increases the price of your disk.
#### Premium SSD v2 throughput
The following table provides a comparison of disk capacities and performance max
|Disk Size |Maximum available IOPS |Maximum available throughput (MB/s) | ||||
-|1 GiB-64 TiB |3,000-80,000 (Increases by 500 IOPS per GiB) |125-1,200 (increases by 0.25 MB/s per set IOPS) |
+|1 GiB-64 TiBs |3,000-80,000 (Increases by 500 IOPS per GiB) |125-1,200 (increases by 0.25 MB/s per set IOPS) |
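The scaling rules above can be sketched as a small calculator. This is an illustrative helper, not an Azure API; it only encodes the formulas stated in the text (3,000 baseline IOPS; +500 IOPS per GiB beyond 6 GiB, capped at 80,000; 125-1,200 MB/s throughput at 0.25 MB/s per provisioned IOPS):

```python
# Illustrative helpers for the Premium SSD v2 scaling rules described above.
# These are not Azure APIs; they simply encode the documented formulas.

def max_iops(size_gib: int) -> int:
    """Maximum settable IOPS for a disk of the given size (baseline 3,000;
    +500 IOPS per GiB beyond 6 GiB, capped at 80,000)."""
    if size_gib <= 6:
        return 3000
    return min(3000 + 500 * (size_gib - 6), 80_000)

def max_throughput_mbps(provisioned_iops: int) -> float:
    """Maximum settable throughput in MB/s (0.25 MB/s per provisioned IOPS,
    bounded to the documented 125-1,200 MB/s range)."""
    return min(max(0.25 * provisioned_iops, 125.0), 1200.0)

print(max_iops(8))    # -> 4000, matching the 8 GiB example above
print(max_iops(160))  # -> 80000, the smallest size that reaches the cap
```

For example, `max_iops(10)` gives 5,000, agreeing with the 10 GiB example in the paragraph above.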
To deploy a Premium SSD v2, see [Deploy a Premium SSD v2 (preview)](disks-deploy-premium-v2.md).
virtual-machines Diagnostics Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/diagnostics-linux.md
Set-AzVMExtension -ResourceGroupName <resource_group_name> -VMName <vm_name> -Lo
+### Enable auto update
+We **recommend** enabling automatic updates of the agent by turning on the [Automatic Extension Upgrade](../../virtual-machines/automatic-extension-upgrade.md) feature, using the following commands.
+# [Azure CLI](#tab/azcli)
+```azurecli
+az vm extension set --publisher Microsoft.Azure.Diagnostics --name LinuxDiagnostic --version 4.0 --resource-group <resource_group_name> --vm-name <vm_name> --protected-settings ProtectedSettings.json --settings PublicSettings.json --enable-auto-upgrade true
+```
+# [PowerShell](#tab/powershell)
+```powershell
+Set-AzVMExtension -ResourceGroupName <resource_group_name> -VMName <vm_name> -Location <vm_location> -ExtensionType LinuxDiagnostic -Publisher Microsoft.Azure.Diagnostics -Name LinuxDiagnostic -SettingString $publicSettings -ProtectedSettingString $protectedSettings -TypeHandlerVersion 4.0 -EnableAutomaticUpgrade $true
+```
+++ ### Sample installation > [!NOTE]
virtual-machines Share Gallery Direct https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/share-gallery-direct.md
During the preview:
You need to create a [new direct shared gallery](./create-gallery.md#create-a-direct-shared-gallery). A direct shared gallery has the `sharingProfile.permissions` property set to `Groups`. When using the CLI to create a gallery, use the `--permissions groups` parameter. You can't use an existing gallery; the property can't currently be updated.
-## Share to subscriptions and tenants
+## How sharing with direct shared gallery works
First you create a gallery under `Microsoft.Compute/Galleries` and choose `groups` as a sharing option.
When you are ready, you share your gallery with subscriptions and tenants. Only
### [Portal](#tab/portaldirect)
+> [!NOTE]
+> **Known issue**: In the Azure portal, if you get the error "Failed to update Azure compute gallery", verify that you have Owner or Compute Gallery Sharing Admin permissions on the gallery.
+>
1. Sign in to the Azure portal at https://portal.azure.com. 1. Type **Azure Compute Gallery** in the search box and select **Azure Compute Gallery** in the results. 1. In the **Azure Compute Gallery** page, click **Add**.
To share the gallery:
1. If you would like to share with someone within your organization, for **Type** select *Subscription* or *Tenant* and choose the appropriate item from the **Tenants and subscriptions** drop-down. If you want to share with someone outside of your organization, select either *Subscription outside of my organization* or *Tenant outside of my organization* and then paste or type the ID into the text box. 1. When you are done adding items, select **Save**. + ### [CLI](#tab/clidirect) To create a direct shared gallery, you need to create the gallery with the `--permissions` parameter set to `groups`.
virtual-machines Oracle Reference Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/oracle/oracle-reference-architecture.md
Finally, when migrating or creating applications for the cloud, it's important t
### Oracle RAC in the cloud
-Oracle Real Application Cluster (RAC) is a solution by Oracle to help customers achieve high throughputs by having many instances accessing one database storage (Shared-all architecture pattern). While Oracle RAC can also be used for high availability on-premises, Oracle RAC alone cannot be used for high availability in the cloud as it only protects against instance level failures and not against Rack-level or Data center-level failures. For this reason, Oracle recommends using Oracle Data Guard with your database (whether single instance or RAC) for high availability. Customers generally require a high SLA for running their mission critical applications. Oracle RAC is currently not certified or supported by Oracle on Azure. However, Azure offers features such as Azure offers Availability Zones and planned maintenance windows to help protect against instance-level failures. In addition to this, customers can use technologies such as Oracle Data Guard, Oracle GoldenGate and Oracle Sharding for high performance and resiliency by protecting their databases from rack-level as well as datacenter-level and geo-political failures.
+Oracle Real Application Cluster (RAC) is a solution by Oracle to help customers achieve high throughputs by having many instances accessing one database storage (Shared-all architecture pattern). While Oracle RAC can also be used for high availability on-premises, Oracle RAC alone cannot be used for high availability in the cloud as it only protects against instance level failures and not against Rack-level or Data center-level failures. For this reason, Oracle recommends using Oracle Data Guard with your database (whether single instance or RAC) for high availability. Customers generally require a high SLA for running their mission critical applications. Oracle RAC is currently not certified or supported by Oracle on Azure. However, Azure offers features such as Availability Zones and planned maintenance windows to help protect against instance-level failures. In addition to this, customers can use technologies such as Oracle Data Guard, Oracle GoldenGate and Oracle Sharding for high performance and resiliency by protecting their databases from rack-level as well as datacenter-level and geo-political failures.
When running Oracle Databases across multiple [availability zones](../../../availability-zones/az-overview.md) in conjunction with Oracle Data Guard or GoldenGate, customers are able to get an uptime SLA of 99.99%. In Azure regions where Availability zones are not yet present, customers can use [Availability Sets](../../availability-set-overview.md) and achieve an uptime SLA of 99.95%.
In regions where availability zones aren't supported, you may use availability s
#### Oracle Data Guard Far Sync
-Oracle Data Guard Far Sync provides zero data loss protection capability for Oracle Databases. This capability allows you to protect against data loss in if your database machine fails. Oracle Data Guard Far Sync needs to be installed on a separate VM. Far Sync is a lightweight Oracle instance that only has a control file, password file, spfile, and standby logs. There are no data files or redo log files.
+Oracle Data Guard Far Sync provides zero data loss protection capability for Oracle Databases. This capability allows you to protect against data loss if your database machine fails. Oracle Data Guard Far Sync needs to be installed on a separate VM. Far Sync is a lightweight Oracle instance that only has a control file, password file, spfile, and standby logs. There are no data files or redo log files.
For zero data loss protection, there must be synchronous communication between your primary database and the Far Sync instance. The Far Sync instance receives redo from the primary in a synchronous manner and forwards it immediately to all the standby databases in an asynchronous manner. This setup also reduces the overhead on the primary database, because it only has to send the redo to the Far Sync instance rather than all the standby databases. If a Far Sync instance fails, Data Guard automatically uses asynchronous transport to the secondary database from the primary database to maintain near-zero data loss protection. For added resiliency, customers may deploy multiple Far Sync instances per each database instance (primary and secondaries).
virtual-wan Cross Tenant Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/cross-tenant-vnet.md
In the following steps, you will switch between the context of the two subscript
* **PowerShell:** The metadata from the newly formed connection will show in the PowerShell console if the connection was successfully formed. * **Azure portal:** Navigate to the virtual hub, **Connectivity -> Virtual Network Connections**. You can view the pointer to the connection. To see the actual resource you will need the proper permissions.+
+## Scenario: add static routes to virtual network hub connection
+In the following steps, you add a static route to the virtual hub default route table and the virtual network connection to point to a next-hop IP address (for example, an NVA appliance).
+- Replace the example values to reflect your own environment.
+
+1. Make sure you are in the context of your parent account by running the following command:
+
    ```azurepowershell-interactive
    Select-AzSubscription -SubscriptionId "[parent ID]"
    ```
+
+2. Add a route to the virtual hub default route table, with no specific IP address and the virtual hub connection as the next hop:
+
    2.1 Get the connection details:
+ ```azurepowershell-interactive
+ $hubVnetConnection = Get-AzVirtualHubVnetConnection -Name "[HubconnectionName]" -ParentResourceName "[Hub Name]" -ResourceGroupName "[resource group name]"
+ ```
    2.2 Add a static route to the virtual hub route table (the next hop is the hub virtual network connection):
+ ```azurepowershell-interactive
    $Route2 = New-AzVHubRoute -Name "[Route Name]" -Destination "[@("Destination prefix")]" -DestinationType "CIDR" -NextHop $hubVnetConnection.Id -NextHopType "ResourceId"
+ ```
    2.3 Update the current hub default route table:
+ ```azurepowershell-interactive
    Update-AzVHubRouteTable -ResourceGroupName "[resource group name]" -VirtualHubName "[Hub Name]" -Name "defaultRouteTable" -Route @($Route2)
+ ```
## Customize static routes to specify the next hop as an IP address for the virtual hub connection
+
    2.4 Update the route in the virtual hub virtual network connection:
+ ```azurepowershell-interactive
+ $newroute = New-AzStaticRoute -Name "[Route Name]" -AddressPrefix "[@("Destination prefix")]" -NextHopIpAddress "[Destination NVA IP address]"
+
+ $newroutingconfig = New-AzRoutingConfiguration -AssociatedRouteTable $hubVnetConnection.RoutingConfiguration.AssociatedRouteTable.id -Id $hubVnetConnection.RoutingConfiguration.PropagatedRouteTables.Ids[0].id -Label @("default") -StaticRoute @($newroute)
+
+ Update-AzVirtualHubVnetConnection -ResourceGroupName $rgname -VirtualHubName "[Hub Name]" -Name "[Virtual hub connection name]" -RoutingConfiguration $newroutingconfig
+
+ ```
    2.5 Verify that the static route is established to a next-hop IP address:
+
+ ```azurepowershell-interactive
+ Get-AzVirtualHubVnetConnection -ResourceGroupName "[Resource group]" -VirtualHubName "[virtual hub name]" -Name "[Virtual hub connection name]"
+ ```
++
+>[!NOTE]
+>- In steps 2.2 and 2.4, the route name must be the same. Otherwise, two routes are created in the routing table: one without an IP address and one with an IP address.
+>- If you run step 2.5, it will remove the previously configured manual route from your routing table.
+>- Make sure you also have access to, and are authorized in, the remote subscription when running the commands above.
+>- The destination prefix can be a single CIDR or multiple CIDRs.
+>- Use the format @("10.19.2.0/24") for a single CIDR, or @("10.19.2.0/24", "10.40.0.0/16") for multiple CIDRs.
+>
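The destination prefixes passed to the commands above must be valid CIDR blocks. As an illustrative aside (this helper is not part of the Azure tooling), Python's standard `ipaddress` module can sanity-check a prefix list before you paste it into the `@(...)` argument:

```python
import ipaddress

def check_prefixes(prefixes):
    """Return normalized CIDR strings, raising ValueError for any invalid
    prefix (including network addresses with host bits set)."""
    return [str(ipaddress.ip_network(p)) for p in prefixes]

print(check_prefixes(["10.19.2.0/24", "10.40.0.0/16"]))
# -> ['10.19.2.0/24', '10.40.0.0/16']
```

A prefix such as `10.19.2.5/24` would raise `ValueError`, flagging a typo before the route table is updated.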
++ ## <a name="troubleshoot"></a>Troubleshooting
vpn-gateway Vpn Gateway Howto Point To Site Resource Manager Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-howto-point-to-site-resource-manager-portal.md
# Configure a point-to-site VPN connection using Azure certificate authentication: Azure portal
-This article helps you securely connect individual clients running Windows, Linux, or macOS to an Azure VNet. Point-to-site VPN connections are useful when you want to connect to your VNet from a remote location, such when you're telecommuting from home or a conference. You can also use P2S instead of a Site-to-Site VPN when you have only a few clients that need to connect to a VNet. point-to-site connections don't require a VPN device or a public-facing IP address. P2S creates the VPN connection over either SSTP (Secure Socket Tunneling Protocol), or IKEv2. For more information about point-to-site VPN, see [About point-to-site VPN](point-to-site-about.md).
+This article helps you securely connect individual clients running Windows, Linux, or macOS to an Azure VNet. point-to-site VPN connections are useful when you want to connect to your VNet from a remote location, such as when you're telecommuting from home or a conference. You can also use P2S instead of a Site-to-Site VPN when you have only a few clients that need to connect to a VNet. point-to-site connections don't require a VPN device or a public-facing IP address. P2S creates the VPN connection over either SSTP (Secure Socket Tunneling Protocol), or IKEv2. For more information about point-to-site VPN, see [About point-to-site VPN](point-to-site-about.md).
:::image type="content" source="./media/vpn-gateway-howto-point-to-site-resource-manager-portal/point-to-site-diagram.png" alt-text="Connect from a computer to an Azure VNet - point-to-site connection diagram.":::
web-application-firewall Waf Front Door Exclusion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-exclusion.md
description: This article provides information on exclusion lists configuration
Previously updated : 02/10/2022 Last updated : 08/03/2022
The following attributes can be added to exclusion lists by name. The values of
* Request cookie name * Query string args name * Request body post args name
+* RequestBodyJSONArgNames
You can specify an exact request header, body, cookie, or query string attribute match. Or, you can optionally specify partial matches. The following operators are the supported match criteria: