Updates from: 02/04/2022 02:08:17
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Access Tokens https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/access-tokens.md
client_id=<application-ID>
&scope=<application-ID-URI>/<scope-name> &response_type=code ```
+This is the interactive part of the flow, where you take action. You're asked to complete the user flow's workflow. This might involve entering your username and password in a sign-in form, or any number of other steps. The steps you complete depend on how the user flow is defined.
+
+If you're testing this GET HTTP request, use your browser.
The response with the authorization code should be similar to this example:
grant_type=authorization_code
&redirect_uri=https://jwt.ms &client_secret=2hMG2-_:y12n10vwH... ```
-
-You should see something similar to the following response:
+
+If you're testing this POST HTTP request, you can use any HTTP client such as [Microsoft PowerShell](/powershell/scripting/overview.md) or [Postman](https://www.postman.com/).
+
+A successful token response looks like this:
```json {
active-directory-b2c Authorization Code Flow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/authorization-code-flow.md
The authorization code flow for single page applications requires some additiona
The `spa` redirect type is backwards compatible with the implicit flow. Apps currently using the implicit flow to get tokens can move to the `spa` redirect URI type without issues and continue using the implicit flow. ## 1. Get an authorization code
-The authorization code flow begins with the client directing the user to the `/authorize` endpoint. This is the interactive part of the flow, where the user takes action. In this request, the client indicates in the `scope` parameter the permissions that it needs to acquire from the user. The following three examples (with line breaks for readability) each use a different user flow.
+The authorization code flow begins with the client directing the user to the `/authorize` endpoint. This is the interactive part of the flow, where the user takes action. In this request, the client indicates in the `scope` parameter the permissions that it needs to acquire from the user. The following three examples (with line breaks for readability) each use a different user flow. If you're testing this GET HTTP request, use your browser.
```http
grant_type=authorization_code&client_id=90c0fe63-bcf2-44d5-8fb7-b8bbc0b29dc6&sco
| redirect_uri | Required | The redirect URI of the application where you received the authorization code. | | code_verifier | Recommended | The same code_verifier that was used to obtain the authorization_code. Required if PKCE was used in the authorization code grant request. For more information, see the [PKCE RFC](https://tools.ietf.org/html/rfc7636). |
+If you're testing this POST HTTP request, you can use any HTTP client such as [Microsoft PowerShell](/powershell/scripting/overview.md) or [Postman](https://www.postman.com/).
+ A successful token response looks like this: ```json
active-directory-b2c Custom Policy Reference Sso https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/custom-policy-reference-sso.md
Title: Single sign-on session management using custom policies
+ Title: Single sign-on session providers using custom policies
-description: Learn how to manage SSO sessions using custom policies in Azure AD B2C.
+description: Learn how to manage single sign-on sessions using custom policies in Azure AD B2C.
Previously updated : 12/07/2020 Last updated : 02/03/2022
-# Single sign-on session management in Azure Active Directory B2C
+# Single sign-on session providers in Azure Active Directory B2C
+The [Configure session behavior in Azure Active Directory B2C](session-behavior.md) article describes session management for your Azure AD B2C custom policy. This article describes how to further configure the single sign-on (SSO) behavior of any individual technical profile within your custom policy.
-[Single sign-on (SSO) session](session-behavior.md) management uses the same semantics as any other technical profile in custom policies. When an orchestration step is executed, the technical profile associated with the step is queried for a `UseTechnicalProfileForSessionManagement` reference. If one exists, the referenced SSO session provider is then checked to see if the user is a session participant. If so, the SSO session provider is used to repopulate the session. Similarly, when the execution of an orchestration step is complete, the provider is used to store information in the session if an SSO session provider has been specified.
+For example, suppose you configure your policy for tenant-wide SSO, but you would like to always perform the multifactor step regardless of an active SSO session. You can achieve this behavior by configuring the session provider of the multifactor technical profile, as the sketch that follows shows.
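+
+As a minimal sketch, assuming the starter pack's `PhoneFactor-InputOrVerify` multifactor technical profile (your profile ID may differ): referencing the `SM-Noop` session management technical profile, described later in this article, forces the multifactor step to run even when an active SSO session exists.
+
+```xml
+<TechnicalProfile Id="PhoneFactor-InputOrVerify">
+  ...
+  <!-- SM-Noop suppresses SSO for this profile, so the MFA step always runs -->
+  <UseTechnicalProfileForSessionManagement ReferenceId="SM-Noop" />
+</TechnicalProfile>
+```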
-Azure AD B2C has defined a number of SSO session providers that can be used:
+You can apply session providers to two flows:
-|Session provider |Scope |
-|||
-|[NoopSSOSessionProvider](#noopssosessionprovider) | None |
-|[DefaultSSOSessionProvider](#defaultssosessionprovider) | Azure AD B2C internal session manager. |
-|[ExternalLoginSSOSessionProvider](#externalloginssosessionprovider) | Between Azure AD B2C and OAuth1, OAuth2, or OpenId Connect identity provider. |
-|[OAuthSSOSessionProvider](#oauthssosessionprovider) | Between an OAuth2 or OpenId connect relying party application and Azure AD B2C. |
-|[SamlSSOSessionProvider](#samlssosessionprovider) | Between Azure AD B2C and SAML identity provider. And between a SAML service provider (relying party application) and Azure AD B2C. |
+- **Fresh logon**
+  - When the user logs in for the first time, there's no session. Any technical profile that uses a session provider becomes a session participant.
+ - The session provider can write claims to the session cookie.
+- **Subsequent logons**
+ - When the user has an active session, claims that are part of the session cookie are read into the claim bag.
+  - Claims that are part of the session cookie can't be updated.
+ - The session provider can issue extra claims into the claim bag, indicating that this technical profile was executed under SSO conditions.
+ - The technical profile can be skipped.
+Depending on the session management provider chosen for a given technical profile, session behavior can be active or suppressed. The following list presents some of the many possible uses of session providers:
+- Prevent or enforce user interface interruptions during subsequent logons (SSO).
+- Remember the chosen identity provider during subsequent logons (SSO).
+- Reduce the number of read operations into the directory during subsequent logons (SSO).
+- Track social identity provider sessions to perform identity provider sign-out.
+- Track logged-in relying party applications for single sign-out.
+## Session providers
-SSO management classes are specified using the `<UseTechnicalProfileForSessionManagement ReferenceId="{ID}" />` element of a technical profile.
+There are five session providers available to manage how a technical profile handles the SSO session. You must choose the most appropriate session provider when configuring your technical profile.
-## Input claims
+The following table shows which session provider to use, depending on the type of technical profile you want to manage. Some session providers allow claims to be read from and written to the session cookie.
-The `InputClaims` element is empty or absent.
+|Session provider |Applicable technical profile types| Purpose |Write claims|Read claims|
+||||||
+|[DefaultSSOSessionProvider](#defaultssosessionprovider) | [Self-asserted](self-asserted-technical-profile.md), [Azure Active Directory](active-directory-technical-profile.md), [Azure AD Multi-Factor Authentication](multi-factor-auth-technical-profile.md), [Claims transformation](claims-transformation-technical-profile.md)| Skips technical profile execution.| Yes | Yes |
+|[ExternalLoginSSOSessionProvider](#externalloginssosessionprovider) | [OAuth1 identity provider](oauth1-technical-profile.md), [OAuth2 identity provider](oauth2-technical-profile.md), [OpenID Connect identity provider](openid-connect-technical-profile.md), [SAML identity provider](saml-identity-provider-technical-profile.md)| Skips the identity provider selection page. Performs single sign-out.|Yes|Yes|
+|[OAuthSSOSessionProvider](#oauthssosessionprovider) |[JWT token issuer](jwt-issuer-technical-profile.md) | Manages the session between an OAuth2 or OpenId Connect relying party and Azure AD B2C. Performs single sign-out. | No | No |
+|[SamlSSOSessionProvider](#samlssosessionprovider) | [SAML token issuer](saml-issuer-technical-profile.md) | Manages the session between a SAML relying party and Azure AD B2C. Performs single sign-out. | No | No |
+|[NoopSSOSessionProvider](#noopssosessionprovider) |Any| Suppresses any technical profile from being part of the session.| No | No |
-## Persisted claims
+The following diagram shows the types of session providers used by Azure AD B2C.
-Claims that need to be returned to the application or used by preconditions in subsequent steps, should be stored in the session or augmented by a read from the user's profile in the directory. Using persisted claims ensures that your authentication journeys won't fail on missing claims. To add claims in the session, use the `<PersistedClaims>` element of the technical profile. When the provider is used to repopulate the session, the persisted claims are added to the claims bag.
+![Diagram showing the Azure AD B2C types of session providers.](./media/custom-policy-reference-sso/azure-ad-b2c-session-providers.png)
-## Output claims
+## Referencing a session provider
-The `<OutputClaims>` is used for retrieving claims from the session.
+To use a session provider in your technical profile:
-## Session providers
+1. Create a session management technical profile of the appropriate type. Note that the Azure AD B2C starter pack includes the most common session management technical profiles. You can reference an existing session management technical profile if applicable.
-### NoopSSOSessionProvider
+ The following XML snippet shows the starter pack's `SM-AAD` session management technical profile. Its session provider type is `DefaultSSOSessionProvider`.
-As the name dictates, this provider does nothing. This provider can be used for suppressing SSO behavior for a specific technical profile. The following `SM-Noop` technical profile is included in the [custom policy starter pack](tutorial-create-user-flows.md?pivots=b2c-custom-policy#custom-policy-starter-pack).
+ ```xml
+ <TechnicalProfile Id="SM-AAD">
+ <DisplayName>Session Management Provider</DisplayName>
+ <Protocol Name="Proprietary" Handler="Web.TPEngine.SSO.DefaultSSOSessionProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
+ <PersistedClaims>
+ <PersistedClaim ClaimTypeReferenceId="objectId" />
+ <PersistedClaim ClaimTypeReferenceId="signInName" />
+ <PersistedClaim ClaimTypeReferenceId="authenticationSource" />
+ <PersistedClaim ClaimTypeReferenceId="identityProvider" />
+ <PersistedClaim ClaimTypeReferenceId="newUser" />
+ <PersistedClaim ClaimTypeReferenceId="executed-SelfAsserted-Input" />
+ </PersistedClaims>
+ <OutputClaims>
+ <OutputClaim ClaimTypeReferenceId="objectIdFromSession" DefaultValue="true" />
+ </OutputClaims>
+ </TechnicalProfile>
+ ```
-```xml
-<TechnicalProfile Id="SM-Noop">
- <DisplayName>Noop Session Management Provider</DisplayName>
- <Protocol Name="Proprietary" Handler="Web.TPEngine.SSO.NoopSSOSessionProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
-</TechnicalProfile>
-```
-### DefaultSSOSessionProvider
+1. Reference the session management technical profile within your technical profile. By doing so, you control the behavior of that technical profile during subsequent logons (SSO).
+
+ To reference a session management technical profile from your technical profile, add the `UseTechnicalProfileForSessionManagement` element. The following example shows the use of the `SM-AAD` session management technical profile. Change the `ReferenceId` to the ID of your session management technical profile.
+
+ ```xml
+ <TechnicalProfile Id="{Technical-profile-ID}">
+ ...
+ <UseTechnicalProfileForSessionManagement ReferenceId="SM-AAD" />
+ </TechnicalProfile>
+ ```
+
+> [!IMPORTANT]
+> When a technical profile doesn't reference any session management provider, the [DefaultSSOSessionProvider](#defaultssosessionprovider) session provider is applied, which may cause unexpected behavior.
-This provider can be used for storing claims in a session. This provider is typically referenced in a technical profile used for managing local and federated accounts. The following `SM-AAD` technical profile is included in the [custom policy starter pack](tutorial-create-user-flows.md?pivots=b2c-custom-policy#custom-policy-starter-pack).
+> [!Note]
+> During a refresh token flow, the session management providers aren't invoked. Any new access token that's issued contains a copy of the originally issued claims.
+
+## Manage session claims
+
+The session management technical profiles control which claims can be read, written, or output during custom policy execution.
+
+Within the session management technical profile, use `PersistedClaims` and `OutputClaims` elements to manage the claims.
+
+- **Persisted claims** - Claims that can be written to the session cookie.
+ - For a claim to be written into the session cookie, it must be part of the current claim bag.
+  - All claims that are written to the session cookie are automatically returned during subsequent logons (single sign-on). You don't need to specify the output claims.
+- **Output claims** - Extra claims that can be output to the claim bag during subsequent logons (single sign-on). Since the output claims aren't returned from the session, you must set a default value.
+
+The persisted and output claims elements are demonstrated in the following XML snippet:
```xml <TechnicalProfile Id="SM-AAD">
This provider can be used for storing claims in a session. This provider is typi
<Protocol Name="Proprietary" Handler="Web.TPEngine.SSO.DefaultSSOSessionProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" /> <PersistedClaims> <PersistedClaim ClaimTypeReferenceId="objectId" />
- <PersistedClaim ClaimTypeReferenceId="signInName" />
- <PersistedClaim ClaimTypeReferenceId="authenticationSource" />
- <PersistedClaim ClaimTypeReferenceId="identityProvider" />
- <PersistedClaim ClaimTypeReferenceId="newUser" />
- <PersistedClaim ClaimTypeReferenceId="executed-SelfAsserted-Input" />
</PersistedClaims> <OutputClaims> <OutputClaim ClaimTypeReferenceId="objectIdFromSession" DefaultValue="true"/>
This provider can be used for storing claims in a session. This provider is typi
</TechnicalProfile> ```
+The `DefaultSSOSessionProvider` and `ExternalLoginSSOSessionProvider` session management providers can be configured to manage claims as follows:
-The following `SM-MFA` technical profile is included in the [custom policy starter pack](tutorial-create-user-flows.md?pivots=b2c-custom-policy#custom-policy-starter-pack) `SocialAndLocalAccountsWithMfa`. This technical profile manages the multi-factor authentication session.
+- **Fresh logon**
+  - The `PersistedClaims` element writes claims into the session cookie. Persisted claims can't be rewritten.
+- **Subsequent logons**
+  - Every claim that is written to the session cookie is output into the claims bag, available to be used in the next orchestration step.
+  - The `OutputClaims` element outputs static claims into the claims bag. Use the `DefaultValue` attribute to set the value of the output claim.
+
+## DefaultSSOSessionProvider
+
+The `DefaultSSOSessionProvider` session provider can be configured to manage claims during subsequent logons (single sign-on) and to allow technical profiles to be skipped. Use the `DefaultSSOSessionProvider` to persist and issue claims that are needed by subsequent [orchestration steps](userjourneys.md) and that won't otherwise be obtained during subsequent logons (single sign-on); for example, claims that might be obtained from reading the user object from the directory.
+
+The following `SM-AAD` technical profile uses the `DefaultSSOSessionProvider` session provider type. The `SM-AAD` technical profile can be found in the [custom policy starter pack](tutorial-create-user-flows.md?pivots=b2c-custom-policy#custom-policy-starter-pack).
```xml
-<TechnicalProfile Id="SM-MFA">
- <DisplayName>Session Mananagement Provider</DisplayName>
+<TechnicalProfile Id="SM-AAD">
+ <DisplayName>Session Management Provider</DisplayName>
<Protocol Name="Proprietary" Handler="Web.TPEngine.SSO.DefaultSSOSessionProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" /> <PersistedClaims>
- <PersistedClaim ClaimTypeReferenceId="Verified.strongAuthenticationPhoneNumber" />
+ <PersistedClaim ClaimTypeReferenceId="objectId" />
+ <PersistedClaim ClaimTypeReferenceId="signInName" />
+ <PersistedClaim ClaimTypeReferenceId="authenticationSource" />
+ <PersistedClaim ClaimTypeReferenceId="identityProvider" />
+ <PersistedClaim ClaimTypeReferenceId="newUser" />
+ <PersistedClaim ClaimTypeReferenceId="executed-SelfAsserted-Input" />
</PersistedClaims> <OutputClaims>
- <OutputClaim ClaimTypeReferenceId="isActiveMFASession" DefaultValue="true"/>
+ <OutputClaim ClaimTypeReferenceId="objectIdFromSession" DefaultValue="true"/>
</OutputClaims> </TechnicalProfile> ```
-### ExternalLoginSSOSessionProvider
+For example, the `SM-AAD` session management technical profile uses the `DefaultSSOSessionProvider` session provider. When applied against the `SelfAsserted-LocalAccountSignin-Email` technical profile from the [custom policy starter pack](tutorial-create-user-flows.md?pivots=b2c-custom-policy#custom-policy-starter-pack), it behaves as follows (a sketch of this reference follows the list):
+
+- **Fresh logon**
+  - `signInName` will be written into the session cookie, because the session management technical profile (SM-AAD) is configured to persist `signInName`, and the technical profile referencing SM-AAD contains an `OutputClaim` for `signInName`. This behavior applies to all claims that meet this pattern.
+- **Subsequent logons**
+  - The technical profile is skipped and the user won't see the sign-in page.
+ - The claim bag will contain the `signInName` value from the session cookie, which was persisted at fresh sign-in, and any other claims that met the pattern to be persisted into the session cookie.
+  - The session management technical profile returns the `objectIdFromSession` claim because output claims of the session provider are processed during subsequent logons (single sign-on). In this case, the presence of the `objectIdFromSession` claim in the claim bag indicates that the user's claims come from the session cookie due to single sign-on.
+
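+The following sketch, based on the starter pack (elements not related to session management are omitted), shows how `SelfAsserted-LocalAccountSignin-Email` references `SM-AAD`. Because `signInName` appears both as an output claim here and as a persisted claim in `SM-AAD`, it's written to the session cookie at fresh logon:
+
+```xml
+<TechnicalProfile Id="SelfAsserted-LocalAccountSignin-Email">
+  ...
+  <OutputClaims>
+    <!-- Matches a PersistedClaim in SM-AAD, so it's persisted to the session cookie -->
+    <OutputClaim ClaimTypeReferenceId="signInName" Required="true" />
+    ...
+  </OutputClaims>
+  <UseTechnicalProfileForSessionManagement ReferenceId="SM-AAD" />
+</TechnicalProfile>
+```
+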
+## ExternalLoginSSOSessionProvider
+
+The `ExternalLoginSSOSessionProvider` session provider is used to skip the identity provider selection screen and to sign out from a federated identity provider. It's typically referenced in a technical profile configured for a federated identity provider, such as Facebook or Azure Active Directory.
-This provider is used to suppress the "choose identity provider" screen and sign-out from a federated identity provider. It is typically referenced in a technical profile configured for a federated identity provider, such as Facebook, or Azure Active Directory. The following `SM-SocialLogin` technical profile is included in the [custom policy starter pack](tutorial-create-user-flows.md?pivots=b2c-custom-policy#custom-policy-starter-pack).
+- **Fresh logon**
+  - The `PersistedClaims` element writes claims into the session cookie. Persisted claims can't be rewritten.
+- **Subsequent logons**
+  - Every claim that is written to the session cookie is output into the claim bag, available to be used in the next orchestration step.
+  - The `OutputClaims` element outputs static claims into the claims bag. Use the `DefaultValue` attribute to set the value of the claim.
+  - When a technical profile that references a session management technical profile contains an `OutputClaim` that has been persisted into the session cookie, the technical profile is skipped.
+
+The following `SM-SocialLogin` technical profile uses the `ExternalLoginSSOSessionProvider` session provider type. The `SM-SocialLogin` technical profile can be found in the [custom policy starter pack](tutorial-create-user-flows.md?pivots=b2c-custom-policy#custom-policy-starter-pack).
```xml <TechnicalProfile Id="SM-SocialLogin">
This provider is used to suppress the "choose identity provider" screen and sign
</TechnicalProfile> ```
-#### Metadata
+The `AlternativeSecurityId` claim is generated when a user signs in with an external identity provider. It represents the external identity provider user's unique identifier. The `AlternativeSecurityId` claim is persisted so that on single sign-on journeys, the user's profile can be read from the directory without any interaction with the federated identity provider.
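+
+As a sketch of how `SM-SocialLogin` persists this claim (see the starter pack for the full profile):
+
+```xml
+<TechnicalProfile Id="SM-SocialLogin">
+  <DisplayName>Session Management Provider</DisplayName>
+  <Protocol Name="Proprietary" Handler="Web.TPEngine.SSO.ExternalLoginSSOSessionProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
+  <PersistedClaims>
+    <!-- Lets SSO journeys read the user's profile from the directory without contacting the federated identity provider -->
+    <PersistedClaim ClaimTypeReferenceId="AlternativeSecurityId" />
+  </PersistedClaims>
+</TechnicalProfile>
+```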
-| Attribute | Required | Description|
-| | | |
-| AlwaysFetchClaimsFromProvider | No | Not currently used, can be ignored. |
+To configure the external session provider, add a reference to `SM-SocialLogin` from your [OAuth1](oauth1-technical-profile.md), [OAuth2](oauth2-technical-profile.md), or [OpenID Connect](openid-connect-technical-profile.md) technical profiles. For example, the `Facebook-OAUTH` technical profile uses the `SM-SocialLogin` session management technical profile. For more information, see the [custom policy starter pack](tutorial-create-user-flows.md?pivots=b2c-custom-policy#custom-policy-starter-pack).
+
+```xml
+<TechnicalProfile Id="Facebook-OAUTH">
+ ...
+ <UseTechnicalProfileForSessionManagement ReferenceId="SM-SocialLogin" />
+</TechnicalProfile>
+```
-### OAuthSSOSessionProvider
+## OAuthSSOSessionProvider
-This provider is used for managing the Azure AD B2C sessions between a OAuth2 or OpenId Connect relying party and Azure AD B2C.
+The `OAuthSSOSessionProvider` session provider is used for managing the Azure AD B2C session between an OAuth2 or OpenId Connect relying party and Azure AD B2C. Azure AD B2C supports [single sign-out](session-behavior.md#single-sign-out), also known as *Single Log-Out (SLO)*. When a user signs out through the [Azure AD B2C sign-out endpoint](openid-connect.md#send-a-sign-out-request), Azure AD B2C clears the user's session cookie from the browser. However, the user might still be signed in to other applications that use Azure AD B2C for authentication.
+
+This type of session provider allows Azure AD B2C to track all OAuth2 or OpenId Connect applications that the user has logged in to. When the user signs out of one application, Azure AD B2C attempts to call the `logout` endpoints of all other known logged-in applications. This functionality is built in to the session provider; there are no persisted or output claims available to be configured. The following `SM-jwt-issuer` technical profile uses the `OAuthSSOSessionProvider` session provider type.
```xml <TechnicalProfile Id="SM-jwt-issuer">
This provider is used for managing the Azure AD B2C sessions between a OAuth2 or
</TechnicalProfile> ```
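+
+Because `OAuthSSOSessionProvider` exposes no persisted or output claims, the profile amounts to little more than the protocol handler. A minimal sketch, following the handler naming pattern of the other providers in this article:
+
+```xml
+<TechnicalProfile Id="SM-jwt-issuer">
+  <DisplayName>Session Management Provider</DisplayName>
+  <Protocol Name="Proprietary" Handler="Web.TPEngine.SSO.OAuthSSOSessionProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
+</TechnicalProfile>
+```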
-### SamlSSOSessionProvider
+The `SM-jwt-issuer` technical profile is referenced from the `JwtIssuer` technical profile:
+
+```xml
+<TechnicalProfile Id="JwtIssuer">
+ ...
+ <UseTechnicalProfileForSessionManagement ReferenceId="SM-jwt-issuer" />
+</TechnicalProfile>
+```
+
+## SamlSSOSessionProvider
+
+The `SamlSSOSessionProvider` session provider is used to manage the session between Azure AD B2C and a federated SAML identity provider, or between a SAML relying party application and Azure AD B2C.
-This provider is used for managing the Azure AD B2C SAML sessions between a relying party application or a federated SAML identity provider. When using the SSO provider for storing a SAML identity provider session, the `RegisterServiceProviders` must be set to `false`. The following `SM-Saml-idp` technical profile is used by the [SAML identity provider](identity-provider-generic-saml.md).
+### SAML identity provider session management
+
+When you use a `SamlSSOSessionProvider` session provider to manage a SAML identity provider session, `RegisterServiceProviders` must be set to `false`.
+
+The following `SM-Saml-idp` technical profile uses the `SamlSSOSessionProvider` session provider type:
```xml <TechnicalProfile Id="SM-Saml-idp">
This provider is used for managing the Azure AD B2C SAML sessions between a rely
</TechnicalProfile> ```
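+
+In the profile, `RegisterServiceProviders` is expressed as a metadata item. A sketch (see the Metadata table later in this article):
+
+```xml
+<TechnicalProfile Id="SM-Saml-idp">
+  <DisplayName>Session Management Provider</DisplayName>
+  <Protocol Name="Proprietary" Handler="Web.TPEngine.SSO.SamlSSOSessionProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
+  <Metadata>
+    <!-- Must be false when the provider manages a SAML identity provider session -->
+    <Item Key="RegisterServiceProviders">false</Item>
+  </Metadata>
+</TechnicalProfile>
+```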
-When using the provider for storing the B2C SAML session, the `RegisterServiceProviders` must set to `true`. SAML session logout requires the `SessionIndex` and `NameID` to complete.
+To use the `SM-Saml-idp` session management technical profile, add a reference to your [SAML identity provider](identity-provider-generic-saml.md) technical profile. For example, the [AD-FS SAML identity provider](identity-provider-adfs-saml.md) `Contoso-SAML2` technical profile uses the `SM-Saml-idp` session management technical profile.
+
+```xml
+<TechnicalProfile Id="Contoso-SAML2">
+ ...
+ <UseTechnicalProfileForSessionManagement ReferenceId="SM-Saml-idp" />
+</TechnicalProfile>
+```
+
+### SAML service provider session management
+
+When you reference a `SamlSSOSessionProvider` session provider to manage a SAML relying party session, `RegisterServiceProviders` must be set to `true`. SAML session sign-out requires the `SessionIndex` and `NameID` to complete.
-The following `SM-Saml-issuer` technical profile is used by [SAML issuer technical profile](saml-service-provider.md)
+The following `SM-Saml-issuer` technical profile uses the `SamlSSOSessionProvider` session provider type:
```xml <TechnicalProfile Id="SM-Saml-issuer">
The following `SM-Saml-issuer` technical profile is used by [SAML issuer technic
</TechnicalProfile> ```
-#### Metadata
+To use the `SM-Saml-issuer` session management technical profile, add a reference to your [SAML token issuer](saml-issuer-technical-profile.md) technical profile. For example, the `Saml2AssertionIssuer` technical profile uses the `SM-Saml-issuer` session management technical profile.
+
+```xml
+<TechnicalProfile Id="Saml2AssertionIssuer">
+ ...
+ <UseTechnicalProfileForSessionManagement ReferenceId="SM-Saml-issuer" />
+</TechnicalProfile>
+```
+
+### Metadata
| Attribute | Required | Description| | | | | | IncludeSessionIndex | No | Not currently used, can be ignored.| | RegisterServiceProviders | No | Indicates that the provider should register all SAML service providers that have been issued an assertion. Possible values: `true` (default), or `false`.|
+## NoopSSOSessionProvider
+
+The `NoopSSOSessionProvider` session provider is used to suppress single sign-on behavior. Technical profiles that use this type of session provider are always processed, even when the user has an active session. This type of session provider can be useful for forcing particular technical profiles to always run, for example:
+
+- [Claims transformation](claims-transformation-technical-profile.md) - To create or transform claims that are later used to determine which orchestration steps to process or skip.
+- [Restful](restful-technical-profile.md) - Fetch updated data from a RESTful service each time the policy runs. You can also call a RESTful service for extended logging and auditing.
+- [Self-asserted](self-asserted-technical-profile.md) - Force the user to provide data each time the policy runs. For example, verify an email address with a one-time passcode, or ask for the user's consent.
+- [Phonefactor](phone-factor-technical-profile.md) - Force the user to perform multifactor authentication as part of a step-up authentication, even during subsequent logons (single sign-on).
+
+This type of session provider doesn't persist claims to the user's session cookie. The following `SM-Noop` technical profile uses the `NoopSSOSessionProvider` session provider type. The `SM-Noop` technical profile can be found in the [custom policy starter pack](tutorial-create-user-flows.md?pivots=b2c-custom-policy#custom-policy-starter-pack).
+
+```xml
+<TechnicalProfile Id="SM-Noop">
+ <DisplayName>Noop Session Management Provider</DisplayName>
+ <Protocol Name="Proprietary" Handler="Web.TPEngine.SSO.NoopSSOSessionProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
+</TechnicalProfile>
+```
+
+To suppress the single sign-on behavior of a technical profile, add a reference to `SM-Noop` to the technical profile. For example, the `AAD-Common` technical profile uses the `SM-Noop` session management technical profile. For more information, see the [custom policy starter pack](tutorial-create-user-flows.md?pivots=b2c-custom-policy#custom-policy-starter-pack).
+
+```xml
+<TechnicalProfile Id="AAD-Common">
+ ...
+ <UseTechnicalProfileForSessionManagement ReferenceId="SM-Noop" />
+</TechnicalProfile>
+```
## Next steps Learn how to [configure session behavior](session-behavior.md).+
active-directory-b2c Enable Authentication Spa App Options https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/enable-authentication-spa-app-options.md
To use a custom domain and your tenant ID in the authentication URL, follow the
The following JavaScript code shows the MSAL configuration object *before* the change:
-```Javascript
+```javascript
const msalConfig = { auth: { ...
const msalConfig = {
The following JavaScript code shows the MSAL configuration object *after* the change:
-```Javascript
+```javascript
const msalConfig = { auth: { ...
After logout, the user is redirected to the URI specified in the `post_logout_re
To support a secured logout redirect URI, follow the steps below: 1. Create a globally accessible variable to store the `id_token`.+ ```javascript let id_token = ""; ``` 1. In the MSAL `handleResponse` function, parse the `id_token` from the `authenticationResult` object into the `id_token` variable.+ ```javascript function handleResponse(response) { if (response !== null) {
To support a secured logout redirect URI, follow the steps below:
``` 1. In the `signOut` function, add the `id_token_hint` parameter to the **logoutRequest** object.+ ```javascript function signOut() { const logoutRequest = {
active-directory-b2c Javascript And Page Layout https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/javascript-and-page-layout.md
Follow these guidelines when you customize the interface of your application usi
A common way to help your customers with their sign-up success is to allow them to see what theyΓÇÖve entered as their password. This option helps users sign up by enabling them to easily see and make corrections to their password if needed. Any field of type password has a checkbox with a **Show password** label. This enables the user to see the password in plain text. Include this code snippet into your sign-up or sign-in template for a self-asserted page:
-```Javascript
+```javascript
function makePwdToggler(pwd){ // Create show-password checkbox var checkbox = document.createElement('input');
setupPwdTogglers();
Include the following code into your page where you want to include a **Terms of Use** checkbox. This checkbox is typically needed in your local account sign-up and social account sign-up pages.
-```Javascript
+```javascript
function addTermsOfUseLink() { // find the terms of use label element var termsOfUseLabel = document.querySelector('#api label[for="termsOfUse"]');
active-directory-b2c Openid Connect Technical Profile https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/openid-connect-technical-profile.md
The technical profile also returns claims that aren't returned by the identity p
| IdTokenAudience | No | The audience of the id_token. If specified, Azure AD B2C checks whether the `aud` claim in a token returned by the identity provider is equal to the one specified in the IdTokenAudience metadata. | | METADATA | Yes | A URL that points to an OpenID Connect identity provider configuration document, which is also known as OpenID well-known configuration endpoint. The URL can contain the `{tenant}` expression, which is replaced with the tenant name. | | authorization_endpoint | No | A URL that points to an OpenID Connect identity provider configuration authorization endpoint. The value of authorization_endpoint metadata takes precedence over the `authorization_endpoint` specified in the OpenID well-known configuration endpoint. The URL can contain the `{tenant}` expression, which is replaced with the tenant name. |
-| end_session_endpoint | No | The URL of the end session endpoint. The value of authorization_endpoint metadata takes precedence over the `end_session_endpoint` specified in the OpenID well-known configuration endpoint. |
+| end_session_endpoint | No | The URL of the end session endpoint. The value of `end_session_endpoint` metadata takes precedence over the `end_session_endpoint` specified in the OpenID well-known configuration endpoint. |
| issuer | No | The unique identifier of an OpenID Connect identity provider. The value of issuer metadata takes precedence over the `issuer` specified in the OpenID well-known configuration endpoint. If specified, Azure AD B2C checks whether the `iss` claim in a token returned by the identity provider is equal to the one specified in the issuer metadata. | | ProviderName | No | The name of the identity provider. | | response_types | No | The response type according to the OpenID Connect Core 1.0 specification. Possible values: `id_token`, `code`, or `token`. |
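+
+As an illustration of how these metadata values are set in an OpenID Connect technical profile (a sketch; the `Contoso-OpenIdConnect` ID and endpoint URLs are hypothetical placeholders):
+
+```xml
+<TechnicalProfile Id="Contoso-OpenIdConnect">
+  ...
+  <Metadata>
+    <Item Key="METADATA">https://login.contoso.com/.well-known/openid-configuration</Item>
+    <!-- Takes precedence over the end_session_endpoint in the well-known configuration document -->
+    <Item Key="end_session_endpoint">https://login.contoso.com/logout</Item>
+  </Metadata>
+  ...
+</TechnicalProfile>
+```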
active-directory How Provisioning Works https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/how-provisioning-works.md
Previously updated : 12/06/2021 Last updated : 02/03/2022
If one of the above four events occurs and the target application does not suppo
If you see an attribute IsSoftDeleted in your attribute mappings, it is used to determine the state of the user and whether to send an update request with active = false to soft delete the user.
+**Deprovisioning events**
+
+The following table describes how you can configure deprovisioning actions with the Azure AD provisioning service. These rules are written with the non-gallery / custom application in mind, but generally apply to applications in the gallery. However, the behavior for gallery applications can differ, because they have been optimized to meet the needs of the application. For example, if the target application doesn't support soft deleting users, the Azure AD provisioning service may always send a request to hard delete users in certain applications rather than soft deleting them.
+
+|Scenario|How to configure in Azure AD|
+|--|--|
+|If a user is unassigned from an app, soft-deleted in Azure AD, or blocked from sign-in, do nothing.|Remove isSoftDeleted from the attribute mappings and/or set the [skip out of scope deletions](skip-out-of-scope-deletions.md) property to true.|
+|If a user is unassigned from an app, soft-deleted in Azure AD, or blocked from sign-in, set a specific attribute to true / false.|Map isSoftDeleted to the attribute that you would like to set to false.|
+|When a user is disabled in Azure AD, unassigned from an app, soft-deleted in Azure AD, or blocked from sign-in, send a DELETE request to the target application.|This is currently supported for a limited set of gallery applications where the functionality is required. It is not configurable by customers.|
+|When a user is deleted in Azure AD, do nothing in the target application.|Ensure that "Delete" is not selected as one of the target object actions in the [attribute configuration experience](skip-out-of-scope-deletions.md).|
+|When a user is deleted in Azure AD, set the value of an attribute in the target application.|Not supported.|
+|When a user is deleted in Azure AD, delete the user in the target application.|This is supported. Ensure that Delete is selected as one of the target object actions in the [attribute configuration experience](skip-out-of-scope-deletions.md).|
+ **Known limitations** * If a user that was previously managed by the provisioning service is unassigned from an app, or from a group assigned to an app we will send a disable request. At that point, the user is not managed by the service and we will not send a delete request when they are deleted from the directory.
active-directory On Premises Ecma Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/on-premises-ecma-troubleshoot.md
Previously updated : 11/19/2021 Last updated : 02/03/2022
To resolve the following issues, run the ECMA host as an admin:
## Turn on verbose logging
-By default, `switchValue` for the ECMA Connector Host is set to `Error`. This setting means it will only log events that are errors. To enable verbose logging for the ECMA host service or wizard, set `switchValue` to `Verbose` in both locations as shown.
+By default, `switchValue` for the ECMA Connector Host is set to `Verbose`. This setting emits detailed logging that helps you troubleshoot issues. You can change the verbosity to `Error` if you would like to limit the logs emitted to errors only. When using the SQL connector without Windows Integrated Auth, we recommend setting the `switchValue` to `Error`, because doing so ensures that the connection string is not emitted in the logs. To change the verbosity to error, update the `switchValue` to "Error" in both places as shown below.
The file location for verbose service logging is C:\Program Files\Microsoft ECMA2Host\Service\Microsoft.ECMA2Host.Service.exe.config. ```
The file location for verbose service logging is C:\Program Files\Microsoft ECMA
</appSettings> <system.diagnostics> <sources>
- <source name="ConnectorsLog" switchValue="Verbose">
+ <source name="ConnectorsLog" switchValue="Error">
<listeners> <add initializeData="ConnectorsLog" type="System.Diagnostics.EventLogTraceListener, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" name="ConnectorsLog" traceOutputOptions="LogicalOperationStack, DateTime, Timestamp, Callstack"> <filter type=""/>
The file location for verbose service logging is C:\Program Files\Microsoft ECMA
</listeners> </source> <!-- Choose one of the following switchTrace: Off, Error, Warning, Information, Verbose -->
- <source name="ECMA2Host" switchValue="Verbose">
+ <source name="ECMA2Host" switchValue="Error">
<listeners> <add initializeData="ECMA2Host" type="System.Diagnos ```
-The file location for verbose wizard logging is C:\Program Files\Microsoft ECMA2Host\Wizard\Microsoft.ECMA2Host.ConfigWizard.exe.config.
+The file location for wizard logging is C:\Program Files\Microsoft ECMA2Host\Wizard\Microsoft.ECMA2Host.ConfigWizard.exe.config.
```
- <source name="ConnectorsLog" switchValue="Verbose">
+ <source name="ConnectorsLog" switchValue="Error">
<listeners> <add initializeData="ConnectorsLog" type="System.Diagnostics.EventLogTraceListener, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" name="ConnectorsLog" traceOutputOptions="LogicalOperationStack, DateTime, Timestamp, Callstack"> <filter type=""/>
The file location for verbose wizard logging is C:\Program Files\Microsoft ECMA2
</listeners> </source> <!-- Choose one of the following switchTrace: Off, Error, Warning, Information, Verbose -->
- <source name="ECMA2Host" switchValue="Verbose">
+ <source name="ECMA2Host" switchValue="Error">
<listeners> <add initializeData="ECMA2Host" type="System.Diagnostics.EventLogTraceListener, System, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" name="ECMA2HostListener" traceOutputOptions="LogicalOperationStack, DateTime, Timestamp, Callstack" /> ```
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/whats-new-docs.md
Title: "What's new in Azure Active Directory application provisioning" description: "New and updated documentation for the Azure Active Directory application provisioning." Previously updated : 01/07/2022 Last updated : 02/03/2022
Welcome to what's new in Azure Active Directory application provisioning documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the provisioning service, see [What's new in Azure Active Directory](../fundamentals/whats-new.md).
+## January 2022
+
+### Updated articles
+
+- [How Azure Active Directory provisioning integrates with SAP SuccessFactors](sap-successfactors-integration-reference.md)
+- [Reference for writing expressions for attribute mappings in Azure Active Directory](functions-for-customizing-application-data.md)
++ ## December 2021 ### Updated articles
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-proxy/whats-new-docs.md
Title: "What's new in Azure Active Directory application proxy" description: "New and updated documentation for the Azure Active Directory application proxy." Previously updated : 01/07/2022 Last updated : 02/03/2022
Welcome to what's new in Azure Active Directory application proxy documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the service, see [What's new in Azure Active Directory](../fundamentals/whats-new.md).
+## January 2022
+
+### Updated articles
+
+- [Secure access to on-premises APIs with Azure Active Directory Application Proxy](application-proxy-secure-api-access.md)
++ ## December 2021 ### Updated articles
active-directory Howto Mfa Getstarted https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-mfa-getstarted.md
description: Learn about deployment considerations and strategy for successful i
Previously updated : 07/22/2021-- Last updated : 02/02/2022++ # Plan an Azure Active Directory Multi-Factor Authentication deployment
-Azure Active Directory (Azure AD) Multi-Factor Authentication (MFA) helps safeguard access to data and applications, providing another layer of security by using a second form of authentication. Organizations can enable multifactor authentication with [Conditional Access](../conditional-access/overview.md) to make the solution fit their specific needs.
+Azure Active Directory (Azure AD) Multi-Factor Authentication helps safeguard access to data and applications, providing another layer of security by using a second form of authentication. Organizations can enable multifactor authentication (MFA) with [Conditional Access](../conditional-access/overview.md) to make the solution fit their specific needs.
-This deployment guide shows you how to plan and implement an [Azure AD MFA](concept-mfa-howitworks.md) roll-out.
+This deployment guide shows you how to plan and implement an [Azure AD Multi-Factor Authentication](concept-mfa-howitworks.md) roll-out.
-## Prerequisites for deploying Azure AD MFA
+## Prerequisites for deploying Azure AD Multi-Factor Authentication
Before you begin your deployment, ensure you meet the following prerequisites for your relevant scenarios.
You can control the authentication methods available in your tenant. For example
| Authentication method | Manage from | Scoping | |--|-||
-| Microsoft Authenticator (Push notification and passwordless phone sign-in) | MFA settings or Authentication methods policy | Authenticator passwordless phone sign-in can be scoped to users and groups |
+| Microsoft Authenticator (Push notification and passwordless phone sign in) | MFA settings or Authentication methods policy | Authenticator passwordless phone sign in can be scoped to users and groups |
| FIDO2 security key | Authentication methods policy | Can be scoped to users and groups | | Software or Hardware OATH tokens | MFA settings | |
-| SMS verification | MFA settings <br/>Manage SMS sign-in for primary authentication in authentication policy | SMS sign-in can be scoped to users and groups. |
+| SMS verification | MFA settings <br/>Manage SMS sign in for primary authentication in authentication policy | SMS sign in can be scoped to users and groups. |
| Voice calls | Authentication methods policy | | ## Plan Conditional Access policies
-Azure AD MFA is enforced with Conditional Access policies. These policies allow you to prompt users for multifactor authentication when needed for security and stay out of users' way when not needed.
+Azure AD Multi-Factor Authentication is enforced with Conditional Access policies. These policies allow you to prompt users for MFA when needed for security and stay out of users' way when not needed.
![Conceptual Conditional Access process flow](media/howto-mfa-getstarted/conditional-access-overview-how-it-works.png) In the Azure portal, you configure Conditional Access policies under **Azure Active Directory** > **Security** > **Conditional Access**.
-To learn more about creating Conditional Access policies, see [Conditional Access policy to prompt for Azure AD MFA when a user signs in to the Azure portal](tutorial-enable-azure-mfa.md). This helps you to:
+To learn more about creating Conditional Access policies, see [Conditional Access policy to prompt for Azure AD Multi-Factor Authentication when a user signs in to the Azure portal](tutorial-enable-azure-mfa.md). This helps you to:
- Become familiar with the user interface - Get a first impression of how Conditional Access works For end-to-end guidance on Azure AD Conditional Access deployment, see the [Conditional Access deployment plan](../conditional-access/plan-conditional-access.md).
-### Common policies for Azure AD MFA
+### Common policies for Azure AD Multi-Factor Authentication
-Common use cases to require Azure AD MFA include:
+Common use cases to require Azure AD Multi-Factor Authentication include:
- For [administrators](../conditional-access/howto-conditional-access-policy-admin-mfa.md) - To [specific applications](tutorial-enable-azure-mfa.md)
To manage your Conditional Access policies, the location condition of a Conditio
### Risk-based policies
-If your organization uses [Azure AD Identity Protection](../identity-protection/overview-identity-protection.md) to detect risk signals, consider using [risk-based policies](../identity-protection/howto-identity-protection-configure-risk-policies.md) instead of named locations. Policies can be created to force password changes when there is a threat of compromised identity or require multifactor authentication when a sign-in is deemed [risky by events](../identity-protection/overview-identity-protection.md#risk-detection-and-remediation) such as leaked credentials, sign-ins from anonymous IP addresses, and more.
+If your organization uses [Azure AD Identity Protection](../identity-protection/overview-identity-protection.md) to detect risk signals, consider using [risk-based policies](../identity-protection/howto-identity-protection-configure-risk-policies.md) instead of named locations. Policies can be created to force password changes when there is a threat of compromised identity or require MFA when a sign-in is deemed [risky by events](../identity-protection/overview-identity-protection.md#risk-detection-and-remediation) such as leaked credentials, sign-ins from anonymous IP addresses, and more.
Risk policies include: -- [Require all users to register for Azure AD MFA](../identity-protection/howto-identity-protection-configure-mfa-policy.md)
+- [Require all users to register for Azure AD Multi-Factor Authentication](../identity-protection/howto-identity-protection-configure-mfa-policy.md)
- [Require a password change for users that are high-risk](../identity-protection/howto-identity-protection-configure-risk-policies.md#enable-policies)-- [Require MFA for users with medium or high sign-in risk](../identity-protection/howto-identity-protection-configure-risk-policies.md#enable-policies)
+- [Require MFA for users with medium or high sign-in risk](../identity-protection/howto-identity-protection-configure-risk-policies.md#enable-policies)
### Convert users from per-user MFA to Conditional Access based MFA
-If your users were enabled using per-user enabled and enforced Azure AD Multi-Factor Authentication the following PowerShell can assist you in making the conversion to Conditional Access based Azure AD Multi-Factor Authentication.
+If your users were enabled by using per-user enabled and enforced MFA, the following PowerShell can assist you in making the conversion to Conditional Access-based MFA.
Run this PowerShell in an ISE window or save as a `.PS1` file to run locally. The operation can only be done by using the [MSOnline module](/powershell/module/msonline#msonline).
Get-MsolUser -All | Set-MfaState -State Disabled
## Plan user session lifetime
-When planning your MFA deployment, it's important to think about how frequently you would like to prompt your users. Asking users for credentials often seems like a sensible thing to do, but it can backfire. If users are trained to enter their credentials without thinking, they can unintentionally supply them to a malicious credential prompt.
-Azure AD has multiple settings that determine how often you need to reauthenticate. Understand the needs of your business and users and configure settings that provide the best balance for your environment.
+When planning your multifactor authentication deployment, it's important to think about how frequently you would like to prompt your users. Asking users for credentials often seems like a sensible thing to do, but it can backfire. If users are trained to enter their credentials without thinking, they can unintentionally supply them to a malicious credential prompt. Azure AD has multiple settings that determine how often you need to reauthenticate. Understand the needs of your business and users and configure settings that provide the best balance for your environment.
-We recommend using devices with Primary Refresh Tokens (PRT) for improved end user experience and reduce the session lifetime with sign-in frequency policy only on specific business use cases.
+We recommend using devices with Primary Refresh Tokens (PRT) for an improved end-user experience, and reducing the session lifetime with a sign-in frequency policy only for specific business use cases.
-For more information, see [Optimize reauthentication prompts and understand session lifetime for Azure AD MFA](concepts-azure-multi-factor-authentication-prompts-session-lifetime.md).
+For more information, see [Optimize reauthentication prompts and understand session lifetime for Azure AD Multi-Factor Authentication](concepts-azure-multi-factor-authentication-prompts-session-lifetime.md).
## Plan user registration
-A major step in every MFA deployment is getting users registered to use MFA. Authentication methods such as Voice and SMS allow pre-registration, while others like the Authenticator App require user interaction. Administrators must determine how users will register their methods.
+A major step in every multifactor authentication deployment is getting users registered to use Azure AD Multi-Factor Authentication. Authentication methods such as voice and SMS allow pre-registration, while others like the Authenticator app require user interaction. Administrators must determine how users will register their methods.
### Combined registration for SSPR and Azure AD MFA
-We recommend using the [combined registration experience](howto-registration-mfa-sspr-combined.md) for Azure AD MFA and [Azure AD self-service password reset (SSPR)](concept-sspr-howitworks.md). SSPR allows users to reset their password in a secure way using the same methods they use for Azure AD MFA. Combined registration is a single step for end users.
+
+We recommend that organizations use the [combined registration experience for Azure AD Multi-Factor Authentication and self-service password reset (SSPR)](howto-registration-mfa-sspr-combined.md). SSPR allows users to reset their password in a secure way using the same methods they use for Azure AD Multi-Factor Authentication. Combined registration is a single step for end users. To make sure you understand the functionality and end-user experience, see the [Combined security information registration concepts](concept-registration-mfa-sspr-combined.md).
+
+It's critical to inform users about upcoming changes, registration requirements, and any necessary user actions. We provide [communication templates](https://aka.ms/mfatemplates) and [user documentation](https://support.microsoft.com/account-billing/set-up-security-info-from-a-sign-in-page-28180870-c256-4ebf-8bd7-5335571bf9a8) to prepare your users for the new experience and help to ensure a successful rollout. Send users to https://myprofile.microsoft.com to register by selecting the **Security Info** link on that page.
### Registration with Identity Protection
-Azure AD Identity Protection contributes both a registration policy for and automated risk detection and remediation policies to the Azure AD MFA story. Policies can be created to force password changes when there is a threat of compromised identity or require MFA when a sign-in is deemed risky.
+
+Azure AD Identity Protection contributes both a registration policy and automated risk detection and remediation policies to the Azure AD Multi-Factor Authentication story. Policies can be created to force password changes when there is a threat of compromised identity or require MFA when a sign-in is deemed risky.
If you use Azure AD Identity Protection, [configure the Azure AD MFA registration policy](../identity-protection/howto-identity-protection-configure-mfa-policy.md) to prompt your users to register the next time they sign in interactively. ### Registration without Identity Protection
-If you don't have licenses that enable Azure AD Identity Protection, users are prompted to register the next time that MFA is required at sign-in.
+
+If you don't have licenses that enable Azure AD Identity Protection, users are prompted to register the next time that MFA is required at sign-in.
To require users to use MFA, you can use Conditional Access policies and target frequently used applications like HR systems. If a user's password is compromised, it could be used to register for MFA, taking control of their account. We therefore recommend [securing the security registration process with conditional access policies](../conditional-access/howto-conditional-access-policy-registration.md) requiring trusted devices and locations. You can further secure the process by also requiring a [Temporary Access Pass](howto-authentication-temporary-access-pass.md): a time-limited passcode issued by an admin that satisfies strong authentication requirements and can be used to onboard other authentication methods, including passwordless ones.
-If you have users registered for MFA using SMS or voice calls, you may want to move them to more secure methods such as the Microsoft Authenticator app. Microsoft now offers a public preview of functionality that allows you to prompt users to set up the Microsoft Authenticator app during sign-in. You can set these prompts by group, controlling who is prompted, enabling targeted campaigns to move users to the more secure method.
+
+If you have users registered for MFA using SMS or voice calls, you may want to move them to more secure methods such as the Microsoft Authenticator app. Microsoft now offers a public preview of functionality that allows you to prompt users to set up the Microsoft Authenticator app during sign-in. You can set these prompts by group, controlling who is prompted, enabling targeted campaigns to move users to the more secure method.
### Plan recovery scenarios + As mentioned before, ensure users are registered for more than one MFA method, so that if one is unavailable, they have a backup. If the user does not have a backup method available, you can:
If the user does not have a backup method available, you can:
- Update their methods as an administrator. To do so, select the user in the Azure portal, then select Authentication methods and update their methods. User communications
-It's critical to inform users about upcoming changes, Azure AD MFA registration requirements, and any necessary user actions.
-We provide [communication templates](https://aka.ms/mfatemplates) and [end-user documentation](https://support.microsoft.com/account-billing/set-up-your-security-info-from-a-sign-in-prompt-28180870-c256-4ebf-8bd7-5335571bf9a8) to help draft your communications. Send users to [https://myprofile.microsoft.com](https://myprofile.microsoft.com/) to register by selecting the **Security Info** link on that page.
-
## Plan integration with on-premises systems

Applications that authenticate directly with Azure AD and have modern authentication (WS-Fed, SAML, OAuth, OpenID Connect) can make use of Conditional Access policies.
-Some legacy and on-premises applications do not authenticate directly against Azure AD and require additional steps to use Azure AD MFA. You can integrate them by using Azure AD Application proxy or [Network policy services](/windows-server/networking/core-network-guide/core-network-guide#BKMK_optionalfeatures).
+Some legacy and on-premises applications do not authenticate directly against Azure AD and require additional steps to use Azure AD Multi-Factor Authentication. You can integrate them by using Azure AD Application proxy or [Network policy services](/windows-server/networking/core-network-guide/core-network-guide#BKMK_optionalfeatures).
### Integrate with AD FS resources
-We recommend migrating applications secured with Active Directory Federation Services (AD FS) to Azure AD. However, if you are not ready to migrate these to Azure AD, you can use the Azure MFA adapter with AD FS 2016 or newer.
-If your organization is federated with Azure AD, you can [configure Azure AD MFA as an authentication provider with AD FS resources](/windows-server/identity/ad-fs/operations/configure-ad-fs-and-azure-mfa) both on-premises and in the cloud.
+We recommend migrating applications secured with Active Directory Federation Services (AD FS) to Azure AD. However, if you are not ready to migrate these to Azure AD, you can use the Azure Multi-Factor Authentication adapter with AD FS 2016 or newer.
+
+If your organization is federated with Azure AD, you can [configure Azure AD Multi-Factor Authentication as an authentication provider with AD FS resources](/windows-server/identity/ad-fs/operations/configure-ad-fs-and-azure-mfa) both on-premises and in the cloud.
-### RADIUS clients and Azure AD MFA
+### RADIUS clients and Azure AD Multi-Factor Authentication
For applications that are using RADIUS authentication, we recommend moving client applications to modern protocols such as SAML, OpenID Connect, or OAuth on Azure AD. If the application cannot be updated, you can deploy [Network Policy Server (NPS) with the Azure MFA extension](howto-mfa-nps-extension.md). The NPS extension acts as an adapter between RADIUS-based applications and Azure AD MFA to provide a second factor of authentication.
Others might include:
- All VPNs
-## Deploy Azure AD MFA
+## Deploy Azure AD Multi-Factor Authentication
-Your MFA rollout plan should include a pilot deployment followed by deployment waves that are within your support capacity. Begin your rollout by applying your Conditional Access policies to a small group of pilot users. After evaluating the effect on the pilot users, process used, and registration behaviors, you can either add more groups to the policy or add more users to the existing groups.
+Your Azure AD Multi-Factor Authentication rollout plan should include a pilot deployment followed by deployment waves that are within your support capacity. Begin your rollout by applying your Conditional Access policies to a small group of pilot users. After evaluating the effect on the pilot users, the process used, and registration behaviors, you can either add more groups to the policy or add more users to the existing groups.
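As one hedged way to stage such a pilot, the sketch below creates a report-only Conditional Access policy scoped to a pilot group with the Microsoft Graph PowerShell SDK. The display name and group ID are placeholders; this is an illustration, not the article's prescribed procedure.

```powershell
# Minimal sketch: report-only Conditional Access policy that requires MFA
# for a pilot group. <pilot-group-id> is a placeholder object ID.
Connect-MgGraph -Scopes 'Policy.ReadWrite.ConditionalAccess'

$policy = @{
    displayName = 'Pilot - Require MFA'                 # placeholder name
    state       = 'enabledForReportingButNotEnforced'   # report-only while piloting
    conditions  = @{
        users          = @{ includeGroups = @('<pilot-group-id>') }
        applications   = @{ includeApplications = @('All') }
        clientAppTypes = @('all')
    }
    grantControls = @{ operator = 'OR'; builtInControls = @('mfa') }
}

Invoke-MgGraphRequest -Method POST `
    -Uri 'https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies' `
    -Body ($policy | ConvertTo-Json -Depth 5) -ContentType 'application/json'
```

Starting in report-only mode lets you evaluate the effect on the pilot group before enforcing the policy.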
Follow the steps below:
1. Configure session lifetime settings
1. Configure Azure AD MFA registration policies
-## Manage Azure AD MFA
-This section provides reporting and troubleshooting information for Azure AD MFA.
+## Manage Azure AD Multi-Factor Authentication
+This section provides reporting and troubleshooting information for Azure AD Multi-Factor Authentication.
### Reporting and Monitoring
-Azure AD has reports that provide technical and business insights, follow the progress of your deployment and check if your users are successful at sign-in with MFA. Have your business and technical application owners assume ownership of and consume these reports based on your organization's requirements.
+Azure AD has reports that provide technical and business insights. Use them to follow the progress of your deployment and to check whether your users sign in successfully with MFA. Have your business and technical application owners assume ownership of and consume these reports based on your organization's requirements.
You can monitor authentication method registration and usage across your organization using the [Authentication Methods Activity dashboard](howto-authentication-methods-activity.md). This helps you understand what methods are being registered and how they're being used.
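If you want the same registration data programmatically, a minimal sketch against the Microsoft Graph reports endpoint (beta at the time of writing) follows; treat the endpoint and property names as assumptions to verify against the Graph API reference.

```powershell
# Minimal sketch: pull per-user registration details for authentication
# methods from the Microsoft Graph beta reports endpoint.
Connect-MgGraph -Scopes 'Reports.Read.All'

$report = Invoke-MgGraphRequest -Method GET `
    -Uri 'https://graph.microsoft.com/beta/reports/credentialUserRegistrationDetails' `
    -OutputType PSObject

# Each entry indicates whether the user has registered for MFA.
$report.value | Select-Object userPrincipalName, isMfaRegistered
```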
-#### Sign-in report to review MFA events
+#### Sign in report to review MFA events
-The Azure AD sign-in reports include authentication details for events when a user is prompted for multi-factor authentication, and if any Conditional Access policies were in use. You can also use PowerShell for reporting on users registered for MFA.
+The Azure AD sign in reports include authentication details for events when a user is prompted for MFA, and whether any Conditional Access policies were in use. You can also use PowerShell for reporting on users registered for Azure AD Multi-Factor Authentication.
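For example, a minimal sketch with the older MSOnline module (assuming it's installed) that lists users who have registered at least one strong authentication method:

```powershell
# Minimal sketch: list users with at least one registered MFA method.
# Assumes the MSOnline module; Connect-MsolService prompts for sign-in.
Connect-MsolService

Get-MsolUser -All |
    Where-Object { $_.StrongAuthenticationMethods.Count -gt 0 } |
    Select-Object UserPrincipalName
```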
NPS extension and AD FS logs can be viewed from **Security** > **MFA** > **Activity report**.
-For more information, and additional MFA reports, see [Review Azure AD Multi-Factor Authentication events](howto-mfa-reporting.md#view-the-azure-ad-sign-ins-report).
+For more information, and additional Azure AD Multi-Factor Authentication reports, see [Review Azure AD Multi-Factor Authentication events](howto-mfa-reporting.md#view-the-azure-ad-sign-ins-report).
-### Troubleshoot Azure AD MFA
-See [Troubleshooting Azure AD MFA](https://support.microsoft.com/help/2937344/troubleshooting-azure-multi-factor-authentication-issues) for common issues.
+### Troubleshoot Azure AD Multi-Factor Authentication
+See [Troubleshooting Azure AD Multi-Factor Authentication](https://support.microsoft.com/help/2937344/troubleshooting-azure-multi-factor-authentication-issues) for common issues.
## Next steps
active-directory Howto Sspr Deployment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-sspr-deployment.md
Previously updated : 07/13/2021 Last updated : 02/02/2022
SSPR has the following key capabilities:

* Self-service allows end users to reset their expired or non-expired passwords without contacting an administrator or helpdesk for support.
-* [Password Writeback](./concept-sspr-writeback.md) allows management of on-premises passwords and resolution of account lockout though the cloud.
+* [Password Writeback](./concept-sspr-writeback.md) allows management of on-premises passwords and resolution of account lockout through the cloud.
* Password management activity reports give administrators insight into password reset and registration activity occurring in their organization.

This deployment guide shows you how to plan and then test an SSPR roll-out.
Before deploying SSPR, you may opt to determine the number and the average cost
#### Enable combined registration for SSPR and MFA
-Microsoft recommends that organizations enable the combined registration experience for SSPR and multi-factor authentication. When you enable this combined registration experience, users need only select their registration information once to enable both features.
+### Combined registration for SSPR and Azure AD Multi-Factor Authentication
-The combined registration experience does not require organizations to enable both SSPR and Azure AD Multi-Factor Authentication. Combined registration provides organizations a better user experience. For more information, see [Combined security information registration](concept-registration-mfa-sspr-combined.md)
+We recommend that organizations use the [combined registration experience for Azure AD Multi-Factor Authentication and self-service password reset (SSPR)](howto-registration-mfa-sspr-combined.md). SSPR allows users to reset their password in a secure way using the same methods they use for Azure AD Multi-Factor Authentication. Combined registration is a single step for end users. To make sure you understand the functionality and end-user experience, see the [Combined security information registration concepts](concept-registration-mfa-sspr-combined.md).
+
+It's critical to inform users about upcoming changes, registration requirements, and any necessary user actions. We provide [communication templates](https://aka.ms/mfatemplates) and [user documentation](https://support.microsoft.com/account-billing/set-up-security-info-from-a-sign-in-page-28180870-c256-4ebf-8bd7-5335571bf9a8) to prepare your users for the new experience and help to ensure a successful rollout. Send users to https://myprofile.microsoft.com to register by selecting the **Security Info** link on that page.
## Plan the deployment project
When technology projects fail, they typically do so due to mismatched expectatio
| Level 2 helpdesk| User administrator |
| SSPR administrator| Global administrator |
-
-### Plan communications
-
-Communication is critical to the success of any new service. You should proactively communicate with your users how their experience will change, when it will change, and how to gain support if they experience issues. Review the [Self-service password reset rollout materials on the Microsoft download center](https://www.microsoft.com/download/details.aspx?id=56768) for ideas on how to plan your end-user communication strategy.
-
### Plan a pilot

We recommend that the initial configuration of SSPR is in a test environment. Start with a pilot group by enabling SSPR for a subset of users in your organization. See [Best practices for a pilot](../fundamentals/active-directory-deployment-plans.md).
To roll back the deployment:
Before deploying, ensure that you have done the following:
-1. Created and begun executing your [communication plan](#plan-communications).
- 1. Determined the appropriate [configuration settings](#plan-configuration).
-1. Identified the users and groups for the [pilot](#plan-a-pilot) and production environments.
+2. Identified the users and groups for the [pilot](#plan-a-pilot) and production environments.
-1. [Determined configuration settings](#plan-configuration) for registration and self-service.
+3. [Determined configuration settings](#plan-configuration) for registration and self-service.
-1. [Configured password writeback](#password-writeback) if you have a hybrid environment.
+4. [Configured password writeback](#password-writeback) if you have a hybrid environment.
**You're now ready to deploy SSPR!**
active-directory Msal Js Avoid Page Reloads https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-js-avoid-page-reloads.md
Set the `redirect_uri` property on config to a simple page, that does not requir
## Initialization in your main app file
-If your app is structured such that there is one central Javascript file that defines the app's initialization, routing, and other stuff, you can conditionally load your app modules based on whether the app is loading in an `iframe` or not. For example:
+If your app is structured such that there is one central JavaScript file that defines the app's initialization, routing, and other application logic, you can conditionally load your app modules based on whether the app is loading in an `iframe` or not. For example:
In AngularJS: app.js
active-directory Users Custom Security Attributes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/enterprise-users/users-custom-security-attributes.md
description: Assign or remove custom security attributes for a user in Azure Act
Previously updated : 11/16/2021 Last updated : 02/03/2022
To manage custom security attribute assignments for users in your Azure AD organ
#### Get the custom security attribute assignments for a user
+Use the [Get-AzureADMSUser](/powershell/module/azuread/get-azureadmsuser) command to get the custom security attribute assignments for a user.
+
```powershell
$user1 = Get-AzureADMSUser -Id dbb22700-a7de-4372-ae78-0098ee60e55e -Select CustomSecurityAttributes
$user1.CustomSecurityAttributes
```
$user1.CustomSecurityAttributes
#### Assign a custom security attribute with a multi-string value to a user
+Use the [Set-AzureADMSUser](/powershell/module/azuread/set-azureadmsuser) command to assign a custom security attribute with a multi-string value to a user.
+
- Attribute set: `Engineering`
- Attribute: `Project`
- Attribute data type: Collection of Strings
Set-AzureADMSUser -Id dbb22700-a7de-4372-ae78-0098ee60e55e -CustomSecurityAttrib
#### Update a custom security attribute with a multi-string value for a user
+Use the [Set-AzureADMSUser](/powershell/module/azuread/set-azureadmsuser) command to update a custom security attribute with a multi-string value for a user.
+
- Attribute set: `Engineering`
- Attribute: `Project`
- Attribute data type: Collection of Strings
Set-AzureADMSUser -Id dbb22700-a7de-4372-ae78-0098ee60e55e -CustomSecurityAttrib
## Microsoft Graph API
-To manage custom security attribute assignments for users in your Azure AD organization, you can use the Microsoft Graph API. The following API calls can be made to manage assignments.
+To manage custom security attribute assignments for users in your Azure AD organization, you can use the Microsoft Graph API. The following API calls can be made to manage assignments. For more information, see [Assign, update, or remove custom security attributes using the Microsoft Graph API](/graph/custom-security-attributes-examples).
#### Get the custom security attribute assignments for a user
+Use the [Get a user](/graph/api/user-get?view=graph-rest-beta&preserve-view=true) API to get the custom security attribute assignments for a user.
+
```http
GET https://graph.microsoft.com/beta/users/{id}?$select=customSecurityAttributes
```
If there are no custom security attributes assigned to the user or if the callin
#### Assign a custom security attribute with a string value to a user
+Use the [Update user](/graph/api/user-update?view=graph-rest-beta&preserve-view=true) API to assign a custom security attribute with a string value to a user.
+
- Attribute set: `Engineering`
- Attribute: `ProjectDate`
- Attribute data type: String
PATCH https://graph.microsoft.com/beta/users/{id}
#### Assign a custom security attribute with a multi-string value to a user
+Use the [Update user](/graph/api/user-update?view=graph-rest-beta&preserve-view=true) API to assign a custom security attribute with a multi-string value to a user.
+
- Attribute set: `Engineering`
- Attribute: `Project`
- Attribute data type: Collection of Strings
PATCH https://graph.microsoft.com/beta/users/{id}
#### Assign a custom security attribute with an integer value to a user
+Use the [Update user](/graph/api/user-update?view=graph-rest-beta&preserve-view=true) API to assign a custom security attribute with an integer value to a user.
+
- Attribute set: `Engineering`
- Attribute: `NumVendors`
- Attribute data type: Integer
PATCH https://graph.microsoft.com/beta/users/{id}
#### Assign a custom security attribute with a multi-integer value to a user
+Use the [Update user](/graph/api/user-update?view=graph-rest-beta&preserve-view=true) API to assign a custom security attribute with a multi-integer value to a user.
+
- Attribute set: `Engineering`
- Attribute: `CostCenter`
- Attribute data type: Collection of Integers
PATCH https://graph.microsoft.com/beta/users/{id}
#### Assign a custom security attribute with a Boolean value to a user
+Use the [Update user](/graph/api/user-update?view=graph-rest-beta&preserve-view=true) API to assign a custom security attribute with a Boolean value to a user.
+
- Attribute set: `Engineering`
- Attribute: `Certification`
- Attribute data type: Boolean
PATCH https://graph.microsoft.com/beta/users/{id}
#### Update a custom security attribute with an integer value for a user
+Use the [Update user](/graph/api/user-update?view=graph-rest-beta&preserve-view=true) API to update a custom security attribute with an integer value for a user.
+
- Attribute set: `Engineering`
- Attribute: `NumVendors`
- Attribute data type: Integer
PATCH https://graph.microsoft.com/beta/users/{id}
#### Update a custom security attribute with a Boolean value for a user
+Use the [Update user](/graph/api/user-update?view=graph-rest-beta&preserve-view=true) API to update a custom security attribute with a Boolean value for a user.
+
- Attribute set: `Engineering`
- Attribute: `Certification`
- Attribute data type: Boolean
PATCH https://graph.microsoft.com/beta/users/{id}
#### Remove a single-valued custom security attribute assignment from a user
-To remove a single-valued custom security attribute assignment, set the value to null.
+Use the [Update user](/graph/api/user-update?view=graph-rest-beta&preserve-view=true) API to remove a single-valued custom security attribute assignment from a user by setting the value to null.
- Attribute set: `Engineering`
- Attribute: `ProjectDate`
PATCH https://graph.microsoft.com/beta/users/{id}
#### Remove a multi-valued custom security attribute assignment from a user
-To remove a multi-valued custom security attribute assignment, set the value to an empty collection.
+Use the [Update user](/graph/api/user-update?view=graph-rest-beta&preserve-view=true) API to remove a multi-valued custom security attribute assignment from a user by setting the value to an empty collection.
- Attribute set: `Engineering`
- Attribute: `Project`
PATCH https://graph.microsoft.com/beta/users/{id}
#### Filter all users with an attribute that equals a value
-The following example, retrieves users with an `AppCountry` attribute that equals `Canada`. You must add `ConsistencyLevel: eventual` in the header. You must also include `$count=true` to ensure the request is routed correctly.
+Use the [List users](/graph/api/user-list?view=graph-rest-beta&preserve-view=true) API to filter all users with an attribute that equals a value. The following example retrieves users with an `AppCountry` attribute that equals `Canada`. You must add `ConsistencyLevel: eventual` in the header. You must also include `$count=true` to ensure the request is routed correctly.
- Attribute set: `Marketing`
- Attribute: `AppCountry`
GET https://graph.microsoft.com/beta/users?$count=true&$select=id,displayName,cu
#### Filter all users with an attribute that starts with a value
-The following example, retrieves users with an `EmployeeId` attribute that starts with `111`. You must add `ConsistencyLevel: eventual` in the header. You must also include `$count=true` to ensure the request is routed correctly.
+Use the [List users](/graph/api/user-list?view=graph-rest-beta&preserve-view=true) API to filter all users with an attribute that starts with a value. The following example retrieves users with an `EmployeeId` attribute that starts with `111`. You must add `ConsistencyLevel: eventual` in the header. You must also include `$count=true` to ensure the request is routed correctly.
- Attribute set: `Marketing`
- Attribute: `EmployeeId`
GET https://graph.microsoft.com/beta/users?$count=true&$select=id,displayName,cu
#### Filter all users with an attribute that does not equal a value
-The following example, retrieves users with a `AppCountry` attribute that does not equal `Canada`. This query will also retrieve users that do not have the `AppCountry` attribute assigned. You must add `ConsistencyLevel: eventual` in the header. You must also include `$count=true` to ensure the request is routed correctly.
+Use the [List users](/graph/api/user-list?view=graph-rest-beta&preserve-view=true) API to filter all users with an attribute that does not equal a value. The following example retrieves users with an `AppCountry` attribute that does not equal `Canada`. This query will also retrieve users that do not have the `AppCountry` attribute assigned. You must add `ConsistencyLevel: eventual` in the header. You must also include `$count=true` to ensure the request is routed correctly. A PowerShell sketch follows the example values below.
- Attribute set: `Marketing`
- Attribute: `AppCountry`
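As a hedged sketch of sending this request with the required header from PowerShell (assuming the Microsoft.Graph SDK; the attribute names match the example above):

```powershell
# Minimal sketch: advanced Graph query filtering on a custom security
# attribute. ConsistencyLevel: eventual and $count=true are required.
Connect-MgGraph -Scopes 'User.Read.All', 'CustomSecAttributeAssignment.Read.All'

$uri = 'https://graph.microsoft.com/beta/users?$count=true' +
       '&$select=id,displayName,customSecurityAttributes' +
       '&$filter=customSecurityAttributes/Marketing/AppCountry ne ''Canada'''

Invoke-MgGraphRequest -Method GET -Uri $uri -Headers @{ ConsistencyLevel = 'eventual' }
```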
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/whats-new-docs.md
Title: "What's new in Azure Active Directory external identities" description: "New and updated documentation for the Azure Active Directory external identities." Previously updated : 01/07/2022 Last updated : 02/03/2022
Welcome to what's new in Azure Active Directory external identities documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the external identities service, see [What's new in Azure Active Directory](../fundamentals/whats-new.md).
+## January 2022
+
+### Updated articles
+
+- [Properties of an Azure Active Directory B2B collaboration user](user-properties.md)
+- [Azure Active Directory B2B collaboration invitation redemption](redemption-experience.md)
+
+
## December 2021

### Updated articles
active-directory Custom Security Attributes Add https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/custom-security-attributes-add.md
Previously updated : 11/16/2021 Last updated : 02/03/2022
To manage custom security attributes in your Azure AD organization, you can also
#### Get all attribute sets
+Use the [Get-AzureADMSAttributeSet](/powershell/module/azuread/get-azureadmsattributeset) command without any parameters to get all attribute sets.
+
```powershell
Get-AzureADMSAttributeSet
```

#### Get an attribute set
+Use the [Get-AzureADMSAttributeSet](/powershell/module/azuread/get-azureadmsattributeset) command to get an attribute set.
+
- Attribute set: `Engineering`

```powershell
Get-AzureADMSAttributeSet -Id "Engineering"
#### Add an attribute set
+Use the [New-AzureADMSAttributeSet](/powershell/module/azuread/new-azureadmsattributeset) command to add a new attribute set.
+
- Attribute set: `Engineering`

```powershell
New-AzureADMSAttributeSet -Id "Engineering" -Description "Attributes for enginee
#### Update an attribute set
+Use the [Set-AzureADMSAttributeSet](/powershell/module/azuread/set-azureadmsattributeset) command to update an attribute set.
+
- Attribute set: `Engineering`

```powershell
Set-AzureADMSAttributeSet -Id "Engineering" -MaxAttributesPerSet 20
#### Get all custom security attributes
+Use the [Get-AzureADMSCustomSecurityAttributeDefinition](/powershell/module/azuread/get-azureadmscustomsecurityattributedefinition) command without any parameters to get all custom security attribute definitions.
+
```powershell
Get-AzureADMSCustomSecurityAttributeDefinition
```

#### Get a custom security attribute
+Use the [Get-AzureADMSCustomSecurityAttributeDefinition](/powershell/module/azuread/get-azureadmscustomsecurityattributedefinition) command to get a custom security attribute definition.
+
- Attribute set: `Engineering`
- Attribute: `ProjectDate`
Get-AzureADMSCustomSecurityAttributeDefinition -Id "Engineering_ProjectDate"
#### Add a custom security attribute
+Use the [New-AzureADMSCustomSecurityAttributeDefinition](/powershell/module/azuread/new-azureadmscustomsecurityattributedefinition) command to add a new custom security attribute definition.
+
- Attribute set: `Engineering`
- Attribute: `ProjectDate`
- Attribute data type: String
New-AzureADMSCustomSecurityAttributeDefinition -AttributeSet "Engineering" -Name
#### Update a custom security attribute
+Use the [Set-AzureADMSCustomSecurityAttributeDefinition](/powershell/module/azuread/set-azureadmscustomsecurityattributedefinition) command to update a custom security attribute definition.
+
- Attribute set: `Engineering`
- Attribute: `ProjectDate`
Set-AzureADMSCustomSecurityAttributeDefinition -Id "Engineering_ProjectDate" -De
#### Deactivate a custom security attribute
+Use the [Set-AzureADMSCustomSecurityAttributeDefinition](/powershell/module/azuread/set-azureadmscustomsecurityattributedefinition) command to deactivate a custom security attribute definition.
+
- Attribute set: `Engineering`
- Attribute: `Project`
Set-AzureADMSCustomSecurityAttributeDefinition -Id "Engineering_Project" -Status
#### Get all predefined values
+Use the [Get-AzureADMSCustomSecurityAttributeDefinitionAllowedValue](/powershell/module/azuread/get-azureadmscustomsecurityattributedefinitionallowedvalue) command to get all predefined values for a custom security attribute definition.
+
- Attribute set: `Engineering`
- Attribute: `Project`
Get-AzureADMSCustomSecurityAttributeDefinitionAllowedValue -CustomSecurityAttrib
#### Get a predefined value
+Use the [Get-AzureADMSCustomSecurityAttributeDefinitionAllowedValue](/powershell/module/azuread/get-azureadmscustomsecurityattributedefinitionallowedvalue) command to get a predefined value for a custom security attribute definition.
+
- Attribute set: `Engineering`
- Attribute: `Project`
- Predefined value: `Alpine`
Get-AzureADMSCustomSecurityAttributeDefinitionAllowedValue -CustomSecurityAttrib
#### Add a predefined value
+Use the [Add-AzureADMScustomSecurityAttributeDefinitionAllowedValues](/powershell/module/azuread/add-azureadmscustomsecurityattributedefinitionallowedvalues) command to add a predefined value for a custom security attribute definition.
+
- Attribute set: `Engineering`
- Attribute: `Project`
- Predefined value: `Alpine`
Add-AzureADMScustomSecurityAttributeDefinitionAllowedValues -CustomSecurityAttri
#### Deactivate a predefined value
+Use the [Set-AzureADMSCustomSecurityAttributeDefinitionAllowedValue](/powershell/module/azuread/set-azureadmscustomsecurityattributedefinitionallowedvalue) command to deactivate a predefined value for a custom security attribute definition.
+
- Attribute set: `Engineering`
- Attribute: `Project`
- Predefined value: `Alpine`
To manage custom security attributes in your Azure AD organization, you can also
#### Get all attribute sets
+Use the [List attributeSets](/graph/api/directory-list-attributesets) API to get all attribute sets.
+
```http
GET https://graph.microsoft.com/beta/directory/attributeSets
```

#### Get top attribute sets
+Use the [List attributeSets](/graph/api/directory-list-attributesets) API to get the top attribute sets.
+
```http
GET https://graph.microsoft.com/beta/directory/attributeSets?$top=10
```

#### Get attribute sets in order
+Use the [List attributeSets](/graph/api/directory-list-attributesets) API to get attribute sets in order.
+
```http
GET https://graph.microsoft.com/beta/directory/attributeSets?$orderBy=id
```

#### Get an attribute set
+Use the [Get attributeSet](/graph/api/attributeset-get) API to get an attribute set.
+
- Attribute set: `Engineering`

```http
GET https://graph.microsoft.com/beta/directory/attributeSets/Engineering
#### Add an attribute set
+Use the [Create attributeSet](/graph/api/directory-post-attributesets) API to add a new attribute set.
+
- Attribute set: `Engineering`

```http
POST https://graph.microsoft.com/beta/directory/attributeSets
#### Update an attribute set
+Use the [Update attributeSet](/graph/api/attributeset-update) API to update an attribute set.
+
- Attribute set: `Engineering`

```http
PATCH https://graph.microsoft.com/beta/directory/attributeSets/Engineering
#### Get all custom security attributes
+Use the [List customSecurityAttributeDefinitions](/graph/api/directory-list-customsecurityattributedefinitions) API to get all custom security attribute definitions.
+
```http
GET https://graph.microsoft.com/beta/directory/customSecurityAttributeDefinitions
```

#### Filter custom security attributes
+Use the [List customSecurityAttributeDefinitions](/graph/api/directory-list-customsecurityattributedefinitions) API to filter custom security attribute definitions.
+
- Filter: Attribute name eq 'Project' and status eq 'Available'

```http
GET https://graph.microsoft.com/beta/directory/customSecurityAttributeDefinition
#### Get a custom security attribute
+Use the [Get customSecurityAttributeDefinition](/graph/api/customsecurityattributedefinition-get) API to get a custom security attribute definition.
+
- Attribute set: `Engineering`
- Attribute: `ProjectDate`
GET https://graph.microsoft.com/beta/directory/customSecurityAttributeDefinition
#### Add a custom security attribute
+Use the [Create customSecurityAttributeDefinition](/graph/api/directory-post-customsecurityattributedefinitions) API to add a new custom security attribute definition.
+
- Attribute set: `Engineering`
- Attribute: `ProjectDate`
- Attribute data type: String
POST https://graph.microsoft.com/beta/directory/customSecurityAttributeDefinitio
#### Add a custom security attribute that supports multiple predefined values
+Use the [Create customSecurityAttributeDefinition](/graph/api/directory-post-customsecurityattributedefinitions) API to add a new custom security attribute definition that supports multiple predefined values.
+
- Attribute set: `Engineering`
- Attribute: `Project`
- Attribute data type: Collection of Strings
POST https://graph.microsoft.com/beta/directory/customSecurityAttributeDefinitio
#### Update a custom security attribute
+Use the [Update customSecurityAttributeDefinition](/graph/api/customsecurityattributedefinition-update) API to update a custom security attribute definition.
+
- Attribute set: `Engineering`
- Attribute: `ProjectDate`
PATCH https://graph.microsoft.com/beta/directory/customSecurityAttributeDefiniti
#### Deactivate a custom security attribute
+Use the [Update customSecurityAttributeDefinition](/graph/api/customsecurityattributedefinition-update) API to deactivate a custom security attribute definition.
+
- Attribute set: `Engineering`
- Attribute: `Project`
PATCH https://graph.microsoft.com/beta/directory/customSecurityAttributeDefiniti
}
```
-#### Get the properties of a predefined value
+#### Get all predefined values
+
+Use the [List allowedValues](/graph/api/customsecurityattributedefinition-list-allowedvalues) API to get all predefined values for a custom security attribute definition.
- Attribute set: `Engineering`
- Attribute: `Project`
-- Predefined value: `Alpine`

```http
-GET https://graph.microsoft.com/beta/directory/customSecurityAttributeDefinitions/Engineering_Project/allowedValues/Alpine
+GET https://graph.microsoft.com/beta/directory/customSecurityAttributeDefinitions/Engineering_Project/allowedValues
```
-#### Get all predefined values
+#### Get a predefined value
+
+Use the [Get allowedValue](/graph/api/allowedvalue-get) API to get a predefined value for a custom security attribute definition.
- Attribute set: `Engineering`
- Attribute: `Project`
+- Predefined value: `Alpine`
```http
-GET https://graph.microsoft.com/beta/directory/customSecurityAttributeDefinitions/Engineering_Project/allowedValues
+GET https://graph.microsoft.com/beta/directory/customSecurityAttributeDefinitions/Engineering_Project/allowedValues/Alpine
```

#### Add a predefined value
+Use the [Create allowedValue](/graph/api/customsecurityattributedefinition-post-allowedvalues) API to add a predefined value for a custom security attribute definition.
+
You can add predefined values for custom security attributes that have `usePreDefinedValuesOnly` set to `true`.

- Attribute set: `Engineering`
POST https://graph.microsoft.com/beta/directory/customSecurityAttributeDefinitio
#### Deactivate a predefined value
+Use the [Update allowedValue](/graph/api/allowedvalue-update) API to deactivate a predefined value for a custom security attribute definition.
+
- Attribute set: `Engineering`
- Attribute: `Project`
- Predefined value: `Alpine`
active-directory Resilience Client App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/resilience-client-app.md
catch(MsalUiRequiredException ex)
}
```
-## [Javascript](#tab/javascript)
+## [JavaScript](#tab/javascript)
```javascript
return myMSALObj.acquireTokenSilent(request).catch(error => {
active-directory Plan Connect Topologies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/plan-connect-topologies.md
This topology implements the following use cases:
* It is supported to have different sync scopes and different sync rules for different tenants.
* Only one Azure AD tenant sync can be configured to write back to Active Directory for the same object. This includes device and group writeback as well as Hybrid Exchange configurations – these features can only be configured in one tenant. The only exception here is Password Writeback – see below.
* It is supported to configure Password Hash Sync from Active Directory to multiple Azure AD tenants for the same user object. If Password Hash Sync is enabled for a tenant, then Password Writeback may be enabled as well, and this can be done on multiple tenants: if the password is changed on one tenant, then password writeback will update it in Active Directory, and Password Hash Sync will update the password in the other tenants.
-* It is not supported to use the same custom domain name in more than one Azure AD tenant, with one exception: it is supported to use a custom domain name in the Azure Commercial environment and use that same domain name in the Azure GCCH environment. Note that the custom domain name MUST exist in Commercial before it can be verified in the GCCH environment.
+* It is not supported to add and verify the same custom domain name in more than one Azure AD tenant, with one exception: it is supported to [add and verify](../fundamentals/add-custom-domain.md) a custom domain name in a tenant in the Azure Commercial environment and subsequently add and verify that same domain name in a tenant in the Azure Government environment. Note that the custom domain name **MUST** exist in the Commercial Azure AD tenant before it can be verified in the Azure Government Azure AD tenant.
* It is not supported to configure hybrid experiences such as Seamless SSO and Hybrid Azure AD Join on more than one tenant. Doing so would overwrite the configuration of the other tenant and would make it unusable.
* You can synchronize device objects to more than one tenant but only one tenant can be configured to trust a device.
* Each Azure AD Connect instance should be running on a domain-joined machine.
-Related information to enable cross cloud federation can be found in [the dual federation article](./how-to-connect-fed-single-adfs-multitenant-federation.md)
+Related information: [Federate multiple instances of Azure AD with single instance of AD FS](./how-to-connect-fed-single-adfs-multitenant-federation.md)
>[!NOTE]
>Global Address List Synchronization (GalSync) is not done automatically in this topology and requires an additional custom MIM implementation to ensure each tenant has a complete Global Address List (GAL) in Exchange Online and Skype for Business Online.
active-directory Custom Security Attributes Apps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/custom-security-attributes-apps.md
Previously updated : 11/16/2021 Last updated : 02/03/2022
To manage custom security attribute assignments for applications in your Azure A
#### Get the custom security attribute assignments for an application (service principal)
+Use the [Get-AzureADMSServicePrincipal](/powershell/module/azuread/get-azureadmsserviceprincipal) command to get the custom security attribute assignments for an application (service principal).
+
```powershell
Get-AzureADMSServicePrincipal -Select CustomSecurityAttributes
Get-AzureADMSServicePrincipal -Id 7d194b0c-bf17-40ff-9f7f-4b671de8dc20 -Select "CustomSecurityAttributes, Id"
```
Get-AzureADMSServicePrincipal -Id 7d194b0c-bf17-40ff-9f7f-4b671de8dc20 -Select
#### Assign a custom security attribute with a multi-string value to an application (service principal)
+Use the [Set-AzureADMSServicePrincipal](/powershell/module/azuread/set-azureadmsserviceprincipal) command to assign a custom security attribute with a multi-string value to an application (service principal).
+
- Attribute set: `Engineering`
- Attribute: `Project`
- Attribute data type: Collection of Strings
Set-AzureADMSServicePrincipal -Id 7d194b0c-bf17-40ff-9f7f-4b671de8dc20 -CustomSe
#### Update a custom security attribute with a multi-string value for an application (service principal)
+Use the [Set-AzureADMSServicePrincipal](/powershell/module/azuread/set-azureadmsserviceprincipal) command to update a custom security attribute with a multi-string value for an application (service principal).
+
- Attribute set: `Engineering`
- Attribute: `Project`
- Attribute data type: Collection of Strings
Set-AzureADMSServicePrincipal -Id 7d194b0c-bf17-40ff-9f7f-4b671de8dc20 -CustomSe
To manage custom security attribute assignments for applications in your Azure AD organization, you can use the Microsoft Graph API. The following API calls can be made to manage assignments.
+For other similar Microsoft Graph API examples for users, see [Assign or remove custom security attributes for a user](../enterprise-users/users-custom-security-attributes.md#microsoft-graph-api) and [Assign, update, or remove custom security attributes using the Microsoft Graph API](/graph/custom-security-attributes-examples).
+ #### Get the custom security attribute assignments for an application (service principal)
+Use the [Get servicePrincipal](/graph/api/serviceprincipal-get?view=graph-rest-beta&preserve-view=true) API to get the custom security attribute assignments for an application (service principal).
+ ```http GET https://graph.microsoft.com/beta/servicePrincipals/{id}?$select=customSecurityAttributes ```
If there are no custom security attributes assigned to the application or if the
#### Assign a custom security attribute with a string value to an application (service principal)
+Use the [Update servicePrincipal](/graph/api/serviceprincipal-update?view=graph-rest-beta&preserve-view=true) API to assign a custom security attribute with a string value to an application (service principal).
+
- Attribute set: `Engineering`
- Attribute: `ProjectDate`
- Attribute data type: String
PATCH https://graph.microsoft.com/beta/servicePrincipals/{id}
}
```
-#### Other examples
-
-For other similar Microsoft Graph API examples for users, see [Assign or remove custom security attributes for a user](../enterprise-users/users-custom-security-attributes.md#microsoft-graph-api).
- ## Next steps - [Add or deactivate custom security attributes in Azure AD](../fundamentals/custom-security-attributes-add.md)
active-directory F5 Big Ip Oracle Enterprise Business Suite Easy Button https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/f5-big-ip-oracle-enterprise-business-suite-easy-button.md
Enabling BIG-IP published services for Azure Active Directory (Azure AD) SSO pro
* Full SSO between Azure AD and BIG-IP published services
-* Manage Identities and access from a single control plane, [the Azure portal](https://portal.azure.com/)
+* Manage Identities and access from a single control plane, the [Azure portal](https://portal.azure.com/)
To learn about all the benefits, see the article on [F5 BIG-IP and Azure AD integration](f5-aad-integration.md) and [what is application access and single sign-on with Azure AD](/azure/active-directory/active-directory-appssoaccess-whatis).
A BIG-IP must also be registered as a client in Azure AD, before it is allowed t
## Configure Easy Button
-Initiate **Easy Button** configuration to set up a SAML Service Provider (SP) and Azure AD as an Identity Provider (IdP) for your application.
+Initiate the **Easy Button** configuration to set up a SAML Service Provider (SP) and Azure AD as an Identity Provider (IdP) for your application.
1. Navigate to **Access > Guided Configuration > Microsoft Integration** and select **Azure AD Application**.
When a user successfully authenticates, Azure AD issues a SAML token with a defa
![Screenshot for Azure configuration – User attributes & claims](./media/f5-big-ip-easy-button-ldap/user-attributes-claims.png)
-You can include additional Azure AD attributes if necessary, but the example PeopleSoft scenario only requires the default attributes.
+You can include additional Azure AD attributes if necessary, but the Oracle EBS scenario only requires the default attributes.
#### Additional User Attributes
The **Application Pool tab** details the services behind a BIG-IP, represented a
#### Single Sign-On & HTTP Headers
-The **Easy Button wizard** supports Kerberos, OAuth Bearer, and HTTP authorization headers for SSO to published applications. As the PeopleSoft application expects headers, enable **HTTP Headers** and enter the following properties.
+The **Easy Button wizard** supports Kerberos, OAuth Bearer, and HTTP authorization headers for SSO to published applications. As the Oracle EBS application expects headers, enable **HTTP Headers** and enter the following properties.
* **Header Operation:** replace * **Header Name:** USER_NAME
Select **Deploy** to commit all settings and verify that the application has app
## Next steps
-From a browser, connect to the **PeopleSoft application's external URL** or select the application's icon in the [Microsoft MyApps portal](https://myapps.microsoft.com/). After authenticating to Azure AD, you'll be redirected to the BIG-IP virtual server for the application and automatically signed in through SSO.
+From a browser, connect to the **Oracle EBS application's external URL** or select the application's icon in the [Microsoft MyApps portal](https://myapps.microsoft.com/). After authenticating to Azure AD, you'll be redirected to the BIG-IP virtual server for the application and automatically signed in through SSO.
For increased security, organizations using this pattern could also consider blocking all direct access to the application, thereby forcing a strict path through the BIG-IP.

## Advanced deployment
-There may be cases where the Guided Configuration templates lack the flexibility to achieve more specific requirements. For those scenarios, see [Advanced Configuration for kerberos-based SSO](./f5-big-ip-kerberos-advanced.md). Alternatively, the BIG-IP gives the option to disable **Guided Configuration's strict management mode**. This allows you to manually tweak your configurations, even though bulk of your configurations are automated through the wizard-based templates.
+There may be cases where the Guided Configuration templates lack the flexibility to achieve more specific requirements. For those scenarios, see [Advanced Configuration for headers-based SSO](./f5-big-ip-header-advanced.md). Alternatively, the BIG-IP gives the option to disable **Guided Configuration's strict management mode**. This allows you to manually tweak your configurations, even though the bulk of your configuration is automated through the wizard-based templates.
You can navigate to **Access > Guided Configuration** and select the **small padlock icon** on the far right of the row for your applications' configs.
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/whats-new-docs.md
Title: "What's new in Azure Active Directory application management" description: "New and updated documentation for the Azure Active Directory application management." Previously updated : 01/07/2022 Last updated : 02/03/2022
reviewer: napuri
Welcome to what's new in Azure Active Directory application management documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the application management service, see [What's new in Azure Active Directory](../fundamentals/whats-new.md).
+## January 2022
+
+### New articles
+
+- [Tutorial: Configure F5's BIG-IP Easy Button for header-based SSO](f5-big-ip-headers-easy-button.md)
+- [Your sign-in was blocked](troubleshoot-app-publishing.md)
+- [Publish your application in the Azure Active Directory application gallery](v2-howto-app-gallery-listing.md)
+
+### Updated articles
+
+- [Tutorial: Configure F5 BIG-IP SSL-VPN for Azure AD SSO](f5-aad-password-less-vpn.md)
+- [Configure F5 BIG-IP Access Policy Manager for form-based SSO](f5-big-ip-forms-advanced.md)
+- [Tutorial: Configure F5 BIG-IP Access Policy Manager for Kerberos authentication](f5-big-ip-kerberos-advanced.md)
+- [Tutorial: Configure F5 BIG-IP's Access Policy Manager for header-based SSO](f5-big-ip-header-advanced.md)
+- [Tutorial: Configure F5 BIG-IP Easy Button for Kerberos SSO](f5-big-ip-kerberos-easy-button.md)
+- [Tutorial: Configure F5's BIG-IP Easy Button for header-based SSO](f5-big-ip-headers-easy-button.md)
+- [Assign users and groups to an application](assign-user-or-group-access-portal.md)
+- [What is single sign-on in Azure Active Directory?](what-is-single-sign-on.md)
+- [Restrict access to a tenant](tenant-restrictions.md)
+- [Configure how users consent to applications](configure-user-consent.md)
+- [Troubleshoot password-based single sign-on](troubleshoot-password-based-sso.md)
+- [Understand how users are assigned to apps](ways-users-get-assigned-to-applications.md)
+- [Manage app consent policies](manage-app-consent-policies.md)
+- [Tutorial: Configure F5 BIG-IP Easy Button for header-based and LDAP SSO](f5-big-ip-ldap-header-easybutton.md)
+- [Azure Active Directory application management: What's new](whats-new-docs.md)
+- [Quickstart: Add an enterprise application](add-application-portal.md)
+- [Integrate F5 BIG-IP with Azure Active Directory](f5-aad-integration.md)
+- [Configure risk-based step-up consent using PowerShell](configure-risk-based-step-up-consent.md)
+- [An app page shows an error message after the user signs in](application-sign-in-problem-application-error.md)
+- [Configure the admin consent workflow](configure-admin-consent-workflow.md)
+- [Disable how a user signs in for an application](disable-user-sign-in-portal.md)
+- [Grant tenant-wide admin consent to an application](grant-admin-consent.md)
+- [Integrating Azure Active Directory with applications getting started guide](plan-an-application-integration.md)
+- [Manage access to an application](what-is-access-management.md)
+
+
## December 2021

### New articles
active-directory Tutorial Linux Vm Access Nonaad https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/managed-identities-azure-resources/tutorial-linux-vm-access-nonaad.md
To complete these steps, you need an SSH client.  If you are using Windows, you
>[!IMPORTANT]
> All Azure SDKs support the Azure.Identity library that makes it easy to acquire Azure AD tokens to access target services. Learn more about [Azure SDKs](https://azure.microsoft.com/downloads/) and leverage the Azure.Identity library.
> - [.NET](/dotnet/api/overview/azure/identity-readme)
-> - [JAVA](/java/api/overview/azure/identity-readme)
-> - [Javascript](/javascript/api/overview/azure/identity-readme)
+> - [Java](/java/api/overview/azure/identity-readme)
+> - [JavaScript](/javascript/api/overview/azure/identity-readme)
> - [Python](/python/api/overview/azure/identity-readme)
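As a related sketch, a VM with a managed identity can also request a token directly from the Azure Instance Metadata Service (IMDS); this minimal example assumes PowerShell is available on the VM and uses Key Vault as the target resource.

```powershell
# Minimal sketch: acquire a token from the Azure Instance Metadata Service
# (IMDS) on the VM. Assumes a system-assigned managed identity; Key Vault
# is used here as the target resource.
$resource = 'https://vault.azure.net'
$uri = 'http://169.254.169.254/metadata/identity/oauth2/token' +
       "?api-version=2018-02-01&resource=$resource"

$response = Invoke-RestMethod -Uri $uri -Headers @{ Metadata = 'true' } -Method GET
$response.access_token
```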
active-directory Adobe Identity Management Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/adobe-identity-management-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Adobe Identity Management | Microsoft Docs'
-description: Learn how to configure single sign-on between Azure Active Directory and Adobe Identity Management.
+ Title: 'Tutorial: Azure AD SSO integration with Adobe Identity Management (SAML)'
+description: Learn how to configure single sign-on between Azure Active Directory and Adobe Identity Management (SAML).
Previously updated : 01/15/2021 Last updated : 01/27/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with Adobe Identity Management
+# Tutorial: Azure AD SSO integration with Adobe Identity Management (SAML)
-In this tutorial, you'll learn how to integrate Adobe Identity Management with Azure Active Directory (Azure AD). When you integrate Adobe Identity Management with Azure AD, you can:
+In this tutorial, you'll learn how to integrate Adobe Identity Management (SAML) with Azure Active Directory (Azure AD). When you integrate Adobe Identity Management (SAML) with Azure AD, you can:
-* Control in Azure AD who has access to Adobe Identity Management.
-* Enable your users to be automatically signed-in to Adobe Identity Management with their Azure AD accounts.
+* Control in Azure AD who has access to Adobe Identity Management (SAML).
+* Enable your users to be automatically signed-in to Adobe Identity Management (SAML) with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal.

## Prerequisites
In this tutorial, you'll learn how to integrate Adobe Identity Management with A
To get started, you need the following items:

* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-* Adobe Identity Management single sign-on (SSO) enabled subscription.
+* Adobe Identity Management (SAML) single sign-on (SSO) enabled subscription.
## Scenario description

In this tutorial, you configure and test Azure AD SSO in a test environment.
-* Adobe Identity Management supports **SP** initiated SSO
-* Adobe Identity Management supports [**automated** user provisioning and deprovisioning](adobe-identity-management-provisioning-tutorial.md) (recommended).
+* Adobe Identity Management (SAML) supports **SP** initiated SSO.
+* Adobe Identity Management (SAML) supports [**automated** user provisioning and deprovisioning](adobe-identity-management-provisioning-tutorial.md) (recommended).
-## Adding Adobe Identity Management from the gallery
+## Adding Adobe Identity Management (SAML) from the gallery
-To configure the integration of Adobe Identity Management into Azure AD, you need to add Adobe Identity Management from the gallery to your list of managed SaaS apps.
+To configure the integration of Adobe Identity Management (SAML) into Azure AD, you need to add Adobe Identity Management (SAML) from the gallery to your list of managed SaaS apps.
1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service.
1. Navigate to **Enterprise Applications** and then select **All Applications**.
1. To add a new application, select **New application**.
-1. In the **Add from the gallery** section, type **Adobe Identity Management** in the search box.
-1. Select **Adobe Identity Management** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+1. In the **Add from the gallery** section, type **Adobe Identity Management (SAML)** in the search box.
+1. Select **Adobe Identity Management (SAML)** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD SSO for Adobe Identity Management
+## Configure and test Azure AD SSO for Adobe Identity Management (SAML)
-Configure and test Azure AD SSO with Adobe Identity Management using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Adobe Identity Management.
+Configure and test Azure AD SSO with Adobe Identity Management (SAML) using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Adobe Identity Management (SAML).
-To configure and test Azure AD SSO with Adobe Identity Management, perform the following steps:
+To configure and test Azure AD SSO with Adobe Identity Management (SAML), perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
-1. **[Configure Adobe Identity Management SSO](#configure-adobe-identity-management-sso)** - to configure the single sign-on settings on application side.
- 1. **[Create Adobe Identity Management test user](#create-adobe-identity-management-test-user)** - to have a counterpart of B.Simon in Adobe Identity Management that is linked to the Azure AD representation of user.
+1. **[Configure Adobe Identity Management (SAML) SSO](#configure-adobe-identity-management-saml-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Adobe Identity Management (SAML) test user](#create-adobe-identity-management-saml-test-user)** - to have a counterpart of B.Simon in Adobe Identity Management (SAML) that is linked to the Azure AD representation of user.
1. **[Test SSO](#test-sso)** - to verify whether the configuration works.

## Configure Azure AD SSO

Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the Azure portal, on the **Adobe Identity Management** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **Adobe Identity Management (SAML)** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.

![Edit Basic SAML Configuration](common/edit-urls.png)
-1. On the **Basic SAML Configuration** section, enter the values for the following fields:
+1. On the **Basic SAML Configuration** section, perform the following steps:
a. In the **Sign on URL** text box, type the URL: `https://adobe.com`
Follow these steps to enable Azure AD SSO in the Azure portal.
`https://federatedid-na1.services.adobe.com/federated/saml/metadata/alias/<CUSTOM_ID>`

> [!NOTE]
- > The Identifier value is not real. Update the value with the actual Identifier. Contact [Adobe Identity Management Client support team](mailto:identity@adobe.com) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > The Identifier value is not real. Update the value with the actual Identifier. Contact [Adobe Identity Management (SAML) Client support team](mailto:identity@adobe.com) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.

![The Certificate download link](common/metadataxml.png)
-1. On the **Set up Adobe Identity Management** section, copy the appropriate URL(s) based on your requirement.
+1. On the **Set up Adobe Identity Management (SAML)** section, copy the appropriate URL(s) based on your requirement.
![Copy configuration URLs](common/copy-configuration-urls.png)
In this section, you'll create a test user in the Azure portal called B.Simon.
### Assign the Azure AD test user
-In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Adobe Identity Management.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Adobe Identity Management (SAML).
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
-1. In the applications list, select **Adobe Identity Management**.
+1. In the applications list, select **Adobe Identity Management (SAML)**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**.
1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
1. In the **Add Assignment** dialog, click the **Assign** button.
-## Configure Adobe Identity Management SSO
+## Configure Adobe Identity Management (SAML) SSO
-1. To automate the configuration within Adobe Identity Management, you need to install **My Apps Secure Sign-in browser extension** by clicking **Install the extension**.
+1. To automate the configuration within Adobe Identity Management (SAML), you need to install the **My Apps Secure Sign-in browser extension** by clicking **Install the extension**.
![My apps extension](common/install-myappssecure-extension.png)
-2. After adding extension to the browser, click on **Set up Adobe Identity Management** will direct you to the Adobe Identity Management application. From there, provide the admin credentials to sign into Adobe Identity Management. The browser extension will automatically configure the application for you and automate steps 3-8.
+2. After adding the extension to the browser, clicking **Set up Adobe Identity Management (SAML)** will direct you to the Adobe Identity Management (SAML) application. From there, provide the admin credentials to sign in to Adobe Identity Management (SAML). The browser extension will automatically configure the application for you and automate steps 3-8.
![Setup configuration](common/setup-sso.png)
-3. If you want to setup Adobe Identity Management manually, in a different web browser window, sign in to your Adobe Identity Management company site as an administrator.
+3. If you want to set up Adobe Identity Management (SAML) manually, in a different web browser window, sign in to your Adobe Identity Management (SAML) company site as an administrator.
4. Go to the **Settings** tab and click on **Create Directory**.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
8. Click on **Done**.
-### Create Adobe Identity Management test user
+### Create Adobe Identity Management (SAML) test user
1. Go to the **Users** tab and click on **Add User**.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
In this section, you test your Azure AD single sign-on configuration with the following options.
-* Click on **Test this application** in Azure portal. This will redirect to Adobe Identity Management Sign-on URL where you can initiate the login flow.
+* Click on **Test this application** in the Azure portal. This will redirect you to the Adobe Identity Management (SAML) Sign-on URL, where you can initiate the login flow.
-* Go to Adobe Identity Management Sign-on URL directly and initiate the login flow from there.
+* Go to the Adobe Identity Management (SAML) Sign-on URL directly and initiate the login flow from there.
-* You can use Microsoft My Apps. When you click the Adobe Identity Management tile in the My Apps, this will redirect to Adobe Identity Management Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+* You can use Microsoft My Apps. When you click the Adobe Identity Management (SAML) tile in My Apps, you are redirected to the Adobe Identity Management (SAML) Sign-on URL. For more information, see [Introduction to My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
## Next steps
-Once you configure Adobe Identity Management you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
+Once you configure Adobe Identity Management (SAML), you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
active-directory Citrixgotomeeting Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/citrixgotomeeting-provisioning-tutorial.md
The objective of this tutorial is to show you the steps you need to perform in GoToMeeting and Azure AD to automatically provision and de-provision user accounts from Azure AD to GoToMeeting.
+> [!WARNING]
+> This provisioning integration is no longer supported. As a result, the provisioning functionality of the GoToMeeting application in the Azure Active Directory Enterprise App Gallery will be removed soon. The application's SSO functionality will remain intact. Microsoft is working with GoToMeeting to build a new, modernized provisioning integration, but there is no timeline for when it will be completed.
+ ## Prerequisites The scenario outlined in this tutorial assumes that you already have the following items:
active-directory Cornerstone Ondemand Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/cornerstone-ondemand-provisioning-tutorial.md
This tutorial demonstrates the steps to perform in Cornerstone OnDemand and Azure Active Directory (Azure AD) to configure Azure AD to automatically provision and deprovision users or groups to Cornerstone OnDemand.
+> [!WARNING]
+> This provisioning integration is no longer supported. As a result, the provisioning functionality of the Cornerstone OnDemand application in the Azure Active Directory Enterprise App Gallery will be removed soon. The application's SSO functionality will remain intact. Microsoft is working with Cornerstone to build a new, modernized provisioning integration, but there is no timeline for when it will be completed.
+
> [!NOTE]
-> This Conerstone OnDemand automatic provisioning service is deprecated and support will end soon.
> This tutorial describes a connector that's built on top of the Azure AD user provisioning service. For information on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to software-as-a-service (SaaS) applications with Azure Active Directory](../app-provisioning/user-provisioning.md). ## Prerequisites
active-directory Euromonitor Passport Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/euromonitor-passport-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Euromonitor Passport | Microsoft Docs'
-description: Learn how to configure single sign-on between Azure Active Directory and Euromonitor Passport.
+ Title: 'Tutorial: Azure AD SSO integration with Euromonitor International'
+description: Learn how to configure single sign-on between Azure Active Directory and Euromonitor International.
Previously updated : 04/23/2021 Last updated : 01/27/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with Euromonitor Passport
+# Tutorial: Azure AD SSO integration with Euromonitor International
-In this tutorial, you'll learn how to integrate Euromonitor Passport with Azure Active Directory (Azure AD). When you integrate Euromonitor Passport with Azure AD, you can:
+In this tutorial, you'll learn how to integrate Euromonitor International with Azure Active Directory (Azure AD). When you integrate Euromonitor International with Azure AD, you can:
-* Control in Azure AD who has access to Euromonitor Passport.
-* Enable your users to be automatically signed-in to Euromonitor Passport with their Azure AD accounts.
+* Control in Azure AD who has access to Euromonitor International.
+* Enable your users to be automatically signed in to Euromonitor International with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal. ## Prerequisites
In this tutorial, you'll learn how to integrate Euromonitor Passport with Azure
To get started, you need the following items: * An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-* Euromonitor Passport single sign-on (SSO) enabled subscription.
+* Euromonitor International single sign-on (SSO) enabled subscription.
## Scenario description In this tutorial, you configure and test Azure AD SSO in a test environment.
-* Euromonitor Passport supports **SP and IDP** initiated SSO.
+* Euromonitor International supports **SP and IDP** initiated SSO.
> [!NOTE] > Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
-## Add Euromonitor Passport from the gallery
+## Add Euromonitor International from the gallery
-To configure the integration of Euromonitor Passport into Azure AD, you need to add Euromonitor Passport from the gallery to your list of managed SaaS apps.
+To configure the integration of Euromonitor International into Azure AD, you need to add Euromonitor International from the gallery to your list of managed SaaS apps.
1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account. 1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add new application, select **New application**.
-1. In the **Add from the gallery** section, type **Euromonitor Passport** in the search box.
-1. Select **Euromonitor Passport** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+1. In the **Add from the gallery** section, type **Euromonitor International** in the search box.
+1. Select **Euromonitor International** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD SSO for Euromonitor Passport
+## Configure and test Azure AD SSO for Euromonitor International
-Configure and test Azure AD SSO with Euromonitor Passport using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Euromonitor Passport.
+Configure and test Azure AD SSO with Euromonitor International using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Euromonitor International.
-To configure and test Azure AD SSO with Euromonitor Passport, perform the following steps:
+To configure and test Azure AD SSO with Euromonitor International, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon. 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
-1. **[Configure Euromonitor Passport SSO](#configure-euromonitor-passport-sso)** - to configure the single sign-on settings on application side.
- 1. **[Create Euromonitor Passport test user](#create-euromonitor-passport-test-user)** - to have a counterpart of B.Simon in Euromonitor Passport that is linked to the Azure AD representation of user.
+1. **[Configure Euromonitor International SSO](#configure-euromonitor-international-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Euromonitor International test user](#create-euromonitor-international-test-user)** - to have a counterpart of B.Simon in Euromonitor International that is linked to the Azure AD representation of the user.
1. **[Test SSO](#test-sso)** - to verify whether the configuration works. ## Configure Azure AD SSO Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the Azure portal on the **Euromonitor Passport** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal on the **Euromonitor International** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**. 1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Basic SAML Configuration** section, the user does not have to perform any steps, as the app is already pre-integrated with Azure.
-1. If you wish to configure the application in **SP** initiated mode, you need to get the Sign-on URL form the [Euromonitor Passport support team](mailto:passport.support@euromonitor.com). After you get the Sign-on URL from the Euromonitor Passport support team, click **Set additional URLs** and perform the following step:
+1. If you wish to configure the application in **SP** initiated mode, you need to get the Sign-on URL from the [Euromonitor International support team](mailto:passport.support@euromonitor.com). After you get the Sign-on URL from the Euromonitor International support team, click **Set additional URLs** and perform the following step:
- Paste the obtained Sign-on URL value from the Euromonitor Passport support team into the Sign-on URL textbox.
+ Paste the Sign-on URL value that you obtained from the Euromonitor International support team into the **Sign-on URL** textbox.
1. Click **Save**.
-1. Euromonitor Passport application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+1. The Euromonitor International application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
![image](common/default-attributes.png)
-1. In addition to above, Euromonitor Passport application expects few more attributes to be passed back in SAML response which are shown below. These attributes are also pre populated but you can review them as per your requirements.
+1. In addition to the above, the Euromonitor International application expects a few more attributes to be passed back in the SAML response; these are shown below. These attributes are also prepopulated, but you can review them as per your requirements.
| Name | Source Attribute| | | |
In this section, you'll create a test user in the Azure portal called B.Simon.
### Assign the Azure AD test user
-In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Euromonitor Passport.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Euromonitor International.
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
-1. In the applications list, select **Euromonitor Passport**.
+1. In the applications list, select **Euromonitor International**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**. 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog. 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen. 1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected. 1. In the **Add Assignment** dialog, click the **Assign** button.
-## Configure Euromonitor Passport SSO
+## Configure Euromonitor International SSO
-To configure single sign-on on **Euromonitor Passport** side, you need to send the **App Federation Metadata Url** to [Euromonitor Passport support team](mailto:passport.support@euromonitor.com). They set this setting to have the SAML SSO connection set properly on both sides.
+To configure single sign-on on the **Euromonitor International** side, you need to send the **App Federation Metadata Url** to the [Euromonitor International support team](mailto:passport.support@euromonitor.com). They configure this setting so that the SAML SSO connection is set properly on both sides.
-### Create Euromonitor Passport test user
+### Create Euromonitor International test user
-In this section, you create a user called B.Simon in Euromonitor Passport. Work with [Euromonitor Passport support team](mailto:passport.support@euromonitor.com) to add the users in the Euromonitor Passport platform. Users must be created and activated before you use single sign-on.
+In this section, you create a user called B.Simon in Euromonitor International. Work with the [Euromonitor International support team](mailto:passport.support@euromonitor.com) to add the users to the Euromonitor International platform. Users must be created and activated before you use single sign-on.
## Test SSO
In this section, you test your Azure AD single sign-on configuration with follow
#### SP initiated:
-* Click on **Test this application** in Azure portal. This will redirect to Euromonitor Passport Sign on URL where you can initiate the login flow.
+* Click on **Test this application** in the Azure portal. This will redirect you to the Euromonitor International Sign-on URL, where you can initiate the login flow.
-* Go to Euromonitor Passport Sign-on URL directly and initiate the login flow from there.
+* Go to the Euromonitor International Sign-on URL directly and initiate the login flow from there.
#### IDP initiated:
-* Click on **Test this application** in Azure portal and you should be automatically signed in to the Euromonitor Passport for which you set up the SSO.
+* Click on **Test this application** in the Azure portal and you should be automatically signed in to the Euromonitor International instance for which you set up SSO.
-You can also use Microsoft My Apps to test the application in any mode. When you click the Euromonitor Passport tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Euromonitor Passport for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+You can also use Microsoft My Apps to test the application in any mode. When you click the Euromonitor International tile in My Apps, if configured in SP mode you are redirected to the application sign-on page to initiate the login flow, and if configured in IDP mode you are automatically signed in to the Euromonitor International instance for which you set up SSO. For more information, see [Introduction to My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
## Next steps
-Once you configure Euromonitor Passport you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
+Once you configure Euromonitor International, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
active-directory Insight4grc Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/insight4grc-tutorial.md
Previously updated : 09/30/2021 Last updated : 01/27/2022 # Tutorial: Azure AD SSO integration with Insight4GRC
Follow these steps to enable Azure AD SSO in the Azure portal.
4. On the **Basic SAML Configuration** section, If you wish to configure the application in **IDP** initiated mode, perform the following steps: a. In the **Identifier** text box, type a URL using the following pattern:
- `https://<subdomain>.Insight4GRC.com/SAML`
+ `https://<SUBDOMAIN>.Insight4GRC.com/SAML`
b. In the **Reply URL** text box, type a URL using the following pattern:
- `https://<subdomain>.Insight4GRC.com/Public/SAML/ACS.aspx`
+ `https://<SUBDOMAIN>.Insight4GRC.com/auth/saml/sp/assertion-consumer-service`
5. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode: In the **Sign-on URL** text box, type a URL using the following pattern:
- `https://<subdomain>.Insight4GRC.com/Public/Login.aspx`
+ `https://<SUBDOMAIN>.Insight4GRC.com`
> [!NOTE] > These values are not real. Update these values with the actual Identifier, Reply URL and Sign-on URL. Contact [Insight4GRC Client support team](mailto:support.ss@rsmuk.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
You can also use Microsoft My Apps to test the application in any mode. When you
## Next steps
-Once you configure Insight4GRC you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
+Once you configure Insight4GRC, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
active-directory Litmos Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/litmos-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Litmos | Microsoft Docs'
-description: Learn how to configure single sign-on between Azure Active Directory and Litmos.
+ Title: 'Tutorial: Azure AD SSO integration with SAP Litmos'
+description: Learn how to configure single sign-on between Azure Active Directory and SAP Litmos.
Previously updated : 05/12/2021 Last updated : 01/27/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with Litmos
+# Tutorial: Azure AD SSO integration with SAP Litmos
-In this tutorial, you'll learn how to integrate Litmos with Azure Active Directory (Azure AD). When you integrate Litmos with Azure AD, you can:
+In this tutorial, you'll learn how to integrate SAP Litmos with Azure Active Directory (Azure AD). When you integrate SAP Litmos with Azure AD, you can:
-* Control in Azure AD who has access to Litmos.
-* Enable your users to be automatically signed-in to Litmos with their Azure AD accounts.
+* Control in Azure AD who has access to SAP Litmos.
+* Enable your users to be automatically signed in to SAP Litmos with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal. ## Prerequisites
In this tutorial, you'll learn how to integrate Litmos with Azure Active Directo
To get started, you need the following items: * An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-* Litmos single sign-on (SSO) enabled subscription.
+* SAP Litmos single sign-on (SSO) enabled subscription.
## Scenario description In this tutorial, you configure and test Azure AD SSO in a test environment.
-* Litmos supports **IDP** initiated SSO.
-* Litmos supports **Just In Time** user provisioning.
+* SAP Litmos supports **SP** and **IDP** initiated SSO.
+* SAP Litmos supports **Just In Time** user provisioning.
-## Add Litmos from the gallery
+## Add SAP Litmos from the gallery
-To configure the integration of Litmos into Azure AD, you need to add Litmos from the gallery to your list of managed SaaS apps.
+To configure the integration of SAP Litmos into Azure AD, you need to add SAP Litmos from the gallery to your list of managed SaaS apps.
1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account. 1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add new application, select **New application**.
-1. In the **Add from the gallery** section, type **Litmos** in the search box.
-1. Select **Litmos** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+1. In the **Add from the gallery** section, type **SAP Litmos** in the search box.
+1. Select **SAP Litmos** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD SSO for Litmos
+## Configure and test Azure AD SSO for SAP Litmos
-Configure and test Azure AD SSO with Litmos using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Litmos.
+Configure and test Azure AD SSO with SAP Litmos using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in SAP Litmos.
-To configure and test Azure AD SSO with Litmos, perform the following steps:
+To configure and test Azure AD SSO with SAP Litmos, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon. 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
-1. **[Configure Litmos SSO](#configure-litmos-sso)** - to configure the single sign-on settings on application side.
- 1. **[Create Litmos test user](#create-litmos-test-user)** - to have a counterpart of B.Simon in Litmos that is linked to the Azure AD representation of user.
+1. **[Configure SAP Litmos SSO](#configure-sap-litmos-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create SAP Litmos test user](#create-sap-litmos-test-user)** - to have a counterpart of B.Simon in SAP Litmos that is linked to the Azure AD representation of the user.
1. **[Test SSO](#test-sso)** - to verify whether the configuration works. ## Configure Azure AD SSO Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the Azure portal, on the **Litmos** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **SAP Litmos** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**. 1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings. ![Edit Basic SAML Configuration](common/edit-urls.png)
-1. On the **Set up single sign-on with SAML** page, perform the following steps:
-
- a. In the **Identifier** text box, type a URL using the following pattern:
- `https://<companyname>.litmos.com/account/Login`
-
- b. In the **Reply URL** text box, type a URL using the following pattern:
- `https://<companyname>.litmos.com/integration/samllogin`
-
- > [!NOTE]
- > These values are not real. Update these values with the actual Identifier and Reply URL, which are explained later in tutorial or contact [Litmos Client support team](https://www.litmos.com/contact-us) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** text box, type one of the following URLs:
+
+ | **Identifier** |
+ |--|
+ | `https://<CustomerName>.litmos.com` |
+ | `https://<CustomerName>.litmos.com.au` |
+ | `https://<CustomerName>.litmoseu.com` |
+
+ b. In the **Reply URL** text box, type one of the following URLs:
+
+ | **Reply URL** |
+ |--|
+ | `https://<CompanyName>.litmos.com/integration/splogin` |
+ | `https://<CompanyName>.litmos.com/integration/splogin?IdP=1` |
+ | `https://<CompanyName>.litmos.com/integration/splogin?IdP=2` |
+ | `https://<CompanyName>.litmos.com/integration/splogin?IdP=3` |
+ | `https://<CompanyName>.litmos.com/integration/splogin?IdP=14` |
+ | `https://<CompanyName>.litmos.com.au/integration/splogin` |
+ | `https://<CompanyName>.litmos.com.au/integration/splogin?IdP=1` |
+ | `https://<CompanyName>.litmos.com.au/integration/splogin?IdP=2` |
+ | `https://<CompanyName>.litmos.com.au/integration/splogin?IdP=3` |
+ | `https://<CompanyName>.litmos.com.au/integration/splogin?IdP=14` |
+ | `https://<CompanyName>.litmoseu.com/integration/splogin` |
+ | `https://<CompanyName>.litmoseu.com/integration/splogin?IdP=1`|
+ | `https://<CompanyName>.litmoseu.com/integration/splogin?IdP=2`|
+ | `https://<CompanyName>.litmoseu.com/integration/splogin?IdP=3` |
+ | `https://<CompanyName>.litmoseu.com/integration/splogin?IdP=14` |
+
+ c. In the **Sign on URL** text box, type one of the following URLs:
+
+ | **Sign on URL** |
+ |-|
+ | `https://<CompanyName>.litmos.com/integration/splogin` |
+ | `https://<CompanyName>.litmos.com/integration/splogin?IdP=1` |
+ | `https://<CompanyName>.litmos.com/integration/splogin?IdP=2` |
+ | `https://<CompanyName>.litmos.com/integration/splogin?IdP=3` |
+ | `https://<CompanyName>.litmos.com/integration/splogin?IdP=14` |
+ | `https://<CompanyName>.litmos.com.au/integration/splogin` |
+ | `https://<CompanyName>.litmos.com.au/integration/splogin?IdP=1` |
+ | `https://<CompanyName>.litmos.com.au/integration/splogin?IdP=2` |
+ | `https://<CompanyName>.litmos.com.au/integration/splogin?IdP=3` |
+ | `https://<CompanyName>.litmos.com.au/integration/splogin?IdP=14` |
+ | `https://<CompanyName>.litmoseu.com/integration/splogin` |
+ | `https://<CompanyName>.litmoseu.com/integration/splogin?IdP=1`|
+ | `https://<CompanyName>.litmoseu.com/integration/splogin?IdP=2`|
+ | `https://<CompanyName>.litmoseu.com/integration/splogin?IdP=3` |
+ | `https://<CompanyName>.litmoseu.com/integration/splogin?IdP=14` |
+
+ d. In the **Relay State URL** text box, type one of the following URLs:
+
+ | **Relay State URL** |
+ |--|
+ | `https://<CompanyName>.litmos.com/integration/splogin?RelayState=https://<CustomerName>.litmos.com/Course/12345` |
+ | `https://<CompanyName>.litmos.com/integration/splogin?RelayState=https://<CustomerName>.litmos.com/LearningPath/12345` |
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier, Reply URL, Sign on URL, and Relay State URL, which are explained later in this tutorial, or contact the [SAP Litmos Client support team](https://www.litmos.com/contact-us) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer. ![The Certificate download link](common/certificatebase64.png)
-1. On the **Set up Litmos** section, copy the appropriate URL(s) based on your requirement.
+1. On the **Set up SAP Litmos** section, copy the appropriate URL(s) based on your requirement.
![Copy configuration URLs](common/copy-configuration-urls.png)
In this section, you'll create a test user in the Azure portal called B.Simon.
### Assign the Azure AD test user
-In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Litmos.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to SAP Litmos.
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
-1. In the applications list, select **Litmos**.
+1. In the applications list, select **SAP Litmos**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**. 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog. 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen. 1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected. 1. In the **Add Assignment** dialog, click the **Assign** button.
-## Configure Litmos SSO
+## Configure SAP Litmos SSO
-1. In a different browser window, sign-on to your Litmos company site as administrator.
+1. In a different browser window, sign on to your SAP Litmos company site as an administrator.
2. In the navigation bar on the left side, click **Accounts**.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
![SAML endpoint](./media/litmos-tutorial/certificate.png)
-6. In your **Litmos** application, perform the following steps:
+6. In your **SAP Litmos** application, perform the following steps:
![Litmos Application](./media/litmos-tutorial/application.png)
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
c. Click **Save Changes**.
-### Create Litmos test user
+### Create SAP Litmos test user
-In this section, a user called B.Simon is created in Litmos. Litmos supports just-in-time user provisioning, which is enabled by default. There's no action item for you in this section. If a user doesn't already exist in Litmos, a new one is created after authentication.
+In this section, a user called B.Simon is created in SAP Litmos. SAP Litmos supports just-in-time user provisioning, which is enabled by default. There's no action item for you in this section. If a user doesn't already exist in SAP Litmos, a new one is created after authentication.
-**To create a user called Britta Simon in Litmos, perform the following steps:**
+**To create a user called Britta Simon in SAP Litmos, perform the following steps:**
-1. In a different browser window, sign-on to your Litmos company site as administrator.
+1. In a different browser window, sign on to your SAP Litmos company site as an administrator.
2. In the navigation bar on the left side, click **Accounts**.
In this section, a user called B.Simon is created in Litmos. Litmos supports jus
## Test SSO
-In this section, you test your Azure AD single sign-on configuration with following options.
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in the Azure portal. This will redirect you to the SAP Litmos Sign-on URL, where you can initiate the login flow.
+
+* Go to the SAP Litmos Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
-* Click on Test this application in Azure portal and you should be automatically signed in to the Litmos for which you set up the SSO.
+* Click on **Test this application** in the Azure portal and you should be automatically signed in to the SAP Litmos instance for which you set up SSO.
-* You can use Microsoft My Apps. When you click the Litmos tile in the My Apps, you should be automatically signed in to the Litmos for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+You can also use Microsoft My Apps to test the application in any mode. When you click the SAP Litmos tile in My Apps, if configured in SP mode you are redirected to the application sign-on page to initiate the login flow, and if configured in IDP mode you are automatically signed in to the SAP Litmos instance for which you set up SSO. For more information, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
## Next steps
-Once you configure Litmos you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
+Once you configure SAP Litmos, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
active-directory Mimecast Personal Portal Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/mimecast-personal-portal-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with Mimecast Personal Portal | Microsoft Docs'
-description: Learn how to configure single sign-on between Azure Active Directory and Mimecast Personal Portal.
+ Title: 'Tutorial: Azure AD SSO integration with Mimecast'
+description: Learn how to configure single sign-on between Azure Active Directory and Mimecast.
Previously updated : 01/15/2021 Last updated : 01/27/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with Mimecast Personal Portal
+# Tutorial: Azure AD SSO integration with Mimecast
-In this tutorial, you'll learn how to integrate Mimecast Personal Portal with Azure Active Directory (Azure AD). When you integrate Mimecast Personal Portal with Azure AD, you can:
+In this tutorial, you'll learn how to integrate Mimecast with Azure Active Directory (Azure AD). When you integrate Mimecast with Azure AD, you can:
-* Control in Azure AD who has access to Mimecast Personal Portal.
-* Enable your users to be automatically signed-in to Mimecast Personal Portal with their Azure AD accounts.
+* Control in Azure AD who has access to Mimecast.
+* Enable your users to be automatically signed in to Mimecast with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal. ## Prerequisites
In this tutorial, you'll learn how to integrate Mimecast Personal Portal with Az
To get started, you need the following items: * An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-* Mimecast Personal Portal single sign-on (SSO) enabled subscription.
+* Mimecast single sign-on (SSO) enabled subscription.
## Scenario description In this tutorial, you configure and test Azure AD SSO in a test environment.
-* Mimecast Personal Portal supports **SP and IDP** initiated SSO
+* Mimecast supports **SP and IDP** initiated SSO.
-## Add Mimecast Personal Portal from the gallery
+## Add Mimecast from the gallery
-To configure the integration of Mimecast Personal Portal into Azure AD, you need to add Mimecast Personal Portal from the gallery to your list of managed SaaS apps.
+To configure the integration of Mimecast into Azure AD, you need to add Mimecast from the gallery to your list of managed SaaS apps.
1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account. 1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add new application, select **New application**.
-1. In the **Add from the gallery** section, type **Mimecast Personal Portal** in the search box.
-1. Select **Mimecast Personal Portal** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+1. In the **Add from the gallery** section, type **Mimecast** in the search box.
+1. Select **Mimecast** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD SSO for Mimecast Personal Portal
+## Configure and test Azure AD SSO for Mimecast
-Configure and test Azure AD SSO with Mimecast Personal Portal using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Mimecast Personal Portal.
+Configure and test Azure AD SSO with Mimecast using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Mimecast.
-To configure and test Azure AD SSO with Mimecast Personal Portal, perform the following steps:
+To configure and test Azure AD SSO with Mimecast, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon. 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
-1. **[Configure Mimecast Personal Portal SSO](#configure-mimecast-personal-portal-sso)** - to configure the single sign-on settings on application side.
- 1. **[Create Mimecast Personal Portal test user](#create-mimecast-personal-portal-test-user)** - to have a counterpart of B.Simon in Mimecast Personal Portal that is linked to the Azure AD representation of user.
+1. **[Configure Mimecast SSO](#configure-mimecast-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Mimecast test user](#create-mimecast-test-user)** - to have a counterpart of B.Simon in Mimecast that is linked to the Azure AD representation of the user.
1. **[Test SSO](#test-sso)** - to verify whether the configuration works. ## Configure Azure AD SSO Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the Azure portal, on the **Mimecast Personal Portal** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **Mimecast** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**. 1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Basic SAML Configuration** section, if you wish to configure the application in IDP initiated mode, perform the following steps:
- a. In the **Identifier** textbox, type the URL using the following pattern:
+ a. In the **Identifier** textbox, type a URL using one of the following patterns:
| Region | Value | | | |
Follow these steps to enable Azure AD SSO in the Azure portal.
| Offshore | `https://jer-api.mimecast.com/sso/<accountcode>`| > [!NOTE]
- > You will find the `accountcode` value in the Mimecast Personal Portal under **Account** > **Settings** > **Account Code**. Append the `accountcode` to the Identifier.
+ > You will find the `accountcode` value in Mimecast under **Account** > **Settings** > **Account Code**. Append the `accountcode` to the Identifier.
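+ > For example, with the Offshore pattern above and a hypothetical account code of `ABC123`, the resulting Identifier would be `https://jer-api.mimecast.com/sso/ABC123` (illustrative only; use your own region URL and account code).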
- b. In the **Reply URL** textbox, type the URL:
+ b. In the **Reply URL** textbox, type one of the following URLs:
| Region | Value | | | |
Follow these steps to enable Azure AD SSO in the Azure portal.
1. If you wish to configure the application in **SP** initiated mode:
- In the **Sign-on URL** textbox, type the URL:
+ In the **Sign-on URL** textbox, type one of the following URLs:
| Region | Value | | | |
In this section, you'll create a test user in the Azure portal called B.Simon.
### Assign the Azure AD test user
-In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Mimecast Personal Portal .
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Mimecast.
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
-1. In the applications list, select **Mimecast Personal Portal**.
+1. In the applications list, select **Mimecast**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**. 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog. 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen. 1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected. 1. In the **Add Assignment** dialog, click the **Assign** button.
-## Configure Mimecast Personal Portal SSO
+## Configure Mimecast SSO
1. In a different web browser window, sign in to the Mimecast Administration Console.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
![Screenshot shows new Authentication Profile selected.](./media/mimecast-personal-portal-tutorial/new-authenticatio-profile.png)
-1. Provide a valid description in the **Description** textbox and select **Enforce SAML Authentication for Mimecast Personal Portal** checkbox.
+1. Provide a valid description in the **Description** textbox and select the **Enforce SAML Authentication for Mimecast** checkbox.
![Screenshot shows New Authentication Profile selected.](./media/mimecast-personal-portal-tutorial/selecting-personal-portal.png)
-1. On the **SAML Configuration for Mimecast Personal Portal** page, perform the following steps:
+1. On the **SAML Configuration for Mimecast** page, perform the following steps:
![Screenshot shows where to select Enforce SAML Authentication for Administration Console.](./media/mimecast-personal-portal-tutorial/sso-settings.png)
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
e. Click **Save**.
-### Create Mimecast Personal Portal test user
+### Create Mimecast test user
1. In a different web browser window, sign in to the Mimecast Administration Console. 1. Navigate to **Administration** > **Directories** > **Internal Directories**.
- ![Screenshot shows the SAML Configuration for Mimecast Personal Portal where you can enter the values described.](./media/mimecast-personal-portal-tutorial/internal-directories.png)
+ ![Screenshot shows the SAML Configuration for Mimecast where you can enter the values described.](./media/mimecast-personal-portal-tutorial/internal-directories.png)
1. Select your domain if it is listed below; otherwise, create a new domain by clicking **New Domain**.
In this section, you test your Azure AD single sign-on configuration with follow
#### SP initiated:
-* Click on **Test this application** in Azure portal. This will redirect to Mimecast Personal Portal Sign on URL where you can initiate the login flow.
+* Click on **Test this application** in the Azure portal. This will redirect you to the Mimecast Sign-on URL, where you can initiate the login flow.
-* Go to Mimecast Personal Portal Sign-on URL directly and initiate the login flow from there.
+* Go to the Mimecast Sign-on URL directly and initiate the login flow from there.
#### IDP initiated:
-* Click on **Test this application** in Azure portal and you should be automatically signed in to the Mimecast Personal Portal for which you set up the SSO
+* Click on **Test this application** in the Azure portal and you should be automatically signed in to the Mimecast instance for which you set up SSO.
-You can also use Microsoft My Apps to test the application in any mode. When you click the Mimecast Personal Portal tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Mimecast Personal Portal for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+You can also use Microsoft My Apps to test the application in any mode. When you click the Mimecast tile in My Apps, if configured in SP mode you are redirected to the application sign-on page to initiate the login flow, and if configured in IDP mode you are automatically signed in to the Mimecast instance for which you set up SSO. For more information, see [Introduction to My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
## Next steps
-Once you configure Mimecast Personal Portal you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
+Once you configure Mimecast, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
active-directory Openidoauth Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/openidoauth-tutorial.md
Title: 'Configure an OpenID/OAuth application from the Azure AD app gallery | Microsoft Docs'
-description: Steps to configure an OpenID/OAuth application from the Azure AD app gallery.
+ Title: 'Configure an OpenID Connect OAuth application from Azure AD app gallery'
+description: Steps to configure an OpenID Connect OAuth application from the Azure AD app gallery.
Previously updated : 05/30/2019 Last updated : 02/02/2022
-# Configure an OpenID/OAuth application from the Azure AD app gallery
+# Configure an OpenID Connect OAuth application from Azure AD app gallery
## Process of adding an OpenID application from the gallery
active-directory Presentation Request Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/verifiable-credentials/presentation-request-api.md
The callback endpoint is called when a user scans the QR code, uses the deep lin
| `state` |string| Returns the state value that you passed in the original payload. | | `subject`|string | The verifiable credential user DID.| | `issuers`| array |Returns an array of verifiable credentials requested. For each verifiable credential, it provides: </li><li>The verifiable credential type.</li><li>The claims retrieved.</li><li>The verifiable credential issuerΓÇÖs domain. </li><li>The verifiable credential issuerΓÇÖs domain validation status. </li></ul> |
-| `receipt`| string | Optional. The receipt contains the original payload sent from the authenticator to Verifiable Credentials. |
+| `receipt`| string | Optional. The receipt contains the original payload sent from the wallet to the Verifiable Credentials service. The receipt should be used for troubleshooting and debugging only. The format of the receipt is not fixed and can change based on the wallet and version used.|
The following example demonstrates a callback payload when the authenticator app starts the presentation request:
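A minimal sketch of that first callback is shown below. The values are illustrative, and the status property name (`code` here) is an assumption based on this API's callback conventions rather than something stated in the table above, so treat the exact shape as indicative only:

```json
{
  "requestId": "e4ef27ca-eb8c-4b63-823b-3b95140eac11",
  "code": "request_retrieved",
  "state": "de19cb6b-36c1-45fe-9409-909a51292a9c"
}
```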
api-management Api Management Access Restriction Policies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-access-restriction-policies.md
Previously updated : 08/20/2021 Last updated : 02/02/2022
For more information and examples of this policy, see [Advanced request throttli
bandwidth="kilobytes" renewal-period="seconds" increment-condition="condition"
- counter-key="key value" />
-
+ counter-key="key value"
+ first-period-start="date-time" />
``` ### Example
In the following example, the quota is keyed by the caller IP address.
| counter-key | The key to use for the quota policy. | Yes | N/A | | increment-condition | The boolean expression specifying if the request should be counted towards the quota (`true`) | No | N/A | | renewal-period | The time period in seconds after which the quota resets. When it's set to `0` the period is set to infinite. | Yes | N/A |
+| first-period-start | The starting date and time for quota renewal periods, in the following format: `yyyy-MM-ddTHH:mm:ssZ` as specified by the ISO 8601 standard. | No | `0001-01-01T00:00:00Z` |
> [!NOTE] > The `counter-key` attribute value must be unique across all the APIs in the API Management if you don't want to share the total between the other APIs.
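As a sketch of how the new attribute composes with the existing ones, the following quota policy keys the counter by caller IP address and anchors renewal periods to a fixed start date. The limits and start date are illustrative values, not recommendations:

```xml
<!-- Allow 10000 calls per hour per caller IP address.
     Renewal periods are aligned to the first-period-start timestamp
     rather than to the time of the first request. -->
<quota-by-key calls="10000"
              renewal-period="3600"
              counter-key="@(context.Request.IpAddress)"
              first-period-start="2022-01-01T00:00:00Z" />
```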
api-management Import And Publish https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/import-and-publish.md
description: In this tutorial, you import an OpenAPI Specification API into Azur
Previously updated : 09/30/2020 Last updated : 12/10/2021
In this tutorial, you learn how to:
After import, you can manage the API in the Azure portal. ## Prerequisites
This section shows how to import and publish an OpenAPI Specification backend AP
1. In the left navigation of your API Management instance, select **APIs**. 1. Select the **OpenAPI** tile. 1. In the **Create from OpenAPI specification** window, select **Full**.
-1. Enter the values from the following table. Then select **Create** to create your API.
+1. Enter the values from the following table.
You can set API values during creation or later by going to the **Settings** tab.
- :::image type="content" source="media/import-and-publish/create-api.png" alt-text="Create an API":::
+ :::image type="content" source="media/import-and-publish/open-api-specs.png" alt-text="Create an API":::
|Setting|Value|Description| |-|--|--|
- |**OpenAPI specification**|*https:\//conferenceapi.azurewebsites.net?format=json*|The service implementing the API. API Management forwards requests to this address. The service must be hosted at a publicly accessible internet address. |
+ |**OpenAPI specification**|*https:\//conferenceapi.azurewebsites.net?format=json*|The service implementing the API. API Management forwards requests to this address. The service must be hosted at a publicly accessible internet address.|
|**Display name**|After you enter the preceding service URL, API Management fills out this field based on the JSON.|The name displayed in the [developer portal](api-management-howto-developer-portal.md).| |**Name**|After you enter the preceding service URL, API Management fills out this field based on the JSON.|A unique name for the API.| |**Description**|After you enter the preceding service URL, API Management fills out this field based on the JSON.|An optional description of the API.|
This section shows how to import and publish an OpenAPI Specification backend AP
> [!NOTE] > To publish the API to API consumers, you must associate it with a product.
-2. Select **Create**.
+1. Select **Create** to create your API.
If you have problems importing an API definition, see the [list of known issues and restrictions](api-management-api-import-restrictions.md).
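If you prefer scripting to the portal, a roughly equivalent import can be done with the Azure CLI. This is a sketch that assumes the `az apim api import` command is available in your CLI version and that the API Management instance already exists; the resource names are placeholders:

```azurecli-interactive
az apim api import \
    --resource-group <resource-group-name> \
    --service-name <apim-instance-name> \
    --api-id demo-conference-api \
    --path conference \
    --specification-format OpenApiJson \
    --specification-url "https://conferenceapi.azurewebsites.net?format=json"
```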
You can call API operations directly from the Azure portal, which provides a con
1. Select the **Test** tab, and then select **GetSpeakers**. The page shows **Query parameters** and **Headers**, if any. The **Ocp-Apim-Subscription-Key** is filled in automatically for the subscription key associated with this API. 1. Select **Send**.
- :::image type="content" source="media/import-and-publish/01-import-first-api-01.png" alt-text="Test API in Azure portal":::
+ :::image type="content" source="media/import-and-publish/test-new-api.png" alt-text="Test API in Azure portal":::
The backend responds with **200 OK** and some data.
In this tutorial, you learned how to:
Advance to the next tutorial to learn how to create and publish a product: > [!div class="nextstepaction"]
-> [Create and publish a product](api-management-howto-add-products.md)
+> [Create and publish a product](api-management-howto-add-products.md)
api-management Zone Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/zone-redundancy.md
Configuring API Management for zone redundancy is currently supported in the fol
* Australia East * Brazil South * Canada Central
-* Central India (*)
+* Central India
* Central US * East Asia * East US
app-service Configure Language Php https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-language-php.md
az webapp list-runtimes --linux | grep PHP
::: zone pivot="platform-windows"
-Run the following command in the [Cloud Shell](https://shell.azure.com) to set the PHP version to 7.4:
+Run the following command in the [Cloud Shell](https://shell.azure.com) to set the PHP version to 8.0:
```azurecli-interactive
-az webapp config set --resource-group <resource-group-name> --name <app-name> --php-version 7.4
+az webapp config set --resource-group <resource-group-name> --name <app-name> --php-version 8.0
``` ::: zone-end ::: zone pivot="platform-linux"
-Run the following command in the [Cloud Shell](https://shell.azure.com) to set the PHP version to 7.2:
+Run the following command in the [Cloud Shell](https://shell.azure.com) to set the PHP version to 8.0:
```azurecli-interactive
-az webapp config set --resource-group <resource-group-name> --name <app-name> --linux-fx-version "PHP|7.2"
+az webapp config set --resource-group <resource-group-name> --name <app-name> --linux-fx-version "PHP|8.0"
``` ::: zone-end
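To confirm the change, you can query the app's configuration afterward. A quick check (a sketch; the property is `phpVersion` on Windows and `linuxFxVersion` on Linux):

```azurecli-interactive
# Inspect the configured runtime for a Linux app; resource group and app name are placeholders
az webapp config show --resource-group <resource-group-name> --name <app-name> --query linuxFxVersion
```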
app-service Configure Ssl Certificate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-ssl-certificate.md
If you already have a working App Service certificate, you can:
Start an App Service certificate order in the <a href="https://portal.azure.com/#create/Microsoft.SSL" target="_blank">App Service Certificate create page</a>.
-![Start App Service certificate purchase](./media/configure-ssl-certificate/purchase-app-service-cert.png)
+> [!NOTE]
+> All prices shown are for examples only.
+ Use the following table to help you configure the certificate. When finished, click **Create**.
| Setting | Description |
|-|-|
-| Name | A friendly name for your App Service certificate. |
-| Naked Domain Host Name | Specify the root domain here. The issued certificate secures *both* the root domain and the `www` subdomain. In the issued certificate, the Common Name field contains the root domain, and the Subject Alternative Name field contains the `www` domain. To secure any subdomain only, specify the fully qualified domain name of the subdomain here (for example, `mysubdomain.contoso.com`).|
| Subscription | The subscription that will contain the certificate. |
| Resource group | The resource group that will contain the certificate. You can use a new resource group or select the same resource group as your App Service app, for example. |
-| Certificate SKU | Determines the type of certificate to create, whether a standard certificate or a [wildcard certificate](https://wikipedia.org/wiki/Wildcard_certificate). |
-| Legal Terms | Click to confirm that you agree with the legal terms. The certificates are obtained from GoDaddy. |
+| SKU | Determines the type of certificate to create, whether a standard certificate or a [wildcard certificate](https://wikipedia.org/wiki/Wildcard_certificate). |
+| Naked Domain Host Name | Specify the root domain here. The issued certificate secures *both* the root domain and the `www` subdomain. In the issued certificate, the Common Name field contains the root domain, and the Subject Alternative Name field contains the `www` domain. To secure any subdomain only, specify the fully qualified domain name of the subdomain here (for example, `mysubdomain.contoso.com`).|
+| Certificate name | A friendly name for your App Service certificate. |
+| Enable auto renewal | Select whether the certificate should be renewed automatically before it expires. Each renewal extends the certificate expiration by one year and the cost is charged to your subscription. |
> [!NOTE] > App Service Certificates purchased from Azure are issued by GoDaddy. For some domains, you must explicitly allow GoDaddy as a certificate issuer by creating a [CAA domain record](https://wikipedia.org/wiki/DNS_Certification_Authority_Authorization) with the value: `0 issue godaddy.com`
Before a certificate expires, you should add the renewed certificate into App Se
To replace an expiring certificate, how you update the certificate binding with the new certificate can adversely affect user experience. For example, your inbound IP address can change when you delete a binding, even if that binding is IP-based. This is especially important when you renew a certificate that's already in an IP-based binding. To avoid a change in your app's IP address, and to avoid downtime for your app due to HTTPS errors, follow these steps in order: 1. [Upload the new certificate](#upload-a-private-certificate).
-2. [Bind the new certificate to the same custom domain](configure-ssl-bindings.md) without deleting the existing (expiring) certificate. This action replaces the binding instead of removing the existing certificate binding.
+2. Bind the new certificate to the same custom domain without deleting the existing (expiring) certificate. This action replaces the binding instead of removing the existing certificate binding. To do this, navigate to the TLS/SSL settings blade of your App Service and select the Add Binding button.
3. Delete the existing certificate. ### Renew an App Service certificate > [!NOTE]
-> Beginning on September 23 2021, App Service certificates require domain verification every 395 days.
+> Beginning September 23, 2021, App Service certificates require domain verification during renewal or rekey if you haven't verified the domain in the last 395 days. The new certificate order remains in "pending issuance" during renewal or rekey until you complete the domain verification.
>
-> Unlike App Service Managed Certificate, domain re-verification for App Service certificates is *not* automated. Refer to [verify domain ownership](#verify-domain-ownership) for more information on how to verify your App Service certificate.
+> Unlike App Service Managed Certificate, domain re-verification for App Service certificates is *not* automated, and failure to verify domain ownership will result in failed renewals. Refer to [verify domain ownership](#verify-domain-ownership) for more information on how to verify your App Service certificate.
> [!NOTE] > The renewal process requires that [the well-known service principal for App Service has the required permissions on your key vault](deploy-resource-manager-template.md#deploy-web-app-certificate-from-key-vault). This permission is configured for you when you import an App Service Certificate through the portal, and should not be removed from your key vault.
-To toggle the automatic renewal setting of your App Service certificate at any time, select the certificate in the [App Service Certificates](https://portal.azure.com/#blade/HubsExtension/Resources/resourceType/Microsoft.CertificateRegistration%2FcertificateOrders) page, then click **Auto Renew Settings** in the left navigation. By default, App Service Certificates have a one-year validity period.
+By default, App Service certificates have a one-year validity period. Near the time of expiration, App Service certificates can be renewed in one-year increments, either automatically or manually. In effect, the renewal process gives you a new App Service certificate with the expiration date extended to one year from the existing certificate's expiration date.
+
+To toggle the automatic renewal setting of your App Service certificate at any time, select the certificate in the [App Service Certificates](https://portal.azure.com/#blade/HubsExtension/Resources/resourceType/Microsoft.CertificateRegistration%2FcertificateOrders) page, then click **Auto Renew Settings** in the left navigation.
-Select **On** or **Off** and click **Save**. Certificates can start automatically renewing 31 days before expiration if you have automatic renewal turned on.
+Select **On** or **Off** and click **Save**. Certificates can start automatically renewing 32 days before expiration if you have automatic renewal turned on.
![Renew App Service certificate automatically](./media/configure-ssl-certificate/auto-renew-app-service-cert.png)
automation Extension Based Hybrid Runbook Worker Install https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/extension-based-hybrid-runbook-worker-install.md
Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/delete | Delet
* To learn how to configure your runbooks to automate processes in your on-premises datacenter or other cloud environment, see [Run runbooks on a Hybrid Runbook Worker](automation-hrw-run-runbooks.md).
-* To learn how to troubleshoot your Hybrid Runbook Workers, see [Troubleshoot Hybrid Runbook Worker issues](troubleshoot/hybrid-runbook-worker.md#general).
+* To learn how to troubleshoot your Hybrid Runbook Workers, see [Troubleshoot Hybrid Runbook Worker issues](troubleshoot/extension-based-hybrid-runbook-worker.md).
azure-app-configuration Enable Dynamic Configuration Dotnet Core Push Refresh https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/enable-dynamic-configuration-dotnet-core-push-refresh.md
description: In this tutorial, you learn how to dynamically update the configuration data for .NET Core apps using push refresh documentationcenter: ''-+ editor: ''
ms.devlang: csharp Previously updated : 07/25/2020- Last updated : 02/03/2022+ #Customer intent: I want to use push refresh to dynamically update my app to use the latest configuration data in App Configuration.
The App Configuration .NET Core client library supports updating configuration o
1. Push Model: This uses [App Configuration events](./concept-app-configuration-event.md) to detect changes in configuration. Once App Configuration is set up to send key value change events to Azure Event Grid, the application can use these events to optimize the total number of requests needed to keep the configuration updated. Applications can choose to subscribe to these either directly from Event Grid, or through one of the [supported event handlers](../event-grid/event-handlers.md) such as a webhook, an Azure function, or a Service Bus topic.
-Applications can choose to subscribe to these events either directly from Event Grid, or through a web hook, or by forwarding events to Azure Service Bus. The Azure Service Bus SDK provides an API to register a message handler that simplifies this process for applications that either don't have an HTTP endpoint or don't wish to poll the event grid for changes continuously.
-
-This tutorial shows how you can implement dynamic configuration updates in your code using push refresh. It builds on the app introduced in the quickstarts. Before you continue, finish [Create a .NET Core app with App Configuration](./quickstart-dotnet-core-app.md) first.
+This tutorial shows how you can implement dynamic configuration updates in your code using push refresh. It builds on the app introduced in an earlier tutorial. Before you continue, finish [Tutorial: Use dynamic configuration in a .NET Core app](./enable-dynamic-configuration-dotnet-core.md) first.
You can use any code editor to do the steps in this tutorial. [Visual Studio Code](https://code.visualstudio.com/) is an excellent option that's available on the Windows, macOS, and Linux platforms.
In this tutorial, you learn how to:
## Prerequisites
-To do this tutorial, install the [.NET Core SDK](https://dotnet.microsoft.com/download).
-
+* Tutorial: [Use dynamic configuration in a .NET Core app](./enable-dynamic-configuration-dotnet-core.md)
+* NuGet package `Microsoft.Extensions.Configuration.AzureAppConfiguration` version 5.0.0 or later
## Set up Azure Service Bus topic and subscription
-This tutorial uses the Service Bus integration for Event Grid to simplify the detection of configuration changes for applications that don't wish to poll App Configuration for changes continuously. The Azure Service Bus SDK provides an API to register a message handler that can be used to update configuration when changes are detected in App Configuration. Follow steps in the [Quickstart: Use the Azure portal to create a Service Bus topic and subscription](../service-bus-messaging/service-bus-quickstart-topics-subscriptions-portal.md) to create a service bus namespace, topic, and subscription.
+This tutorial uses the Service Bus integration for Event Grid to simplify the detection of configuration changes for applications that don't wish to poll App Configuration for changes continuously. The Azure Service Bus SDK provides an API to register a message handler that can be used to update configuration when changes are detected in App Configuration. Follow the steps in [Quickstart: Use the Azure portal to create a Service Bus topic and subscription](../service-bus-messaging/service-bus-quickstart-topics-subscriptions-portal.md) to create a Service Bus namespace, topic, and subscription.
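If you prefer the Azure CLI over the portal quickstart, a minimal sketch of the same setup (all names and the location are placeholders):

```azurecli-interactive
# Create a Service Bus namespace, a topic, and a subscription on that topic
az servicebus namespace create --resource-group <resource-group-name> --name <namespace-name> --location <location>
az servicebus topic create --resource-group <resource-group-name> --namespace-name <namespace-name> --name <topic-name>
az servicebus topic subscription create --resource-group <resource-group-name> --namespace-name <namespace-name> --topic-name <topic-name> --name <subscription-name>
```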
Once the resources are created, add the following environment variables. These will be used to register an event handler for configuration changes in the application code.
Once the resources are created, add the following environment variables. These w
Open *Program.cs* and update the file with the following code. ```csharp
+using Azure.Messaging.EventGrid;
using Microsoft.Azure.ServiceBus; using Microsoft.Extensions.Configuration; using Microsoft.Extensions.Configuration.AzureAppConfiguration;
+using Microsoft.Extensions.Configuration.AzureAppConfiguration.Extensions;
using System;
-using System.Diagnostics;
-using System.Text;
-using System.Text.Json;
using System.Threading.Tasks; namespace TestConsole { class Program {
- private const string AppConfigurationConnectionStringEnvVarName = "AppConfigurationConnectionString"; // e.g. Endpoint=https://{store_name}.azconfig.io;Id={id};Secret={secret}
- private const string ServiceBusConnectionStringEnvVarName = "ServiceBusConnectionString"; // e.g. Endpoint=sb://{service_bus_name}.servicebus.windows.net/;SharedAccessKeyName={key_name};SharedAccessKey={key}
+ private const string AppConfigurationConnectionStringEnvVarName = "AppConfigurationConnectionString";
+ // e.g. Endpoint=https://{store_name}.azconfig.io;Id={id};Secret={secret}
+
+ private const string ServiceBusConnectionStringEnvVarName = "ServiceBusConnectionString";
+ // e.g. Endpoint=sb://{service_bus_name}.servicebus.windows.net/;SharedAccessKeyName={key_name};SharedAccessKey={key}
+
private const string ServiceBusTopicEnvVarName = "ServiceBusTopic"; private const string ServiceBusSubscriptionEnvVarName = "ServiceBusSubscription";
namespace TestConsole
options.ConfigureRefresh(refresh => refresh .Register("TestApp:Settings:Message")
- .SetCacheExpiration(TimeSpan.FromDays(30)) // Important: Reduce poll frequency
+ .SetCacheExpiration(TimeSpan.FromDays(1)) // Important: Reduce poll frequency
); _refresher = options.GetRefresher();
namespace TestConsole
Console.WriteLine($"New value: {configuration["TestApp:Settings:Message"]}"); message = configuration["TestApp:Settings:Message"]; }
-
+ await Task.Delay(TimeSpan.FromSeconds(1)); } }
namespace TestConsole
serviceBusClient.RegisterMessageHandler( handler: (message, cancellationToken) =>
- {
- string messageText = Encoding.UTF8.GetString(message.Body);
- JsonElement messageData = JsonDocument.Parse(messageText).RootElement.GetProperty("data");
- string key = messageData.GetProperty("key").GetString();
- Console.WriteLine($"Event received for Key = {key}");
-
- _refresher.SetDirty();
- return Task.CompletedTask;
- },
+ {
+ // Build EventGridEvent from notification message
+ EventGridEvent eventGridEvent = EventGridEvent.Parse(BinaryData.FromBytes(message.Body));
+
+ // Create PushNotification from eventGridEvent
+ eventGridEvent.TryCreatePushNotification(out PushNotification pushNotification);
+
+ // Prompt Configuration Refresh based on the PushNotification
+ _refresher.ProcessPushNotification(pushNotification);
+
+ return Task.CompletedTask;
+ },
exceptionReceivedHandler: (exceptionargs) => { Console.WriteLine($"{exceptionargs.Exception}");
namespace TestConsole
} ```
-The [SetDirty](/dotnet/api/microsoft.extensions.configuration.azureappconfiguration.iconfigurationrefresher.setdirty) method is used to set the cached value for key-values registered for refresh as dirty. This ensures that the next call to `RefreshAsync` or `TryRefreshAsync` revalidates the cached values with App Configuration and updates them if needed.
+The `ProcessPushNotification` method resets the cache expiration to a short random delay. This causes future calls to `RefreshAsync` or `TryRefreshAsync` to revalidate the cached values against App Configuration and update them as necessary. In this example, you register to monitor changes to the key *TestApp:Settings:Message* with a cache expiration of one day, so no request to App Configuration is made before a day has passed since the last check. By calling `ProcessPushNotification`, your application sends requests to App Configuration in the next few seconds. Your application loads the new configuration values shortly after changes occur in the App Configuration store, without the need to constantly poll for updates. If your application misses the change notification for any reason, it still checks for configuration changes once a day.
-A random delay is added before the cached value is marked as dirty to reduce potential throttling in case multiple instances refresh at the same time. The default maximum delay before the cached value is marked as dirty is 30 seconds, but can be overridden by passing an optional `TimeSpan` parameter to the `SetDirty` method.
+The short random delay for cache expiration is helpful if you have many instances of your application or microservices connecting to the same App Configuration store with the push model. Without this delay, all instances of your application could send requests to your App Configuration store simultaneously as soon as they receive a change notification, which can cause the App Configuration service to throttle your store. By default, the cache expiration delay is a random number between 0 and a maximum of 30 seconds, but you can change the maximum value through the optional `maxDelay` parameter of the `ProcessPushNotification` method.
-> [!NOTE]
-> To reduce the number of requests to App Configuration when using push refresh, it is important to call `SetCacheExpiration(TimeSpan cacheExpiration)` with an appropriate value of `cacheExpiration` parameter. This controls the cache expiration time for pull refresh and can be used as a safety net in case there is an issue with the Event subscription or the Service Bus subscription. The recommended value is `TimeSpan.FromDays(30)`.
+The `ProcessPushNotification` method takes in a `PushNotification` object containing information about which change in App Configuration triggered the push notification. This helps ensure all configuration changes up to the triggering event are loaded in the following configuration refresh. The `SetDirty` method does not guarantee that the change that triggered the push notification is loaded in an immediate configuration refresh. If you are using the `SetDirty` method for the push model, we recommend using the `ProcessPushNotification` method instead.
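To see the push model in action, update the registered key and watch the application pick up the change without polling. A sketch using the Azure CLI (the store name is a placeholder):

```azurecli-interactive
# Change the key the app registered for refresh; the change event flows
# through Event Grid and Service Bus to the running application
az appconfig kv set --name <store-name> --key TestApp:Settings:Message --value "Updated via push refresh" --yes
```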
## Build and run the app locally
A random delay is added before the cached value is marked as dirty to reduce pot
In this tutorial, you enabled your .NET Core app to dynamically refresh configuration settings from App Configuration. To learn how to use an Azure managed identity to streamline the access to App Configuration, continue to the next tutorial. > [!div class="nextstepaction"]
-> [Managed identity integration](./howto-integrate-azure-managed-service-identity.md)
+> [Managed identity integration](./howto-integrate-azure-managed-service-identity.md)
azure-app-configuration Quickstart Aspnet Core App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/quickstart-aspnet-core-app.md
dotnet new mvc --no-https --output TestAppConfig
```csharp var builder = WebApplication.CreateBuilder(args); //Retrieve the Connection String from the secrets manager
- var connectionString = builder.Configuration["AppConfig"];
+ var connectionString = builder.Configuration.GetConnectionString("AppConfig");
builder.Host.ConfigureAppConfiguration(builder => {
azure-arc Upgrade Data Controller Direct Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/upgrade-data-controller-direct-portal.md
+
+ Title: Upgrade directly connected Azure Arc data controller using the portal
+description: Article describes how to upgrade a directly connected Azure Arc data controller using the portal
++++++ Last updated : 01/18/2022+++
+# Upgrade a directly connected Azure Arc data controller using the portal
+
+This article describes how to upgrade a directly connected Azure Arc-enabled data controller using the Azure portal.
+
+During a data controller upgrade, portions of the data control plane such as Custom Resource Definitions (CRDs) and containers may be upgraded. An upgrade of the data controller will not cause downtime for the data services (SQL Managed Instance or PostgreSQL Hyperscale server).
+
+## Prerequisites
+
+You will need a directly connected data controller with the imageTag v1.0.0_2021-07-30 or later.
+
+To check the version, run:
+
+```console
+kubectl get datacontrollers -n <namespace> -o custom-columns=BUILD:.spec.docker.imageTag
+```
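The output lists the image tag of each data controller in the namespace. For example, a controller at the minimum supported build would show:

```console
BUILD
v1.0.0_2021-07-30
```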
+
+## Upgrade data controller
+
+This section shows how to upgrade a directly connected data controller.
+
+> [!NOTE]
+> Some of the data services tiers and modes are generally available and some are in preview.
+> If you install GA and preview services on the same data controller, you can't upgrade in place.
+> To upgrade, delete all non-GA database instances. You can find the list of generally available
+> and preview services in the [Release Notes](./release-notes.md).
+
+### Upgrade
+
+Open your data controller resource. If an upgrade is available, you will see a notification on the **Overview** blade that says, "One or more upgrades are available for this data controller."
+
+Under **Settings**, select the **Upgrade Management** blade.
+
+In the table of available versions, choose the version you want to upgrade to, and then select **Upgrade Now**.
+
+In the confirmation dialog box, select **Upgrade**.
+
+## Monitor the upgrade status
+
+To view the status of your upgrade in the portal, go to the resource group of the data controller and select the **Activity log** blade.
+
+You will see a "Validate Deploy" option that shows the status.
+
+## Troubleshoot upgrade problems
+
+If you encounter any trouble with upgrading, see the [troubleshooting guide](troubleshoot-guide.md).
azure-arc Upgrade Sql Managed Instance Auto https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/upgrade-sql-managed-instance-auto.md
+
+ Title: Enable automatic upgrades - Azure Arc enabled SQL Managed Instance
+description: Article describes how to enable automatic upgrades of SQL Managed Instance for Azure Arc
++++++ Last updated : 01/24/2022+++
+# Enable automatic upgrades of a SQL Managed Instance
++
+You can set the `spec.update.desiredVersion` property of an Azure Arc-enabled SQL Managed Instance to `auto` (through the `--desired-version` parameter) to ensure that your Managed Instance is upgraded after a data controller upgrade, with no interaction from a user. This simplifies management, because you don't need to manually upgrade every instance for every release.
+
+After you set the `spec.update.desiredVersion` property to `auto` the first time, the Azure Arc-enabled data service begins an upgrade of the Managed Instance to the newest image version within five minutes. Thereafter, within five minutes of a data controller being upgraded, the Managed Instance begins the upgrade process. This works for both directly connected and indirectly connected modes.
+
+If the `spec.update.desiredVersion` property is pinned to a specific version, automatic upgrades will not take place. This allows you to let most instances automatically upgrade, while manually managing instances that need a more hands-on approach.
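For example, to pin an instance to a specific version instead (a sketch; the image tag is a placeholder for a real build listed in the release notes):

````cli
az sql mi-arc upgrade --name <instance-name> --desired-version <specific-image-tag> --k8s-namespace <namespace> --use-k8s
````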
+
+## Enable with Kubernetes tools (kubectl)
+
+Use kubectl to view the existing spec in yaml.
+
+```console
+kubectl --namespace <namespace> get sqlmi <sqlmi-name> --output yaml
+```
+
+Run `kubectl patch` to set `desiredVersion` to `auto`.
+
+```console
+kubectl patch sqlmi <sqlmi-name> --namespace <namespace> --type merge --patch '{"spec": {"update": {"desiredVersion": "auto"}}}'
+```
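To confirm the patch was applied, you can read the property back (a quick check using the same property path):

```console
kubectl get sqlmi <sqlmi-name> --namespace <namespace> --output jsonpath='{.spec.update.desiredVersion}'
```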
+
+## Enable with CLI
+
+To set the `--desired-version` to `auto`, use the following command:
+
+Indirectly connected:
+
+````cli
+az sql mi-arc upgrade --name <instance name> --desired-version auto --k8s-namespace <namespace> --use-k8s
+````
+
+Example:
+
+````cli
+az sql mi-arc upgrade --name instance1 --desired-version auto --k8s-namespace arc1 --use-k8s
+````
+
+Directly connected:
+
+````cli
+az sql mi-arc upgrade --resource-group <resource group> --name <instance name> --desired-version auto [--no-wait]
+````
+
+Example:
+
+````cli
+az sql mi-arc upgrade --resource-group rgarc --name instance1 --desired-version auto
+````
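Either way, you can verify the configured value afterward by inspecting the instance spec. A sketch for the indirectly connected mode (the exact output format may vary):

````cli
az sql mi-arc show --name <instance name> --k8s-namespace <namespace> --use-k8s
````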
azure-arc Upgrade Sql Managed Instance Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/upgrade-sql-managed-instance-cli.md
Preparing to upgrade sql sqlmi-1 in namespace arc to data controller version.
During a SQL Managed Instance General Purpose upgrade, the containers in the pod will be upgraded and will be reprovisioned. This will cause a short amount of downtime as the new pod is created. You will need to build resiliency into your application, such as connection retry logic, to ensure minimal disruption. Read [Overview of the reliability pillar](/azure/architecture/framework/resiliency/overview) for more information on architecting resiliency and [Retry Guidance for Azure Services](/azure/architecture/best-practices/retry-service-specific#sql-database-using-adonet).
+### Business Critical
++
+### Upgrade
+ To upgrade the Managed Instance, use the following command: ````cli
azure-arc Upgrade Sql Managed Instance Direct Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/upgrade-sql-managed-instance-direct-cli.md
Preparing to upgrade sql sqlmi-1 in namespace arc to data controller version.
During a SQL Managed Instance General Purpose upgrade, the containers in the pod will be upgraded and will be reprovisioned. This will cause a short amount of downtime as the new pod is created. You will need to build resiliency into your application, such as connection retry logic, to ensure minimal disruption. Read [Overview of the reliability pillar](/azure/architecture/framework/resiliency/overview) for more information on architecting resiliency and [retry guidance for Azure Services](/azure/architecture/best-practices/retry-service-specific#sql-database-using-adonet).
+### Business Critical
++
+### Upgrade
+ To upgrade the Managed Instance, use the following command: ````cli
azure-arc Upgrade Sql Managed Instance Indirect Kubernetes Tools https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/upgrade-sql-managed-instance-indirect-kubernetes-tools.md
Currently, only one Managed Instance can be upgraded at a time.
During a SQL Managed Instance General Purpose upgrade, the containers in the pod will be upgraded and will be reprovisioned. This will cause a short amount of downtime as the new pod is created. You will need to build resiliency into your application, such as connection retry logic, to ensure minimal disruption. Read [Overview of the reliability pillar](/azure/architecture/framework/resiliency/overview) for more information on architecting resiliency.
+### Business Critical
++
+### Upgrade
+ Use a kubectl command to view the existing spec in yaml. ```console
azure-cache-for-redis Cache High Availability https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-high-availability.md
description: Learn about Azure Cache for Redis high availability features and op
Previously updated : 01/26/2022 Last updated : 02/02/2022
Azure Cache for Redis implements high availability by using multiple VMs, called
| - | - | - | :-: | :-: | :-: |
| [Standard replication](#standard-replication)| Dual-node replicated configuration in a single datacenter with automatic failover | 99.9% (see [details](https://azure.microsoft.com/support/legal/sla/cache/v1_1/)) |✔|✔|-|
| [Zone redundancy](#zone-redundancy) | Multi-node replicated configuration across AZs, with automatic failover | 99.9% in Premium; 99.99% in Enterprise (see [details](https://azure.microsoft.com/support/legal/sla/cache/v1_1/)) |-|✔|✔|
-| [Geo-replication](#geo-replication) | Linked cache instances in two regions, with user-controlled failover | Up to 99.999% (see [details](https://azure.microsoft.com/support/legal/sla/cache/v1_1/)) |-|✔|Preview|
+| [Geo-replication](#geo-replication) | Linked cache instances in two regions, with user-controlled failover | Up to 99.999% (see [details](https://azure.microsoft.com/support/legal/sla/cache/v1_1/)) |-|✔|✔|
## Standard replication
Azure Cache for Redis distributes nodes in a zone redundant cache in a round-rob
A zone redundant cache provides automatic failover. When the current primary node is unavailable, one of the replicas will take over. Your application may experience higher cache response time if the new primary node is located in a different AZ. AZs are geographically separated. Switching from one AZ to another alters the physical distance between where your application and cache are hosted. This change impacts round-trip network latencies from your application to the cache. The extra latency is expected to fall within an acceptable range for most applications. We recommend you test your application to ensure it does well with a zone-redundant cache.
-### Enterprise tiers
+### Enterprise tier
A cache in either Enterprise tier runs on a Redis Enterprise cluster. It always requires an odd number of server nodes to form a quorum. By default, it has three nodes, each hosted on a dedicated VM. An Enterprise cache has two same-sized *data nodes* and one smaller *quorum node*. An Enterprise Flash cache has three same-sized data nodes. The Enterprise cluster divides Redis data into partitions internally. Each partition has a *primary* and at least one *replica*. Each data node holds one or more partitions. The Enterprise cluster ensures that the primary and replica(s) of any partition are never collocated on the same data node. Partitions replicate data asynchronously from primaries to their corresponding replicas.
When a data node becomes unavailable or a network split happens, a failover simi
[Geo-replication](cache-how-to-geo-replication.md) is a mechanism for linking two or more Azure Cache for Redis instances, typically spanning two Azure regions.
-### Premium tier
+### Premium tier geo-replication
>[!NOTE] >Geo-replication in the Premium tier is designed mainly for disaster recovery.
An application accesses the cache through separate endpoints for the primary and
Geo-replication doesn't provide automatic failover because of concerns over added network roundtrip time between regions if the rest of your application remains in the primary region. You'll need to manage and start the failover by unlinking the secondary cache. Unlinking promotes it to be the new primary instance.
-### Enterprise tiers
-
->[!NOTE]
->This is available as a preview.
->
->
+### Enterprise tier geo-replication
-The Enterprise tiers support a more advanced form of geo-replication. We call it [active geo-replication](cache-how-to-active-geo-replication.md). Using conflict-free replicated data types, the Redis Enterprise software supports writes to multiple cache instances and takes care of merging of changes and resolving conflicts. You can join two or more Enterprise tier cache instances in different Azure regions to form an active geo-replicated cache.
+The Enterprise tiers support a more advanced form of geo-replication. We call it [active geo-replication](cache-how-to-active-geo-replication.md). Using conflict-free replicated data types, the Redis Enterprise software supports writes to multiple cache instances and takes care of merging of changes and resolving conflicts. You can join two or more Enterprise tier cache instances in different Azure regions to form an active geo-replicated cache.
An application using such a cache can read and write to the geo-distributed cache instances through corresponding endpoints. It should use the cache instance closest to each compute instance, giving you the lowest latency. The application also needs to monitor the cache instances and switch to another region when one of the instances becomes unavailable. For more information on how active geo-replication works, see [Active-Active Geo-Distribution (CRDTs-Based)](https://redislabs.com/redis-enterprise/technology/active-active-geo-distribution/).
azure-cache-for-redis Cache How To Active Geo Replication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-how-to-active-geo-replication.md
description: Learn how to replicate your Azure Cache for Redis Enterprise instan
Previously updated : 02/08/2021 Last updated : 02/02/2022
-# Configure active geo-replication for Enterprise Azure Cache for Redis instances (Preview)
+# Configure active geo-replication for Enterprise Azure Cache for Redis instances
In this article, you'll learn how to configure an active geo-replicated Azure Cache using the Azure portal. Active geo-replication groups up to five Enterprise Azure Cache for Redis instances into a single cache that spans across Azure regions. All instances act as the local primaries. An application decides which instance or instances to use for read and write requests. > [!NOTE]
-> Data transfer between Azure regions will be charged at standard [bandwidth rates](https://azure.microsoft.com/pricing/details/bandwidth/).
+> Data transfer between Azure regions is charged at standard [bandwidth rates](https://azure.microsoft.com/pricing/details/bandwidth/).
## Create or join an active geo-replication group
Active geo-replication groups up to five Enterprise Azure Cache for Redis instan
For more information on choosing **Clustering policy**, see [Clustering Policy](quickstart-create-redis-enterprise.md#clustering-policy).
- :::image type="content" source="media/cache-how-to-active-geo-replication/cache-active-geo-replication-not-configured.png" alt-text="Configure active geo-replication":::
+ :::image type="content" source="media/cache-how-to-active-geo-replication/cache-clustering-policy.png" alt-text="Configure active geo-replication":::
1. Select **Configure** to set up **Active geo-replication**.
azure-cache-for-redis Cache Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-overview.md
Previously updated : 02/08/2021 Last updated : 02/02/2022 #Customer intent: As a developer, I want to understand what Azure Cache for Redis is and how it can improve performance in my application.
The [Azure Cache for Redis Pricing](https://azure.microsoft.com/pricing/details/
| [OSS clustering](cache-how-to-premium-clustering.md) |-|-|✔|✔|✔|
| [Data persistence](cache-how-to-premium-persistence.md) |-|-|✔|Preview|Preview|
| [Zone redundancy](cache-how-to-zone-redundancy.md) |-|-|✔|✔|✔|
-| [Geo-replication](cache-how-to-geo-replication.md) |-|-|✔|Preview|Preview|
+| [Geo-replication](cache-how-to-geo-replication.md) |-|-|✔|✔|✔|
| [Redis Modules](#choosing-the-right-tier) |-|-|-|✔|-|
| [Import/Export](cache-how-to-import-export-data.md) |-|-|✔|✔|✔|
| [Reboot](cache-administration.md#reboot) |✔|✔|✔|-|-|
azure-cache-for-redis Cache Troubleshoot Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-troubleshoot-server.md
Previously updated : 12/30/2021 Last updated : 02/02/2022
This section discusses troubleshooting issues caused by conditions on an Azure C
## High server load
-High server load means the Redis server is busy and unable to keep up with requests, leading to timeouts. Check the **Redis Server Load** metric on your cache by selecting **Monitor** from the Resource menu on the left. You can Redis Server Load graph in the working pane.
+High server load means the Redis server is busy and unable to keep up with requests, leading to timeouts. Check the *Server Load* metric on your cache by selecting **Monitoring** from the Resource menu on the left. You see the **Server Load** graph in the working pane under **Insights**. Or, under **Metrics**, add a metric and set it to *Server Load*.
Following are some options to consider for high server load.
azure-cache-for-redis Cache Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-whats-new.md
Previously updated : 01/21/2022 Last updated : 02/02/2022 # What's New in Azure Cache for Redis
+## February 2022
+
+### Active geo-replication for Azure Cache For Redis Enterprise GA
+
+Active geo-replication for Azure Cache for Redis Enterprise is now generally available (GA).
+
+Active geo-replication is a powerful tool that enables Azure Cache for Redis clusters to be linked together for seamless active-active replication of data. Your applications can write to one Redis cluster and your data is automatically copied to the other linked clusters, and vice versa. For more information, see this [post](https://aka.ms/ActiveGeoGA) in the *Azure Developer Community Blog*.
+ ## January 2022 ### Support for managed identity in Azure Cache for Redis
Get started with Azure Cache for Redis 6.0, today, and select Redis 6.0 during c
### Diagnostics for connected clients
-Azure Cache for Redis now integrates with Azure diagnostic settings to log information on all client connections to your cache. Logging and then analyzing this diagnostic setting helps you understand who is connecting to your caches and the timestamp of those connections. This data could be used to identify the scope of a security breach and for security auditing purposes. Users can route these logs to a destination of their choice, such as a storage account or event hub.
+Azure Cache for Redis now integrates with Azure diagnostic settings to log information on all client connections to your cache. Logging and then analyzing this diagnostic setting helps you understand who is connecting to your caches and the timestamp of those connections. This data could be used to identify the scope of a security breach and for security auditing purposes. Users can route these logs to a destination of their choice, such as a storage account or Event Hub.
For more information, see [Monitor Azure Cache for Redis data using diagnostic settings](cache-monitor-diagnostic-settings.md).
azure-cache-for-redis Quickstart Create Redis Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/quickstart-create-redis-enterprise.md
The Azure Cache for Redis Enterprise tiers provide fully integrated and managed
* Enterprise, which uses volatile memory (DRAM) on a virtual machine to store data * Enterprise Flash, which uses both volatile and non-volatile memory (NVMe or SSD) to store data.
-Both Enterprise and Enterprise Flash support open-source Redis 6 and some new features that aren't yet available in the Basic, Standard, or Premium tiers. The supported features include some Redis modules that enable additional features like search, bloom filters, and time series.
+Both Enterprise and Enterprise Flash support open-source Redis 6 and some new features that aren't yet available in the Basic, Standard, or Premium tiers. The supported features include some Redis modules that enable other features like search, bloom filters, and time series.
## Prerequisites
You'll need an Azure subscription before you begin. If you don't have one, creat
1. Select **Next: Networking** and skip.
-1. Select **Next: Advanced**.
-
+1. Select **Next: Advanced**.
+ Enable **Non-TLS access only** if you plan to connect to the new cache without using TLS. Disabling TLS is **not** recommended, however. Set **Clustering policy** to **Enterprise** for a non-clustered cache. For more information on choosing **Clustering policy**, see [Clustering Policy](#clustering-policy).
- :::image type="content" source="media/cache-create/enterprise-tier-advanced.png" alt-text="Screenshot that shows the Enterprise tier Advanced tab.":::
+ :::image type="content" source="media/cache-create/cache-clustering-policy.png" alt-text="Screenshot that shows the Enterprise tier Advanced tab.":::
> [!NOTE]
- > Redis Enterprise supports two clustering policies. Use the **Enterprise** policy to access your cache using the regular Redis API, and **OSS** the OSS Cluster API.
+ > Redis Enterprise supports two clustering policies. Use the **Enterprise** policy to access your cache using the regular Redis API. Use **OSS** to use the OSS Cluster API.
> > [!NOTE]
You'll need an Azure subscription before you begin. If you don't have one, creat
The OSS Cluster mode allows clients to communicate with Redis using the same Redis Cluster API as open-source Redis. This mode provides optimal latency and near-linear scalability improvements when scaling the cluster. Your client library must support clustering to use the OSS Cluster mode.
-The Enterprise Cluster mode is a simpler configuration that exposes a single endpoint for client connections. This mode allows an application designed to use a standalone, or non-clustered, Redis server to seamlessly operate with a scalable, multi-node, Redis implementation. Enterprise Cluster mode abstracts the Redis Cluster implementation from the client by internally routing requests to the correct node in the cluster. Clients are not required to support OSS Cluster mode.
+The Enterprise Cluster mode is a simpler configuration that exposes a single endpoint for client connections. This mode allows an application designed to use a standalone, or non-clustered, Redis server to seamlessly operate with a scalable, multi-node, Redis implementation. Enterprise Cluster mode abstracts the Redis Cluster implementation from the client by internally routing requests to the correct node in the cluster. Clients aren't required to support OSS Cluster mode.
## Next steps
azure-edge-hardware-center Azure Edge Hardware Center Contact Microsoft Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-edge-hardware-center/azure-edge-hardware-center-contact-microsoft-support.md
Title: Log support ticket for Azure Edge Hardware Center orders
description: Learn how to log support request for issues related to orders created via Azure Edge Hardware Center. ---+ Last updated 01/03/2022
azure-edge-hardware-center Azure Edge Hardware Center Create Order https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-edge-hardware-center/azure-edge-hardware-center-create-order.md
Title: Tutorial to create an order using Azure Edge Hardware Center
description: The tutorial about creating an Azure Edge Hardware Center via the Azure portal. ---+ Last updated 01/03/2022
-# Customer intent: As an IT admin, I need to understand how to create an order via the Azure Edge Hardware Center.
+# Customer intent: As an IT admin, I need to understand how to create an order via the Azure Edge Hardware Center.
# Tutorial: Create an Azure Edge Hardware Center
azure-edge-hardware-center Azure Edge Hardware Center Manage Order https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-edge-hardware-center/azure-edge-hardware-center-manage-order.md
Title: Manage Azure Edge Hardware Center orders
+ Title: Manage Azure Edge Hardware Center orders
description: Describes how to use the Azure portal to manage orders created via Azure Edge Hardware Center. ---+ Last updated 01/03/2022
azure-edge-hardware-center Azure Edge Hardware Center Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-edge-hardware-center/azure-edge-hardware-center-overview.md
Title: Azure Edge Hardware Center overview
+ Title: Azure Edge Hardware Center overview
description: Describes Azure Edge Hardware Center - an Azure service that lets you order all Azure hardware and manage and track those orders ---+ Last updated 01/03/2022
azure-edge-hardware-center Azure Edge Hardware Center Resource Move Subscription Resource Group https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-edge-hardware-center/azure-edge-hardware-center-resource-move-subscription-resource-group.md
Title: Move Azure Edge Hardware Center resource across subscriptions, resource groups
+ Title: Move Azure Edge Hardware Center resource across subscriptions, resource groups
description: Use the Azure portal to move an Azure Edge Hardware Center resource to another subscription or a resource group. ---+ Last updated 01/03/2022
azure-edge-hardware-center Azure Edge Hardware Center Troubleshoot Order https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-edge-hardware-center/azure-edge-hardware-center-troubleshoot-order.md
Title: Troubleshoot Azure Edge Hardware Center issues via the Azure portal
+ Title: Troubleshoot Azure Edge Hardware Center issues via the Azure portal
description: Describes how to troubleshoot Azure Edge Hardware Center ordering issues. ---+ Last updated 01/03/2022
azure-functions Create First Function Vs Code Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/create-first-function-vs-code-powershell.md
Before you get started, make sure you have the following requirements in place:
+ An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
-+ The [Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools) version 3.x.
++ The [Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools) version 4.x.
+ [PowerShell 7](/powershell/scripting/install/installing-powershell-core-on-windows)
-+ Both [.NET Core 3.1 runtime](https://dotnet.microsoft.com/download) and [.NET Core 2.1 runtime](https://dotnet.microsoft.com/download/dotnet/2.1)
++ [.NET 6 runtime](https://dotnet.microsoft.com/download)
+ [Visual Studio Code](https://code.visualstudio.com/) on one of the [supported platforms](https://code.visualstudio.com/docs/supporting/requirements#_platforms).
azure-functions Functions Dotnet Dependency Injection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-dotnet-dependency-injection.md
Overriding services provided by the host is currently not supported. If there a
Values defined in [app settings](./functions-how-to-use-azure-function-app-settings.md#settings) are available in an `IConfiguration` instance, which allows you to read app settings values in the startup class.
-You can extract values from the `IConfiguration` instance into a custom type. Copying the app settings values to a custom type makes it easy test your services by making these values injectable. Settings read into the configuration instance must be simple key/value pairs.
+You can extract values from the `IConfiguration` instance into a custom type. Copying the app settings values to a custom type makes it easy to test your services by making these values injectable. Settings read into the configuration instance must be simple key/value pairs. Note that function apps running on the Elastic Premium SKU have this constraint: app setting names can only contain letters, numbers (0-9), periods ("."), colons (":"), and underscores ("_").
Consider the following class that includes a property named consistent with an app setting:
azure-maps Traffic Coverage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/traffic-coverage.md
Azure Maps provides rich traffic information in the form of traffic **flow** and **incidents**. This data can be visualized on maps or used to generate smarter routes that factor in real driving conditions.
-The following tables provide information about what kind of traffic information you can request from each country or region. If a market is missing in the following tables, it is not currently supported.
+The following tables provide information about what kind of traffic information you can request from each country or region. If a market is missing in the following tables, it isn't currently supported.
## Americas
The following tables provide information about what kind of traffic information
| Ukraine | ✓ | ✓ |
| United Kingdom | ✓ | ✓ |
-## Middle East and Africa
+## Middle East & Africa
| Country/Region | Incidents | Flow |
|-|:-:|:-:|
The following tables provide information about what kind of traffic information
## Additional information
-For more information about incorporating Azure Maps traffic data into your mapping applications, see the [Traffic](/rest/api/maps/traffic) REST API reference.
+Use the [Traffic](/rest/api/maps/traffic) REST API to incorporate Azure Maps traffic data into your mapping applications.
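For example, a Traffic Flow Segment request might look like the following sketch (the coordinates and key are placeholders; check the API reference for the full parameter list):

```console
curl "https://atlas.microsoft.com/traffic/flow/segment/json?api-version=1.0&style=absolute&zoom=10&query=52.41072,4.84239&subscription-key=<your-subscription-key>"
```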
azure-maps Weather Coverage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/weather-coverage.md
The following table refers to the *Other* column and provides a list containing
## Americas
-| Country/region | Infrared Satellite Tiles | Minute Forecast, Radar Tiles | Severe Weather Alerts | Other* |
+| Country/Region | Infrared Satellite Tiles | Minute Forecast, Radar Tiles | Severe Weather Alerts | Other* |
||:-:|:-:|:-:|:-:|
| Anguilla | ✓ | | | ✓ |
| Antarctica | ✓ | | | ✓ |
The following table refers to the *Other* column and provides a list containing
## Asia Pacific
-| Country/region | Infrared Satellite Tiles | Minute Forecast, Radar Tiles | Severe Weather Alerts | Other* |
+| Country/Region | Infrared Satellite Tiles | Minute Forecast, Radar Tiles | Severe Weather Alerts | Other* |
|--|:-:|:-:|:-:|:-:|
| Afghanistan | ✓ | | | ✓ |
| American Samoa | ✓ | | ✓ | ✓ |
The following table refers to the *Other* column and provides a list containing
## Europe
-| Country/region | Infrared Satellite Tiles | Minute Forecast, Radar Tiles | Severe Weather Alerts | Other* |
+| Country/Region | Infrared Satellite Tiles | Minute Forecast, Radar Tiles | Severe Weather Alerts | Other* |
|-|:-:|:-:|:-:|:-:|
| Albania | ✓ | | | ✓ |
| Andorra | ✓ | | ✓ | ✓ |
The following table refers to the *Other* column and provides a list containing
## Middle East & Africa
-| Country/region | Infrared Satellite Tiles | Minute Forecast, Radar Tiles | Severe Weather Alerts | Other* |
+| Country/Region | Infrared Satellite Tiles | Minute Forecast, Radar Tiles | Severe Weather Alerts | Other* |
|-|:-:|:-:|:-:|:-:|
| Algeria | ✓ | | | ✓ |
| Angola | ✓ | | | ✓ |
azure-monitor Action Groups Logic App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/action-groups-logic-app.md
The process is similar if you want the logic app to perform a different action.
![Add an action](media/action-groups-logic-app/add-action.png "Add an action")
-1. Search for and select the Microsoft Teams connector. Choose the **Microsoft Teams - Post message** action.
+1. Search for and select the Microsoft Teams connector. Choose the **Post message in a chat or channel** action.
- ![Microsoft Teams actions](media/action-groups-logic-app/microsoft-teams-actions.png "Microsoft Teams actions")
+ ![Microsoft Teams actions](media/action-groups-logic-app/microsoft-teams-actions-2.png "Microsoft Teams actions")
1. Configure the Microsoft Teams action. The **Logic Apps Designer** asks you to authenticate to your work or school account. Choose the **Team ID** and **Channel ID** to send the message to.
Azure Service Health entries are part of the activity log. The process for creat
``` - Steps 5 and 6 are the same.-- For steps 7 through 11, use the following process:
+- For steps 7 through 10, use the following process:
1. Select **+** **New step** and then choose **Add a condition**. Set the following conditions so the logic app executes only when the input data matches the values below. When entering the version value into the text box, put quotes around it ("0.1.1") to make sure that it's evaluated as a string and not a numeric type. The system does not show the quotes if you return to the page, but the underlying code still maintains the string type. - `schemaId == Microsoft.Insights/activityLogs`
Azure Service Health entries are part of the activity log. The process for creat
!["Service Health payload condition"](media/action-groups-logic-app/service-health-payload-condition.png "Service Health payload condition")
- 1. In the **If true** condition, follow the instructions in steps 11 through 13 in [Create an activity log alert](#create-an-activity-log-alert-administrative) to add the Microsoft Teams action.
+ 1. In the **If true** condition, follow the instructions in steps 6 through 8 in [Create an activity log alert](#create-an-activity-log-alert-administrative) to add the Microsoft Teams action.
- 1. Define the message by using a combination of HTML and dynamic content. Copy and paste the following content into the **Message** field. Replace the `[incidentType]`, `[trackingID]`, `[title]`, and `[communication]` fields with dynamic content tags of the same name:
+ 1. Define the message by using text and dynamic content. Copy and paste the following content into the **Message** field. Replace the `[incidentType]`, `[trackingID]`, `[title]`, and `[communication]` fields with dynamic content tags of the same name. Use the edit options available in **Message** to add bold text and links. The link *"For details, log in to the Azure Service Health dashboard."* in the image below has its destination set to https://ms.portal.azure.com/#blade/Microsoft_Azure_Health/AzureHealthBrowseBlade/serviceIssues
- ```html
- <p>
- <b>Alert Type:</b>&nbsp;<strong>[incidentType]</strong>&nbsp;
- <strong>Tracking ID:</strong>&nbsp;[trackingId]&nbsp;
- <strong>Title:</strong>&nbsp;[title]</p>
- <p>
- <a href="https://ms.portal.azure.com/#blade/Microsoft_Azure_Health/AzureHealthBrowseBlade/serviceIssues">For details, log in to the Azure Service Health dashboard.</a>
- </p>
- <p>[communication]</p>
- ```
-
- !["Service Health true condition post action"](media/action-groups-logic-app/service-health-true-condition-post-action.png "Service Health true condition post action")
+ !["Service Health true condition post action"](media/action-groups-logic-app/service-health-true-condition-post-action-2.png "Service Health true condition post action")
1. For the **If false** condition, provide a useful message:
- ```html
- <p><strong>Service Health Alert</strong></p>
- <p><b>Unrecognized alert schema</b></p>
- <p><a href="https://ms.portal.azure.com/#blade/Microsoft_Azure_Health/AzureHealthBrowseBlade/serviceIssues">For details, log in to the Azure Service Health dashboard.\</a></p>
- ```
-
- !["Service Health false condition post action"](media/action-groups-logic-app/service-health-false-condition-post-action.png "Service Health false condition post action")
+ !["Service Health false condition post action"](media/action-groups-logic-app/service-health-false-condition-post-action-2.png "Service Health false condition post action")
-- Step 15 is the same. Follow the instructions to save your logic app and update your action group.
+- Step 11 is the same. Follow the instructions to save your logic app and update your action group.
## Create a metric alert
The process for creating a metric alert is similar to [creating an activity log
``` - Steps 5 and 6 are the same.-- For steps 7 through 11, use the following process:
+- For steps 7 through 10, use the following process:
1. Select **+** **New step** and then choose **Add a condition**. Set the following conditions so the logic app executes only when the input data matches the values below. When entering the version value into the text box, put quotes around it ("2.0") to make sure that it's evaluated as a string and not a numeric type. The system does not show the quotes if you return to the page, but the underlying code still maintains the string type. - `schemaId == AzureMonitorMetricAlert`
The process for creating a metric alert is similar to [creating an activity log
1. In the **If true** condition, add a **For each** loop and the Microsoft Teams action. Define the message by using a combination of HTML and dynamic content.
- !["Metric alert true condition post action"](media/action-groups-logic-app/metric-alert-true-condition-post-action.png "Metric alert true condition post action")
+ !["Metric alert true condition post action"](media/action-groups-logic-app/metric-alert-true-condition-post-action-2.png "Metric alert true condition post action")
1. In the **If false** condition, define a Microsoft Teams action to communicate that the metric alert doesn't match the expectations of the logic app. Include the JSON payload. Notice how to reference the `triggerBody` dynamic content in the `json()` expression.
- !["Metric alert false condition post action"](media/action-groups-logic-app/metric-alert-false-condition-post-action.png "Metric alert false condition post action")
+ !["Metric alert false condition post action"](media/action-groups-logic-app/metric-alert-false-condition-post-action-2.png "Metric alert false condition post action")
-- Step 15 is the same. Follow the instructions to save your logic app and update your action group.
+- Step 11 is the same. Follow the instructions to save your logic app and update your action group.
## Calling other applications besides Microsoft Teams

Logic Apps has a number of different connectors that allow you to trigger actions in a wide range of applications and databases. Slack, SQL Server, Oracle, and Salesforce are just some examples. For more information about connectors, see [Logic App connectors](../../connectors/apis-list.md).
azure-monitor Change Analysis Visualizations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/change-analysis-visualizations.md
Last updated 01/11/2022
## Standalone UI
-Change Analysis lives in a standalone pane under Azure Monitor, where you can view all changes and application dependency/resource insights.
+Change Analysis lives in a standalone pane under Azure Monitor, where you can view all changes and application dependency/resource insights. You can access Change Analysis through a couple of entry points:
In the Azure portal, search for Change Analysis to launch the experience.

:::image type="content" source="./media/change-analysis/search-change-analysis.png" alt-text="Screenshot of searching Change Analysis in Azure portal":::

Select one or more subscriptions to view:

- All of its resources' changes from the past 24 hours.
- Old and new values to provide insights at one glance.

:::image type="content" source="./media/change-analysis/change-analysis-standalone-blade.png" alt-text="Screenshot of Change Analysis blade in Azure portal":::

Click into a change to view the full Resource Manager snippet and other properties.

:::image type="content" source="./media/change-analysis/change-details.png" alt-text="Screenshot of change details":::

Send any feedback to the [Change Analysis team](mailto:changeanalysisteam@microsoft.com) from the Change Analysis blade:

:::image type="content" source="./media/change-analysis/change-analysis-feedback.png" alt-text="Screenshot of feedback button in Change Analysis tab":::

### Multiple subscription support

The UI supports selecting multiple subscriptions to view resource changes. Use the subscription filter:
If you've enabled [VM Insights](../vm/vminsights-overview.md), you can view chan
:::image type="content" source="./media/change-analysis/vm-insights-2.png" alt-text="View of the property panel, selecting Investigate Changes button.":::
+## Drill to Change Analysis logs
+
+You can also drill to Change Analysis logs via a chart you've created or pinned to your resource's **Monitoring** dashboard.
+
+1. Navigate to the resource for which you'd like to view Change Analysis logs.
+1. On the resource's overview page, select the **Monitoring** tab.
+1. Select a chart from the **Key Metrics** dashboard.
+
+ :::image type="content" source="./media/change-analysis/view-change-analysis-1.png" alt-text="Chart from the Monitoring tab of the resource.":::
+
+1. From the chart, select **Drill into logs** and choose **Change Analysis** to view it.
+
+ :::image type="content" source="./media/change-analysis/view-change-analysis-2.png" alt-text="Drill into logs and select to view Change Analysis.":::
+
## Next steps

- Learn how to [troubleshoot problems in Change Analysis](change-analysis-troubleshoot.md)
azure-monitor Monitor Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/monitor-functions.md
Add the following application settings with the values below, then click Save on the
XDT_MicrosoftApplicationInsights_Java -> 1
ApplicationInsightsAgent_EXTENSION_VERSION -> ~2
```
+> [!IMPORTANT]
+> This feature adds a cold start of 8 to 9 seconds in the Windows Consumption plan.
#### Linux Dedicated/Premium

```
azure-monitor Authentication Authorization https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/api/authentication-authorization.md
To set up authentication and authorization for the Azure Monitor Log Analytics API:

## Set Up Authentication
-1. Use [these instructions](../../../active-directory/develop/quickstart-create-new-tenant.md) to set up Azure Active Directory, using these settings at the relevant steps:
+1. [Set up Azure Active Directory](../../../active-directory/develop/quickstart-register-app.md). During setup, use these settings at the relevant steps:
- When asked for the API to connect to, select **APIs my organization uses** and then search for "Log Analytics API".
- For the API permissions, select **Delegated permissions**.
-1. After completing the Active Directory setup, request an authorization token as described in the section below.
-1. (Optional) If you only want to work with sample data in a non-production environment, use an API key for authentication as described below.
+1. After completing the Active Directory setup, [Request an Authorization Token](#request-an-authorization-token).
+1. (Optional) If you only want to work with sample data in a non-production environment, you can just [use an API key](#authenticating-with-an-api-key).
## Request an Authorization Token

Before beginning, make sure you have all the values required to make OAuth2 calls successfully. All requests require:
The main OAuth2 flow supported is through [authorization codes](/azure/active-di
&resource=https://api.loganalytics.io
```
-When making a request to the Authorize URL, the client\_id is the Application ID from your Azure AD App, copied from the App's properties menu. The redirect\_uri is the home page/login URL from the same Azure AD App. When a request is successful, this endpoint redirects you to the login page you provided at sign-up with the authorization code appended to the URL. See the following example:
+When making a request to the Authorize URL, the client\_id is the Application ID from your Azure AD App, copied from the App's properties menu. The redirect\_uri is the home page/login URL from the same Azure AD App. When a request is successful, this endpoint redirects you to the sign-in page you provided at sign-up with the authorization code appended to the URL. See the following example:
```
http://YOUR_REDIRECT_URI/?code=AUTHORIZATION_CODE&session_state=STATE_GUID
At this point you will have obtained an authorization code, which you need now t
&client_secret=YOUR_CLIENT_SECRET
```
-All values are the same as before, with some additions. The authorization code is the same code you received in the previous request after a successful redirect. We now combine it with the key we previously obtained from our Azure AD App, or if you did not save the key you can delete it and create a new one from the keys tab of the Azure AD App menu. The response is a JSON string containing the token with the following schema. Exact values are indicated where they should not be changed. Types are indicated for the token values.
+All values are the same as before, with some additions. The authorization code is the same code you received in the previous request after a successful redirect. The code is combined with the key obtained from the Azure AD App. If you did not save the key, you can delete it and create a new one from the keys tab of the Azure AD App menu. The response is a JSON string containing the token with the following schema. Exact values are indicated where they should not be changed. Types are indicated for the token values.
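+
+If you're scripting this exchange, here's a minimal sketch using Python's `requests` library. It assumes the v1 token endpoint that matches the authorize URL above; the tenant ID, client values, and authorization code are placeholders you substitute with your own.
+
+```python
+import requests
+
+# Placeholder values: substitute your tenant ID, the Azure AD app's
+# client ID and secret, and the authorization code from the redirect.
+token_url = "https://login.microsoftonline.com/YOUR_TENANT_ID/oauth2/token"
+payload = {
+    "grant_type": "authorization_code",
+    "client_id": "YOUR_CLIENT_ID",
+    "code": "AUTHORIZATION_CODE",
+    "redirect_uri": "YOUR_REDIRECT_URI",
+    "client_secret": "YOUR_CLIENT_SECRET",
+    "resource": "https://api.loganalytics.io",
+}
+
+response = requests.post(token_url, data=payload)
+response.raise_for_status()
+
+# The JSON body carries the access token plus its metadata.
+access_token = response.json()["access_token"]
+```
+
+On subsequent Log Analytics API calls, send the access token in an `Authorization: Bearer` header.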
Response example:
azure-monitor Monitor Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/monitor-reference.md
description: Reference of all services and other resources monitored by Azure Mo
Previously updated : 11/10/2021 Last updated : 02/01/2021
The other services and older monitoring solutions in the following table store t
|:|:|
| [Azure Automation](../automation/index.yml) | Manage operating system updates and track changes on Windows and Linux computers. See [Change Tracking](../automation/change-tracking/overview.md) and [Update Management](../automation/update-management/overview.md). |
| [Azure Information Protection](/azure/information-protection/) | Classify and optionally protect documents and emails. See [Central reporting for Azure Information Protection](/azure/information-protection/reports-aip#configure-a-log-analytics-workspace-for-the-reports). |
-| [Azure Security Center](../security-center/index.yml) | Collect and analyze security events and perform threat analysis. See [Data collection in Azure Security Center](../security-center/security-center-enable-data-collection.md) |
-| [Azure Sentinel](../sentinel/index.yml) | Connects to different sources including Office 365 and Amazon Web Services Cloud Trail. See [Connect data sources](../sentinel/connect-data-sources.md). |
+| [Microsoft Defender for Cloud (formerly Azure Security Center)](/azure/defender-for-cloud/defender-for-cloud-introduction/) | Collect and analyze security events and perform threat analysis. See [Data collection in Microsoft Defender for Cloud](/azure/defender-for-cloud/enable-data-collection). |
+| [Microsoft Sentinel](../sentinel/index.yml) | Connects to different sources including Office 365 and Amazon Web Services Cloud Trail. See [Connect data sources](../sentinel/connect-data-sources.md). |
| [Microsoft Intune](/intune/) | Create a diagnostic setting to send logs to Azure Monitor. See [Send log data to storage, Event Hubs, or log analytics in Intune (preview)](/intune/fundamentals/review-logs-using-azure-monitor). |
| Network [Traffic Analytics](../network-watcher/traffic-analytics.md) | Analyzes Network Watcher network security group (NSG) flow logs to provide insights into traffic flow in your Azure cloud. |
| [System Center Operations Manager](/system-center/scom) | Collect data from Operations Manager agents by connecting their management group to Azure Monitor. See [Connect Operations Manager to Azure Monitor](agents/om-agents.md)<br> Assess the risk and health of your System Center Operations Manager management group with [Operations Manager Assessment](insights/scom-assessment.md) solution. |
azure-monitor Workbooks Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/visualize/workbooks-data-sources.md
To make a query control use this data source, use the Data source drop-down to c
## Azure Data Explorer

Workbooks now have support for querying from [Azure Data Explorer](/azure/data-explorer/) clusters with the powerful [Kusto](/azure/kusto/query/index) query language.
+For the **Cluster Name** field, you should add the region name following the cluster name. For example: *mycluster.westeurope*.
![Screenshot of Kusto query window](./media/workbooks-data-sources/data-explorer.png)
This provider supports [JSONPath](workbooks-jsonpath.md).
* [Get started](./workbooks-overview.md#visualizations) learning more about workbooks' many rich visualization options.
* [Control](./workbooks-access-control.md) and share access to your workbook resources.
-* [Log Analytics query optimization tips](../logs/query-optimization.md)
+* [Log Analytics query optimization tips](../logs/query-optimization.md)
azure-percept Voice Control Your Inventory Then Visualize With Power Bi Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/voice-control-your-inventory-then-visualize-with-power-bi-dashboard.md
In this section, you will train, test, and publish your Custom Commands
1. Replace the web endpoints URL
    1. Click Web endpoints and replace the URL
- 2. Replace the value in the URL to the <strong>HTTP Trigger Url</strong> you noted down in section 2 (ex: https://xxx.azurewebsites.net/api/httpexample)
+ 2. Replace the value in the URL to the <strong>HTTP Trigger Url</strong> you noted down in section 2 (ex: `https://xxx.azurewebsites.net/api/httpexample`)
![Replace the value in the URL](./media/voice-control-your-inventory-images/web-point-url.png)
2. Create LUIS prediction resource
    1. Click <strong>settings</strong> and create an <strong>S0</strong> prediction resource under LUIS <strong>prediction resource</strong>.
azure-resource-manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/overview.md
Bicep provides the following advantages:
-- **Authoring experience**: When you use VS Code to create your Bicep files, you get a first-class authoring experience. The editor provides rich type-safety, intellisense, and syntax validation.
+- **Authoring experience**: When you use the [Bicep Extension for VS Code](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-bicep) to create your Bicep files, you get a first-class authoring experience. The editor provides rich type-safety, intellisense, and syntax validation.
+
+ ![Bicep file authoring example](./media/overview/bicep-intellisense.gif)
+
- **Repeatable results**: Repeatedly deploy your infrastructure throughout the development lifecycle and have confidence your resources are deployed in a consistent manner. Bicep files are idempotent, which means you can deploy the same file many times and get the same resource types in the same state. You can develop one file that represents the desired state, rather than developing lots of separate files to represent updates.
- **Orchestration**: You don't have to worry about the complexities of ordering operations. Resource Manager orchestrates the deployment of interdependent resources so they're created in the correct order. When possible, Resource Manager deploys resources in parallel so your deployments finish faster than serial deployments. You deploy the file through one command, rather than through multiple imperative commands.
azure-resource-manager Parameters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/parameters.md
description: Describes how to define parameters in a Bicep file.
Previously updated : 11/12/2021 Last updated : 02/03/2022

# Parameters in Bicep
To help users understand the value to provide, add a description to the paramete
param virtualMachineSize string = 'Standard_DS1_v2'
```
+Markdown-formatted text can be used for the description text:
+
+```bicep
+@description('''
+Storage account name restrictions:
+- Storage account names must be between 3 and 24 characters in length and may contain numbers and lowercase letters only.
+- Your storage account name must be unique within Azure. No two storage accounts can have the same name.
+''')
+@minLength(3)
+@maxLength(24)
+param storageAccountName string
+```
+
+When you hover your cursor over **storageAccountName** in VSCode, you see the formatted text:
+Make sure the text is well-formatted Markdown. Otherwise the text won't be rendered correctly.
## Use parameter

To reference the value for a parameter, use the parameter name. The following example uses a parameter value for a key vault name.
azure-sql Maintenance Window Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/maintenance-window-configure.md
Configure the [maintenance window (Preview)](maintenance-window.md) for an Azure
The *System default* maintenance window is 5PM to 8AM daily (local time of the Azure region where the resource is located) to avoid peak business hours interruptions. If the *System default* maintenance window is not the best time, select one of the other available maintenance windows.
-The ability to change to a different maintenance window is not available for every service level or in every region. For details on availability, see [Maintenance window availability](maintenance-window.md#availability).
+The ability to change to a different maintenance window is not available for every service level or in every region. For details on feature availability, see [Maintenance window availability](maintenance-window.md#feature-availability).
> [!Important]
> Configuring maintenance window is a long running asynchronous operation, similar to changing the service tier of the Azure SQL resource. The resource is available during the operation, except a short reconfiguration that happens at the end of the operation and typically lasts up to 8 seconds even in case of interrupted long-running transactions. To minimize the impact of the reconfiguration you should perform the operation outside of the peak hours.
azure-sql Maintenance Window https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/maintenance-window.md
Previously updated : 12/15/2021 Last updated : 02/02/2022

# Maintenance window (Preview)
Azure periodically performs [planned maintenance](planned-maintenance.md) of SQL
Maintenance window is intended for production workloads that are not resilient to database or instance reconfigurations and cannot absorb short connection interruptions caused by planned maintenance events. By choosing a maintenance window you prefer, you can minimize the impact of planned maintenance as it will be occurring outside of your peak business hours. Resilient workloads and non-production workloads may rely on Azure SQL's default maintenance policy.
-The maintenance window can be configured on creation or for existing Azure SQL resources. It can be configured using the Azure portal, PowerShell, CLI, or Azure API.
+The maintenance window is free of charge and can be configured on creation or for existing Azure SQL resources. It can be configured using the Azure portal, PowerShell, CLI, or Azure API.
> [!Important]
> Configuring maintenance window is a long running asynchronous operation, similar to changing the service tier of the Azure SQL resource. The resource is available during the operation, except a short reconfiguration that happens at the end of the operation and typically lasts up to 8 seconds even in case of interrupted long-running transactions. To minimize the impact of the reconfiguration you should perform the operation outside of the peak hours.
Once the maintenance window selection is made and service configuration complete
> [!Important]
> In very rare circumstances where any postponement of action could cause serious impact, like applying a critical security patch, the configured maintenance window may be temporarily overridden.
-### Cost and eligibility
+## Advance notifications
-Configuring and using maintenance window is free of charge for all eligible [offer types](https://azure.microsoft.com/support/legal/offer-details/): Pay-As-You-Go, Cloud Solution Provider (CSP), Microsoft Enterprise Agreement, or Microsoft Customer Agreement.
+Maintenance notifications can be configured to alert you on upcoming planned maintenance events for your Azure SQL Database 24 hours in advance, at the time of maintenance, and when the maintenance is complete. For more information, see [Advance Notifications](advance-notifications.md).
-> [!Note]
-> An Azure offer is the type of the Azure subscription you have. For example, a subscription with [pay-as-you-go rates](https://azure.microsoft.com/offers/ms-azr-0003p/), [Azure in Open](https://azure.microsoft.com/offers/ms-azr-0111p/), and [Visual Studio Enterprise](https://azure.microsoft.com/offers/ms-azr-0063p/) are all Azure offers. Each offer or plan has different terms and benefits. Your offer or plan is shown on the subscription's Overview. For more information on switching your subscription to a different offer, see [Change your Azure subscription to a different offer](../../cost-management-billing/manage/switch-azure-offer.md).
+## Feature availability
-## Advance notifications
+### Supported subscription types
-Maintenance notifications can be configured to alert you on upcoming planned maintenance events for your Azure SQL Database 24 hours in advance, at the time of maintenance, and when the maintenance is complete. For more information, see [Advance Notifications](advance-notifications.md).
+Configuring and using maintenance window is available for the following [offer types](https://azure.microsoft.com/support/legal/offer-details/): Pay-As-You-Go, Cloud Solution Provider (CSP), Microsoft Enterprise Agreement, or Microsoft Customer Agreement.
+
+Offers restricted to dev/test usage only (for example, Pay-As-You-Go Dev/Test or Enterprise Dev/Test) are not eligible.
-## Availability
+> [!Note]
+> An Azure offer is the type of the Azure subscription you have. For example, a subscription with [pay-as-you-go rates](https://azure.microsoft.com/offers/ms-azr-0003p/), [Azure in Open](https://azure.microsoft.com/offers/ms-azr-0111p/), and [Visual Studio Enterprise](https://azure.microsoft.com/offers/ms-azr-0063p/) are all Azure offers. Each offer or plan has different terms and benefits. Your offer or plan is shown on the subscription's Overview. For more information on switching your subscription to a different offer, see [Change your Azure subscription to a different offer](../../cost-management-billing/manage/switch-azure-offer.md).
### Supported service level objectives
backup Backup Azure Sap Hana Database https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-sap-hana-database.md
Title: Back up an SAP HANA database to Azure with Azure Backup description: In this article, learn how to back up an SAP HANA database to Azure virtual machines with the Azure Backup service. Previously updated : 11/02/2021 Last updated : 02/03/2022
If you choose to allow access to service IPs, refer to the IP ranges in the JSON fi
You can also use the following FQDNs to allow access to the required services from your servers:
-| Service | Domain names to be accessed |
-| -- | |
-| Azure Backup | `*.backup.windowsazure.com` |
-| Azure Storage | `*.blob.core.windows.net` <br><br> `*.queue.core.windows.net` <br><br> `*.blob.storage.azure.net` |
-| Azure AD | Allow access to FQDNs under sections 56 and 59 according to [this article](/office365/enterprise/urls-and-ip-address-ranges#microsoft-365-common-and-office-online) |
+| Service | Domain names to be accessed | Ports |
+| -- | | - |
+| Azure Backup | `*.backup.windowsazure.com` | 443 |
+| Azure Storage | `*.blob.core.windows.net` <br><br> `*.queue.core.windows.net` <br><br> `*.blob.storage.azure.net` | 443 |
+| Azure AD | Allow access to FQDNs under sections 56 and 59 according to [this article](/office365/enterprise/urls-and-ip-address-ranges#microsoft-365-common-and-office-online) | As applicable |
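+
+If you want a quick, scriptable check that a server can reach these endpoints, the following Python sketch tests TCP connectivity on port 443. The host names are hypothetical stand-ins for the wildcard domains in the table; substitute hosts that apply to your environment.
+
+```python
+import socket
+
+# Hypothetical example hosts standing in for the wildcard domains above.
+hosts = [
+    "example.backup.windowsazure.com",   # placeholder Azure Backup host
+    "mystorageaccount.blob.core.windows.net",  # placeholder Azure Storage host
+]
+
+for host in hosts:
+    try:
+        # Attempt a TCP connection on port 443 with a short timeout.
+        with socket.create_connection((host, 443), timeout=5):
+            print(f"{host}:443 reachable")
+    except OSError as err:
+        print(f"{host}:443 blocked or unreachable: {err}")
+```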
#### Use an HTTP proxy server to route traffic
backup Quick Backup Vm Bicep Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/quick-backup-vm-bicep-template.md
To set up your environment for Bicep development, see [Install Bicep tools](../a
The template used below is from [Azure quickstart templates](https://azure.microsoft.com/resources/templates/recovery-services-create-vm-and-configure-backup/). This template allows you to deploy a simple Windows VM and a Recovery Services vault configured with _DefaultPolicy_ for _Protection_.
-```json
+```bicep
@description('Specifies a name for generating resource names.')
@maxLength(8)
param projectName string
bastion Vm Upload Download Native https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/bastion/vm-upload-download-native.md
Azure Bastion offers support for file transfer between your target VM and local computer using Bastion and a native RDP or SSH client. To learn more about native client support, refer to [Connect to a VM using the native client](connect-native-client-windows.md). You can use either SSH or RDP to upload files to a VM from your local computer. To download files from a VM, you must use RDP.

> [!NOTE]
-> Uploading and downloading files is supported using the native client only. You can't upload and download files using PowerShell or via the Azure portal.
+> * Uploading and downloading files is supported using the native client only. You can't upload and download files using PowerShell or via the Azure portal.
+> * This feature requires the Standard SKU. The Basic SKU doesn't support using the native client.
>

## Upload and download files - RDP

This section helps you transfer files between your local Windows computer and your target VM over RDP. Once connected to the target VM, you can transfer files using right-click, then **Copy** and **Paste**.
-1. Sign in to your Azure account and select your subscription containing your Bastion resource.
+1. Sign in to your Azure account and select the subscription containing your Bastion resource.
```azurecli-interactive
az login
This section helps you upload files from your local computer to your target VM o
> File download over SSH is not currently supported.
>
-1. Sign in to your Azure account and select your subscription containing your Bastion resource.
+1. Sign in to your Azure account and select the subscription containing your Bastion resource.
```azurecli-interactive
az login
cognitive-services Facebook Post Moderation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Content-Moderator/facebook-post-moderation.md
This diagram illustrates each component of this scenario:
## Create a review team
-Refer to the [Try Content Moderator on the web](quick-start.md) quickstart for instructions on how to sign up for the [Content Moderator Review tool](https://contentmoderator.cognitive.microsoft.com/) and create a review team. Take note of the **Team ID** value on the **Credentials** page.
+Refer to the [Try Content Moderator on the web](quick-start.md) quickstart for instructions on how to sign up for the Content Moderator Review tool and create a review team. Take note of the **Team ID** value on the **Credentials** page.
## Configure image moderation workflow
cognitive-services Luis Reference Regions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/LUIS/luis-reference-regions.md
LUIS authoring regions are supported by the LUIS portal. To publish a LUIS app t
## LUIS Authoring regions

LUIS has the following authoring regions available:

* Australia east
The authoring region app can only be published to a corresponding publish region
> [!NOTE]
> LUIS apps created on https://www.luis.ai can now be published to all endpoints including the [European](#publishing-to-europe) and [Australian](#publishing-to-australia) regions.
+## Single data residency
+
+The following publishing regions do not have a backup region:
+
+* Brazil South
+* Southeast Asia
+
+> [!Note]
+> Make sure to set `log=false` for [V3 APIs](https://westus.dev.cognitive.microsoft.com/docs/services/luis-endpoint-api-v3-0/operations/5cb0a91e54c9db63d589f433) to disable active learning. By default this value is `false`, to ensure that data does not leave the boundaries of the publishing region. If `log=true`, data is returned to the authoring region for active learning even if it is one of the single publishing regions.
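+
+As an illustration, here's a minimal sketch of a V3 prediction request with logging disabled, using Python's `requests` library. The region, app ID, query, and key below are placeholder values; substitute the prediction endpoint and key from your own LUIS resource.
+
+```python
+import requests
+
+# Placeholder values: substitute your prediction endpoint region,
+# LUIS app ID, and prediction key.
+endpoint = "https://brazilsouth.api.cognitive.microsoft.com"
+app_id = "YOUR_APP_ID"
+prediction_key = "YOUR_PREDICTION_KEY"
+
+url = f"{endpoint}/luis/prediction/v3.0/apps/{app_id}/slots/production/predict"
+params = {
+    "query": "turn on the lights",
+    "log": "false",  # keep query data inside the publishing region
+}
+headers = {"Ocp-Apim-Subscription-Key": prediction_key}
+
+response = requests.get(url, params=params, headers=headers)
+response.raise_for_status()
+print(response.json())
+```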
## Publishing to Europe

| Global region | Authoring API region | Publishing & querying region<br>`API region name` | Endpoint URL format |
Authoring regions have [paired fail-over regions](../../availability-zones/cross
## Next steps
-> [!div class="nextstepaction"]
+
> [Prebuilt entities reference](./luis-reference-prebuilt-entities.md)

[www.luis.ai]: https://www.luis.ai
cognitive-services Azure Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/Concepts/azure-resources.md
description: QnA Maker uses several Azure sources, each with a different purpose
Previously updated : 10/11/2021 Last updated : 02/02/2022
Each Azure resource created with QnA Maker has a specific purpose:
### QnA Maker resource
-The QnA Maker resource provides access to the authoring and publishing APIs as well as the natural language processing (NLP) based second ranking layer (ranker #2) of the QnA pairs at runtime.
-
-The second ranking applies intelligent filters that can include metadata and follow-up prompts.
+The QnA Maker resource provides access to the authoring and publishing APIs.
#### QnA Maker resource configuration settings
Learn [how to configure](../How-To/configure-QnA-Maker-resources.md#configure-qn
### App service and App service plan
-The [App service](../../../app-service/index.yml) is used by your client application to access the published knowledge bases via the runtime endpoint.
+The [App service](../../../app-service/index.yml) is used by your client application to access the published knowledge bases via the runtime endpoint. App service includes the natural language processing (NLP) based second ranking layer (ranker #2) of the QnA pairs at runtime. The second ranking applies intelligent filters that can include metadata and follow-up prompts.
To query a published knowledge base, use the shared URL endpoint and specify the **knowledge base ID** within the route, as shown in the sketch below.
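For illustration, here's a minimal sketch of such a query using Python's `requests` library. The App Service name, knowledge base ID, endpoint key, and question are placeholder values; the endpoint key comes from the knowledge base's publish page.

```python
import requests

# Placeholder values: substitute your App Service name, knowledge base ID,
# and the endpoint key from the knowledge base's publish page.
runtime_host = "https://YOUR_APP_SERVICE_NAME.azurewebsites.net"
kb_id = "YOUR_KNOWLEDGE_BASE_ID"
endpoint_key = "YOUR_ENDPOINT_KEY"

# The route is shared by all published knowledge bases; only the ID differs.
url = f"{runtime_host}/qnamaker/knowledgebases/{kb_id}/generateAnswer"
headers = {
    "Authorization": f"EndpointKey {endpoint_key}",
    "Content-Type": "application/json",
}

response = requests.post(url, headers=headers, json={"question": "YOUR_QUESTION"})
response.raise_for_status()
print(response.json()["answers"][0]["answer"])
```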
cognitive-services Quickstart Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/Quickstarts/quickstart-sdk.md
Get started with the QnA Maker client library. Follow these steps to install the
::: zone-end

::: zone pivot="programming-language-javascript"

::: zone-end

::: zone pivot="programming-language-python"
cognitive-services Conversation Transcription https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/conversation-transcription.md
Title: Conversation Transcription (Preview) - Speech service
+ Title: Conversation transcription (preview) - Speech service
-description: Conversation Transcription is a solution for meetings, that combines recognition, speaker ID, and diarization to provide transcription of any conversation.
+description: You use the conversation transcription feature for meetings. It combines recognition, speaker ID, and diarization to provide transcription of any conversation.
-# What is Conversation Transcription (Preview)?
+# What is conversation transcription (preview)?
-Conversation Transcription is a [speech-to-text](speech-to-text.md) solution that provides real-time or asynchronous transcription of any conversation. Conversation Transcription combines speech recognition, speaker identification, and sentence attribution to determine who said what and when in a conversation.
+Conversation transcription is a [speech-to-text](speech-to-text.md) solution that provides real-time or asynchronous transcription of any conversation. This feature, which is currently in preview, combines speech recognition, speaker identification, and sentence attribution to determine who said what, and when, in a conversation.
## Key features

-- **Timestamps** - Each speaker utterance has a timestamp, so that you can easily find when a phrase was said.
-- **Readable transcripts** - Transcripts have formatting and punctuation added automatically to ensure the text closely matches what was being said.
-- **User profiles** - User profiles are generated by collecting user voice samples and sending them to signature generation.
-- **Speaker identification** - Speakers are identified using user profiles and a _speaker identifier_ is assigned to each.
-- **Multi-speaker diarization** - Determine who said what by synthesizing the audio stream with each speaker identifier.
-- **Real-time transcription** – Provide live transcripts of who is saying what and when while the conversation is happening.
-- **Asynchronous transcription** – Provide transcripts with higher accuracy by using a multichannel audio stream.
+You might find the following features of conversation transcription useful:
+
+- **Timestamps:** Each speaker utterance has a timestamp, so that you can easily find when a phrase was said.
+- **Readable transcripts:** Transcripts have formatting and punctuation added automatically to ensure the text closely matches what was being said.
+- **User profiles:** User profiles are generated by collecting user voice samples and sending them to signature generation.
+- **Speaker identification:** Speakers are identified by using user profiles, and a _speaker identifier_ is assigned to each.
+- **Multi-speaker diarization:** Determine who said what by synthesizing the audio stream with each speaker identifier.
+- **Real-time transcription:** Provide live transcripts of who is saying what, and when, while the conversation is happening.
+- **Asynchronous transcription:** Provide transcripts with higher accuracy by using a multichannel audio stream.
> [!NOTE]
-> Although Conversation Transcription does not put a limit on the number of speakers in the room, it is optimized for 2-10 speakers per session.
+> Although conversation transcription doesn't put a limit on the number of speakers in the room, it's optimized for 2-10 speakers per session.
## Get started
See the real-time conversation transcription [quickstart](how-to-use-conversatio
## Use cases
-To make meetings inclusive for everyone, such as participants who are deaf and hard of hearing, it is important to have transcription in real time. Conversation Transcription in real-time mode takes meeting audio and determines who is saying what, allowing all meeting participants to follow the transcript and participate in the meeting without a delay.
-
-### Improved efficiency
+To make meetings inclusive for everyone, such as participants who are deaf and hard of hearing, it's important to have transcription in real time. Conversation transcription in real-time mode takes meeting audio and determines who is saying what, allowing all meeting participants to follow the transcript and participate in the meeting, without a delay.
-Meeting participants can focus on the meeting and leave note-taking to Conversation Transcription. Participants can actively engage in the meeting and quickly follow up on next steps, using the transcript instead of taking notes and potentially missing something during the meeting.
+Meeting participants can focus on the meeting and leave note-taking to conversation transcription. Participants can actively engage in the meeting and quickly follow up on next steps, using the transcript instead of taking notes and potentially missing something during the meeting.
## How it works
-This is a high-level overview of how Conversation Transcription works.
+The following diagram shows a high-level overview of how the feature works.
-![The Import Conversation Transcription Diagram](media/scenarios/conversation-transcription-service.png)
+![Diagram that shows the relationships among different pieces of the conversation transcription solution.](media/scenarios/conversation-transcription-service.png)
## Expected inputs

-- **Multi-channel audio stream** – For specification and design details, see [Microphone array recommendations](./speech-sdk-microphone.md).
-- **User voice samples** – Conversation Transcription needs user profiles in advance of the conversation for speaker identification. You will need to collect audio recordings from each user, then send the recordings to the [Signature Generation Service](https://aka.ms/cts/signaturegenservice) to validate the audio and generate user profiles.
+Conversation transcription uses two types of inputs:
+
+- **Multi-channel audio stream:** For specification and design details, see [Microphone array recommendations](./speech-sdk-microphone.md).
+- **User voice samples:** Conversation transcription needs user profiles in advance of the conversation for speaker identification. Collect audio recordings from each user, and then send the recordings to the [signature generation service](https://aka.ms/cts/signaturegenservice) to validate the audio and generate user profiles.
+
+User voice samples for voice signatures are required for speaker identification. Speakers who don't have voice samples are recognized as *unidentified*. Unidentified speakers can still be differentiated when the `DifferentiateGuestSpeakers` property is enabled (see the following example). The transcription output then shows speakers as, for example, *Guest_0* and *Guest_1*, instead of recognizing them as pre-enrolled specific speaker names.
-User voice samples for voice signatures are required for speaker identification. Speakers who do not have voice samples will be recognized as "Unidentified". Unidentified speakers can still be differentiated when the `DifferentiateGuestSpeakers` property is enabled (see example below). The transcription output will then show speakers as "Guest_0", "Guest_1", etc. instead of recognizing as pre-enrolled specific speaker names.
```csharp
config.SetProperty("DifferentiateGuestSpeakers", "true");
```

## Real-time vs. asynchronous
-Conversation Transcription offers three transcription modes:
+The following sections provide more detail about transcription modes you can choose.
### Real-time
-Audio data is processed live to return speaker identifier + transcript. Select this mode if your transcription solution requirement is to provide conversation participants a live transcript view of their ongoing conversation. For example, building an application to make meetings more accessible the deaf and hard of hearing participants is an ideal use case for real-time transcription.
+Audio data is processed live to return the speaker identifier and transcript. Select this mode if your transcription solution requirement is to provide conversation participants a live transcript view of their ongoing conversation. For example, building an application to make meetings more accessible to participants with hearing loss or deafness is an ideal use case for real-time transcription.
### Asynchronous
-Audio data is batch processed to return speaker identifier and transcript. Select this mode if your transcription solution requirement is to provide higher accuracy without live transcript view. For example, if you want to build an application to allow meeting participants to easily catch up on missed meetings, then use the asynchronous transcription mode to get high-accuracy transcription results.
+Audio data is batch processed to return the speaker identifier and transcript. Select this mode if your transcription solution requirement is to provide higher accuracy, without the live transcript view. For example, if you want to build an application to allow meeting participants to easily catch up on missed meetings, then use the asynchronous transcription mode to get high-accuracy transcription results.
### Real-time plus asynchronous
-Audio data is processed live to return speaker identifier + transcript, and, in addition, a request is created to also get a high-accuracy transcript through asynchronous processing. Select this mode if your application has a need for real-time transcription but also requires a higher accuracy transcript for use after the conversation or meeting occurred.
+Audio data is processed live to return the speaker identifier and transcript, and, in addition, requests a high-accuracy transcript through asynchronous processing. Select this mode if your application has a need for real-time transcription, and also requires a higher accuracy transcript for use after the conversation or meeting occurred.
## Language support
-Currently, Conversation Transcription supports [all speech-to-text languages](language-support.md#speech-to-text) in the following regions: `centralus`, `eastasia`, `eastus`, `westeurope`. If you require additional locale support, contact the [Conversation Transcription Feature Crew](mailto:CTSFeatureCrew@microsoft.com).
+Currently, conversation transcription supports [all speech-to-text languages](language-support.md#speech-to-text) in the following regions: `centralus`, `eastasia`, `eastus`, `westeurope`.
## Next steps
cognitive-services Custom Neural Voice https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/custom-neural-voice.md
Title: Custom neural voice overview - Speech service
+ Title: Custom Neural Voice overview - Speech service
-description: Custom Neural Voice is a text-to-Speech feature that allows you to create a one-of-a-kind customized synthetic voice for your applications by providing your own audio data as a sample.
+description: Custom Neural Voice is a text-to-speech feature that allows you to create a one-of-a-kind, customized, synthetic voice for your applications. You provide your own audio data as a sample.
# What is Custom Neural Voice?
-Custom Neural Voice is a Text-to-Speech (TTS) feature that lets you create a one-of-a-kind customized synthetic voice for your applications. With Custom Neural Voice, you can build a highly natural-sounding voice by providing your audio samples as training data. Based on the Neural Text-to-Speech technology and the multi-lingual multi-speaker universal model, Custom Neural Voice lets you create synthetic voices that are rich in speaking styles, or adaptable cross languages. The realistic and natural sounding voice of Custom Neural Voice can represent brands, personify machines, and allow users to interact with applications conversationally. See the supported [languages](language-support.md#custom-neural-voice) for Custom Neural Voice and cross-lingual feature.
+Custom Neural Voice is a text-to-speech feature that lets you create a one-of-a-kind, customized, synthetic voice for your applications. With Custom Neural Voice, you can build a highly natural-sounding voice by providing your audio samples as training data.
+
+Based on the neural text-to-speech technology and the multilingual, multi-speaker, universal model, Custom Neural Voice lets you create synthetic voices that are rich in speaking styles, or adaptable cross languages. The realistic and natural sounding voice of Custom Neural Voice can represent brands, personify machines, and allow users to interact with applications conversationally. See the [supported languages](language-support.md#custom-neural-voice) for Custom Neural Voice.
> [!NOTE]
-> The Custom Neural Voice feature requires registration, and access to it is limited based upon Microsoft's eligibility and use criteria. Customers who wish to use this feature are required to register their use cases through the [intake form](https://aka.ms/customneural).
+> Custom Neural Voice requires registration, and access to it is limited based on eligibility and use criteria. To use this feature, register your use cases by using the [intake form](https://aka.ms/customneural).
## The basics of Custom Neural Voice
-The underlying Neural TTS technology used for Custom Neural Voice
-consists of three major components: Text Analyzer, Neural Acoustic
-Model, and Neural Vocoder. To generate natural synthetic speech from
-text, text is first input into Text Analyzer, which provides output in
-the form of phoneme sequence. A phoneme is a basic unit of sound that
-distinguishes one word from another in a particular language. A sequence
-of phonemes defines the pronunciations of the words provided in the
-text.
+Custom Neural Voice consists of three major components: the text analyzer, the neural acoustic
+model, and the neural vocoder. To generate natural synthetic speech from text, text is first input into the text analyzer, which provides output in the form of phoneme sequence. A *phoneme* is a basic unit of sound that distinguishes one word from another in a particular language. A sequence of phonemes defines the pronunciations of the words provided in the text.
-Next, the phoneme sequence goes into the Neural Acoustic Model to
-predict acoustic features that define speech signals, such as the
-timbre, the speaking style, speed, intonations, and stress patterns. Finally, the Neural Vocoder converts the acoustic features into audible waves so that synthetic speech is generated.
+Next, the phoneme sequence goes into the neural acoustic model to predict acoustic features that define speech signals. Acoustic features include the timbre, the speaking style, speed, intonations, and stress patterns. Finally, the neural vocoder converts the acoustic features into audible waves, so that synthetic speech is generated.
-![Introduction image for custom neural voice.](./media/custom-voice/cnv-intro.png)
+![Flowchart that shows the components of Custom Neural Voice.](./media/custom-voice/cnv-intro.png)
-Neural Text-to-Speech voice models are trained using deep neural networks based on
-the recording samples of human voices. In this
-[blog](https://techcommunity.microsoft.com/t5/azure-ai/neural-text-to-speech-extends-support-to-15-more-languages-with/ba-p/1505911),
-we describe how Neural Text-to-Speech works with state-of-the-art neural speech
-synthesis models. The blog also explains how a universal base model can be adapted to a target speaker's voice with less
-than 2 hours of speech data (or less than 2,000 recorded utterances), and additionally transfer the voice to another language or style. To read about how a neural vocoder is trained, see the [blog post](https://techcommunity.microsoft.com/t5/azure-ai/azure-neural-tts-upgraded-with-hifinet-achieving-higher-audio/ba-p/1847860).
+Neural text-to-speech voice models are trained by using deep neural networks based on
+the recording samples of human voices. For more information, see [this Microsoft blog post](https://techcommunity.microsoft.com/t5/azure-ai/neural-text-to-speech-extends-support-to-15-more-languages-with/ba-p/1505911). To learn more about how a neural vocoder is trained, see [this Microsoft blog post](https://techcommunity.microsoft.com/t5/azure-ai/azure-neural-tts-upgraded-with-hifinet-achieving-higher-audio/ba-p/1847860).
-Custom Neural Voice lets you adapt the Neural Text-to-Speech engine to fit your scenarios. To create a custom neural voice, use [Speech Studio](https://speech.microsoft.com/customvoice) to upload the recorded audio and corresponding scripts, train the model, and deploy the voice to a custom endpoint. Custom Neural Voice can use text provided by the user to convert text into speech in real-time, or generate audio content offline with text input. This is made available via the [REST API](./rest-text-to-speech.md), the [Speech SDK](./get-started-text-to-speech.md), or the [web portal](https://speech.microsoft.com/audiocontentcreation).
+You can adapt the neural text-to-speech engine to fit your needs. To create a custom neural voice, use [Speech Studio](https://speech.microsoft.com/customvoice) to upload the recorded audio and corresponding scripts, train the model, and deploy the voice to a custom endpoint. Custom Neural Voice can use text provided by the user to convert text into speech in real time, or generate audio content offline with text input. You can do this by using the [REST API](./rest-text-to-speech.md), the [Speech SDK](./get-started-text-to-speech.md), or the [web portal](https://speech.microsoft.com/audiocontentcreation).
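+
+As a rough sketch of the REST path, synthesizing speech with a deployed custom voice might look like the following Python snippet. It assumes the standard text-to-speech REST endpoint with a `deploymentId` query parameter identifying the custom endpoint; the region, key, deployment ID, and voice name are placeholders.
+
+```python
+import requests
+
+# Placeholder values: substitute your region, Speech resource key,
+# custom endpoint deployment ID, and deployed custom voice name.
+region = "eastus"
+subscription_key = "YOUR_SPEECH_KEY"
+deployment_id = "YOUR_DEPLOYMENT_ID"
+voice_name = "YOUR_CUSTOM_VOICE_NAME"
+
+url = f"https://{region}.tts.speech.microsoft.com/cognitiveservices/v1"
+headers = {
+    "Ocp-Apim-Subscription-Key": subscription_key,
+    "Content-Type": "application/ssml+xml",
+    "X-Microsoft-OutputFormat": "riff-24khz-16bit-mono-pcm",
+}
+# SSML selects the custom voice by name.
+ssml = f"""
+<speak version='1.0' xml:lang='en-US'>
+  <voice name='{voice_name}'>Hello from my custom neural voice.</voice>
+</speak>
+"""
+
+response = requests.post(url, params={"deploymentId": deployment_id},
+                         headers=headers, data=ssml.encode("utf-8"))
+response.raise_for_status()
+
+# Write the returned audio to a WAV file.
+with open("output.wav", "wb") as f:
+    f.write(response.content)
+```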
## Get started
+The following articles help you start using this feature:
+
* To get started with Custom Neural Voice and create a project, see [Get started with Custom Neural Voice](how-to-custom-voice.md).
* To prepare and upload your audio data, see [Prepare training data](how-to-custom-voice-prepare-data.md).
* To train and deploy your models, see [Create and use your voice model](how-to-custom-voice-create-voice.md).
Custom Neural Voice lets you adapt the Neural Text-to-Speech engine to fit your
| **Term** | **Definition** |
|||
-| Voice model | A Text-to-Speech model that can mimic the unique vocal characteristics of a target speaker. A *voice model* is also known as a *voice font* or *synthetic voice*. A voice model is a set of parameters in binary format that is not human readable and does not contain audio recordings. It cannot be reverse engineered to derive or construct the audio of a human voice. |
-| Voice talent | Individuals or target speakers whose voices are recorded and used to create voice models that are intended to sound like the voice talent's voice.|
-| Standard TTS | The standard, or "traditional," method of Text-to-Speech that breaks down spoken language into phonetic snippets so that they can be remixed and matched using classical programming or statistical methods.|
-| Neural TTS | Neural TTS synthesizes speech using deep neural networks that have "learned" the way phonetics are combined in natural human speech, rather than using procedural programming or statistical methods. In addition to the recordings of a target voice talent, Neural TTS uses a source library/base model that is built with voice recordings from many different speakers. |
-| Training data | A custom neural voice training dataset that includes the audio recordings of the voice talent, and the associated text transcriptions.|
-| Persona | A persona describes who you want this voice to be. A good persona design will inform all voice creation whether itΓÇÖs choosing an available voice model already created, or starting from scratch by casting and recording a new voice talent.|
-| Script | A script is a text file that contains the utterances to be spoken by your voice talent. (The term "*utterances*" encompasses both full sentences and shorter phrases.)|
+| Voice model | A text-to-speech model that can mimic the unique vocal characteristics of a target speaker. A *voice model* is also known as a *voice font* or *synthetic voice*. A voice model is a set of parameters in binary format that is not human readable and does not contain audio recordings. It can't be reverse engineered to derive or construct the audio of a human voice. |
+| Voice talent | Individuals or target speakers whose voices are recorded and used to create voice models. These voice models are intended to sound like the voice talent's voice.|
+| Standard text-to-speech | The standard, or "traditional," method of text-to-speech. This method breaks down spoken language into phonetic snippets so that they can be remixed and matched by using classical programming or statistical methods.|
+| Neural text-to-speech | This method synthesizes speech by using deep neural networks. These networks have "learned" the way phonetics are combined in natural human speech, rather than using procedural programming or statistical methods. In addition to the recordings of a target voice talent, neural text-to-speech uses a source library or base model that is built with voice recordings from many different speakers. |
+| Training data | A Custom Neural Voice training dataset that includes the audio recordings of the voice talent, and the associated text transcriptions.|
+| Persona | A persona describes who you want this voice to be. A good persona design will inform all voice creation. This might include choosing an available voice model already created, or starting from scratch by casting and recording a new voice talent.|
+| Script | A script is a text file that contains the utterances to be spoken by your voice talent. (The term *utterances* encompasses both full sentences and shorter phrases.)|
## Responsible use of AI
-To learn how to use Custom Neural Voice responsibly, see the [transparency note](/legal/cognitive-services/speech-service/custom-neural-voice/transparency-note-custom-neural-voice?context=/azure/cognitive-services/speech-service/context/context). Microsoft's transparency notes are intended to help you understand how our AI technology works, the choices system owners can make that influence system performance and behavior, and the importance of thinking about the whole system, including the technology, the people, and the environment.
+To learn how to use Custom Neural Voice responsibly, see the [transparency note](/legal/cognitive-services/speech-service/custom-neural-voice/transparency-note-custom-neural-voice?context=/azure/cognitive-services/speech-service/context/context). Transparency notes are intended to help you understand how the AI technology from Microsoft works, and the choices system owners can make that influence system performance and behavior. Transparency notes also discuss the importance of thinking about the whole system, including the technology, the people, and the environment.
## Next steps
cognitive-services Get Started Speech To Text https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/get-started-speech-to-text.md
Title: "Speech-to-text quickstart - Speech service"
-description: Learn how to use the Speech SDK to convert speech-to-text. In this quickstart, you learn about object construction, supported audio input formats, and configuration options for speech recognition.
+description: Learn how to use the Speech SDK to convert speech to text, including object construction, supported audio input formats, and configuration options for speech recognition.
cognitive-services How To Custom Speech Test And Train https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-custom-speech-test-and-train.md
Previously updated : 01/23/2022 Last updated : 02/02/2022
The following table specifies the limits and other properties for the Markdown f
## Pronunciation data for training
-If there are uncommon terms without standard pronunciations that your users will encounter or use, you can provide a custom pronunciation file to improve recognition. For a list of languages that support custom pronunciation, see **Pronunciation** in the **Customizations** column in [the Speech-to-text table](language-support.md#speech-to-text).
-
-> [!IMPORTANT]
-> We don't recommend that you use custom pronunciation files to alter the pronunciation of common words.
+If there are uncommon terms without standard pronunciations that your users will encounter or use, you can provide a custom pronunciation file to improve recognition. Don't use custom pronunciation files to alter the pronunciation of common words. For a list of languages that support custom pronunciation, see **Pronunciation** in the **Customizations** column in [the Speech-to-text table](language-support.md#speech-to-text).
> [!NOTE]
> You can't combine this type of pronunciation file with structured-text training data. For structured-text data, use the phonetic pronunciation capability that's included in the structured-text Markdown format.
-Provide pronunciations in a single text file. This file includes examples of a spoken utterance and a custom pronunciation for each:
+The spoken form is the phonetic sequence spelled out. It can be composed of letters, words, syllables, or a combination of all three. This table includes some examples:
-| Recognized/displayed form | Spoken form |
+| Recognized displayed form | Spoken form |
|--|--|
| 3CPO | three c p o |
| CNTK | c n t k |
| IEEE | i triple e |
-The spoken form is the phonetic sequence spelled out. It can be composed of letters, words, syllables, or a combination of all three.
+You provide pronunciations in a single text file. Include the spoken utterance and a custom pronunciation for each. Each row in the file should begin with the recognized form, then a tab character, and then the space-delimited phonetic sequence.
+
+```tsv
+3CPO three c p o
+CNTK c n t k
+IEEE i triple e
+```
-Use the following table to ensure that your related data file for pronunciations is correctly formatted. Pronunciation files are small and should be only a few kilobytes in size.
+Refer to the following table to ensure that your related data file for pronunciations is correctly formatted. The size of pronunciation files should be limited to a few kilobytes.
| Property | Value |
|-|-|
cognitive-services How To Custom Voice https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-custom-voice.md
Title: "Get started with Custom Neural Voice - Speech service"
+ Title: Get started with Custom Neural Voice - Speech service
-description: "Custom Neural Voice is a set of online tools that allow you to create a recognizable, one-of-a-kind voice for your brand. All it takes to get started are a handful of audio files and the associated transcriptions."
+description: Custom Neural Voice is a set of online tools that you use to create a recognizable, one-of-a-kind voice for your brand. All it takes to get started are a handful of audio files and the associated transcriptions.
# Get started with Custom Neural Voice
-[Custom Neural Voice](https://aka.ms/customvoice) is a set of online tools that allow you to create a recognizable, one-of-a-kind voice for your brand. All it takes to get started are a handful of audio files and the associated transcriptions. Follow the links below to start creating a custom Text-to-Speech experience. See the supported [languages](language-support.md#custom-neural-voice) and [regions](regions.md#custom-neural-voices) for Custom Neural Voice.
+[Custom Neural Voice](https://aka.ms/customvoice) is a set of online tools that you use to create a recognizable, one-of-a-kind voice for your brand. All it takes to get started are a handful of audio files and the associated transcriptions. See if Custom Neural Voice supports your [language](language-support.md#custom-neural-voice) and [region](regions.md#custom-neural-voices).
> [!NOTE]
-> As part of Microsoft's commitment to designing responsible AI, we have limited the use of Custom Neural Voice. You may gain access to the technology only after your applications are reviewed and you have committed to using it in alignment with our responsible AI principles. Learn more about our [policy on the limit access](/legal/cognitive-services/speech-service/custom-neural-voice/limited-access-custom-neural-voice?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext) and [apply here](https://aka.ms/customneural).
+> Microsoft is committed to designing responsible AI. For that reason, we have limited the use of Custom Neural Voice. You can gain access to the technology only after your applications are reviewed and you have committed to using it in alignment with our responsible AI principles. Learn more about our [limited access policy](/legal/cognitive-services/speech-service/custom-neural-voice/limited-access-custom-neural-voice?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext), and [apply here](https://aka.ms/customneural).
## Set up your Azure account
-A Speech service subscription is required before you can use Custom Neural Voice. Follow these instructions to create a Speech service subscription in Azure. If you do not have an Azure account, you can sign up for a new one.
+A Speech service subscription is required before you can use Custom Neural Voice. Follow these instructions to create a Speech service subscription in Azure. If you don't have an Azure account, you can sign up for a new one.
-Once you've created an Azure account and a Speech service subscription, you'll need to sign in Speech Studio and connect your subscription.
+Once you've created an Azure account and a Speech service subscription, you'll need to sign in to Speech Studio and connect your subscription.
1. Get your Speech service subscription key from the Azure portal.
-2. Sign in to [Speech Studio](https://speech.microsoft.com), then select **Custom Voice**.
-3. Select your subscription and create a speech project.
-4. If you'd like to switch to another Speech subscription, use the cog icon located in the top navigation.
+1. Sign in to [Speech Studio](https://speech.microsoft.com), and then select **Custom Voice**.
+1. Select your subscription and create a speech project.
+1. If you want to switch to another Speech subscription, select the **cog** icon at the top.
> [!NOTE]
-> You must have a F0 or a S0 Speech service key created in Azure before you can use the service. Custom Neural Voice only supports the S0 tier.
+> You must have an F0 or S0 Speech service key created in Azure before you can use the service. Custom Neural Voice only supports the S0 tier.
## Create a project
-Content like data, models, tests, and endpoints are organized into **Projects** in Speech Studio. Each project is specific to a country/language and the gender of the voice you want to create. For example, you may create a project for a female voice for your call center's chat bots that use English in the United States ('en-US').
+Content like data, models, tests, and endpoints are organized into projects in Speech Studio. Each project is specific to a country and language, and the gender of the voice you want to create. For example, you might create a project for a female voice for your call center's chat bots that use English in the United States.
To create a custom voice project:
-1. Sign in [Speech Studio](https://speech.microsoft.com).
+
+1. Sign in to [Speech Studio](https://speech.microsoft.com).
1. Select **Text-to-Speech** > **Custom Voice** > **Create project**.
1. Follow the instructions provided by the wizard to create your project.
-1. After you've created a project, you will see four tabs: **Set up voice talent**, **Prepare training data**, **Train model**, and **Deploy model**. See [Prepare data for custom neural voice](how-to-custom-voice-prepare-data.md) to set up voice talent and proceed to training data.
+1. After you've created a project, you see four tabs: **Set up voice talent**, **Prepare training data**, **Train model**, and **Deploy model**. See [Prepare data for Custom Neural Voice](how-to-custom-voice-prepare-data.md) to set up the voice talent, and proceed to training data.
## Tips for creating a custom neural voice
-Creating a great custom neural voice requires careful quality control in each step, from voice design and data preparation, to the deployment of the voice model to your system. Below are some key steps to take when creating a custom neural voice for your organization.
+Creating a great custom neural voice requires careful quality control in each step, from voice design and data preparation, to the deployment of the voice model to your system. The following sections discuss some key steps to take when you're creating a custom neural voice for your organization.
### Persona design
-First, design a persona of the voice that represents your brand using a persona brief document that defines elements such as the features of the voice, and the character behind the voice. This will help to guide the process of creating a custom neural voice model, including defining the scripts, selecting your voice talent, training and voice tuning.
+First, design a persona of the voice that represents your brand by using a persona brief document. This document defines elements such as the features of the voice, and the character behind the voice. This helps to guide the process of creating a custom neural voice model, including defining the scripts, selecting your voice talent, training, and voice tuning.
### Script selection
-Carefully select the recording script to represent the user scenarios for your voice. For example, you can use the phrases from bot conversations as your recording script if you are creating a customer service bot. Include different sentence types in your scripts, including statements, questions, exclamations, etc.
+Carefully select the recording script to represent the user scenarios for your voice. For example, you can use the phrases from bot conversations as your recording script if you're creating a customer service bot. Include different sentence types in your scripts, such as statements, questions, and exclamations.
### Preparing training data
-We recommend that the audio recordings be captured in a professional quality recording studio to achieve a high signal-to-noise ratio. The quality of the voice model heavily depends on your training data. Consistent volume, speaking rate, pitch, and consistency in expressive mannerisms of speech are required.
+It's a good idea to capture the audio recordings in a professional quality recording studio to achieve a high signal-to-noise ratio. The quality of the voice model depends heavily on your training data. Consistent volume, speaking rate, pitch, and consistency in expressive mannerisms of speech are required.
-Once the recordings are ready, follow [Prepare training data](how-to-custom-voice-prepare-data.md) to prepare the training data in the right format.
+After the recordings are ready, follow [Prepare training data](how-to-custom-voice-prepare-data.md) to prepare the training data in the right format.
### Training
-Once you have prepared the training data, go to [Speech Studio](https://aka.ms/custom-voice) to create your custom neural voice. You need to select at least 300 utterances to create a custom neural voice. A series of data quality checks are automatically performed when you upload them. To build high-quality voice models, you should fix the errors and submit again.
+After you have prepared the training data, go to [Speech Studio](https://aka.ms/custom-voice) to create your custom neural voice. Select at least 300 utterances to create a custom neural voice. A series of data quality checks are automatically performed when you upload them. To build high-quality voice models, you should fix any errors and submit again.
### Testing
-Prepare test scripts for your voice model that cover the different use cases for your apps. It's recommended that you use scripts within and outside the training dataset so you can test the quality more broadly for different content.
+Prepare test scripts for your voice model that cover the different use cases for your apps. It's a good idea to use scripts within and outside the training dataset, so you can test the quality more broadly for different content.
### Tuning and adjustment
-The style and the characteristics of the trained voice model depend on the style and the quality of the recordings from the voice talent used for training. However, several adjustments can be made using [SSML (Speech Synthesis Markup Language)](./speech-synthesis-markup.md?tabs=csharp) when you make the API calls to your voice model to generate synthetic speech. SSML is the markup language used to communicate with the TTS service to convert text into audio. The adjustments include change of pitch, rate, intonation, and pronunciation correction. If the voice model is built with multiple styles, SSML can also be used to switch the styles.
+The style and the characteristics of the trained voice model depend on the style and the quality of the recordings from the voice talent used for training. However, you can make several adjustments by using [SSML (Speech Synthesis Markup Language)](./speech-synthesis-markup.md?tabs=csharp) when you make the API calls to your voice model to generate synthetic speech.
-## Migrate to Custom Neural Voice
+SSML is the markup language used to communicate with the text-to-speech service to convert text into audio. The adjustments you can make include change of pitch, rate, intonation, and pronunciation correction. If the voice model is built with multiple styles, you can also use SSML to switch the styles.
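
As an illustration of those SSML adjustments, here's a minimal sketch using the Speech SDK for Python; the key, region, deployment ID, and voice name are placeholders for your own values, and the prosody settings are only examples:

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholders: substitute your own key, region, deployment ID, and voice name.
speech_config = speechsdk.SpeechConfig(subscription="<your-speech-key>", region="<your-region>")
speech_config.endpoint_id = "<your-custom-voice-deployment-id>"
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)

# Prosody adjusts rate and pitch; the voice name must match your deployed model.
ssml = """<speak version='1.0' xmlns='http://www.w3.org/2001/10/synthesis' xml:lang='en-US'>
  <voice name='YourCustomNeuralVoiceName'>
    <prosody rate='+10%' pitch='-2st'>Thank you for calling. How can I help you today?</prosody>
  </voice>
</speak>"""

result = synthesizer.speak_ssml_async(ssml).get()
```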
-If you are using the old version of Custom Voice (to be retired in 2/2024), check the instructions [here](how-to-migrate-to-custom-neural-voice.md) on how to migrate it to Custom Neural Voice.
+## Migrate to Custom Neural Voice
+If you're using the old version of Custom Voice (which is scheduled to be retired in February 2024), see [How to migrate to Custom Neural Voice](how-to-migrate-to-custom-neural-voice.md).
## Next steps
-- [Prepare data for custom neural voice](how-to-custom-voice-prepare-data.md)
+- [Prepare data for Custom Neural Voice](how-to-custom-voice-prepare-data.md)
- [Train and deploy a custom neural voice](how-to-custom-voice-create-voice.md)
- [How to record voice samples](record-custom-voice-samples.md)
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/language-support.md
Previously updated : 01/07/2022 Last updated : 02/02/2022
The following table outlines supported languages for custom keyword and keyword
## Next steps
-* [Create a free Azure account](https://azure.microsoft.com/free/cognitive-services/)
-* [See how to recognize speech in C#](./get-started-speech-to-text.md?pivots=programming-language-chsarp)
+* [Region support](regions.md)
cognitive-services Regions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/regions.md
# Speech service supported regions
-The Speech service allows your application to convert audio to text, perform speech translation, and convert text to speech. The service is available in multiple regions with unique endpoints for the Speech SDK and REST APIs.
+The Speech service allows your application to convert audio to text, perform speech translation, and convert text to speech. The service is available in multiple regions with unique endpoints for the Speech SDK and REST APIs. You can perform custom configurations to your speech experience, for all regions, at the [Speech portal](https://speech.microsoft.com).
-The Speech portal, where you can perform custom configurations to your speech experience for all regions, is available at [speech.microsoft.com](https://speech.microsoft.com).
+Keep in mind the following points:
-Keep in mind the following points when considering regions:
-
-* If your application uses a [Speech SDK](speech-sdk.md), you provide the region identifier, such as `westus`, when creating a speech configuration. Make sure the region matches the region of your subscription.
+* If your application uses a [Speech SDK](speech-sdk.md), you provide the region identifier, such as `westus`, when you create a speech configuration. Make sure the region matches the region of your subscription.
* If your application uses one of the Speech service's [REST APIs](./overview.md#reference-docs), the region is part of the endpoint URI you use when making requests.
-* Keys created for a region are valid only in that region. Attempting to use them with other regions will result in authentication errors.
+* Keys created for a region are valid only in that region. If you attempt to use them with other regions, you get authentication errors.
> [!NOTE]
-> Speech Services doesn't store/process customer data outside the region the customer deploys the service instance in.
+> Speech service doesn't store or process customer data outside the region the customer deploys the service instance in.
## Speech SDK
-In the [Speech SDK](speech-sdk.md), the region is specified as a parameter (for example, as a parameter to `SpeechConfig.FromSubscription` in the Speech SDK for C#).
+In the [Speech SDK](speech-sdk.md), you specify the region as a parameter (for example, in the Speech SDK for C#, you specify the region as a parameter to `SpeechConfig.FromSubscription`).
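
For example, here's a minimal sketch in Python (the same pattern applies in the other SDK languages); the subscription key is a placeholder, and `westus` stands in for your own resource's region:

```python
import azure.cognitiveservices.speech as speechsdk

# The region identifier must match the region of your Speech resource.
speech_config = speechsdk.SpeechConfig(subscription="<your-speech-key>", region="westus")

# A quick one-shot recognition from the default microphone to verify the configuration.
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)
result = recognizer.recognize_once()
print(result.text)
```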
-### Speech-to-Text, Text-to-Speech, and translation
+### Speech-to-text, text-to-speech, and translation
-The Speech service is available in these regions for **Speech-to-Text**, **Text-to-Speech**, and **translation**:
+The Speech service is available in these regions for speech-to-text, text-to-speech, and translation:
[!INCLUDE [](../../../includes/cognitive-services-speech-service-region-identifier.md)]
If you plan to train a custom model with audio data, use one of the [regions wit
### Intent recognition
-Available regions for **intent recognition** via the Speech SDK are in the following table.
+Available regions for intent recognition via the Speech SDK are in the following table.
| Global region | Region | Region identifier |
| - | - | -- |
This is a subset of the publishing regions supported by the [Language Understand
### Voice assistants
-The [Speech SDK](speech-sdk.md) supports **voice assistant** capabilities through [Direct Line Speech](./direct-line-speech.md) for regions in the following table.
+The [Speech SDK](speech-sdk.md) supports voice assistant capabilities through [Direct Line Speech](./direct-line-speech.md) for regions in the following table.
| Global region | Region | Region identifier |
| - | - | -- |
The [Speech SDK](speech-sdk.md) supports **voice assistant** capabilities throug
| Asia | Southeast Asia | `southeastasia` |
| India | Central India | `centralindia` |
-### Speaker Recognition
+### Speaker recognition
-Available regions for **Speaker Recognition** are in the following table.
+Available regions for speaker recognition are in the following table.
| Geography | Region | Region identifier |
| - | - | -- |
Available regions for **Speaker Recognition** are in the following table.
### Keyword recognition
-Available regions for **Keyword recognition** are in the following table.
+Available regions for keyword recognition are in the following table.
-| Region | Custom Keyword (Basic models) | Custom Keyword (Advanced models) | Keyword Verification |
+| Region | Custom keyword (basic models) | Custom keyword (advanced models) | Keyword verification |
| | -- | -- | -- |
| West US | Yes | No | Yes |
| West US 2 | Yes | Yes | Yes |
Available regions for **Keyword recognition** are in the following table.
## REST APIs
-The Speech service also exposes REST endpoints for Speech-to-Text, Text-to-Speech and speaker recognition requests.
-
-### Speech-to-Text
+The Speech service also exposes REST endpoints for speech-to-text, text-to-speech, and speaker recognition requests.
-For Speech-to-Text reference documentation, see [Speech-to-Text REST API](rest-speech-to-text.md).
+### Speech-to-text
The endpoint for the REST API has this format:
Replace `<REGION_IDENTIFIER>` with the identifier matching the region of your su
[!INCLUDE [](../../../includes/cognitive-services-speech-service-region-identifier.md)]

> [!NOTE]
-> The language parameter must be appended to the URL to avoid receiving an 4xx HTTP error. For example, the language set to US English using the West US endpoint is: `https://westus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?language=en-US`.
+> The language parameter must be appended to the URL to avoid receiving a 4xx HTTP error. For example, the language set to US English by using the West US endpoint is: `https://westus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?language=en-US`.
+
+For more information, see the [speech-to-text REST API](rest-speech-to-text.md).
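
As a rough sketch, here's what a short-audio request looks like from Python with the `requests` library; the key is a placeholder, and the example assumes a PCM WAV file named `speech.wav`:

```python
import requests

region = "westus"  # Use the region identifier of your own resource.
url = f"https://{region}.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1"

headers = {
    "Ocp-Apim-Subscription-Key": "<your-speech-key>",
    # 16 kHz, 16-bit, mono PCM is a safe input format.
    "Content-Type": "audio/wav; codecs=audio/pcm; samplerate=16000",
}

with open("speech.wav", "rb") as audio:
    response = requests.post(url, params={"language": "en-US"}, headers=headers, data=audio)

print(response.json())
```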
-### Text-to-Speech
+### Text-to-speech
-For Text-to-Speech reference documentation, see [Text-to-Speech REST API](rest-text-to-speech.md).
+For more information, see the [text-to-speech REST API](rest-text-to-speech.md).
[!INCLUDE [](includes/cognitive-services-speech-service-endpoints-text-to-speech.md)]
-### Speaker Recognition
+### Speaker recognition
-For speaker recognition reference documentation, see [Speaker Recognition REST API](/rest/api/speakerrecognition/). Available regions are the same as Speaker Recognition SDK.
+For more information, see the [speaker recognition REST API](/rest/api/speakerrecognition/). The regions available are the same as those for the speaker recognition SDK.
cognitive-services Speech Services Quotas And Limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-services-quotas-and-limits.md
Title: Speech service Quotas and Limits
+ Title: Speech service quotas and limits
-description: Quick reference, detailed description, and best practices on Azure Cognitive Speech service Quotas and Limits
+description: Quick reference, detailed description, and best practices on the quotas and limits for the Speech service in Azure Cognitive Services.
Last updated 01/24/2022
-# Speech service Quotas and Limits
+# Speech service quotas and limits
-This article contains a quick reference and the **detailed description** of Azure Cognitive Speech service Quotas and Limits for all [pricing tiers](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/). It also contains some best practices to avoid request throttling.
+This article contains a quick reference and a detailed description of the quotas and limits for the Speech service in Azure Cognitive Services. The information applies to all [pricing tiers](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) of the service. It also contains some best practices to avoid request throttling.
-## Quotas and Limits quick reference
-Jump to [Text-to-Speech Quotas and limits](#text-to-speech-quotas-and-limits-per-speech-resource)
-### Speech-to-Text Quotas and Limits per Speech resource
-In the following tables, the parameters without "Adjustable" row are **not** adjustable for all price tiers.
+## Quotas and limits reference
-#### Online Transcription
-For the usage with [Speech SDK](speech-sdk.md) and/or [Speech-to-text REST API for short audio](rest-speech-to-text.md#speech-to-text-rest-api-for-short-audio).
+The following sections provide you with a quick guide to the quotas and limits that apply to Speech service.
+
+### Speech-to-text quotas and limits per resource
+
+In the following tables, the parameters without the **Adjustable** row aren't adjustable for all price tiers.
+
+#### Online transcription
+
+You can use online transcription with the [Speech SDK](speech-sdk.md) or the [speech-to-text REST API for short audio](rest-speech-to-text.md#speech-to-text-rest-api-for-short-audio).
| Quota | Free (F0)<sup>1</sup> | Standard (S0) |
|--|--|--|
-| **Concurrent Request limit - Base model endpoint** | 1 | 100 (default value) |
+| Concurrent request limit - base model endpoint | 1 | 100 (default value) |
| Adjustable | No<sup>2</sup> | Yes<sup>2</sup> |
-| **Concurrent Request limit - Custom endpoint** | 1 | 100 (default value) |
+| Concurrent request limit - custom endpoint | 1 | 100 (default value) |
| Adjustable | No<sup>2</sup> | Yes<sup>2</sup> |
-#### Batch Transcription
+#### Batch transcription
+
| Quota | Free (F0)<sup>1</sup> | Standard (S0) |
|--|--|--|
-| [Speech-to-text REST API V2.0 and v3.0](rest-speech-to-text.md#speech-to-text-rest-api-v30) limit | Batch transcription is not available for F0 | 300 requests per minute |
+| [Speech-to-text REST API V2.0 and v3.0](rest-speech-to-text.md#speech-to-text-rest-api-v30) limit | Not available for F0 | 300 requests per minute |
| Max audio input file size | N/A | 1 GB |
-| Max input blob size (may contain more than one file, for example, in a zip archive; ensure to note the file size limit above) | N/A | 2.5 GB |
| Max input blob size (a blob can contain more than one file, for example, in a zip archive). Note the file size limit from the preceding row. | N/A | 2.5 GB |
| Max blob container size | N/A | 5 GB |
| Max number of blobs per container | N/A | 10000 |
-| Max number of files per Transcription request (when using multiple content URLs as input) | N/A | 1000 |
+| Max number of files per transcription request (when you're using multiple content URLs as input). | N/A | 1000 |
+
+#### Model customization
-#### Model Customization
| Quota | Free (F0)<sup>1</sup> | Standard (S0) |
|--|--|--|
| REST API limit | 300 requests per minute | 300 requests per minute |
| Max number of speech datasets | 2 | 500 |
-| Max acoustic dataset file size for Data Import | 2 GB | 2 GB |
-| Max language dataset file size for Data Import | 200 MB | 1.5 GB |
-| Max pronunciation dataset file size for Data Import | 1 KB | 1 MB |
-| Max text size when using `text` parameter in [Create Model](https://westcentralus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateModel/) API request | 200 KB | 500 KB |
+| Max acoustic dataset file size for data import | 2 GB | 2 GB |
+| Max language dataset file size for data import | 200 MB | 1.5 GB |
+| Max pronunciation dataset file size for data import | 1 KB | 1 MB |
+| Max text size when you're using the `text` parameter in the [Create Model](https://westcentralus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateModel/) API request | 200 KB | 500 KB |
+
+<sup>1</sup> For the free (F0) pricing tier, see also the monthly allowances at the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).<br/>
+<sup>2</sup> See [additional explanations](#detailed-description-quota-adjustment-and-best-practices), [best practices](#general-best-practices-to-mitigate-throttling-during-autoscaling), and [adjustment instructions](#speech-to-text-increase-online-transcription-concurrent-request-limit).<br/>
-<sup>1</sup> For **Free (F0)** pricing tier see also monthly allowances at the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).<br/>
-<sup>2</sup> See [additional explanations](#detailed-description-quota-adjustment-and-best-practices), [best practices](#general-best-practices-to-mitigate-throttling-during-autoscaling), and [adjustment instructions](#speech-to-text-increasing-online-transcription-concurrent-request-limit).<br/>
+### Text-to-speech quotas and limits per resource
-### Text-to-Speech Quotas and limits per Speech resource
-In the tables below Parameters without "Adjustable" row are **not** adjustable for all price tiers.
+In the following tables, the parameters without the **Adjustable** row aren't adjustable for all price tiers.
#### General

| Quota | Free (F0)<sup>3</sup> | Standard (S0) |
|--|--|--|
-| **Max number of Transactions per Second (TPS) per Speech resource** | | |
-| Real-time API. Prebuilt neural voices and custom neural voices | 200<sup>4</sup> | 200<sup>4</sup> |
+| *Max number of transactions per second (TPS) per Speech service resource* | | |
+| Real-time API. Prebuilt neural voices and custom neural voices. | 200<sup>4</sup> | 200<sup>4</sup> |
| Adjustable | No<sup>4</sup> | Yes<sup>4</sup> |
-| **HTTP-specific quotas** | | |
-| Max Audio length produced per request | 10 min | 10 min |
+| *HTTP-specific quotas* | | |
+| Max audio length produced per request | 10 min | 10 min |
| Max total number of distinct `<voice>` and `<audio>` tags in SSML | 50 | 50 |
-| **Websocket specific quotas** | | |
-| Max Audio length produced per turn | 10 min | 10 min |
+| *Websocket specific quotas* | | |
+| Max audio length produced per turn | 10 min | 10 min |
| Max total number of distinct `<voice>` and `<audio>` tags in SSML | 50 | 50 |
-| Max SSML Message size per turn | 64 KB | 64 KB |
+| Max SSML message size per turn | 64 KB | 64 KB |
#### Long Audio API
In the tables below Parameters without "Adjustable" row are **not** adjustable f
| Max text length | N/A | 10000 paragraphs |
| Start time | N/A | 10 tasks or 10000 characters accumulated |
-#### Custom neural voice
+#### Custom Neural Voice
| Quota | Free (F0)<sup>3</sup> | Standard (S0) |
|--|--|--|
-| Max number of Transactions per Second (TPS) per Speech resource | [See above](#general) | [See above](#general) |
-| Max number of data sets per Speech resource | 10 | 500 |
-| Max number of simultaneous dataset upload per Speech resource | 2 | 5 |
+| Max number of transactions per second (TPS) per Speech service resource | See [General](#general) | See [General](#general) |
+| Max number of datasets per Speech service resource | 10 | 500 |
+| Max number of simultaneous dataset uploads per Speech service resource | 2 | 5 |
| Max data file size for data import per dataset | 2 GB | 2 GB |
| Upload of long audios or audios without script | No | Yes |
-| Max number of simultaneous model trainings per Speech resource | N/A | 3 |
-| Max number of custom endpoints per Speech resource | N/A | 50 |
-| **Concurrent Request limit for custom neural voice** | | |
+| Max number of simultaneous model trainings per Speech service resource | N/A | 3 |
+| Max number of custom endpoints per Speech service resource | N/A | 50 |
+| *Concurrent request limit for Custom Neural Voice* | | |
| Default value | N/A | 10 |
| Adjustable | N/A | Yes<sup>5</sup> |
-<sup>3</sup> For **Free (F0)** pricing tier see also monthly allowances at the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).<br/>
+<sup>3</sup> For the free (F0) pricing tier, see also the monthly allowances at the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).<br/>
<sup>4</sup> See [additional explanations](#detailed-description-quota-adjustment-and-best-practices) and [best practices](#general-best-practices-to-mitigate-throttling-during-autoscaling).<br/>
-<sup>5</sup> See [additional explanations](#detailed-description-quota-adjustment-and-best-practices), [best practices](#general-best-practices-to-mitigate-throttling-during-autoscaling), and [adjustment instructions](#text-to-speech-increasing-concurrent-request-limit-for-custom-neural-voices).<br/>
+<sup>5</sup> See [additional explanations](#detailed-description-quota-adjustment-and-best-practices), [best practices](#general-best-practices-to-mitigate-throttling-during-autoscaling), and [adjustment instructions](#text-to-speech-increase-concurrent-request-limit-for-custom-neural-voices).<br/>
-## Detailed description, Quota adjustment, and best practices
-Before requesting a quota increase (where applicable), ensure that it is necessary. Speech service is using autoscaling technologies to bring the required computational resources in "on-demand" mode and at the same time to keep the customer costs low by not maintaining an excessive amount of hardware capacity. Every time your application receives a Response Code 429 ("Too many requests") while your workload is within the defined limits (see [Quotas and Limits quick reference](#quotas-and-limits-quick-reference)) the most likely explanation is that the Service is scaling up to your demand and did not reach the required scale yet, thus does not immediately have enough resources to serve the request. This state is usually transient and should not last long.
+## Detailed description, quota adjustment, and best practices
+
+Before requesting a quota increase (where applicable), ensure that it's necessary. Speech service uses autoscaling technologies to bring the required computational resources in on-demand mode. At the same time, Speech service tries to keep your costs low by not maintaining an excessive amount of hardware capacity.
+
+Let's look at an example. Suppose that your application receives response code 429, which indicates that there are too many requests. Your application receives this response even though your workload is within the limits defined by the [Quotas and limits reference](#quotas-and-limits-reference). The most likely explanation is that Speech service is scaling up to your demand and hasn't yet reached the required scale. Therefore, the service doesn't immediately have enough resources to serve the request. In most cases, this throttled state is transient.
### General best practices to mitigate throttling during autoscaling
-To minimize issues related to throttling (Response Code 429), we recommend using the following techniques:
-- Implement retry logic in your application
-- Avoid sharp changes in the workload. Increase the workload gradually <br/>
-*Example.* Your application is using text-to-speech and your current workload is 5 Transactions per Second (TPS). The next second you increase the load to 20 TPS (that is four times more). The Service immediately starts scaling up to fulfill the new load, but likely it will not be able to do it within a second, so some of the requests will get Response Code 429.
-- Test different load increase patterns. See the [workload pattern example](#example-of-a-workload-pattern-best-practice)
-- Create additional Speech resources in the same or different Regions and distribute the workload among them using "Round Robin" technique. This is especially important for the text-to-speech Transactions per Second (TPS) parameter, which is set to 200 per Speech resource and cannot be adjusted.
-The next sections describe specific cases of adjusting quotas.<br/>
-Jump to [Text-to-Speech: increasing concurrent request limit for custom neural voices](#text-to-speech-increasing-concurrent-request-limit-for-custom-neural-voices)
+To minimize issues related to throttling, it's a good idea to use the following techniques:
+
+- Implement retry logic in your application (a minimal sketch follows this list).
+- Avoid sharp changes in the workload. Increase the workload gradually. For example, let's say your application is using text-to-speech, and your current workload is 5 TPS. The next second, you increase the load to 20 TPS (that is, four times more). Speech service immediately starts scaling up to fulfill the new load, but is unable to scale as needed within one second. Some of the requests will get response code 429 (too many requests).
+- Test different load increase patterns. For more information, see the [workload pattern example](#example-of-a-workload-pattern-best-practice).
+- Create additional Speech service resources in the same or different regions, and distribute the workload among them. This is especially important for the text-to-speech transactions-per-second (TPS) parameter, which is set to 200 per resource, and can't be adjusted.
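
Here's a minimal sketch of the retry idea in Python; `send_request` is a stand-in for your own call into the service, and the backoff values are illustrative rather than prescriptive:

```python
import time

def call_with_retry(send_request, max_retries=5):
    """Retry a throttled call with increasing delays. `send_request` is assumed
    to return an object with a `status_code` attribute, such as a requests.Response."""
    delay = 1.0
    for _ in range(max_retries):
        response = send_request()
        if response.status_code != 429:
            return response
        time.sleep(delay)             # Throttled: back off before the next attempt.
        delay = min(delay * 2, 30.0)  # Grow the delay, capped at 30 seconds.
    return send_request()             # Final attempt; let the caller handle a 429.
```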
+
+The next sections describe specific cases of adjusting quotas.
-### Speech-to-text: increasing online transcription concurrent request limit
-By default the number of concurrent requests is limited to 100 per Speech resource (Base model) and to 100 per Custom endpoint (Custom model). For the Standard pricing tier, this amount can be increased. Before submitting the request, ensure you are familiar with the material in [this section](#detailed-description-quota-adjustment-and-best-practices) and aware of these [best practices](#general-best-practices-to-mitigate-throttling-during-autoscaling).
+### Speech-to-text: increase online transcription concurrent request limit
+
+By default, the number of concurrent requests is limited to 100 per resource in the base model, and 100 per custom endpoint in the custom model. For the standard pricing tier, you can increase this amount. Before submitting the request, ensure that you're familiar with the material discussed earlier in this article, such as the best practices to mitigate throttling.
>[!NOTE]
-> If you use custom models, please be aware, that one Speech resource may be associated with many custom endpoints hosting many custom model deployments. Each Custom endpoint has the default number of concurrent request limit (100) set by creation. If you need to adjust it, you need to make the adjustment of each custom endpoint **separately**. Please also note, that the value of the number of concurrent request limit for the base model of a Speech resource has **no** effect to the custom endpoints associated with this resource.
+> If you use custom models, be aware that one Speech service resource might be associated with many custom endpoints hosting many custom model deployments. Each custom endpoint has the default limit of concurrent requests (100) set at creation. If you need to adjust it, you need to adjust each custom endpoint *separately*. Note also that the value of the limit of concurrent requests for the base model of a resource has *no* effect on the custom endpoints associated with this resource.
-Increasing the Concurrent Request limit does **not** directly affect your costs. Speech service uses "Pay only for what you use" model. The limit defines how high the Service may scale before it starts throttle your requests.
+Increasing the limit of concurrent requests doesn't directly affect your costs. Speech service uses a payment model where you pay only for what you use. The limit defines how high the service can scale before it starts throttling your requests.
-Concurrent Request limits for **Base** and **Custom** models need to be adjusted **separately**.
+Concurrent request limits for base and custom models need to be adjusted separately.
-Existing value of Concurrent Request limit parameter is **not** visible via Azure portal, Command-Line tools, or API requests. To verify the existing value, create an Azure Support Request.
+You aren't able to see the existing value of the concurrent request limit parameter in the Azure portal, the command-line tools, or API requests. To verify the existing value, create an Azure support request.
>[!NOTE]
->[Speech containers](speech-container-howto.md) do not require increases of Concurrent Request limit, as containers are constrained only by the CPUs of the hardware they are hosted on. However Speech containers have their own capacity limitations that should be taken into account. See the question *"Could you help with capacity planning and cost estimation of on-prem Speech-to-text containers?"* from the [Speech containers FAQ](./speech-container-howto.md).
+>[Speech containers](speech-container-howto.md) don't require increases of the concurrent request limit, because containers are constrained only by the CPUs of the hardware they are hosted on. Speech containers do, however, have their own capacity limitations that should be taken into account. For more information, see the [Speech containers FAQ](./speech-container-howto.md).
+
+#### Have the required information ready
-#### Have the required information ready:
-- For **Base model**:
- - Speech Resource ID
+- For the base model:
+ - Speech resource ID
  - Region
-- For **Custom model**:
+- For the custom model:
- Region
- - Custom Endpoint ID
--- **How to get information (Base model)**:
- - Go to [Azure portal](https://portal.azure.com/)
- - Select the Speech Resource for which you would like to increase the Concurrency Request limit
- - Select *Properties* (*Resource Management* group)
- - Copy and save the values of the following fields:
- - **Resource ID**
- - **Location** (your endpoint Region)
--- **How to get information (Custom Model)**:
- - Go to [Speech Studio](https://speech.microsoft.com/) portal
- - Sign in if necessary
- - Go to Custom Speech
- - Select your project
- - Go to *Deployment*
- - Select the required Endpoint
- - Copy and save the values of the following fields:
- - **Service Region** (your endpoint Region)
- - **Endpoint ID**
-
-#### Create and submit support request
-Initiate the increase of Concurrent Request limit for your resource or if necessary check the today's limit by submitting the Support Request:
-- Ensure you have the [required information](#have-the-required-information-ready)
-- Go to [Azure portal](https://portal.azure.com/)
-- Select the Speech Resource for which you would like to increase (or to check) the Concurrency Request limit
-- Select *New support request* (*Support + troubleshooting* group)
-- A new window will appear with auto-populated information about your Azure Subscription and Azure Resource
-- Enter *Summary* (like "Increase STT Concurrency Request limit")
-- In *Problem type* select "Quota or Subscription issues"
-- In appeared *Problem subtype* select:
- - "Quota or concurrent requests increase" - for an increase request
- - "Quota or usage validation" to check existing limit
-- Click *Next: Solutions*
-- Proceed further with the request creation
-- When in *Details* tab enter in the *Description* field:
- - a note, that the request is about **Speech-to-Text** quota
- - **Base** or **Custom** model
- - Azure resource information you [collected before](#have-the-required-information-ready)
- - Complete entering the required information and click *Create* button in *Review + create* tab
- - Note the support request number in Azure portal notifications. You will be contacted shortly for further processing
+ - Custom endpoint ID
+
+How to get information for the base model:
+
+1. Go to the [Azure portal](https://portal.azure.com/).
+1. Select the Speech service resource for which you would like to increase the concurrency request limit.
+1. From the **Resource Management** group, select **Properties**.
+1. Copy and save the values of the following fields:
+ - **Resource ID**
+ - **Location** (your endpoint region)
+
+How to get information for the custom model:
+
+1. Go to the [Speech Studio](https://speech.microsoft.com/) portal.
+1. Sign in if necessary, and go to **Custom Speech**.
+1. Select your project, and go to **Deployment**.
+1. Select the required endpoint.
+1. Copy and save the values of the following fields:
+ - **Service Region** (your endpoint region)
+ - **Endpoint ID**
+
+#### Create and submit a support request
+
+Initiate the increase of the limit for concurrent requests for your resource, or if necessary check the current limit, by submitting a support request. Here's how:
+
+1. Ensure you have the required information listed in the previous section.
+1. Go to the [Azure portal](https://portal.azure.com/).
+1. Select the Speech service resource for which you would like to increase (or to check) the concurrency request limit.
+1. In the **Support + troubleshooting** group, select **New support request**. A new window will appear, with auto-populated information about your Azure subscription and Azure resource.
+1. In **Summary**, describe what you want (for example, "Increase speech-to-text concurrency request limit").
+1. In **Problem type**, select **Quota or Subscription issues**.
+1. In **Problem subtype**, select either:
+ - **Quota or concurrent requests increase** for an increase request.
+ - **Quota or usage validation** to check the existing limit.
+1. Select **Next: Solutions**. Proceed further with the request creation.
+1. On the **Details** tab, in the **Description** field, enter the following:
+ - A note that the request is about the speech-to-text quota.
+ - Whether the request is for the base model or a custom model.
+ - The Azure resource information you [collected previously](#have-the-required-information-ready).
+ - Any other required information.
+1. On the **Review + create** tab, select **Create**.
+1. Note the support request number in Azure portal notifications. You'll be contacted shortly about your request.
### Example of a workload pattern best practice
-This example presents the approach we recommend following to mitigate possible request throttling due to [Autoscaling being in progress](#detailed-description-quota-adjustment-and-best-practices). It is not an "exact recipe", but merely a template we invite to follow and adjust as necessary.
-Let us suppose that a Speech resource has the Concurrent Request limit set to 300. Start the workload from 20 concurrent connections and increase the load by 20 concurrent connections every 1.5-2 minutes. Control the Service responses and implement the logic that falls back (reduces the load) if you get too many Response Codes 429. Then retry in 1-2-4-4 minute pattern. (That is retry the load increase in 1 min, if still does not work, then in 2 min, and so on)
+Here's a general example of a good approach to take. It's meant only as a template that you can adjust as necessary for your own use.
+
+Suppose that a Speech service resource has the concurrent request limit set to 300. Start the workload from 20 concurrent connections, and increase the load by 20 concurrent connections every 90-120 seconds. Control the service responses, and implement the logic that falls back (reduces the load) if you get too many requests (response code 429). Then, retry the load increase in one minute, and if it still doesn't work, try again in two minutes. Use a pattern of 1-2-4-4 minutes for the intervals.
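
Here's a minimal sketch of that ramp-up pattern in Python; `current_429_rate` is a stand-in for your own telemetry, and the step size and threshold are illustrative:

```python
import time

RETRY_INTERVALS_MIN = [1, 2, 4, 4]  # The 1-2-4-4 minute retry pattern.

def ramp_up(target, current_429_rate, step=20, hold_seconds=100):
    """Increase concurrency toward `target`, falling back on throttling.
    `current_429_rate` is assumed to be a callable that returns the recent
    fraction of requests that received response code 429."""
    concurrency = step
    retries = iter(RETRY_INTERVALS_MIN)
    while concurrency < target:
        time.sleep(hold_seconds)  # Hold each level for 90-120 seconds.
        if current_429_rate() > 0.05:
            concurrency = max(step, concurrency - step)               # Reduce the load,
            time.sleep(next(retries, RETRY_INTERVALS_MIN[-1]) * 60)   # then retry later.
        else:
            concurrency += step
    return concurrency
```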
-Generally, it is highly recommended to test the workload and the workload patterns before going to production.
+Generally, it's a good idea to test the workload and the workload patterns before going to production.
-### Text-to-Speech: increasing concurrent request limit for custom neural voices
+### Text-to-speech: increase concurrent request limit for custom neural voices
-By default the number of concurrent requests for custom neural voice endpoints is limited to 10. For the Standard pricing tier, this amount can be increased. Before submitting the request, ensure you are familiar with the material in [this section](#detailed-description-quota-adjustment-and-best-practices) and aware of these [best practices](#general-best-practices-to-mitigate-throttling-during-autoscaling).
+By default, the number of concurrent requests for Custom Neural Voice endpoints is limited to 10. For the standard pricing tier, you can increase this amount. Before submitting the request, ensure that you're familiar with the material discussed earlier in this article, such as the best practices to mitigate throttling.
-Increasing the Concurrent Request limit does **not** directly affect your costs. Speech service uses "Pay only for what you use" model. The limit defines how high the Service may scale before it starts throttle your requests.
+Increasing the limit of concurrent requests doesn't directly affect your costs. Speech service uses a payment model where you pay only for what you use. The limit defines how high the service can scale before it starts throttling your requests.
-Existing value of Concurrent Request limit parameter is **not** visible via Azure portal, Command-Line tools, or API requests. To verify the existing value, create an Azure Support Request.
+You aren't able to see the existing value of the concurrent request limit parameter in the Azure portal, the command-line tools, or API requests. To verify the existing value, create an Azure support request.
>[!NOTE]
->[Speech containers](speech-container-howto.md) do not require increases of Concurrent Request limit, as containers are constrained only by the CPUs of the hardware they are hosted on.
-
-#### Prepare the required information:
-To create an increase request, you will need to provide your Deployment Region and the Custom Endpoint ID. To get it, perform the following actions:
-- Go to [Speech Studio](https://speech.microsoft.com/) portal
-- Sign in if necessary
-- Go to *Custom Voice*
-- Select your project
-- Go to *Deployment*
-- Select the required Endpoint
-- Copy and save the values of the following fields:
- - **Service Region** (your endpoint Region)
- - **Endpoint ID**
-
-#### Create and submit support request
-Initiate the increase of Concurrent Request limit for your resource or if necessary check the today's limit by submitting the Support Request:
-- Ensure you have the [required information](#prepare-the-required-information)
-- Go to [Azure portal](https://portal.azure.com/)
-- Select the Speech Resource for which you would like to increase (or to check) the Concurrency Request limit
-- Select *New support request* (*Support + troubleshooting* group)
-- A new window will appear with auto-populated information about your Azure Subscription and Azure Resource
-- Enter *Summary* (like "Increase TTS Custom Endpoint Concurrency Request limit")
-- In *Problem type* select "Quota or Subscription issues"
-- In appeared *Problem subtype* select:
- - "Quota or concurrent requests increase" - for an increase request
- - "Quota or usage validation" to check existing limit
-- Click *Next: Solutions*
-- Proceed further with the request creation
-- When in *Details* tab enter in the *Description* field:
- - a note, that the request is about **Text-to-Speech** quota
- - Azure resource information you [collected before](#prepare-the-required-information)
- - Complete entering the required information and click *Create* button in *Review + create* tab
- - Note the support request number in Azure portal notifications. You will be contacted shortly for further processing
+>[Speech containers](speech-container-howto.md) don't require increases of the concurrent request limit, because containers are constrained only by the CPUs of the hardware they are hosted on.
+
+#### Prepare the required information
+
+To create an increase request, you provide your deployment region and the custom endpoint ID. To get this information, perform the following actions:
+
+1. Go to the [Speech Studio](https://speech.microsoft.com/) portal.
+1. Sign in if necessary, and go to **Custom Voice**.
+1. Select your project, and go to **Deployment**.
+1. Select the required endpoint.
+1. Copy and save the values of the following fields:
+ - **Service Region** (your endpoint region)
+ - **Endpoint ID**
+
+#### Create and submit a support request
+
+Initiate the increase of the limit for concurrent requests for your resource, or if necessary check the current limit, by submitting a support request. Here's how:
+
+1. Ensure you have the required information listed in the previous section.
+1. Go to the [Azure portal](https://portal.azure.com/).
+1. Select the Speech service resource for which you would like to increase (or to check) the concurrency request limit.
+1. In the **Support + troubleshooting** group, select **New support request**. A new window will appear, with auto-populated information about your Azure subscription and Azure resource.
+1. In **Summary**, describe what you want (for example, "Increase text-to-speech concurrency request limit").
+1. In **Problem type**, select **Quota or Subscription issues**.
+1. In **Problem subtype**, select either:
+ - **Quota or concurrent requests increase** for an increase request.
+ - **Quota or usage validation** to check the existing limit.
+1. Select **Next: Solutions**. Proceed further with the request creation.
+1. On the **Details** tab, in the **Description** field, enter the following:
+ - A note that the request is about the text-to-speech quota.
+ - The Azure resource information you [collected previously](#prepare-the-required-information).
+ - Any other required information.
+1. On the **Review + create** tab, select **Create**.
+1. Note the support request number in Azure portal notifications. You'll be contacted shortly about your request.
cognitive-services Cognitive Services Development Options https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/cognitive-services-development-options.md
The tools that you will use to train and configure models are different from tho
| Pillar | Service | Customization UI | Quickstart | |--|||| | Vision | Custom Vision | https://www.customvision.ai/ | [Quickstart](./custom-vision-service/quickstarts/image-classification.md?pivots=programming-language-csharp) |
-| Decision | Content Moderator | https://contentmoderator.cognitive.microsoft.com/dashboard | [Quickstart](./content-moderator/review-tool-user-guide/human-in-the-loop.md) |
| Decision | Personalizer | UI is available in the Azure portal under your Personalizer resource. | [Quickstart](./personalizer/quickstart-personalizer-sdk.md) |
| Language | Language Understanding (LUIS) | https://www.luis.ai/ | |
| Language | QnA Maker | https://www.qnamaker.ai/ | [Quickstart](./qnamaker/quickstarts/create-publish-knowledge-base.md) |
cognitive-services Backwards Compatibility https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/language-service/conversational-language-understanding/concepts/backwards-compatibility.md
When importing the LUIS JSON application into CLU, it will create a **Conversati
|**Feature**|**Notes**|
| :- | :- |
|Intents|All of your intents will be transferred as CLU intents with the same names.|
-|ML entities|All of your ML entities will be transferred as CLU entities with the same names. The ML labels will be persisted and used to train the Learned component of the entity. Structured ML entities will only be transferred as the top-level entity. The individual sub-entities will be ignored.|
+|ML entities|All of your ML entities will be transferred as CLU entities with the same names. The labels will be persisted and used to train the Learned component of the entity. Structured ML entities will transfer over the leaf nodes of the structure as different entities and apply their labels accordingly.|
|Utterances|All of your LUIS utterances will be transferred as CLU utterances with their intent and entity labels. Structured ML entity labels will only consider the top-level entity labels, and the individual sub-entity labels will be ignored.|
|Culture|The primary language of the Conversation project will be the LUIS app culture. If the culture is not supported, the importing will fail.|
+|List entities|All of your list entities will be transferred as CLU entities with the same names. The normalized values and synonyms of each list will be transferred as keys and synonyms in the list component for the CLU entity.|
+|Prebuilt entities|All of your prebuilt entities will be transferred as CLU entities with the same names. The CLU entity will have the relevant [prebuilt entities](entity-components.md#prebuilt-component) enabled if they are supported. |
|Required entity features in ML entities|If you had a prebuilt entity or a list entity as a required feature to another ML entity, then the ML entity will be transferred as a CLU entity with the same name and its labels will apply. The CLU entity will include the required feature entity as a component. The [overlap method](entity-components.md#overlap-methods) will be set as "Exact Overlap" for the CLU entity.|
+|Non-required entity features in ML entities|If you had a prebuilt entity or a list entity as a non-required feature to another ML entity, then the ML entity will be transferred as a CLU entity with the same name and its ML labels will apply. If an ML entity was used as a feature to another ML entity, it will not be transferred over.|
|Roles|All of your roles will be transferred as CLU entities with the same names. Each role will be its own CLU entity. The role's entity type will determine which component is populated for the role. Roles on prebuilt entities will transfer as CLU entities with the prebuilt entity component enabled and the role labels transferred over to train the Learned component. Roles on list entities will transfer as CLU entities with the list entity component populated and the role labels transferred over to train the Learned component. Roles on ML entities will be transferred as CLU entities with their labels applied to train the Learned component of the entity. |
### Unsupported features
When importing the LUIS JSON application into CLU, certain features will be igno
|Pattern.Any Entities|Pattern.Any entities were used to cover for lack of quality in ML entity extraction. The new models in CLU are expected to perform better without needing pattern.any entities.|
|Regex Entities| Not currently supported |
|Structured ML Entities| Not currently supported |
-|List entities | Not currently supported |
-|Prebuilt entities | Not currently supported |
-|Required entity features in ML entities | Not currently supported |
-|Non-required entity features in ML entities | Not currently supported |
-|Roles | Not currently supported |
## Use a published LUIS application in Conversational Language Understanding orchestration projects
cognitive-services Cognitive Search https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/language-service/custom-classification/tutorials/cognitive-search.md
+
+ Title: Enrich a Cognitive Search index with custom classes
+
+description: Improve your cognitive search indices using custom classifications
++++++ Last updated : 02/02/2022++++
+# Tutorial: Enrich a Cognitive Search index with custom classifications from your data
+
+With the abundance of electronic documents within the enterprise, searching through them becomes a tiring and expensive task. [Azure Cognitive Search](../../../../search/search-create-service-portal.md) helps you search through your files based on their indices. Custom classification enriches the indexing of these files by classifying them into your custom classes.
+
+In this tutorial, you will learn how to:
+
+* Create a custom classification project.
+* Publish an Azure function.
+* Add an index to your Azure Cognitive Search service.
+
+## Prerequisites
+
+* [An Azure Language resource connected to an Azure blob storage account](../how-to/create-project.md).
+ * We recommend following the instructions for creating a resource using the Azure portal, for easier setup.
+
+* [An Azure Cognitive Search service](../../../../search/search-create-service-portal.md) in your current subscription
+ * You can use any tier, and any region for this service.
+
+* An [Azure function app](../../../../azure-functions/functions-create-function-app-portal.md)
+
+* Download this [sample data](). <!-- TODO: add link to sample data here (Movies)-->
+
+## Create a custom classification project through Language studio
+
+1. Sign in to [Language Studio](https://aka.ms/languageStudio). A window will appear to let you select your subscription and Language resource. Select the resource you created in the previous step.
+
+2. Under the **Classify text** section of Language Studio, select **custom text classification** from the available services.
+
+3. Select **Create new project** from the top menu in your projects page. Creating a project will let you tag data, train, evaluate, improve, and deploy your models.
+
+4. If you've created your resource using the steps in [Create a project](../how-to/create-project.md#azure-resources), the **Connect storage** step will be completed already. If not, you need to assign [roles for your storage account](../how-to/create-project.md#roles-for-your-storage-account) before connecting it to your resource.
+
+5. Select your project type. For this tutorial, we'll create a multi-label classification project where you can assign multiple classes to the same file. Then click **Next**. See [project types](../glossary.md#project-types) in the FAQ for more information.
+
+6. Enter project information, including a name, description, and the language of the files in your project. You won't be able to change the name of your project later.
+ >[!TIP]
+ > Your dataset doesn't have to be entirely in the same language. You can have multiple files, each with different supported languages. If your dataset contains files of different languages or if you expect different languages during runtime, select **enable multi-lingual dataset** when you enter the basic information for your project.
+
+7. Select the container where you've uploaded your data. For this tutorial we'll use the tags file you downloaded from the sample data.
+
+8. Review the data you entered and select **Create Project**.
+
+## Train your model
++
+## Deploy your model
+
+1. Select **Deploy model** from the left side menu.
+
+2. Select the model you want to deploy, and select **Deploy model** from the top menu. If you deploy your model through Language Studio, your `deployment-name` will be `prod`.
+
+## Use the CogSvc language utilities tool for Cognitive Search integration
+
+### Publish your Azure Function
+
+1. Download and use the [provided sample function](https://aka.ms/CustomTextAzureFunction).
+
+2. After you download the sample function, open the *program.cs* file in Visual Studio and [publish the function to Azure](../../../../azure-functions/functions-develop-vs.md?tabs=in-process#publish-to-azure).
+
+### Prepare configuration file
+
+1. Download the [sample configuration file](https://aka.ms/CognitiveSearchIntegrationToolAssets) and open it in a text editor.
+
+2. Get your storage account connection string by:
+
+ 1. Navigating to your storage account overview page in the [Azure portal](https://ms.portal.azure.com/#home).
+ 2. In the **Access Keys** section in the menu to the left of the screen, copy your **Connection string** to the `connectionString` field in the configuration file, under `blobStorage`.
+ 3. Go to the container where you have the files you want to index and copy the container name to the `containerName` field in the configuration file, under `blobStorage`.
+
+3. Get your cognitive search endpoint and keys by:
+
+ 1. Navigating to your resource overview page in the [Azure portal](https://ms.portal.azure.com/#home).
+ 2. Copy the **Url** at the top-right section of the page to the `endpointUrl` field within `cognitiveSearch`.
+ 3. Go to the **Keys** section in the menu to the left of the screen. Copy your **Primary admin key** to the `apiKey` field within `cognitiveSearch`.
+
+4. Get Azure Function endpoint and keys
+
+ 1. To get your Azure Function endpoint and keys, go to your function overview page in the [Azure portal](https://ms.portal.azure.com/#home).
+ 2. Go to the **Functions** menu on the left of the screen, and select the function you created.
+ 3. From the top menu, click **Get Function Url**. The URL will be formatted like this: `YOUR-ENDPOINT-URL?code=YOUR-API-KEY`.
+ 4. Copy `YOUR-ENDPOINT-URL` to the `endpointUrl` field in the configuration file, under `azureFunction`.
+ 5. Copy `YOUR-API-KEY` to the `apiKey` field in the configuration file, under `azureFunction`.
+
+5. Get your resource keys and endpoint
+
+ 1. Navigate to your resource in the [Azure portal](https://ms.portal.azure.com/#home).
+ 2. From the menu on the left side, select **Keys and Endpoint**. You will need the endpoint and one of the keys for the API requests.
+
+ :::image type="content" source="../../media/azure-portal-resource-credentials.png" alt-text="A screenshot showing the key and endpoint screen in the Azure portal" lightbox="../../media/azure-portal-resource-credentials.png":::
+
+6. Get your custom classification project secrets
+
+ 1. You will need your **project-name**. Project names are case-sensitive.
+
+ 2. You will also need the **deployment-name**.
+ * If you've deployed your model via Language Studio, your deployment name will be `prod` by default.
+ * If you've deployed your model programmatically, using the API, this is the deployment name you assigned in your request.
+
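Putting these values together, the filled-in configuration file follows this general shape. This is a sketch based only on the fields named in the steps above; the downloaded sample also contains fields for your Language resource endpoint, key, project name, and deployment name, whose exact names come from the sample file itself:

```json
{
  "blobStorage": {
    "connectionString": "<your-storage-connection-string>",
    "containerName": "<your-container-name>"
  },
  "cognitiveSearch": {
    "endpointUrl": "https://<your-search-service>.search.windows.net",
    "apiKey": "<your-search-primary-admin-key>"
  },
  "azureFunction": {
    "endpointUrl": "<YOUR-ENDPOINT-URL>",
    "apiKey": "<YOUR-API-KEY>"
  }
}
```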
+### Run the indexer command
+
+After you've published your Azure function and prepared your configs file, you can run the indexer command.
+```cli
+ indexer index --index-name <name-your-index-here> --configs <absolute-path-to-configs-file>
+```
+
+Replace `name-your-index-here` with the index name that appears in your Cognitive Search instance.
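For example, with a hypothetical index name and configuration path:

```cli
    indexer index --index-name movies-index --configs C:\indexer\configs.json
```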
+
+## Next steps
+
+* [Search your app with the Cognitive Search SDK](../../../../search/search-howto-dotnet-sdk.md#run-queries)
cognitive-services Cognitive Search https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/language-service/custom-named-entity-recognition/tutorials/cognitive-search.md
Previously updated : 11/02/2021 Last updated : 02/02/2022 # Tutorial: Enrich a Cognitive Search index with custom entities from your data
-In enterprise, having an abundance of electronic documents can mean that searching through them is a time-consuming and expensive task. [Azure Cognitive Search](../../../../search/search-create-service-portal.md) can help with searching through your files, based on their indices. Custom NER can help by extracting relevant entities from your files, and enriching the process of indexing these files.
+In the enterprise, having an abundance of electronic documents can mean that searching through them is a time-consuming and expensive task. [Azure Cognitive Search](../../../../search/search-create-service-portal.md) can help with searching through your files, based on their indices. Custom named entity recognition can help by extracting relevant entities from your files, and enriching the process of indexing these files.
In this tutorial, you learn how to:
-* Create a Custom Named Entity Recognition project.
-* Publish Azure Function.
+* Create a custom named entity recognition project.
+* Publish an Azure function.
* Add an index to Azure Cognitive Search. ## Prerequisites
In this tutorial, you learn how to:
## Create a custom NER project through Language Studio
-1. Login through the [Language studio portal](https://aka.ms/LanguageStudio) and select **Custom entity extraction**.
+1. Sign in to [Language Studio](https://aka.ms/languageStudio). A window will appear, letting you select your subscription and Language resource. Select the resource you created in the previous step.
-2. Select your Language resource. Make sure you have [enabled identity management](../how-to/create-project.md#enable-identity-management-for-your-resource) and roles for your resource and storage account.
-
-3. From the top of the projects screen, select **Create new project**. If requested, choose your storage account from the menu that appears.
-
- :::image type="content" source="../media/create-project.png" alt-text="A screenshot of the project creation page." lightbox="../media/create-project.png":::
+2. Under the **Extract information** section of Language Studio, select **custom named entity recognition** from the available services.
+
+3. Select **Create new project** from the top menu in your projects page. Creating a project will let you tag data, train, evaluate, improve, and deploy your models.
-4. Enter the information for your project:
+4. If you've created your resource using the steps above in this [guide](../how-to/create-project.md#azure-resources), the **Connect storage** step will be completed already. If not, you need to assign [roles for your storage account](../how-to/create-project.md#roles-for-your-storage-account) before connecting it to your resource.
- | Key | Description |
- |--|--|
- | Name | The name of your project. You will not be able to rename your project after you create it. |
- | Description | A description of your project |
- | Language | The language of the files in your project.|
+5. Enter project information, including a name, description, and the language of the files in your project. You won't be able to change the name of your project later.
+ >[!TIP]
+ > Your dataset doesn't have to be entirely in the same language. You can have multiple files, each with different supported languages. If your dataset contains files of different languages or if you expect different languages during runtime, select **enable multi-lingual dataset** when you enter the basic information for your project.
- > [!NOTE]
- > If your documents will be in multiple languages, select the **multiple languages** option in project creation, and set the **language** option to the language of the majority of your documents.
+6. Select the container where you've uploaded your data. For this tutorial we'll use the tags file you downloaded from the sample data.
-5. For this tutorial, use an **existing tags file** and select the tags file you downloaded from the sample data.
+7. Review the data you entered and select **Create Project**.
## Train your model
-1. Select **Train** from the left side menu.
-
-2. To train a new model, select **Train a new model** and type in the model name in the text box below.
-
- :::image type="content" source="../media/train-model.png" alt-text="Create a new model" lightbox="../media/train-model.png":::
-
-3. Select the **Train** button at the bottom of the page.
-
-4. After training is completed, you can [view the model's evaluation details](../how-to/view-model-evaluation.md) and [improve the model](../how-to/improve-model.md)
## Deploy your model 1. Select **Deploy model** from the left side menu.
-2. Select the model you want to deploy and from the top menu click on **Deploy model**. You can only see models that have completed training successfully.
+2. Select the model you want to deploy, and select **Deploy model** from the top menu. If you deploy your model through Language Studio, your `deployment-name` will be `prod`.
-## Prepare your secrets for the Azure function
+## Use the CogSvc language utilities tool for Cognitive Search integration
+
+### Publish your Azure Function
-Next you will need to prepare your secrets for your Azure function. Your project secrets are your:
-* Endpoint
-* Resource key
-* Deployment name
+1. Download and use the [provided sample function](https://aka.ms/CustomTextAzureFunction).
-### Get your custom NER project secrets
-
-* You will need your **Project name**, Project names are case sensitive.
-
-* You will also need the deployment name.
- * If you have deployed your model via Language Studio, your deployment slot will be `prod` by default.
- * If you have deployed your model programmatically, using the API, this is the deployment name you assigned in your request.
-
-### Get your resource keys endpoint
-
-1. Navigate to your resource in the [Azure portal](https://ms.portal.azure.com/#home).
-
-2. From the menu on the left side, select **Keys and Endpoint**. You will need the endpoint and one of the keys for the API requests.
-
- :::image type="content" source="../../media/azure-portal-resource-credentials.png" alt-text="A screenshot showing the key and endpoint screen in the Azure portal" lightbox="../../media/azure-portal-resource-credentials.png":::
-
-## Edit and deploy your Azure Function
-
-1. Download and use the [provided sample function](https://aka.ms/ct-cognitive-search-integration-tool).
-
-2. After you download the sample function, open the *program.cs* file and enter your app secrets.
-
-3. [Publish the function to Azure](../../../../azure-functions/functions-develop-vs.md?tabs=in-process#publish-to-azure).
-
-## Use the integration tool
-
-In the following sections, you will use the [Cognitive Search Integration tool](https://aka.ms/ct-cognitive-search-integration-tool) to integrate your project with Azure Cognitive Search. Download this repo now.
+2. After you download the sample function, open the *program.cs* file in Visual Studio and [publish the function to Azure](../../../../azure-functions/functions-develop-vs.md?tabs=in-process#publish-to-azure).
### Prepare configuration file
-1. In the folder you just download, and find the [sample configuration file](https://github.com/microsoft/CognitiveServicesLanguageUtilities/blob/dev/CustomTextAnalytics.CognitiveSearch/Samples/configs.json). Open it in a text editor.
+1. Download the [sample configuration file](https://aka.ms/CognitiveSearchIntegrationToolAssets) and open it in a text editor.
2. Get your storage account connection string by:
- 1. Navigating to your storage account overview page in the [Azure portal](https://ms.portal.azure.com/#home).
- 2. In the top section of the screen, copy your container name to the `containerName` field in the configuration file, under `blobStorage`.
- 3. In the **Access Keys** section in the menu to the left of the screen, copy your **Connection string** to the `connectionString` field in the configuration file, under `blobStorage`.
+
+ 1. Navigating to your storage account overview page in the [Azure portal](https://ms.portal.azure.com/#home).
+ 2. In the **Access Keys** section in the menu to the left of the screen, copy your **Connection string** to the `connectionString` field in the configuration file, under `blobStorage`.
+ 3. Go to the container where you have the files you want to index and copy the container name to the `containerName` field in the configuration file, under `blobStorage`.
-1. Get your cognitive search endpoint and keys by:
+3. Get your cognitive search endpoint and keys by:
+
1. Navigating to your resource overview page in the [Azure portal](https://ms.portal.azure.com/#home). 2. Copy the **Url** at the top-right section of the page to the `endpointUrl` field within `cognitiveSearch`. 3. Go to the **Keys** section in the menu to the left of the screen. Copy your **Primary admin key** to the `apiKey` field within `cognitiveSearch`.
-3. Get Azure Function endpoint and keys
-
+4. Get Azure Function endpoint and keys
+
1. To get your Azure Function endpoint and keys, go to your function overview page in the [Azure portal](https://ms.portal.azure.com/#home).
- 2. Go to **Functions** menu on the left of the screen, and click on the function you created.
- 3. From the top menu, click **Get Function Url**. The URL will be formatted like this: `YOUR-ENDPOINT-URL?code=YOUR-API-KEY`.
+ 2. Go to the **Functions** menu on the left of the screen, and select the function you created.
+ 3. From the top menu, select **Get Function Url**. The URL will be formatted like this: `YOUR-ENDPOINT-URL?code=YOUR-API-KEY`.
4. Copy `YOUR-ENDPOINT-URL` to the `endpointUrl` field in the configuration file, under `azureFunction`. 5. Copy `YOUR-API-KEY` to the `apiKey` field in the configuration file, under `azureFunction`.
-### Prepare schema file
+5. Get your resource keys and endpoint
+
+ 1. Navigate to your resource in the [Azure portal](https://ms.portal.azure.com/#home).
+ 2. From the menu on the left side, select **Keys and Endpoint**. You'll need the endpoint and one of the keys for the API requests.
+
+ :::image type="content" source="../../media/azure-portal-resource-credentials.png" alt-text="A screenshot showing the key and endpoint screen in the Azure portal" lightbox="../../media/azure-portal-resource-credentials.png":::
-In the folder you downloaded earlier, find the [sample schema file](https://github.com/microsoft/CognitiveServicesLanguageUtilities/blob/dev/CustomTextAnalytics.CognitiveSearch/Samples/app-schema.json). Open it in a text editor.
+6. Get your custom NER project secrets
-The entries in the `entityNames` array will be the entity names you have assigned while creating your project. You can copy and paste them from your project in [Language Studio](https://aka.ms/custom-extraction), or
+ 1. You'll need your **project-name**. Project names are case-sensitive.
-### Run the `Index` command
+ 2. You'll also need the **deployment-name**.
+ * If you've deployed your model via Language Studio, your deployment name will be `prod` by default.
+ * If you've deployed your model programmatically, using the API, this is the deployment name you assigned in your request.
-After you have completed your configuration and schema file, you can index your project. Place your configuration file in the same path of the CLI tool, and run the following command:
+### Run the indexer command
+After you've published your Azure function and prepared your configs file, you can run the indexer command.
```cli
- indexer index --schema <path/to/your/schema> --index-name <name-your-index-here>
+ indexer index --index-name <name-your-index-here> --configs <absolute-path-to-configs-file>
```
Replace `name-your-index-here` with the index name that appears in your Cognitive Search instance.
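To verify that the index was created and populated, you can run a simple search query against it. This is a sketch: the service name and index name are placeholders, and the `api-version` shown is the stable Cognitive Search REST version at the time of writing.

```http
GET https://<your-search-service>.search.windows.net/indexes/<name-your-index-here>/docs?search=*&api-version=2020-06-30
api-key: <your-search-admin-key>
```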
communication-services Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/best-practices.md
Your application should invoke `call.hangup` when the `onbeforeunload` event is
Your application should not connect to calls from multiple browser tabs simultaneously as this can cause undefined behavior due to resource allocation for microphone and camera on the device. Developers are encouraged to always hang up calls when completed in the background before starting a new one. ### Handle OS muting call when phone call comes in.
-While on an ACS call (for both iOS and Android) if a phone call comes in or Voice assistant is activated, the OS will automatically mute the users microphone and camera. On Android the call automatically unmutes and video restarts after the phone call ends. On iOS it requires user action to "unmute" and "start video" again. You can listen for the notification that the microphone was muted unexpectedly with the quality event of `microphoneMuteUnexpectedly`. Do note in order to be able to rejoin a call properly you will need to used SDK 1.2.3-beta.1 or higher.
+While on an ACS call (for both iOS and Android), if a phone call comes in or a voice assistant is activated, the OS will automatically mute the user's microphone and camera. On Android, the call automatically unmutes and video restarts after the phone call ends. On iOS, it requires user action to "unmute" and "start video" again. You can listen for the notification that the microphone was muted unexpectedly with the quality event of `microphoneMuteUnexpectedly`. Note that in order to rejoin a call properly, you'll need to use SDK 1.2.3-beta.1 or higher.
```javascript
const latestMediaDiagnostic = call.api(SDK.Features.Diagnostics).media.getLatest();
```
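To react as soon as this happens, rather than polling `getLatest`, you can also subscribe to the diagnostics changed event. A minimal sketch, assuming the event payload carries `diagnostic` and `value` fields as in the Web Calling SDK diagnostics feature; `promptUserToUnmute` is a hypothetical UI helper:

```javascript
const mediaDiagnostics = call.api(SDK.Features.Diagnostics).media;
mediaDiagnostics.on('diagnosticChanged', (diagnosticInfo) => {
    // On iOS the user must take action, so surface a prompt when the OS mutes the call.
    if (diagnosticInfo.diagnostic === 'microphoneMuteUnexpectedly' && diagnosticInfo.value === true) {
        promptUserToUnmute(); // hypothetical UI helper
    }
});
```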
Your application should invoke `call.startVideo(localVideoStream);` to start a v
You can use the Azure Communication Services SDK to manage your devices and media operations. - Your application shouldn't use native browser APIs like `getUserMedia` or `getDisplayMedia` to acquire streams outside of the SDK. If you do, you'll have to manually dispose your media stream(s) before using `DeviceManager` or other device management APIs via the Communication Services SDK.
-### Request device permissions
+#### Request device permissions
You can request device permissions using the SDK:
- Your application should use `DeviceManager.askDevicePermission` to request access to audio and/or video devices.
- If the user denies access, `DeviceManager.askDevicePermission` will return 'false' for a given device type (audio or video) on subsequent calls, even after the page is refreshed. In this scenario, your application must detect that the user previously denied access and instruct the user to manually reset or explicitly grant access to a given device type.
+
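A minimal sketch of this flow, assuming a `callClient` instance already exists; `showPermissionInstructions` is a hypothetical helper for the denied-access case:

```javascript
const deviceManager = await callClient.getDeviceManager();
// Request microphone and camera access; each flag in the result reflects
// whether that device type is currently granted.
const access = await deviceManager.askDevicePermission({ audio: true, video: true });
if (!access.audio || !access.video) {
    // Hypothetical helper: instruct the user to reset or grant access in browser settings.
    showPermissionInstructions(access);
}
```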
+#### Camera being used by another process
+- On Windows Chrome and Windows Edge, if you start/join/accept a call with video on and the camera device is being used by a process other than the browser that the Web SDK is running on, the call will start with audio only and no video. A `cameraStartFailed` UFD will be raised because the camera failed to start while in use by another process. The same applies to turning video on mid-call. You can turn off the camera in the other process so that it releases the camera device, and then start video again from the call; video will then turn on for the call, and remote participants will start seeing your video.
+- This isn't an issue on macOS Chrome or macOS Safari because the OS lets processes/threads share the camera device.
+- On mobile devices, if process A requests the camera device while it's being used by process B, process A will take over the camera device and process B will stop using it.
+- On iOS Safari, you can't have the camera on for multiple call clients within the same tab or across tabs. When any call client uses the camera, it takes over the camera from any previous call client that was using it. The previous call client gets a `cameraStoppedUnexpectedly` UFD.
+ ## Next steps For more information, see the following articles:
communication-services Call Flows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/call-flows.md
The section below gives an overview of the call flows in Azure Communication Ser
When you establish a peer-to-peer or group call, two protocols are used behind the scenes - HTTP (REST) for signaling and SRTP for media.
-Signaling between the SDKs or between SDKs and Communication Services Signaling Controllers is handled with HTTP REST (TLS). For Real-Time Media Traffic (RTP), the User Datagram Protocol (UDP) is preferred. If the use of UDP is prevented by your firewall, the SDK will use the Transmission Control Protocol (TCP) for media.
+Signaling between the SDKs or between SDKs and Communication Services Signaling Controllers is handled with HTTP REST (TLS). ACS uses TLS 1.2. For Real-Time Media Traffic (RTP), the User Datagram Protocol (UDP) is preferred. If the use of UDP is prevented by your firewall, the SDK will use the Transmission Control Protocol (TCP) for media.
Let's review the signaling and media protocols in various scenarios.
communication-services Teams User Calling https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/interop/teams-user-calling.md
The following list presents the set of Teams capabilities, which are currently a
| | Placing a call honors Teams guest access configuration | ✔️ |
| | Joining Teams meeting honors configuration for automatic people admit in the Lobby | ✔️ |
| | Actions available in the Teams meeting are defined by assigned role | ✔️ |
-| Mid call control | Forward a call | ❌ |
-| | Receive simultaneous ringing | ❌ |
-| | Play music on hold | ❌ |
+| Mid call control | Receive forwarded call | ✔️ |
+| | Receive simultaneous ringing | ✔️ |
+| | Play music on hold | ❌ |
| | Park a call | ❌ |
| | Transfer a call to a person | ✔️ |
| | Transfer a call to a call | ✔️ |
The following list presents the set of Teams capabilities, which are currently a
| | Start call recording | ❌ |
| | Start call transcription | ❌ |
| | Start live captions | ❌ |
+| | Receive information of call being recorded | ✔️ |
| PSTN | Make an Emergency call | ❌ |
| | Place a call honors location-based routing | ❌ |
| | Support for survivable branch appliance | ❌ |
| Compliance | Place a call honors information barriers | ✔️ |
+| | Support for compliance recording | ✔️ |
## Next steps
communication-services Classification Concepts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/router/classification-concepts.md
For more information, see the section [below](#using-label-selector-attachments).
The following label selector attachments are available:
-**Static label selector -** Always attaches the given `LabelSelector`.
+**Static label selector -** Always attaches the given `LabelSelector` to the Job.
-**Conditional label selector -** Will evaluate a condition defined by a [rule](router-rule-concepts.md). If it resolves to `true`, then the specified collection of selectors will be applied.
+**Conditional label selector -** Evaluates a condition defined by a [rule](router-rule-concepts.md). If it resolves to `true`, then the specified collection of selectors will be attached to the Job.
-**Passthrough label selector -** Uses a key and `LabelOperator` to check for the existence of the key. This selector can be used in the `QueueLabelSelector` to match a Queue based on the set of labels. When used with the `WorkerSelectors`, the Job's key/value pair are attached to the `WorkerSelectors` of the Job.
+**Passthrough label selector -** Attaches a selector to the Job with the specified key and operator but gets the value from the Job label of the same key.
**Rule label selector -** Sources a collection of selectors from one of many rule engines. Read the [RouterRule concepts](router-rule-concepts.md) page for more information.
-**Weighted allocation label selector -** Enables you to specify a percentage-based weighting and a collection of selectors to apply based on the weighting allocation. For example, you may want 30% of the Jobs to go to "Vendor 1" and 70% of Jobs to go to "Vendor 2".
+**Weighted allocation label selector -** Enables you to specify a percentage-based weighting and a collection of selectors to attach based on the weighting allocation. For example, you may want 30% of the Jobs to go to "Vendor 1" and 70% of Jobs to go to "Vendor 2".
## Reclassifying a job
communication-services Job Classification https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/how-tos/router-sdk/job-classification.md
You can use the classification policy to attach additional worker selectors to a
### Static Attachments
-In this example, we are using a static attachment, which will always attach the specified label selector to a job.
+In this example, the Classification Policy is configured with a static attachment, which will always attach the specified label selector to a job.
::: zone pivot="programming-language-csharp"
await client.upsertClassificationPolicy(
### Conditional Attachments
-In this example, we are using a conditional attachment, which will evaluate a condition against the job labels to determine if the said label selectors should be attached to the job.
+In this example, the Classification Policy is configured with a conditional attachment, which evaluates a condition against the job labels to determine whether the specified label selectors should be attached to the job.
::: zone pivot="programming-language-csharp"
await client.upsertClassificationPolicy(
::: zone-end
+### Passthrough Attachments
+
+In this example, the Classification Policy is configured to attach a worker selector (`"Foo" = "<value comes from "Foo" label of the job>"`) to the job.
++
+```csharp
+await client.SetClassificationPolicyAsync(
+ id: "policy-1",
+ workerSelectors: new List<LabelSelectorAttachment>
+ {
+ new PassThroughLabelSelector(key: "Foo", @operator: LabelOperator.Equal)
+ }
+);
+```
+++
+```typescript
+await client.upsertClassificationPolicy(
+ id: "policy-1",
+ workerSelectors: [
+ {
+ kind: "pass-through",
+ key: "Foo",
+ operator: "equal"
+ }
+ ]
+);
+```
++ ### Weighted Allocation Attachments
-In this example, we are using a weighted allocation attachment, which will divide up jobs according to the weightings specified and attach different selectors accordingly. Here, we are saying that 30% of jobs should go to workers with the label `Vendor` set to `A` and 70% should go to workers with the label `Vendor` set to `B`.
+In this example, the Classification Policy is configured with a weighted allocation attachment. This will divide up jobs according to the weightings specified and attach different selectors accordingly. Here, 30% of jobs should go to workers with the label `Vendor` set to `A` and 70% should go to workers with the label `Vendor` set to `B`.
::: zone pivot="programming-language-csharp"
container-apps Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-apps/get-started.md
az containerapp create `
+> [!NOTE]
+> Make sure the value for the `--image` parameter is in lower case.
+ By setting `--ingress` to `external`, you make the container app available to public requests. ## Verify deployment
container-apps Vnet Custom https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-apps/vnet-custom.md
As you create an Azure Container Apps [environment](environment.md), a virtual n
:::image type="content" source="media/networking/azure-container-apps-virtual-network.png" alt-text="Azure Container Apps environments use an existing VNET, or you can provide your own.":::
+## Restrictions
+
+Subnet address ranges can't overlap with the following reserved ranges:
+
+- 169.254.0.0/16
+- 172.30.0.0/16
+- 172.31.0.0/16
+- 192.0.2.0/24
+
+Additionally, subnets must have a size between /21 and /12.
+ ## Subnet types As a Container Apps environment is created, you provide resource IDs for two different subnets. Both subnets must be defined in the same virtual network.
As a Container Apps environment is created, you provide resource IDs for two dif
- **App subnet**: Subnet for user app containers. Subnet that contains IP ranges mapped to applications deployed as containers. - **Control plane subnet**: Subnet for [control plane infrastructure](/azure/azure-resource-manager/management/control-plane-and-data-plane) components and user app containers. + If the [platformReservedCidr](#networking-parameters) range is defined, both subnets must not overlap with the IP range defined in `platformReservedCidr`. + ## Accessibility level You can deploy your Container Apps environment with an internet-accessible endpoint or with an IP address in your VNET. The accessibility level determines the type of load balancer used with your Container Apps instance.
Container Apps environments deployed as external resources are available for pub
When set to internal, the environment has no public endpoint. Internal environments are deployed with a virtual IP (VIP) mapped to an internal IP address. The internal endpoint is an Azure internal load balancer (ILB) and IP addresses are issued from the custom VNET's list of private IP addresses. + To create an internal only environment, provide the `--internal-only` parameter to the `az containerapp env create` command. + ## Example The following example shows you how to create a Container Apps environment in an existing virtual network.
az containerapp env create `
+> [!NOTE]
+> As you call `az containerapp create` to create the container app inside your environment, make sure the value for the `--image` parameter is in lower case.
+ The following table describes the parameters used for `containerapp env create`. | Parameter | Description |
az network private-dns record-set a add-record `
#### Networking parameters
-There are three optional networking parameters you can choose to define when calling `containerapp env create`. You must either provide values for all three of these properties, or none of them. If they arenΓÇÖt provided, the CLI generates the values for you.
+There are three optional networking parameters you can choose to define when calling `containerapp env create`. Use these options when you have a peered VNET with separate address ranges. Explicitly configuring these ranges ensures the addresses used by the Container Apps environment don't conflict with other ranges in the network infrastructure.
+
+You must either provide values for all three of these properties, or none of them. If they aren't provided, the CLI generates the values for you.
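For example, a call that pins all three ranges explicitly might look like the following sketch. The flag names and CIDR values shown are illustrative assumptions based on the preview CLI; check `az containerapp env create --help` for the exact options.

```powershell
az containerapp env create `
  --name my-environment `
  --resource-group my-resource-group `
  --location canadacentral `
  --app-subnet-resource-id <APP_SUBNET_ID> `
  --controlplane-subnet-resource-id <CONTROL_PLANE_SUBNET_ID> `
  --platform-reserved-cidr 10.0.0.0/21 `
  --platform-reserved-dns-ip 10.0.0.2 `
  --docker-bridge-cidr 10.2.0.1/16
```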
| Parameter | Description |
|---|---|
az group delete `
::: zone-end
-## Restrictions
-
-Subnet address ranges can't overlap with the following reserved ranges:
--- 169.254.0.0/16-- 172.30.0.0/16-- 172.31.0.0/16-- 192.0.2.0/24-
-Additionally, subnets must have a size between /21 and /12.
- ## Additional resources - Refer to [What is Azure Private Endpoint](/azure/private-link/private-endpoint-overview) for more details on configuring your private endpoint.
container-instances Container Instances Samples Rm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-instances/container-instances-samples-rm.md
You have several options for deploying resources with Resource Manager templates
[REST API][deploy-rest] <!-- LINKS - External -->
-[app-nav]: https://github.com/Azure/azure-quickstart-templates/tree/master/demos/aci-dynamicsnav
+[app-nav]: https://github.com/Azure/azure-quickstart-templates/tree/master/demos/
[app-wp]: https://github.com/Azure/azure-quickstart-templates/tree/master/application-workloads/wordpress/aci-wordpress [az-files]: https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.containerinstance/aci-storage-file-share [net-publicip]: https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.containerinstance/aci-linuxcontainer-public-ip
cosmos-db Create Sql Api Spark https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/create-sql-api-spark.md
spark.sql("CREATE TABLE IF NOT EXISTS cosmosCatalog.{}.{} using cosmos.oltp TBLP
When creating containers with the Catalog API, you can set the throughput and [partition key path](../partitioning-overview.md#choose-partitionkey) for the container to be created.
-For more information, see the full [Catalog API](https://github.com/Azure/azure-sdk-for-jav) documentation.
+For more information, see the full [Catalog API](https://github.com/Azure/azure-sdk-for-jav) documentation.
## Ingest data
spark.createDataFrame((("cat-alive", "Schrodinger cat", 2, True), ("cat-dead", "
Note that `id` is a mandatory field for Cosmos DB.
-For more information related to ingesting data, see the full [write configuration](https://github.com/Azure/azure-sdk-for-jav#write-config) documentation.
+For more information related to ingesting data, see the full [write configuration](https://github.com/Azure/azure-sdk-for-jav#write-config) documentation.
## Query data
df.filter(col("isAlive") == True)\
.show() ```
-For more information related to querying data, see the full [query configuration](https://github.com/Azure/azure-sdk-for-jav#query-config) documentation.
+For more information related to querying data, see the full [query configuration](https://github.com/Azure/azure-sdk-for-jav#query-config) documentation.
## Schema inference
df = spark.read.format("cosmos.oltp").options(**cfg)\
df.printSchema() ```
-For more information related to schema inference, see the full [schema inference configuration](https://github.com/Azure/azure-sdk-for-jav#schema-inference-config) documentation.
+For more information related to schema inference, see the full [schema inference configuration](https://github.com/Azure/azure-sdk-for-jav#schema-inference-config) documentation.
## Configuration reference
cosmos-db Sql Api Sdk Java Spark V3 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-api-sdk-java-spark-v3.md
If you have any feedback or ideas on how to improve your experience create an is
## Documentation links
-* [Getting started](https://github.com/Azure/azure-sdk-for-jav)
-* [Catalog API](https://github.com/Azure/azure-sdk-for-jav)
-* [Configuration Parameter Reference](https://github.com/Azure/azure-sdk-for-jav)
+* [Getting started](https://github.com/Azure/azure-sdk-for-jav)
+* [Catalog API](https://github.com/Azure/azure-sdk-for-jav)
+* [Configuration Parameter Reference](https://github.com/Azure/azure-sdk-for-jav)
## Version compatibility
cosmos-db Troubleshoot Dot Net Sdk Request Timeout https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/troubleshoot-dot-net-sdk-request-timeout.md
description: Learn how to diagnose and fix .NET SDK request timeout exceptions.
Previously updated : 03/05/2021 Last updated : 02/02/2022
The timeouts will contain *Diagnostics*, which contain:
* If the `cpu` values are over 70%, the timeout is likely to be caused by CPU exhaustion. In this case, the solution is to investigate the source of the high CPU utilization and reduce it, or scale the machine to a larger resource size.
* If the `threadInfo/isThreadStarving` nodes have `True` values, the cause is thread starvation. In this case the solution is to investigate the source/s of the thread starvation (potentially locked threads), or scale the machine/s to a larger resource size.
+* If the `dateUtc` time between measurements isn't approximately 10 seconds, it also indicates contention on the thread pool. CPU is measured as an independent Task that is enqueued in the thread pool every 10 seconds; if the time between measurements is longer, it indicates that async Tasks aren't being processed in a timely fashion. The most common cause is making [blocking calls over async code](https://github.com/davidfowl/AspNetCoreDiagnosticScenarios/blob/master/AsyncGuidance.md#avoid-using-taskresult-and-taskwait) in the application code, as in the sketch below.
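A minimal illustration of that anti-pattern and its fix, using the .NET SDK's `ReadItemAsync`; the container, item type, and identifiers here are placeholders:

```csharp
// Anti-pattern: blocking on the async call ties up a thread pool thread,
// which can delay the SDK's periodic CPU measurement Task and other async work.
ItemResponse<Order> blocked = container.ReadItemAsync<Order>(
    "order-1", new PartitionKey("account-1")).Result;

// Preferred: await the call so the thread returns to the pool while I/O completes.
ItemResponse<Order> item = await container.ReadItemAsync<Order>(
    "order-1", new PartitionKey("account-1"));
```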
# [Older SDK](#tab/cpu-old)
cosmos-db Troubleshoot Dot Net Sdk Slow Request https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/troubleshoot-dot-net-sdk-slow-request.md
description: Learn how to diagnose and fix slow requests when using Azure Cosmos
Previously updated : 01/10/2022 Last updated : 02/02/2022
The timeouts will contain *Diagnostics*, which contain:
* If the `cpu` values are over 70%, the timeout is likely to be caused by CPU exhaustion. In this case, the solution is to investigate the source of the high CPU utilization and reduce it, or scale the machine to a larger resource size.
* If the `threadInfo/isThreadStarving` nodes have `True` values, the cause is thread starvation. In this case the solution is to investigate the source/s of the thread starvation (potentially locked threads), or scale the machine/s to a larger resource size.
+* If the `dateUtc` time between measurements isn't approximately 10 seconds, it also indicates contention on the thread pool. CPU is measured as an independent Task that is enqueued in the thread pool every 10 seconds; if the time between measurements is longer, it indicates that async Tasks aren't being processed in a timely fashion. The most common cause is making [blocking calls over async code](https://github.com/davidfowl/AspNetCoreDiagnosticScenarios/blob/master/AsyncGuidance.md#avoid-using-taskresult-and-taskwait) in the application code.
# [Older SDK](#tab/cpu-old)
cost-management-billing Manage Automation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/costs/manage-automation.md
We recommend that you call the APIs no more than once per day. Cost Management d
To enable a consistent experience for all Cost Management subscribers, Cost Management APIs are rate limited. When you reach the limit, you receive the HTTP status code `429: Too many requests`. The current throughput limits for our APIs are as follows:
-- 30 calls per minute - It's done per scope, per user, or application.
-- 200 calls per minute - It's done per tenant, per user, or application.
+- 15 calls per minute - It's done per scope, per user, or application.
+- 100 calls per minute - It's done per tenant, per user, or application.
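If you do receive a `429`, back off before retrying. A minimal sketch, assuming an authenticated `HttpClient`; the five-attempt loop and the fallback delay are illustrative choices, and the `Retry-After` header is only honored if the service returns one:

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

static async Task<HttpResponseMessage> GetWithBackoffAsync(HttpClient client, string url)
{
    for (int attempt = 0; attempt < 5; attempt++)
    {
        HttpResponseMessage response = await client.GetAsync(url);
        if ((int)response.StatusCode != 429)
        {
            return response;
        }

        // Wait out the throttling window: honor Retry-After when present,
        // otherwise pause a full minute so the per-minute call budget resets.
        TimeSpan delay = response.Headers.RetryAfter?.Delta ?? TimeSpan.FromMinutes(1);
        await Task.Delay(delay);
    }

    throw new HttpRequestException("Request was still throttled after 5 attempts.");
}
```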
## Next steps
cost-management-billing Quick Acm Cost Analysis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/costs/quick-acm-cost-analysis.md
If you have a new subscription, you can't immediately use Cost Management featur
## Sign in to Azure -- Sign in to the Azure portal.
+- Sign in to the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_CostManagement/Menu/costanalysis).
## Get started in Cost analysis
cost-management-billing How To Create Azure Support Request Ea https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/how-to-create-azure-support-request-ea.md
Title: How to create an Azure support request for an Enterprise Agreement issue description: Enterprise Agreement customers who need assistance can use the Azure portal to find self-service solutions and to create and manage support requests. Previously updated : 10/07/2021 Last updated : 02/03/2022
Based on the information you provided, we'll show you recommended solutions you
If you're still unable to resolve the issue, continue creating your support request by selecting **Next: Details**.
-### Additional details
+### Other details
Next, we collect more details about the problem. Providing thorough and detailed information in this step helps us route your support request to the right engineer.
A support engineer will contact you using the method you indicated. For informat
## Can't create request with Microsoft Account
-If you have a Microsoft Account (MSA), you can't create an Azure support ticket. Microsoft accounts are created for services including Outlook, Windows Live, and Hotmail.
+If you have a Microsoft Account (MSA) and you aren't able to create an Azure support ticket, use the following steps to file a support case. Microsoft accounts are created for services including Outlook, Windows Live, and Hotmail.
To create an Azure support ticket, an *organizational account* must have the EA administrator role or Partner administrator role.
If you have an MSA, have an administrator create an organizational account for y
Follow these links to learn more:
-* [How to manage an Azure support request]()../../azure-portal/supportability/how-to-manage-azure-support-request.md)
+* [How to manage an Azure support request](../../azure-portal/supportability/how-to-manage-azure-support-request.md)
* [Azure support ticket REST API](/rest/api/support) * Engage with us on [Twitter](https://twitter.com/azuresupport) * Get help from your peers in the [Microsoft Q&A question page](/answers/products/azure)
data-factory Data Flow Parse https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-parse.md
Previously updated : 01/19/2022 Last updated : 02/03/2022 # Parse transformation in mapping data flow
Use the Parse transformation to parse text columns in your data that are strings
## Configuration
-In the parse transformation configuration panel, you will first pick the type of data contained in the columns that you wish to parse inline. The parse transformation also contains the following configuration settings.
+In the parse transformation configuration panel, you'll first pick the type of data contained in the columns that you wish to parse inline. The parse transformation also contains the following configuration settings.
:::image type="content" source="media/data-flow/data-flow-parse-1.png" alt-text="Parse settings"::: ### Column
-Similar to derived columns and aggregates, this is where you will either modify an exiting column by selecting it from the drop-down picker. Or you can type in the name of a new column here. ADF will store the parsed source data in this column. In most cases, you will want to define a new column that parses the incoming embedded document string field.
+Similar to derived columns and aggregates, this is where you'll either modify an existing column by selecting it from the drop-down picker, or type in the name of a new column. ADF will store the parsed source data in this column. In most cases, you'll want to define a new column that parses the incoming embedded document string field.
### Expression
Use the expression builder to set the source for your parsing. This can be as si
### Output column type
-Here is where you will configure the target output schema from the parsing that will be written into a single column.
+Here is where you'll configure the target output schema from the parsing that will be written into a single column. The easiest way to set a schema for your output from parsing is to select the 'Detect Type' button on the top right of the expression builder. ADF will attempt to autodetect the schema from the string field that you're parsing and set it for you in the output expression.
:::image type="content" source="media/data-flow/data-flow-parse-2.png" alt-text="Parse example":::
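As a concrete illustration (this payload isn't from the article), if the column being parsed contains an embedded JSON string like the following, 'Detect Type' would propose a complex output column with a string `name` and an integer `age` subcolumn:

```json
{ "name": "Alice", "age": 30 }
```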
databox-online Azure Stack Edge Alerts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-alerts.md
The following alerts indicate an issue with a hardware component, such as physic
|Alert text |Severity |Description / Recommended action |
|--|--|--|
|{0} on {1} has failed. |Critical |This is because the power supply is not connected properly or has failed. Take the following steps to resolve this issue:<ol><li>Make sure that the power supply connection is proper.</li><li>[Contact Microsoft Support](azure-stack-edge-contact-microsoft-support.md) to order a replacement power supply unit. |
-|Could not reach {1}. |Critical |If the controller is turned off, restart the controller.<br>Make sure that the power supply is functional. For information on monitoring the power supply LEDs, go to https://www.microsoft.com/2.<!--Need new link target. This one goes nowhere--><br>If the issue persists, [contact Microsoft Support](azure-stack-edge-contact-microsoft-support.md). |
+|Could not reach {1}. |Critical |If the controller is turned off, restart the controller.<br>Make sure that the power supply is functional. For information on monitoring the power supply LEDs, go to https://www.microsoft.com/.<!--Need new link target. This one goes nowhere--><br>If the issue persists, [contact Microsoft Support](azure-stack-edge-contact-microsoft-support.md). |
|{0} is powered off. |Warning |Connect the Power Supply Unit to a Power Distribution Unit. |
|One or more device components are not working properly. |Critical |[Contact Microsoft Support](azure-stack-edge-contact-microsoft-support.md) for next steps. |
|Could not replace {0}. |Warning |[Contact Microsoft Support](azure-stack-edge-contact-microsoft-support.md) for next steps. |
defender-for-iot Concept Rtos Security Alerts Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/device-builders/concept-rtos-security-alerts-recommendations.md
For a complete list of all Defender for IoT service related alerts and recommend
## Next steps -- [Quickstart: Defender-IoT-micro-agent for Azure RTOS](quickstart-azure-rtos-security-module.md)
+- [Quickstart: Defender-IoT-micro-agent for Azure RTOS](/azure/defender-for-iot/device-builders/how-to-azure-rtos-security-module)
- [Configure and customize Defender-IoT-micro-agent for Azure RTOS](how-to-azure-rtos-security-module.md) - Refer to the [Defender-IoT-micro-agent for Azure RTOS API](azure-rtos-security-module-api.md)
defender-for-iot Concept Rtos Security Module https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/device-builders/concept-rtos-security-module.md
Defender-IoT-micro-agent for Azure RTOS is provided as a free download for your
## Next steps -- Get started with Defender-IoT-micro-agent for Azure RTOS [prerequisites and setup](quickstart-azure-rtos-security-module.md).
+- Get started with Defender-IoT-micro-agent for Azure RTOS [prerequisites and setup](/azure/defender-for-iot/device-builders/how-to-azure-rtos-security-module).
- Learn more about Defender-IoT-micro-agent for Azure RTOS [security alerts and recommendation support](concept-rtos-security-alerts-recommendations.md). - Use the Defender-IoT-micro-agent for Azure RTOS [reference API](azure-rtos-security-module-api.md).
defender-for-iot Iot Security Azure Rtos https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/device-builders/iot-security-azure-rtos.md
Defender-IoT-micro-agent for Azure RTOS is provided as a free download for your
In this article, you learned about the Defender-IoT-micro-agent for Azure RTOS. To learn more about the Defender-IoT-micro-agent and get started, see the following articles: - [Azure RTOS IoT Defender-IoT-micro-agent concepts](concept-rtos-security-module.md)-- [Quickstart: Azure RTOS IoT Defender-IoT-micro-agent](quickstart-azure-rtos-security-module.md)
+- [Quickstart: Azure RTOS IoT Defender-IoT-micro-agent](/azure/defender-for-iot/device-builders/how-to-azure-rtos-security-module)
digital-twins Concepts Route Events https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-route-events.md
This article covers **event routes** and how Azure Digital Twins uses them to se
There are two major cases for sending Azure Digital Twins data:
* Sending data from one twin in the Azure Digital Twins graph to another. For instance, when a property on one digital twin changes, you may want to notify and update another digital twin based on the updated data.
* Sending data to downstream data services for more storage or processing (also known as *data egress*). For instance,
- - A hospital may want to send Azure Digital Twins event data to [Time Series Insights (TSI)](../time-series-insights/overview-what-is-tsi.md), to record time series data of handwashing-related events for bulk analytics.
+ - A hospital may want to send Azure Digital Twins event data to [Time Series Insights](../time-series-insights/overview-what-is-tsi.md), to record time series data of handwashing-related events for bulk analytics.
- A business that is already using [Azure Maps](../azure-maps/about-azure-maps.md) may want to use Azure Digital Twins to enhance their solution. They can quickly enable an Azure Map after setting up Azure Digital Twins, bring Azure Map entities into Azure Digital Twins as [digital twins](concepts-twins-graph.md) in the twin graph, or run powerful queries using their Azure Maps and Azure Digital Twins data together. Event routes are used for both of these scenarios. ## About event routes
-An event route lets you send event data from digital twins in Azure Digital Twins to custom-defined endpoints in your subscriptions. Three Azure services are currently supported for endpoints: [Event Hubs](../event-hubs/event-hubs-about.md), [Event Grid](../event-grid/overview.md), and [Service Bus](../service-bus-messaging/service-bus-messaging-overview.md). Each of these Azure services can be connected to other services and acts as the middleman, sending data along to final destinations such as TSI or Azure Maps for whatever processing you need.
+An event route lets you send event data from digital twins in Azure Digital Twins to custom-defined endpoints in your subscriptions. Three Azure services are currently supported for endpoints: [Event Hubs](../event-hubs/event-hubs-about.md), [Event Grid](../event-grid/overview.md), and [Service Bus](../service-bus-messaging/service-bus-messaging-overview.md). Each of these Azure services can be connected to other services and acts as the middleman, sending data along to final destinations such as Time Series Insights or Azure Maps for whatever processing you need.
Azure Digital Twins implements **at least once** delivery for data emitted to egress services.
The following diagram illustrates the flow of event data through a larger IoT so
:::image type="content" source="media/concepts-route-events/routing-workflow.png" alt-text="Diagram of Azure Digital Twins routing data through endpoints to several downstream services." border="false":::
-Typical downstream targets for event routes are resources like TSI, Azure Maps, storage, and analytics solutions.
+Typical downstream targets for event routes are resources like Time Series Insights, Azure Maps, storage, and analytics solutions.
### Event routes for internal digital twin events
digital-twins Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/overview.md
Models are defined in a JSON-like language called [Digital Twins Definition Lang
* Models define semantic **relationships** between your entities so that you can connect your twins into a graph that reflects their interactions. You can think of the models as nouns in a description of your world, and the relationships as verbs. * You can also specialize twins using model inheritance. One model can inherit from another.
-DTDL is used for data models throughout other Azure IoT services, including [IoT Plug and Play](../iot-develop/overview-iot-plug-and-play.md) and [Time Series Insights (TSI)](../time-series-insights/overview-what-is-tsi.md). This type of commonality helps you keep your Azure Digital Twins solution connected and compatible with other parts of the Azure ecosystem.
+DTDL is used for data models throughout other Azure IoT services, including [IoT Plug and Play](../iot-develop/overview-iot-plug-and-play.md) and [Time Series Insights](../time-series-insights/overview-what-is-tsi.md). This type of commonality helps you keep your Azure Digital Twins solution connected and compatible with other parts of the Azure ecosystem.
### Live execution environment
You can create a new IoT Hub for this purpose with Azure Digital Twins, or conne
You can also drive Azure Digital Twins from other data sources, using REST APIs or connectors to other services like [Logic Apps](../logic-apps/logic-apps-overview.md).
-### Output to ADX, TSI, storage, and analytics
+### Output to ADX, Time Series Insights, storage, and analytics
The data in your Azure Digital Twins model can be routed to downstream Azure services for more analytics or storage. This functionality is provided through **event routes**, which use [Event Hubs](../event-hubs/event-hubs-about.md), [Event Grid](../event-grid/overview.md), or [Service Bus](../service-bus-messaging/service-bus-messaging-overview.md) to drive your data flows. Some things you can do with event routes include: * Sending digital twin data to ADX for querying with the [Azure Digital Twins query plugin for Azure Data Explorer (ADX)](concepts-data-explorer-plugin.md)
-* [Connecting Azure Digital Twins to Time Series Insights (TSI)](how-to-integrate-time-series-insights.md) to track time series history of each twin
+* [Connecting Azure Digital Twins to Time Series Insights](how-to-integrate-time-series-insights.md) to track time series history of each twin
* Aligning a Time Series Model in Time Series Insights with a source in Azure Digital Twins * Storing Azure Digital Twins data in [Azure Data Lake](../storage/blobs/data-lake-storage-introduction.md) * Analyzing Azure Digital Twins data with [Azure Synapse Analytics](../synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-what-is.md), or other Microsoft data analytics tools
event-grid Blob Event Quickstart Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/blob-event-quickstart-portal.md
When you're finished, you see that the event data has been sent to the web app.
5. On the **Review + create** page, review the settings, and select **Create**. >[!NOTE]
- > Only storage accounts of kind **StorageV2 (general purpose v2)** and **BlobStorage** support event integration. **Storage (genral purpose v1)** does *not* support integration with Event Grid.
+ > Only storage accounts of kind **StorageV2 (general purpose v2)** and **BlobStorage** support event integration. **Storage (general purpose v1)** does *not* support integration with Event Grid.
## Create a message endpoint Before subscribing to the events for the Blob storage, let's create the endpoint for the event message. Typically, the endpoint takes actions based on the event data. To simplify this quickstart, you deploy a [pre-built web app](https://github.com/Azure-Samples/azure-event-grid-viewer) that displays the event messages. The deployed solution includes an App Service plan, an App Service web app, and source code from GitHub.
event-grid Delivery Retry https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/kubernetes/delivery-retry.md
By default, Event Grid on Kubernetes delivers one event at a time to the subscri
[!INCLUDE [event-grid-preview-feature-note.md](../includes/event-grid-preview-feature-note.md)] > [!NOTE]
-> During the preview, Event Grid on Kubernetes features are supported through API version [2020-10-15-Preview](/rest/api/eventgrid/version2021-06-01-preview/event-subscriptions/create-or-update).
+> During the preview, Event Grid on Kubernetes features are supported through API version [2020-10-15-Preview](/rest/api/eventgrid/controlplane-version2021-06-01-preview/event-subscriptions/create-or-update).
## Retry schedule
There are two configurations that determine retry policy. They are:
An event is dropped if either of the limits of the retry policy is reached. These limits are configured on a per-subscription basis. The following section describes each one in further detail.
### Configuring defaults per subscriber
-You can also specify retry policy limits on a per subscription basis. See our [API documentation](/rest/api/eventgrid/version2021-06-01-preview/event-subscriptions/create-or-update) for information on configuring defaults per subscriber. Subscription level defaults override the Event Grid module on Kubernetes level configurations.
+You can also specify retry policy limits on a per subscription basis. See our [API documentation](/rest/api/eventgrid/controlplane-version2021-06-01-preview/event-subscriptions/create-or-update) for information on configuring defaults per subscriber. Subscription level defaults override the Event Grid module on Kubernetes level configurations.
The following example sets up a Web hook subscription with `maxNumberOfAttempts` set to 3 and `eventTimeToLiveInMinutes` set to 30 minutes.
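
A minimal sketch of such an event subscription body, using the property names described above, a placeholder webhook URL, and assuming the standard Event Grid `properties.retryPolicy` nesting, might look like this:

```json
{
    "properties": {
        "destination": {
            "endpointType": "WebHook",
            "properties": {
                "endpointUrl": "<your-webhook-endpoint>"
            }
        },
        "retryPolicy": {
            "maxNumberOfAttempts": 3,
            "eventTimeToLiveInMinutes": 30
        }
    }
}
```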
event-grid Event Handlers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/kubernetes/event-handlers.md
# Event handler destinations in Event Grid on Kubernetes
An event handler is any system that exposes an endpoint and is the destination for events sent by Event Grid. An event handler that receives an event acts upon it and uses the event payload to execute some logic, which might lead to the occurrence of new events.
-The way to configure Event Grid to send events to a destination is through the creation of an event subscription. It can be done through [Azure CLI](/cli/azure/eventgrid/event-subscription#az_eventgrid_event_subscription_create), [management SDK](../sdk-overview.md#management-sdks), or using direct HTTPs calls using the [2020-10-15-preview API](/rest/api/eventgrid/version2021-06-01-preview/event-subscriptions/create-or-update) version.
+You configure Event Grid to send events to a destination by creating an event subscription. You can do so through the [Azure CLI](/cli/azure/eventgrid/event-subscription#az_eventgrid_event_subscription_create), the [management SDK](../sdk-overview.md#management-sdks), or direct HTTPS calls using the [2020-10-15-preview API](/rest/api/eventgrid/controlplane-version2021-06-01-preview/event-subscriptions/create-or-update) version.
In general, Event Grid on Kubernetes can send events to any destination via **Webhooks**. Webhooks are HTTP(S) endpoints exposed by a service or workload to which Event Grid has access. The webhook can be a workload hosted in the same cluster, in the same network space, in the cloud, on-premises, or anywhere else that Event Grid can reach.
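
Because only the CloudEvents schema is supported (see the feature parity list below), the body a webhook receives is shaped like a CloudEvents 1.0 envelope. Here's a rough sketch; the event type, source, and data values are invented for illustration:

```json
{
    "specversion": "1.0",
    "type": "Contoso.Items.ItemReceived",
    "source": "/contoso/items",
    "id": "a89b61a2-5644-487a-8a86-4f4a1049846d",
    "time": "2022-02-03T09:00:00Z",
    "datacontenttype": "application/json",
    "data": {
        "itemSku": "1234567890"
    }
}
```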
In addition to Webhooks, Event Grid on Kubernetes can send events to the followi
## Feature parity
-Event Grid on Kubernetes offers a good level of feature parity with Azure Event Grid's support for event subscriptions. The following list enumerates the main differences in event subscription functionality. Apart from those differences, you can use Azure Event Grid's [REST api version 2020-10-15-preview](/rest/api/eventgrid/version2021-06-01-preview/event-subscriptions) as a reference when managing event subscriptions on Event Grid on Kubernetes.
+Event Grid on Kubernetes offers a good level of feature parity with Azure Event Grid's support for event subscriptions. The following list enumerates the main differences in event subscription functionality. Apart from those differences, you can use Azure Event Grid's [REST API version 2020-10-15-preview](/rest/api/eventgrid/controlplane-version2021-06-01-preview/event-subscriptions) as a reference when managing event subscriptions on Event Grid on Kubernetes.
-1. Use [REST api version 2020-10-15-preview](/rest/api/eventgrid/version2021-06-01-preview/event-subscriptions).
+1. Use [REST API version 2020-10-15-preview](/rest/api/eventgrid/controlplane-version2021-06-01-preview/event-subscriptions).
2. [Azure Event Grid trigger for Azure Functions](../../azure-functions/functions-bindings-event-grid-trigger.md?tabs=csharp%2Cconsole) isn't supported. You can use a WebHook destination type to deliver events to Azure Functions.
3. There's no [dead letter location](../manage-event-delivery.md#set-dead-letter-location) support. That means that you cannot use ``properties.deadLetterDestination`` in your event subscription payload.
4. Azure Relay's Hybrid Connections as a destination isn't supported yet.
-5. Only CloudEvents schema is supported. The supported schema value is "[CloudEventSchemaV1_0](/rest/api/eventgrid/version2021-06-01-preview/event-subscriptions/create-or-update#eventdeliveryschema)". Cloud Events schema is extensible and based on open standards.
-6. Labels ([properties.labels](/rest/api/eventgrid/version2021-06-01-preview/event-subscriptions/create-or-update#request-body)) aren't applicable to Event Grid on Kubernetes. Hence, they are not available.
-7. [Delivery with resource identity](/rest/api/eventgrid/version2021-06-01-preview/event-subscriptions/create-or-update#deliverywithresourceidentity) isn't supported. So, all properties for [Event Subscription Identity](/rest/api/eventgrid/version2021-06-01-preview/event-subscriptions/create-or-update#eventsubscriptionidentity) aren't supported.
+5. Only CloudEvents schema is supported. The supported schema value is "[CloudEventSchemaV1_0](/rest/api/eventgrid/controlplane-version2021-06-01-preview/event-subscriptions/create-or-update#eventdeliveryschema)". The CloudEvents schema is extensible and based on open standards.
+6. Labels ([properties.labels](/rest/api/eventgrid/controlplane-version2021-06-01-preview/event-subscriptions/create-or-update#request-body)) aren't applicable to Event Grid on Kubernetes. Hence, they are not available.
+7. [Delivery with resource identity](/rest/api/eventgrid/controlplane-version2021-06-01-preview/event-subscriptions/create-or-update#deliverywithresourceidentity) isn't supported. So, all properties for [Event Subscription Identity](/rest/api/eventgrid/controlplane-version2021-06-01-preview/event-subscriptions/create-or-update#eventsubscriptionidentity) aren't supported.
8. [Destination endpoint validation](../webhook-event-delivery.md#endpoint-validation-with-event-grid-events) isn't supported yet.
## Event filtering in event subscriptions
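
As a sketch of the filtering idea: an event subscription's `filter` object can narrow delivery by event type, and advanced filters can match on payload values. The event type, key, and values in this example are invented for illustration:

```json
{
    "properties": {
        "filter": {
            "includedEventTypes": [
                "Contoso.Items.ItemReceived"
            ],
            "advancedFilters": [
                {
                    "operatorType": "StringContains",
                    "key": "data.itemColor",
                    "values": [ "blue", "red" ]
                }
            ]
        }
    }
}
```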
event-grid Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/kubernetes/features.md
# Event Grid on Kubernetes with Azure Arc features
-Event Grid on Kubernetes offers a rich set of features that help you integrate your Kubernetes workloads and realize hybrid architectures. It shares the same [rest API](/rest/api/eventgrid/version2021-06-01-preview/topics) (starting with version 2020-10-15-preview), [Event Grid CLI](/cli/azure/eventgrid), Azure portal experience, [management SDKs](../sdk-overview.md#management-sdks), and [data plane SDKs](../sdk-overview.md#data-plane-sdks) with Azure Event Grid, the other edition of the same service. When you're ready to publish events, you can use the [data plane SDK examples provided in different languages](https://devblogs.microsoft.com/azure-sdk/event-grid-ga/) that work for both editions of Event Grid.
+Event Grid on Kubernetes offers a rich set of features that help you integrate your Kubernetes workloads and realize hybrid architectures. It shares the same [REST API](/rest/api/eventgrid/controlplane-version2021-06-01-preview/topics) (starting with version 2020-10-15-preview), [Event Grid CLI](/cli/azure/eventgrid), Azure portal experience, [management SDKs](../sdk-overview.md#management-sdks), and [data plane SDKs](../sdk-overview.md#data-plane-sdks) with Azure Event Grid, the other edition of the same service. When you're ready to publish events, you can use the [data plane SDK examples provided in different languages](https://devblogs.microsoft.com/azure-sdk/event-grid-ga/) that work for both editions of Event Grid.
Although Event Grid on Kubernetes and Azure Event Grid share many features and the goal is to provide the same user experience, there are some differences given the unique requirements they seek to meet and the stage each is at in its software lifecycle. For example, the only type of topic available in Event Grid on Kubernetes is Event Grid topics, which are sometimes also referred to as custom topics. Other types of topics (see below) are either not applicable or support for them isn't yet available. The main differences between the two editions of Event Grid are presented in the table below.
Although Event Grid on Kubernetes and Azure Event Grid share many features and t
| Feature | Event Grid on Kubernetes | Azure Event Grid |
|:--|:-:|:-:|
-| [Event Grid Topics](/rest/api/eventgrid/version2021-06-01-preview/topics) | ✔ | ✔ |
+| [Event Grid Topics](/rest/api/eventgrid/controlplane-version2021-06-01-preview/topics) | ✔ | ✔ |
| [CNCF Cloud Events schema](https://github.com/cloudevents/spec/blob/main/cloudevents/spec.md) | ✔ | ✔ |
| Event Grid and custom schemas | ✘* | ✔ |
| Reliable delivery | ✔ | ✔ |
Although Event Grid on Kubernetes and Azure Event Grid share many features and t
| Azure Relay's Hybrid Connections as a destination | ✘ | ✔ |
| [Advanced filtering](filter-events.md) | ✔*** | ✔ |
| [Webhook AuthN/AuthZ with AAD](../secure-webhook-delivery.md) | ✘ | ✔ |
-| [Event delivery with resource identity](/rest/api/eventgrid/version2021-06-01-preview/event-subscriptions/create-or-update) | ✘ | ✔ |
+| [Event delivery with resource identity](/rest/api/eventgrid/controlplane-version2021-06-01-preview/event-subscriptions/create-or-update) | ✘ | ✔ |
| Same set of data plane SDKs | ✔ | ✔ |
| Same set of management SDKs | ✔ | ✔ |
| Same Event Grid CLI | ✔ | ✔ |
event-grid Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/kubernetes/overview.md
Event Grid on Kubernetes supports various event-driven integration scenarios. Ho
"As an owner of a system deployed to a Kubernetes cluster, I want to communicate my system's state changes by publishing events and configuring routing of those events so that event handlers, under my control or otherwise, can process my system's events in a way they see fit."
-**Feature** that helps you realize above requirement: [Event Grid Topics](/rest/api/eventgrid/version2021-06-01-preview/topics).
+**Feature** that helps you realize the above requirement: [Event Grid Topics](/rest/api/eventgrid/controlplane-version2021-06-01-preview/topics).
### Event Grid on Kubernetes at a glance
From the user perspective, Event Grid on Kubernetes is composed of the following resources in blue:
With Event Grid on Kubernetes, you can forward events to Azure for further proce
Event handler destinations can be any HTTPS or HTTP endpoint that Event Grid can reach through the network, public or private, and to which it has access (that is, not protected by an authentication mechanism). You define event delivery destinations when you create an event subscription. For more information, see [event handlers](event-handlers.md).
## Features
-Event Grid on Kubernetes supports [Event Grid Topics](/rest/api/eventgrid/version2021-06-01-preview/topics), which is a feature also offered by [Azure Event Grid](../custom-topics.md). Event Grid topics help you realize the [primary integration use case](#use-case) where your requirements call for integrating your system with another workload that you own or otherwise is made accessible to your system.
+Event Grid on Kubernetes supports [Event Grid Topics](/rest/api/eventgrid/controlplane-version2021-06-01-preview/topics), which is a feature also offered by [Azure Event Grid](../custom-topics.md). Event Grid topics help you realize the [primary integration use case](#use-case) where your requirements call for integrating your system with another workload that you own or otherwise is made accessible to your system.
Some of the capabilities you get with Azure Event Grid on Kubernetes are:
expressroute Expressroute Locations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-locations.md
The following table shows locations by service provider. If you want to view ava
| **[Deutsche Telekom AG IntraSelect](https://geschaeftskunden.telekom.de/vernetzung-digitalisierung/produkt/intraselect)** | Supported |Supported | Frankfurt |
| **[Deutsche Telekom AG](https://geschaeftskunden.telekom.de/vernetzung-digitalisierung/produkt/intraselect)** | Supported |Supported | Frankfurt2 |
| **du datamena** |Supported |Supported | Dubai2 |
-| **eir** |Supported |Supported |Dublin|
+| **[eir](https://www.eirevo.ie/cloud-services/cloud-connectivity)** |Supported |Supported |Dublin|
| **[Epsilon Global Communications](https://www.epsilontel.com/solutions/direct-cloud-connect)** |Supported |Supported | Singapore, Singapore2 |
| **[Equinix](https://www.equinix.com/partners/microsoft-azure/)** |Supported |Supported | Amsterdam, Amsterdam2, Atlanta, Berlin, Bogota, Canberra2, Chicago, Dallas, Dubai2, Dublin, Frankfurt, Frankfurt2, Geneva, Hong Kong SAR, London, London2, Los Angeles*, Los Angeles2, Melbourne, Miami, Milan, New York, Osaka, Paris, Quebec City, Rio de Janeiro, Sao Paulo, Seattle, Seoul, Silicon Valley, Singapore, Singapore2, Stockholm, Sydney, Tokyo, Toronto, Washington DC, Zurich</br></br> **New ExpressRoute circuits are no longer supported with Equinix in Los Angeles. Please create new circuits in Los Angeles2.* |
| **Etisalat UAE** |Supported |Supported |Dubai|
expressroute Expressroute Troubleshooting Expressroute Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-troubleshooting-expressroute-overview.md
Test your private peering connectivity by **counting** packets arriving and leav
:::image type="content" source="./media/expressroute-troubleshooting-expressroute-overview/connectivity-issues.png" alt-text="Screenshot of connectivity issues option.":::
-1. In the dropdown for *Tell us more about the problem you are experiencing*, select **Connectivity to Azure Private, Azure Public, or Dynamics 365 services.**
+1. In the dropdown for *Tell us more about the problem you're experiencing*, select **Connectivity to Azure Private, Azure Public, or Dynamics 365 services.**
:::image type="content" source="./media/expressroute-troubleshooting-expressroute-overview/tell-us-more.png" alt-text="Screenshot of drop-down option for problem user is experiencing.":::
Test your private peering connectivity by **counting** packets arriving and leav
### Interpreting results
Your test results for each MSEE device will look like the example below. You'll have two sets of results for the primary and secondary MSEE devices. Review the number of matches in and out and use the following scenarios to interpret the results:
-* **You see packet matches sent and received on both MSEEs:** This indicates healthy traffic inbound to and outbound from the MSEE on your circuit. If loss is occurring either on-premises or in Azure, it is happening downstream from the MSEE.
+* **You see packet matches sent and received on both MSEEs:** This indicates healthy traffic inbound to and outbound from the MSEE on your circuit. If loss is occurring either on-premises or in Azure, it's happening downstream from the MSEE.
* **If testing PsPing from on-premises to Azure *(received)* results show matches, but *sent* results show NO matches:** This indicates that traffic is getting inbound to Azure, but isn't returning to on-premises. Check for return-path routing issues (for example, are you advertising the appropriate prefixes to Azure? Is there a UDR overriding prefixes?).
* **If testing PsPing from Azure to on-premises *(sent)* results show NO matches, but *(received)* results show matches:** This indicates that traffic is getting to on-premises, but isn't getting back. You should work with your provider to find out why traffic isn't being routed to Azure via your ExpressRoute circuit.
* **One MSEE shows NO matches, while the other shows good matches:** This indicates that one MSEE isn't receiving or passing any traffic. It could be offline (for example, BGP/ARP down).
This test result has the following properties:
* On-prem IP Address CIDR: 10.0.0.0
* Azure IP Address CIDR: 20.0.0.0
+## Verify virtual network gateway availability
+
+The ExpressRoute virtual network gateway facilitates the management and control plane connectivity to private link services and private IPs deployed to an Azure virtual network. The virtual network gateway infrastructure is managed by Microsoft and sometimes undergoes maintenance. During a maintenance period, performance of the virtual network gateway may be reduced. You can use the *Diagnose and Solve* experience within the ExpressRoute Circuit page to troubleshoot connectivity issues to the virtual network and reactively detect if recent maintenance events reduced the virtual network gateway capacity.
+
+1. To access this diagnostic tool, select **Diagnose and solve problems** from your ExpressRoute circuit in the Azure portal.
+
+ :::image type="content" source="./media/expressroute-troubleshooting-expressroute-overview/diagnose-problems.png" alt-text="Screenshot of selecting the diagnose and solve problem page from ExpressRoute circuit.":::
+
+1. Select the **Performance Issues** option.
+
+ :::image type="content" source="./media/expressroute-troubleshooting-expressroute-overview/performance-issues.png" alt-text="Screenshot of selecting the performance issue option.":::
+
+1. Wait for the diagnostics to run and interpret the results:
+
+ :::image type="content" source="./media/expressroute-troubleshooting-expressroute-overview/gateway-result.png" alt-text="Screenshot of the diagnostic results.":::
+
+ Review if your virtual network gateway recently underwent maintenance. If maintenance occurred during a period when you experienced packet loss or latency, it's possible that the reduced capacity of the virtual network gateway contributed to connectivity issues you're experiencing with the target virtual network. Follow the recommended steps and also consider upgrading the [virtual network gateway SKU](expressroute-about-virtual-network-gateways.md#gwsku) to support a higher network throughput and avoid connectivity issues during future maintenance events.
+
## Next Steps
For more information or help, check out the following links:
firewall Premium Certificates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/premium-certificates.md
Previously updated : 08/02/2021 Last updated : 02/03/2022
To configure your key vault:
- It's recommended to use a CA certificate import because it allows you to configure an alert based on certificate expiration date.
- After you've imported a certificate or a secret, you need to define access policies in the key vault to allow the identity to be granted get access to the certificate/secret.
- The provided CA certificate needs to be trusted by your Azure workload. Ensure it's deployed correctly.
+- The Key Vault Networking must be set to allow access from **All networks**.
+ :::image type="content" source="media/premium-certificates/keyvault-networking.png" alt-text="Screenshot showing Key Vault networking" lightbox="media/premium-certificates/keyvault-networking.png":::
You can either create or reuse an existing user-assigned managed identity, which Azure Firewall uses to retrieve certificates from Key Vault on your behalf. For more information, see [What is managed identities for Azure resources?](../active-directory/managed-identities-azure-resources/overview.md)
firewall Premium Deploy Certificates Enterprise Ca https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/premium-deploy-certificates-enterprise-ca.md
Previously updated : 07/15/2021 Last updated : 02/03/2022
To use an Enterprise CA to generate a certificate to use with Azure Firewall Pre
1. Select **Advanced Certificate Request**.
1. Select **Create and Submit a Request to this CA**.
1. Fill out the form using the Subordinate Certification Authority template.
+ :::image type="content" source="media/premium-deploy-certificates-enterprise-ca/advanced-certificate-request.png" alt-text="Screenshot of advanced certificate request":::
1. Submit the request and install the certificate.
1. Assuming this request is made from a Windows Server using Internet Explorer, open **Internet Options**.
1. Navigate to the **Content** tab and select **Certificates**.
+ :::image type="content" source="media/premium-deploy-certificates-enterprise-ca/internet-properties.png" alt-text="Screenshot of Internet properties":::
1. Select the certificate that was just issued and then select **Export**.
+ :::image type="content" source="media/premium-deploy-certificates-enterprise-ca/export-certificate.png" alt-text="Screenshot of export certificate":::
1. Select **Next** to begin the wizard. Select **Yes, export the private key**, and then select **Next**.
+ :::image type="content" source="media/premium-deploy-certificates-enterprise-ca/export-private-key.png" alt-text="Screenshot showing export private key":::
1. The .pfx file format is selected by default. Uncheck **Include all certificates in the certification path if possible**. If you export the entire certificate chain, the import process to Azure Firewall will fail.
+ :::image type="content" source="media/premium-deploy-certificates-enterprise-ca/export-file-format.png" alt-text="Screenshot showing export file format":::
1. Assign and confirm a password to protect the key, and then select **Next**.
+ :::image type="content" source="media/premium-deploy-certificates-enterprise-ca/certificate-security.png" alt-text="Screenshot showing certificate security":::
1. Choose a file name and export location and then select **Next**. 1. Select **Finish** and move the exported certificate to a secure location.
To use an Enterprise CA to generate a certificate to use with Azure Firewall Pre
1. In the Azure portal, navigate to the Certificates page of your Key Vault, and select **Generate/Import**. 1. Select **Import** as the method of creation, name the certificate, select the exported .pfx file, enter the password, and then select **Create**.
-1. Navigate to the **TLS Inspection** page of your Firewall policy and select your Managed identity, Key Vault, and certificate.
+ :::image type="content" source="media/premium-deploy-certificates-enterprise-ca/create-a-certificate.png" alt-text="Screenshot showing Key Vault create a certificate":::
+1. Navigate to the **TLS Inspection** page of your Firewall policy and select your Managed identity, Key Vault, and certificate.
+ :::image type="content" source="media/premium-deploy-certificates-enterprise-ca/tls-inspection-certificate.png" alt-text="Screenshot showing Firewall Policy TLS Inspection configuration":::
1. Select **Save**.
- :::image type="content" source="media/premium-deploy-certificates-enterprise-ca/tls-inspection.png" alt-text="TLS inspection":::
## Validate TLS inspection
1. Create an Application Rule using TLS inspection to the destination URL or FQDN of your choice. For example: `*bing.com`.
+ :::image type="content" source="media/premium-deploy-certificates-enterprise-ca/edit-rule-collection.png" alt-text="Screenshot showing edit rule collection":::
1. From a domain-joined machine within the Source range of the rule, navigate to your Destination and select the lock symbol next to the address bar in your browser. The certificate should show that it was issued by your Enterprise CA rather than a public CA.
+ :::image type="content" source="media/premium-deploy-certificates-enterprise-ca/browser-certificate.png" alt-text="Screenshot showing the browser certificate":::
1. Show the certificate to display more details, including the certificate path.
    :::image type="content" source="media/premium-deploy-certificates-enterprise-ca/certificate-details.png" alt-text="certificate details":::
1. In Log Analytics, run the following KQL query to return all requests that have been subject to TLS Inspection:
    ```
    AzureDiagnostics
- | where ResourceType == "AZUREFIREWALLS"
- | where Category == "AzureFirewallApplicationRule"
- | where msg_s contains "Url:"
- | sort by TimeGenerated desc
+ | where ResourceType == "AZUREFIREWALLS"
+ | where Category == "AzureFirewallApplicationRule"
+ | where msg_s contains "Url:"
+ | sort by TimeGenerated desc
    ```
    The result shows the full URL of inspected traffic:
    :::image type="content" source="media/premium-deploy-certificates-enterprise-ca/kql-query.png" alt-text="KQL query":::
firewall Premium Migrate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/premium-migrate.md
Previously updated : 01/31/2022 Last updated : 02/03/2022
The minimum Azure PowerShell version requirement is 6.5.0. For more information,
```
-- Allocate Firewall Premium
+- Allocate Firewall Premium (single public IP address)
```azurepowershell $azfw = Get-AzFirewall -Name "<firewall-name>" -ResourceGroupName "<resource-group-name>"
The minimum Azure PowerShell version requirement is 6.5.0. For more information,
Set-AzFirewall -AzureFirewall $azfw
```
+- Allocate Firewall Premium (multiple public IP addresses)
+
+ ```azurepowershell
+ $azfw = Get-AzFirewall -Name "FW Name" -ResourceGroupName "RG Name"
+ $azfw.Sku.Tier="Premium"
+ $vnet = Get-AzVirtualNetwork -ResourceGroupName "RG Name" -Name "VNet Name"
+ $publicip1 = Get-AzPublicIpAddress -Name "Public IP1 Name" -ResourceGroupName "RG Name"
+ $publicip2 = Get-AzPublicIpAddress -Name "Public IP2 Name" -ResourceGroupName "RG Name"
+ $azfw.Allocate($vnet,@($publicip1,$publicip2))
+ Set-AzFirewall -AzureFirewall $azfw
+ ```
- Allocate Firewall Premium in Forced Tunnel Mode

  ```azurepowershell
firewall Tutorial Hybrid Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/tutorial-hybrid-portal.md
If you want to use Azure PowerShell instead to complete this procedure, see [Dep
## Prerequisites
-A hybrid network uses the hub-and-spoke architecture model to route traffic between Azure VNets and on-premise networks. The hub-and-spoke architecture has the following requirements:
+A hybrid network uses the hub-and-spoke architecture model to route traffic between Azure VNets and on-premises networks. The hub-and-spoke architecture has the following requirements:
- Set **Use this virtual network's gateway or Route Server** when peering VNet-Hub to VNet-Spoke. In a hub-and-spoke network architecture, a gateway transit allows the spoke virtual networks to share the VPN gateway in the hub, instead of deploying VPN gateways in every spoke virtual network.
You can keep your firewall resources for further testing, or if no longer needed
Next, you can monitor the Azure Firewall logs.
-[Tutorial: Monitor Azure Firewall logs](./firewall-diagnostics.md)
+[Tutorial: Monitor Azure Firewall logs](./firewall-diagnostics.md)
hdinsight Hdinsight 36 Component Versioning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-36-component-versioning.md
The OSS component versions associated with HDInsight 3.6 are listed in the follo
| Apache Zeppelin | 0.7.3 |
| Mono | 4.2.1 |
+## HDInsight 3.6 to 4.0 Migration Guides
+- [Migrate Apache Spark 2.1 and 2.2 workloads to 2.3 and 2.4](spark/migrate-versions.md).
+- [Migrate Azure HDInsight 3.6 Hive workloads to HDInsight 4.0](interactive-query/apache-hive-migrate-workloads.md).
+- [Migrate Apache Kafka workloads to Azure HDInsight 4.0](kafk).
+- [Migrate an Apache HBase cluster to a new version](hbase/apache-hbase-migrate-new-version.md).
+
## Next steps
- [Cluster setup for Apache Hadoop, Spark, and more on HDInsight](hdinsight-hadoop-provision-linux-clusters.md)
iot-central Concepts Faq Start Iot Central https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/concepts-faq-start-iot-central.md
+
+ Title: Why should I start with IoT Central? | Microsoft Docs
+description: Describes why IoT Central should be the first step on your IoT journey.
++ Last updated : 02/02/2022++++++
+# Why should I start with IoT Central?
+
+You should start your IoT journey with Azure IoT Central. Starting as high as possible in the Azure IoT technology stack lets you focus your time on using IoT data to create business value instead of simply getting your IoT data.
+
+## Start with Azure IoT Central
+
+An application platform as a service (aPaaS) streamlines many of the complex decisions you face when you build an IoT solution. Many IoT projects are defunded because of the early-stage effort required simply to get IoT data. Use the capabilities and experiences in IoT Central to showcase the value of your IoT data without overburdening yourself with building the infrastructure for device connectivity and management.
+
+## The power of Azure PaaS is orchestrated and managed for you
+
+Azure IoT Central is an aPaaS offering that simplifies and accelerates IoT solution assembly and operation. It does this by pre-assembling platform as a service (PaaS) components into an extensible and fully managed app development platform hosted by Microsoft. IoT Central takes out much of the guesswork and complexity that's involved in building reliable, scalable, and secure IoT applications.
+
+## Capabilities to accelerate time to value
+
+An out-of-the-box web UX and API surface area make it simple to monitor device conditions, create rules, and manage millions of devices and their data remotely throughout their lifecycles. Furthermore, IoT Central enables you to act on insights from your device data by extending IoT intelligence into line-of-business applications. This extensibility is provided through data export and APIs.
+
+## Support for production deployment and operations
+
+Building and operating cloud solutions composed of numerous interconnected PaaS services requires expertise. That's why IoT Central also offers built-in disaster recovery, multitenancy, global availability, and a predictable cost structure.
++
+## Next steps
+
+Now that you've learned about starting with IoT Central, a suggested next step is to do a [Quickstart](quick-deploy-iot-central.md), read the [overview](overview-iot-central.md), or explore the [architecture](concepts-architecture.md).
iot-central Overview Iot Central Admin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/overview-iot-central-admin.md
An IoT Central application lets you monitor and manage millions of devices throu
IoT Central application administration includes the following tasks:
+- Create applications.
- Manage users and roles in the application.
- Create and manage organizations.
- Manage security such as device authentication.
IoT Central application administration includes the following tasks:
- Export and share applications.
- Monitor application health.
+## Create applications
+
+You use an *application template* to create an application. An application template consists of:
+
+- Sample dashboards
+- Sample device templates
+- Simulated devices producing real-time data
+- Pre-configured rules and jobs
+- Rich documentation including tutorials and how-tos
+
+You choose the application template when you create your application. You can't change the template an application uses after it's created.
+
+### Custom templates
+
+If you want to create your application from scratch, choose the **Custom application** template.
+
+You can also create and manage your own [custom application templates](howto-create-iot-central-application.md#create-and-use-a-custom-application-template) and [copy applications](howto-create-iot-central-application.md#copy-an-application) to create new ones.
+
+### Industry focused templates
+
+Azure IoT Central is an industry-agnostic application platform. Application templates are industry-focused examples available for these industries today:
++
+To learn more, see [Create a retail application](../retail/tutorial-in-store-analytics-create-app.md) as an example.
## Users and roles
IoT Central uses a role-based access control system to manage user permissions within an application. IoT Central has three built-in roles for administrators, solution builders, and operators. An administrator can create custom roles with specific sets of permissions. An administrator is responsible for adding users to an application and assigning them to roles.
iot-develop Quickstart Devkit Microchip Atsame54 Xpro https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-develop/quickstart-devkit-microchip-atsame54-xpro.md
Keep Termite open to monitor device output in the following steps.
* IAR Embedded Workbench for ARM (EW for ARM). You can download and install a [14-day free trial of IAR EW for ARM](https://www.iar.com/products/architectures/arm/iar-embedded-workbench-for-arm/).
-* Download the [Azure_RTOS_6.1_ATSAME54-XPRO_IAR_Samples_2020_10_10.zip](https://github.com/azure-rtos/samples/releases/download/v6.1_rel/Azure_RTOS_6.1_ATSAME54-XPRO_IAR_Samples_2021_11_03.zip) file and extract it to a working directory. Choose a directory with a short path to avoid compiler errors when you build.
+* Download the Microchip ATSAME54-XPRO IAR sample from [Azure RTOS samples](https://github.com/azure-rtos/samples/), and unzip it to a working directory. Choose a directory with a short path to avoid compiler errors when you build.
[!INCLUDE [iot-develop-embedded-create-central-app-with-device](../../includes/iot-develop-embedded-create-central-app-with-device.md)]
Keep Termite open to monitor device output in the following steps.
* [MPLAB XC32/32++ Compiler 2.4.0 or later](https://www.microchip.com/mplab/compilers).
-* Download the [Azure_RTOS_6.1_ATSAME54-XPRO_MPLab_Samples_2020_10_10.zip](https://github.com/azure-rtos/samples/releases/download/v6.1_rel/Azure_RTOS_6.1_ATSAME54-XPRO_MPLab_Samples_2021_11_03.zip) file and unzip it to a working directory. Choose a directory with a short path to avoid compiler errors when you build.
+* Download the Microchip ATSAME54-XPRO MPLab sample from [Azure RTOS samples](https://github.com/azure-rtos/samples/), and unzip it to a working directory. Choose a directory with a short path to avoid compiler errors when you build.
[!INCLUDE [iot-develop-embedded-create-central-app-with-device](../../includes/iot-develop-embedded-create-central-app-with-device.md)]
iot-develop Quickstart Devkit Nxp Mimxrt1060 Evk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-develop/quickstart-devkit-nxp-mimxrt1060-evk.md
Keep Termite open to monitor device output in the following steps.
* IAR Embedded Workbench for ARM (IAR EW). You can download and install a [14-day free trial of IAR EW for ARM](https://www.iar.com/products/architectures/arm/iar-embedded-workbench-for-arm/).
-* Download the [Azure_RTOS_6.1_MIMXRT1060_IAR_Samples_2021_11_03.zip](https://github.com/azure-rtos/samples/releases/download/v6.1_rel/Azure_RTOS_6.1_MIMXRT1060_IAR_Samples_2021_11_03.zip) file and extract it to a working directory. Choose a directory with a short path to avoid compiler errors when you build.
+* Download the NXP MIMXRT1060-EVK IAR sample from [Azure RTOS samples](https://github.com/azure-rtos/samples/), and unzip it to a working directory. Choose a directory with a short path to avoid compiler errors when you build.
[!INCLUDE [iot-develop-embedded-create-central-app-with-device](../../includes/iot-develop-embedded-create-central-app-with-device.md)]
Keep the terminal open to monitor device output in the following steps.
* Download the [MIMXRT1060-EVK SDK 2.9.0 or later](https://mcuxpresso.nxp.com/en/builder). After you sign in, the website lets you build a custom SDK archive to download. After you select the EVK MIMXRT1060 board and click the option to build the SDK, you can download the zip archive. The only SDK component to include is the preselected **SDMMC Stack**.
-* Download the [Azure_RTOS_6.1_MIMXRT1060_IAR_Samples_2021_11_03.zip](https://github.com/azure-rtos/samples/releases/download/v6.1_rel/Azure_RTOS_6.1_MIMXRT1060_IAR_Samples_2021_11_03.zip) file and extract it to a working directory. Choose a directory with a short path to avoid compiler errors when you build.
+* Download the NXP MIMXRT1060-EVK MCUXpresso sample from [Azure RTOS samples](https://github.com/azure-rtos/samples/), and unzip it to a working directory. Choose a directory with a short path to avoid compiler errors when you build.
[!INCLUDE [iot-develop-embedded-create-central-app-with-device](../../includes/iot-develop-embedded-create-central-app-with-device.md)]
iot-hub-device-update Components Enumerator https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/components-enumerator.md
+
+ Title: 'Register components with Device Update: Contoso Virtual Vacuum component enumerator | Microsoft Docs'
+description: Follow a Contoso Virtual Vacuum example to implement your own component enumerator by using proxy update.
++ Last updated : 12/3/2021++++
+# Register components with Device Update: Contoso Virtual Vacuum component enumerator
+
+This article shows an example implementation of the Contoso Virtual Vacuum component enumerator. You can reference this example to implement a custom component enumerator for your Internet of Things (IoT) devices. A *component* is an identity beneath the device level that has a composition relationship with the host device.
+
+## What is Contoso Virtual Vacuum?
+
+Contoso Virtual Vacuum is a virtual IoT device that we use to demonstrate the *proxy update* feature.
+
+Proxy update enables updating multiple components on the same IoT device or multiple sensors connected to the IoT device with a single over-the-air deployment. Proxy update supports an installation order for updating components. It also supports multiple-step updating with pre-installation, installation, and post-installation capabilities.
+
+Use cases where proxy updates are applicable include:
+
+- Targeting specific update files to partitions on the device.
+- Targeting specific update files to apps or components on the device.
+- Targeting specific update files to sensors connected to IoT devices over a network protocol (for example, USB or CAN bus).
+
+The Device Update Agent runs on the host device. It can send each update to a specific component or to a group of components of the same hardware class (that is, requiring the same software or firmware update).
+
+## Virtual Vacuum components
+
+For this demonstration, Contoso Virtual Vacuum consists of five logical components:
+
+- Host firmware
+- Host boot file system
+- Host root file system
+- Three motors (left wheel, right wheel, and vacuum)
+- Two cameras (front and rear)
++
+We used the following directory structure to simulate the components:
+
+```sh
+/usr/local/contoso-devices/vacuum-1/hostfw
+/usr/local/contoso-devices/vacuum-1/bootfs
+/usr/local/contoso-devices/vacuum-1/rootfs
+/usr/local/contoso-devices/vacuum-1/motors/0 /* left motor */
+/usr/local/contoso-devices/vacuum-1/motors/1 /* right motor */
+/usr/local/contoso-devices/vacuum-1/motors/2 /* vacuum motor */
+/usr/local/contoso-devices/vacuum-1/cameras/0 /* front camera */
+/usr/local/contoso-devices/vacuum-1/cameras/1 /* rear camera */
+```
+
+Each component's directory contains a JSON file that stores a mock software version number of each component. Example JSON files are *firmware.json* and *diskimage.json*.
+
+> [!NOTE]
+> For this demo, to update the components' firmware, we'll copy *firmware.json* or *diskimage.json* (update payload) to the targeted components' directory.
+
+Here's an example *firmware.json* file:
+
+```json
+{
+ "version": "0.5",
+ "description": "This component is generated for testing purposes."
+}
+```
+
+> [!NOTE]
+> Contoso Virtual Vacuum contains software or firmware versions for the purpose of demonstrating proxy update. It doesn't provide any other functionality.
+
+## What is a component enumerator?
+
+A component enumerator is a Device Update Agent extension that provides information about every component that you need for an over-the-air update via a host device's Azure IoT Hub connection.
+
+The Device Update Agent is device and component agnostic. By itself, the agent doesn't know anything about components on (or connected to) a host device at the time of the update.
+
+To enable proxy updates, device builders must identify all updateable components on the device and assign a unique name to each component. Also, a group name can be assigned to components of the same hardware class, so that the same update can be installed onto all components in the same group. The update content handler can then install and apply the update to the correct components.
++
+Here are the responsibilities of each part of the proxy update flow:
+
+- **Device builder**
+ - Design and build the device.
+ - Integrate the Device Update Agent and its dependencies.
+ - Implement a device-specific component enumerator extension and register with the Device Update Agent.
+
+ The component enumerator uses the information from a component inventory or a configuration file to augment static component data (Device Update required) with dynamic data (for example, firmware version, connection status, and hardware identity).
+ - Create a proxy update that contains one or more child updates that target one or more components on (or connected to) the device.
+ - Send the update to the solution operator.
+- **Solution operator**
+ - Import the update (and manifest) to the Device Update service.
+ - Deploy the update to a group of devices.
+- **Device Update Agent**
+ - Get update information from Azure IoT Hub (via device twin or module twin).
+ - Invoke a *steps handler* to process the proxy update intended for one or more components on the device.
+
+ This example has two updates: `host-fw-1.1` and `motors-fw-1.1`. For each child update, the parent steps handler invokes a child steps handler to enumerate all components that match the `Compatibilities` properties specified in the child update's manifest file. Next, the handler downloads, installs, and applies the child update to all targeted components.
+
+ To get the matching components, the child update calls a `SelectComponents` API provided by the component enumerator. If there are no matching components, the child update is skipped.
+ - Collect all update results from parent and child updates, and report those results to Azure IoT Hub.
+- **Child steps handler**
+ - Iterate through a list of component instances that are compatible with the child update content. For more information, see [Steps handler](https://github.com/Azure/iot-hub-device-update/tree/main/src/content_handlers/steps_handler).
++
+In production, device builders can use [existing handlers](https://github.com/Azure/iot-hub-device-update/tree/main/src/content_handlers) or implement a custom handler that invokes any installer needed for an over-the-air update. For more information, see [Implement a custom update content handler](https://github.com/Azure/iot-hub-device-update/tree/main/docs/agent-reference/how-to-implement-custom-update-handler.md).
+
+## Implement a component enumerator for the Device Update Agent (C language)
+
+### Requirements
+
+Implement all APIs declared in [component_enumerator_extension.hpp](https://github.com/Azure/iot-hub-device-update/tree/main/src/extensions/inc/aduc/component_enumerator_extension.hpp):
+
+| Function | Arguments | Returns |
+||||
+|`char* GetAllComponents()`|None|A JSON string that contains an array of *all* `ComponentInfo` values. For more information, see [Example return values](#example-return-values).|
+|`char* SelectComponents(char* selector)`|A JSON string that contains one or more name/value pairs used for selecting update target components| A JSON string that contains an array of `ComponentInfo` values. For more information, see [Example return values](#example-return-values).|
+|`void FreeComponentsDataString(char* string)`|A pointer to a string buffer previously returned by the `GetAllComponents` or `SelectComponents` functions|None|
+
+### ComponentInfo
+
+The `ComponentInfo` JSON string must include the following properties:
+
+| Name | Type | Description |
+||||
+|`id`| string | A component's unique identity (device scope). Examples include hardware serial number, disk partition ID, and unique file path of the component.|
+|`name`| string| A component's logical name. This is the name that a device builder assigns to a component that's available in every device of the same `device` class.<br/><br/>For example, every Contoso Virtual Vacuum device contains a motor that drives a left wheel. Contoso assigned *left motor* as a common (logical) name for this motor to easily refer to this component, instead of hardware ID, which can be globally unique.|
+|`group`|string|A group that this component belongs to.<br/><br/>For example, all motors could belong to a *motors* group.|
+|`manufacturer`|string|For a physical hardware component, this is a manufacturer or vendor name.<br/><br/>For a logical component, such as a disk partition or directory, it can be any device builder's defined value.|
+|`model`|string|For a physical hardware component, this is a model name.<br/><br/>For a logical component, such as a disk partition or directory, this can be any device builder's defined value.|
+|`properties`|object| A JSON object that contains any optional device-specific properties.|
+
+Here's an example of `ComponentInfo` code:
+
+```json
+{
+ "id": "contoso-motor-serial-00000",
+ "name": "left-motor",
+ "group": "motors",
+ "manufacturer": "contoso",
+ "model": "virtual-motor",
+ "properties": {
+ "path": "\/usr\/local\/contoso-devices\/vacuum-1\/motors\/0",
+ "firmwareDataFile": "firmware.json",
+ "status": "connected",
+ "version" : "motor-fw-1.0"
+ }
+}
+```
+
+### Example return values
+
+Following is a JSON document returned from the `GetAllComponents` function. It's based on the example implementation of the Contoso component enumerator.
+
+```json
+{
+ "components": [
+ {
+ "id": "hostfw",
+ "name": "hostfw",
+ "group": "firmware",
+ "manufacturer": "contoso",
+ "model": "virtual-firmware",
+ "properties": {
+ "path": "\/usr\/local\/contoso-devices\/vacuum-1\/hostfw",
+ "firmwareDataFile": "firmware.json",
+ "status": "ok",
+ "version" : "host-fw-1.0"
+ }
+ },
+ {
+ "id": "bootfs",
+ "name": "bootfs",
+ "group": "boot-image",
+ "manufacturer": "contoso",
+ "model": "virtual-disk",
+ "properties": {
+ "path": "\/usr\/local\/contoso-devices\/vacuum-1\/bootfs",
+ "firmwareDataFile": "diskimage.json",
+ "status": "ok",
+ "version" : "boot-fs-1.0"
+ }
+ },
+ {
+ "id": "rootfs",
+ "name": "rootfs",
+ "group": "os-image",
+ "manufacturer": "contoso",
+ "model": "virtual-os",
+ "properties": {
+ "path": "\/usr\/local\/contoso-devices\/vacuum-1\/rootfs",
+ "firmwareDataFile": "diskimage.json",
+ "status": "ok",
+ "version" : "root-fs-1.0"
+ }
+ },
+ {
+ "id": "contoso-motor-serial-00000",
+ "name": "left-motor",
+ "group": "motors",
+ "manufacturer": "contoso",
+ "model": "virtual-motor",
+ "properties": {
+ "path": "\/usr\/local\/contoso-devices\/vacuum-1\/motors\/0",
+ "firmwareDataFile": "firmware.json",
+ "status": "ok",
+ "version" : "motor-fw-1.0"
+ }
+ },
+ {
+ "id": "contoso-motor-serial-00001",
+ "name": "right-motor",
+ "group": "motors",
+ "manufacturer": "contoso",
+ "model": "virtual-motor",
+ "properties": {
+ "path": "\/usr\/local\/contoso-devices\/vacuum-1\/motors\/1",
+ "firmwareDataFile": "firmware.json",
+ "status": "ok",
+ "version" : "motor-fw-1.0"
+ }
+ },
+ {
+ "id": "contoso-motor-serial-00002",
+ "name": "vacuum-motor",
+ "group": "motors",
+ "manufacturer": "contoso",
+ "model": "virtual-motor",
+ "properties": {
+ "path": "\/usr\/local\/contoso-devices\/vacuum-1\/motors\/2",
+ "firmwareDataFile": "firmware.json",
+ "status": "ok",
+ "version" : "motor-fw-1.0"
+ }
+ },
+ {
+ "id": "contoso-camera-serial-00000",
+ "name": "front-camera",
+ "group": "cameras",
+ "manufacturer": "contoso",
+ "model": "virtual-camera",
+ "properties": {
+ "path": "\/usr\/local\/contoso-devices\/vacuum-1\/camera\/0",
+ "firmwareDataFile": "firmware.json",
+ "status": "ok",
+ "version" : "camera-fw-1.0"
+ }
+ },
+ {
+ "id": "contoso-camera-serial-00001",
+ "name": "rear-camera",
+ "group": "cameras",
+ "manufacturer": "contoso",
+ "model": "virtual-camera",
+ "properties": {
+ "path": "\/usr\/local\/contoso-devices\/vacuum-1\/camera\/1",
+ "firmwareDataFile": "firmware.json",
+ "status": "ok",
+ "version" : "camera-fw-1.0"
+ }
+ }
+ ]
+}
+```
+
+The following JSON document is returned from the `SelectComponents` function. It's based on the example implementation of the Contoso component enumerator.
+
+Here's the input parameter for selecting the *motors* component group:
+
+```json
+{
+ "group" : "motors"
+}
+```
+
+Here's the output of the parameter. All components belong to the *motors* group.
+
+```json
+{
+ "components": [
+ {
+ "id": "contoso-motor-serial-00000",
+ "name": "left-motor",
+ "group": "motors",
+ "manufacturer": "contoso",
+ "model": "virtual-motor",
+ "properties": {
+ "path": "\/usr\/local\/contoso-devices\/vacuum-1\/motors\/0",
+ "firmwareDataFile": "firmware.json",
+ "status": "ok",
+ "version" : "motor-fw-1.0"
+ }
+ },
+ {
+ "id": "contoso-motor-serial-00001",
+ "name": "right-motor",
+ "group": "motors",
+ "manufacturer": "contoso",
+ "model": "virtual-motor",
+ "properties": {
+ "path": "\/usr\/local\/contoso-devices\/vacuum-1\/motors\/1",
+ "firmwareDataFile": "firmware.json",
+ "status": "ok",
+ "version" : "motor-fw-1.0"
+ }
+ },
+ {
+ "id": "contoso-motor-serial-00002",
+ "name": "vacuum-motor",
+ "group": "motors",
+ "manufacturer": "contoso",
+ "model": "virtual-motor",
+ "properties": {
+ "path": "\/usr\/local\/contoso-devices\/vacuum-1\/motors\/2",
+ "firmwareDataFile": "firmware.json",
+ "status": "ok",
+ "version" : "motor-fw-1.0"
+ }
+ }
+ ]
+}
+```
+
+Here's the input parameter for selecting a single component named *hostfw*:
+
+```json
+{
+ "name" : "hostfw"
+}
+```
+
+Here's the parameter's output for the *hostfw* component:
+
+```json
+{
+ "components": [
+ {
+ "id": "hostfw",
+ "name": "hostfw",
+ "group": "firmware",
+ "manufacturer": "contoso",
+ "model": "virtual-firmware",
+ "properties": {
+ "path": "\/usr\/local\/contoso-devices\/vacuum-1\/hostfw",
+ "firmwareDataFile": "firmware.json",
+ "status": "ok",
+ "version" : "host-fw-1.0"
+ }
+ }
+ ]
+}
+```
+
+> [!NOTE]
+> The preceding example demonstrates that, if needed, it's possible to send a newer update to any instance of a component that's selected by the `name` property. For example, deploy the `motor-fw-2.0` update to *vacuum-motor* while continuing to use `motor-fw-1.0` on *left-motor* and *right-motor*.
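+
+As a sketch of that scenario, a selector targeting only the vacuum motor by name, following the same input format as the examples above, would look like this:
+
+```json
+{
+    "name" : "vacuum-motor"
+}
+```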
+
+## Inventory file
+
+The example implementation shown earlier for the Contoso component enumerator reads the device-specific component information from the *component-inventory.json* file. Note that this example implementation is only for demonstration purposes.
+
+In a production scenario, some properties should be retrieved directly from the actual components. These properties include `id`, `manufacturer`, and `model`.
+
+The device builder defines the `name` and `group` properties. These values should never change after they're defined. The `name` property must be unique within the device.
+
+### Example component-inventory.json file
+
+> [!NOTE]
+> The content in this file looks almost the same as the returned value from the `GetAllComponents` function. However, `ComponentInfo` in this file doesn't contain `version` and `status` properties. The component enumerator will populate these properties at runtime.
+
+For example, for *hostfw*, the value of the property `properties.version` will be populated from the specified (mock) `firmwareDataFile` value (*/usr/local/contoso-devices/vacuum-1/hostfw/firmware.json*).
+
+```json
+{
+ "components": [
+ {
+ "id": "hostfw",
+ "name": "hostfw",
+ "group": "firmware",
+ "manufacturer": "contoso",
+ "model": "virtual-firmware",
+ "properties": {
+ "path": "\/usr\/local\/contoso-devices\/vacuum-1\/hostfw",
+ "firmwareDataFile": "firmware.json",
+ }
+ },
+ {
+ "id": "bootfs",
+ "name": "bootfs",
+ "group": "boot-image",
+ "manufacturer": "contoso",
+ "model": "virtual-disk",
+ "properties": {
+ "path": "\/usr\/local\/contoso-devices\/vacuum-1\/bootfs",
+ "firmwareDataFile": "diskimage.json",
+ }
+ },
+ {
+ "id": "rootfs",
+ "name": "rootfs",
+ "group": "os-image",
+ "manufacturer": "contoso",
+ "model": "virtual-os",
+ "properties": {
+ "path": "\/usr\/local\/contoso-devices\/vacuum-1\/rootfs",
+ "firmwareDataFile": "diskimage.json",
+ }
+ },
+ {
+ "id": "contoso-motor-serial-00000",
+ "name": "left-motor",
+ "group": "motors",
+ "manufacturer": "contoso",
+ "model": "virtual-motor",
+ "properties": {
+ "path": "\/usr\/local\/contoso-devices\/vacuum-1\/motors\/0",
+ "firmwareDataFile": "firmware.json",
+ }
+ },
+ {
+ "id": "contoso-motor-serial-00001",
+ "name": "right-motor",
+ "group": "motors",
+ "manufacturer": "contoso",
+ "model": "virtual-motor",
+ "properties": {
+ "path": "\/usr\/local\/contoso-devices\/vacuum-1\/motors\/1",
+ "firmwareDataFile": "firmware.json",
+ }
+ },
+ {
+ "id": "contoso-motor-serial-00002",
+ "name": "vacuum-motor",
+ "group": "motors",
+ "manufacturer": "contoso",
+ "model": "virtual-motor",
+ "properties": {
+ "path": "\/usr\/local\/contoso-devices\/vacuum-1\/motors\/2",
+ "firmwareDataFile": "firmware.json",
+ }
+ },
+ {
+ "id": "contoso-camera-serial-00000",
+ "name": "front-camera",
+ "group": "cameras",
+ "manufacturer": "contoso",
+ "model": "virtual-camera",
+ "properties": {
+ "path": "\/usr\/local\/contoso-devices\/vacuum-1\/camera\/0",
+ "firmwareDataFile": "firmware.json",
+ }
+ },
+ {
+ "id": "contoso-camera-serial-00001",
+ "name": "rear-camera",
+ "group": "cameras",
+ "manufacturer": "contoso",
+ "model": "virtual-camera",
+ "properties": {
+ "path": "\/usr\/local\/contoso-devices\/vacuum-1\/camera\/1",
+ "firmwareDataFile": "firmware.json",
+ }
+ }
+ ]
+}
+```
+
+## Next steps
+
+This example is written in C++. You can choose to use C if you prefer. To explore the example source code, see:
+
+- [CMakeLists.txt](https://github.com/Azure/iot-hub-device-update/tree/main/src/extensions/contoso-component-enumerator/CMakeLists.txt)
+- [contoso-component-enumerator.cpp](https://github.com/Azure/iot-hub-device-update/tree/main/src/extensions/contoso-component-enumerator/contoso-component-enumerator.cpp)
+- [inc/aduc/component_enumerator_extension.hpp](https://github.com/Azure/iot-hub-device-update/tree/main/src/extensions/inc/aduc/component_enumerator_extension.hpp)
+
+For various sample updates for components connected to the Contoso Virtual Vacuum device, see [Proxy update demo](https://github.com/Azure/iot-hub-device-update/tree/main/src/extensions/component-enumerators/examples/contoso-component-enumerator/demo/README.md).
iot-hub-device-update Create Device Update Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/create-device-update-account.md
To get started with Device Update you'll need to create a Device Update account,
* [Microsoft Edge](https://www.microsoft.com/edge)
* Google Chrome
-## Create a device update account
+## Create a device update account and instance
1. Go to [Azure portal](https://portal.azure.com)
To get started with Device Update you'll need to create a Device Update account,
3. Click **Create** -> **Device Update for IoT Hub**
-4. Specify the Azure Subscription to be associated with your Device Update Account and Resource Group
-
-5. Specify a Name and Location for your Device Update Account
+4. Specify the Azure Subscription to be associated with your Device Update Account and Resource Group. Specify a Name and Location for your Device Update Account.
:::image type="content" source="media/create-device-update-account/account-details.png" alt-text="Screenshot of account details." lightbox="media/create-device-update-account/account-details.png"::: > [!NOTE] > You can go to [Azure Products-by-region page](https://azure.microsoft.com/global-infrastructure/services/?products=iot-hub) to discover the regions where Device Update for IoT Hub is available. If Device Update for IoT Hub is not available in your region you can choose to create an account in an available region closest to you.
-6. Optionally, you can check the box to assign the Device Update administrator role to yourself. You can also use the steps listed in the "Configure access control roles" section to provide a combination of roles to users and applications for the right level of access.
-
-8. Click **Next: Review + create>**
-
- :::image type="content" source="media/create-device-update-account/account-review.png" alt-text="Screenshot of account details review." lightbox="media/create-device-update-account/account-review.png":::
-
-7. Review the details and then select **Create**. You will see your deployment is in progress.
-
- :::image type="content" source="media/create-device-update-account/account-deployment-inprogress.png" alt-text="Screenshot of account deployment in progress." lightbox="media/create-device-update-account/account-deployment-inprogress.png":::
-
-8. You will see the deployment status change to "complete" in a few minutes. Click **Go to resource**
+5. Optionally, you can check the box to assign the Device Update administrator role to yourself. You can also use the steps listed in the "Configure access control roles" section to provide a combination of roles to users and applications for the right level of access.
- :::image type="content" source="media/create-device-update-account/account-complete.png" alt-text="Screenshot of account deployment complete." lightbox="media/create-device-update-account/account-complete.png":::
-
-## Create a device update instance
-
-An instance of Device Update is associated with a single IoT hub. Select the IoT hub that will be used with Device Update. We will create a new Shared Access policy during this step to ensure Device Update uses only the required permissions to work with IoT Hub (registry write and service connect). This policy ensures that access is only limited to Device Update.
+6. Click **Next: Instance**
-To create a Device Update instance after an account has been created.
+ An instance of Device Update is associated with a single IoT hub. Select the IoT hub that will be used with Device Update. We will create a new shared access policy during this step to ensure that Device Update uses only the permissions it needs to work with IoT Hub (registry write and service connect), and that access is limited to Device Update.
-1. Once you are in your newly created account resource, go to the Instance Management **Instances** blade
-
- :::image type="content" source="media/create-device-update-account/instance-blade.png" alt-text="Screenshot of instance management within account." lightbox="media/create-device-update-account/instance-blade.png":::
-
-2. Click **Create** and specify an instance name and select your IoT Hub
+7. Specify an instance name and select your IoT Hub
:::image type="content" source="media/create-device-update-account/instance-details.png" alt-text="Screenshot of instance details." lightbox="media/create-device-update-account/instance-details.png"::: > [!NOTE] > The IoT Hub you link to your Device Update resource, doesn't need to be in the same region as your Device Update Account. However, for better performance it is recommended that your IoT Hub be in a region same as or close to the region of your Device Update account.
-3. Click **Create**. You will see the instance in a "Creating" state.
-
- :::image type="content" source="media/create-device-update-account/instance-creating.png" alt-text="Screenshot of instance creating." lightbox="media/create-device-update-account/instance-creating.png":::
+8. Click **Next: Review + Create**. After validation, click **Create**.
-4. Allow 5-10 mins for the instance deployment to complete. Refresh the status till you see the "Provisioning State" turn to "Succeeded".
-
- :::image type="content" source="media/create-device-update-account/instance-succeeded.png" alt-text="Screenshot of instance creation succeeded." lightbox="media/create-device-update-account/instance-succeeded.png":::
-
-## Configure IoT Hub
-
-In order for Device Update to receive change notifications from IoT Hub, Device Update integrates with the "Built-In" Event Hub. Clicking the "Configure IoT Hub" button configures the required message routes and access policy required to communicate with IoT devices.
-
-To configure IoT Hub
-
-1. Once the Instance "Provisioning State" turns to "Succeeded", select the instance in the Instance Management blade. Click **Configure IoT Hub**
-
- :::image type="content" source="media/create-device-update-account/instance-configure.png" alt-text="Screenshot of configuring IoT Hub for an instance." lightbox="media/create-device-update-account/instance-configure.png":::
-
-2. Select **I agree to make these changes**
-
- :::image type="content" source="media/create-device-update-account/instance-configure-selected.png" alt-text="Screenshot of agreeing to configure IoT Hub for an instance." lightbox="media/create-device-update-account/instance-configure-selected.png":::
-
-3. Click **Update**
+ :::image type="content" source="media/create-device-update-account/account-review.png" alt-text="Screenshot of account review." lightbox="media/create-device-update-account/account-review.png":::
+
+9. You will see your deployment is in progress. The deployment status will change to "complete" in a few minutes. Click **Go to resource**
- > [!NOTE]
- > If you are using a Free tier of Azure IoT Hub, the allowed number of message routes are limited to 5. Device Update for IoT Hub needs to configure 4 message routes to work as expected.
+ :::image type="content" source="media/create-device-update-account/account-complete.png" alt-text="Screenshot of account deployment complete." lightbox="media/create-device-update-account/account-complete.png":::
-[Learn about the message routes that are configured.](device-update-resources.md)
## Configure access control roles
iot-hub-device-update Create Update Group https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/create-update-group.md
# Create device groups in Device Update for IoT Hub Device Update for IoT Hub allows deploying an update to a group of IoT devices.
+ > [!NOTE]
+ > If you would like to deploy to a default group instead of a user-created group, you can move directly to [How to Deploy an Update](deploy-update.md).
++ ## Prerequisites * [Access to an IoT Hub with Device Update for IoT Hub enabled](create-device-update-account.md). It is recommended that you use a S1 (Standard) tier or above for your IoT Hub.
Tags can also be added or updated in Device twin or Module Twin directly.
2. Select the IoT Hub you previously connected to your Device Update instance.
-3. Select the Device Updates option under Automatic Device Management from the left-hand navigation bar.
+3. Select the Updates option under Device Management from the left-hand navigation bar.
-4. Select the Groups tab at the top of the page. You will be able to see the number of devices connected to Device Update that are not grouped yet.
+4. Select the Groups and Deployments tab at the top of the page.
:::image type="content" source="media/create-update-group/ungrouped-devices.png" alt-text="Screenshot of ungrouped devices." lightbox="media/create-update-group/ungrouped-devices.png":::
-5. Select the Add button to create a new group.
+5. Select the "Add group" button to create a new group.
:::image type="content" source="media/create-update-group/add-group.png" alt-text="Screenshot of device group addition." lightbox="media/create-update-group/add-group.png":::
-6. Select an IoT Hub tag from the list and then select Create update group.
+6. Select an IoT Hub tag and Device Class from the list and then select Create group.
:::image type="content" source="media/create-update-group/select-tag.png" alt-text="Screenshot of tag selection." lightbox="media/create-update-group/select-tag.png":::
-7. Once the group is created, you will see that the update compliance chart and groups list are updated. Update compliance chart shows the count of devices in various states of compliance: On latest update, New updates available, Updates in Progress and Devices not yet Grouped. [Learn about update compliance.](device-update-compliance.md)
+7. Once the group is created, you will see that the update compliance chart and groups list are updated. The update compliance chart shows the count of devices in various states of compliance: On latest update, New updates available, and Updates in Progress. [Learn about update compliance.](device-update-compliance.md)
:::image type="content" source="media/create-update-group/updated-view.png" alt-text="Screenshot of update compliance view." lightbox="media/create-update-group/updated-view.png":::
-8. You should see your newly created group and any available updates for the devices in the new group. You can deploy the update to the new group from this view by clicking on the update name. See Next Step: Deploy Update for more details.
+8. You should see your newly created group and any available updates for the devices in the new group. If there are devices that don't meet the device class requirements of the group, they will show up in a corresponding invalid group. You can deploy the best available update to the new user-defined group from this view by clicking on the "Deploy" button next to the group. See Next Step: Deploy Update for more details.
## View Device details for the group you created
iot-hub-device-update Create Update https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/create-update.md
+
+ Title: How to prepare an update to be imported into Azure Device Update for IoT Hub | Microsoft Docs
+description: How-To guide for preparing to import a new update into Azure Device Update for IoT Hub.
++ Last updated : 1/28/2022++++
+# Prepare an update to import into Device Update for IoT Hub
+
+Learn how to obtain a new update and prepare the update for importing into Device Update for IoT Hub.
+
+## Prerequisites
+
+* [Access to an IoT Hub with Device Update for IoT Hub enabled](create-device-update-account.md).
+* An IoT device (or simulator) [provisioned for Device Update](device-update-agent-provisioning.md) within IoT Hub.
+* [PowerShell 5](/powershell/scripting/install/installing-powershell) or later (includes Linux, macOS, and Windows installs)
+* Supported browsers:
+ * [Microsoft Edge](https://www.microsoft.com/edge)
+ * Google Chrome
+
+## Obtain an update for your devices
+
+Now that you've set up Device Update and provisioned your devices, you'll need the update file(s) that you'll be deploying to those devices.
+
+* If you've purchased devices from an Original Equipment Manufacturer (OEM) or solution integrator, that organization will most likely provide update files for you, without you needing to create the updates. Contact the OEM or solution integrator to find out how they make updates available.
+
+* If your organization already creates software for the devices you use, that same group will be the ones to create the updates for that software.
+
+When creating an update to be deployed using Device Update for IoT Hub, start with either the [image-based or package-based approach](understand-device-update.md#support-for-a-wide-range-of-update-artifacts) depending on your scenario.
+
+## Create a basic Device Update import manifest
+
+Once you have your update files, create an import manifest to describe the update. If you haven't already done so, be sure to familiarize yourself with the basic [import concepts](import-concepts.md). While it is possible to author an import manifest JSON manually using a text editor, this guide uses PowerShell as an example.
+
+> [!TIP]
+> Try the [image-based](device-update-raspberry-pi.md), [package-based](device-update-ubuntu-agent.md), or [proxy update](device-update-howto-proxy-updates.md) tutorials if you haven't already done so. You can also just view sample import manifest files from those tutorials for reference.
+
+1. [Clone](https://docs.github.com/en/repositories/creating-and-managing-repositories/cloning-a-repository) `Azure/iot-hub-device-update` [Git repository](https://github.com/Azure/iot-hub-device-update).
+
+2. Navigate to `Tools/AduCmdlets` in your local clone from PowerShell.
+
+3. Run the following commands after replacing the sample parameter values with your own. See [Import schema and API information](import-schema.md) for details on what values you can use. In particular, be aware that the exact same set of compatibility properties cannot be used with more than one Provider and Name combination.
+
+ ```powershell
+ Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope Process
+
+ Import-Module ./AduUpdate.psm1
+
+ $updateId = New-AduUpdateId -Provider Contoso -Name Toaster -Version 1.0
+
+ $compat = New-AduUpdateCompatibility -Properties @{ deviceManufacturer = 'Contoso'; deviceModel = 'Toaster' }
+
+ $installStep = New-AduInstallationStep -Handler 'microsoft/swupdate:1' -HandlerProperties @{ installedCriteria = '1.0' } -Files 'path to your update file'
+
+ $update = New-AduImportManifest -UpdateId $updateId -Compatibility $compat -InstallationSteps $installStep
+
+ # Write the import manifest to a file, ideally next to the update file(s).
+ $update | Out-File "./$($updateId.provider).$($updateId.name).$($updateId.version).importmanifest.json" -Encoding utf8
+ ```
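+
+To sanity-check the result, you can parse the generated manifest back into PowerShell (a minimal sketch, assuming the example `Contoso`/`Toaster`/`1.0` values above were left unchanged):
+
+```powershell
+# Confirm the generated import manifest is well-formed JSON and inspect its fields.
+Get-Content ./Contoso.Toaster.1.0.importmanifest.json -Raw | ConvertFrom-Json | Format-List
+```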
+
+Once you've created your import manifest and are ready to import your update, scroll to the Next steps link at the bottom of this page.
+
+## Create an advanced Device Update import manifest for a proxy update
+
+If your update is more complex, such as a [proxy update](device-update-proxy-updates.md), you may need to create multiple import manifests. You can use the same PowerShell module from the previous section to create parent and child import manifests for complex updates. Run the following commands after replacing the sample parameter values with your own. See [Import schema and API information](import-schema.md) for details on what values you can use.
+
+ ```powershell
+ Import-Module $PSScriptRoot/AduUpdate.psm1 -ErrorAction Stop
+
+ # Example values for this sketch; replace with your own update version and output folder.
+ $UpdateVersion = '1.0'
+ $Path = './proxy-update-output'
+
+ # We will use arbitrary files as update payload files.
+ $childFile = "$env:TEMP/childFile.bin.txt"
+ $parentFile = "$env:TEMP/parentFile.bin.txt"
+ "This is a child update payload file." | Out-File $childFile -Force -Encoding utf8
+ "This is a parent update payload file." | Out-File $parentFile -Force -Encoding utf8
+
+ #
+ # Create a child update
+ #
+ Write-Host 'Preparing child update ...'
+
+ $microphoneUpdateId = New-AduUpdateId -Provider Contoso -Name Microphone -Version $UpdateVersion
+ $microphoneCompat = New-AduUpdateCompatibility -DeviceManufacturer Contoso -DeviceModel Microphone
+ $microphoneInstallStep = New-AduInstallationStep -Handler 'microsoft/swupdate:1' -Files $childFile
+ $microphoneUpdate = New-AduImportManifest -UpdateId $microphoneUpdateId `
+ -IsDeployable $false `
+ -Compatibility $microphoneCompat `
+ -InstallationSteps $microphoneInstallStep `
+ -ErrorAction Stop -Verbose:$VerbosePreference
+
+ #
+ # Create another child update
+ #
+ Write-Host 'Preparing another child update ...'
+
+ $speakerUpdateId = New-AduUpdateId -Provider Contoso -Name Speaker -Version $UpdateVersion
+ $speakerCompat = New-AduUpdateCompatibility -DeviceManufacturer Contoso -DeviceModel Speaker
+ $speakerInstallStep = New-AduInstallationStep -Handler 'microsoft/swupdate:1' -Files $childFile
+ $speakerUpdate = New-AduImportManifest -UpdateId $speakerUpdateId `
+ -IsDeployable $false `
+ -Compatibility $speakerCompat `
+ -InstallationSteps $speakerInstallStep `
+ -ErrorAction Stop -Verbose:$VerbosePreference
+
+ #
+ # Create the parent update which parents the child update above
+ #
+ Write-Host 'Preparing parent update ...'
+
+ $parentUpdateId = New-AduUpdateId -Provider Contoso -Name Toaster -Version $UpdateVersion
+ $parentCompat = New-AduUpdateCompatibility -DeviceManufacturer Contoso -DeviceModel Toaster
+ $parentSteps = @()
+ $parentSteps += New-AduInstallationStep -Handler 'microsoft/script:1' -Files $parentFile -HandlerProperties @{ 'arguments'='--pre'} -Description 'Pre-install script'
+ $parentSteps += New-AduInstallationStep -UpdateId $microphoneUpdateId -Description 'Microphone Firmware'
+ $parentSteps += New-AduInstallationStep -UpdateId $speakerUpdateId -Description 'Speaker Firmware'
+ $parentSteps += New-AduInstallationStep -Handler 'microsoft/script:1' -Files $parentFile -HandlerProperties @{ 'arguments'='--post'} -Description 'Post-install script'
+
+ $parentUpdate = New-AduImportManifest -UpdateId $parentUpdateId `
+ -Compatibility $parentCompat `
+ -InstallationSteps $parentSteps `
+ -ErrorAction Stop -Verbose:$VerbosePreference
+
+ #
+ # Write all to files
+ #
+ Write-Host 'Saving manifest and update files ...'
+
+ New-Item $Path -ItemType Directory -Force | Out-Null
+
+ $microphoneUpdate | Out-File "$Path/$($microphoneUpdateId.Provider).$($microphoneUpdateId.Name).$($microphoneUpdateId.Version).importmanifest.json" -Encoding utf8
+ $speakerUpdate | Out-File "$Path/$($speakerUpdateId.Provider).$($speakerUpdateId.Name).$($speakerUpdateId.Version).importmanifest.json" -Encoding utf8
+ $parentUpdate | Out-File "$Path/$($parentUpdateId.Provider).$($parentUpdateId.Name).$($parentUpdateId.Version).importmanifest.json" -Encoding utf8
+
+ Copy-Item $parentFile -Destination $Path -Force
+ Copy-Item $childFile -Destination $Path -Force
+
+ Write-Host "Import manifest JSON files saved to $Path" -ForegroundColor Green
+
+ Remove-Item $childFile -Force -ErrorAction SilentlyContinue | Out-Null
+ Remove-Item $parentFile -Force -ErrorAction SilentlyContinue | Out-Null
+ ```
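+
+With the example values assumed above (`$UpdateVersion = '1.0'`, `$Path = './proxy-update-output'`), the script leaves three import manifests plus the two payload files in `$Path`. A quick way to confirm:
+
+```powershell
+# List the generated import manifests and payload files.
+Get-ChildItem $Path | Select-Object Name
+```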
+
+## Next steps
+
+* [Import an update](import-update.md)
+* [Learn about import concepts](import-concepts.md)
iot-hub-device-update Deploy Update https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/deploy-update.md
Learn how to deploy an update to an IoT device using Device Update for IoT Hub.
* [Access to an IoT Hub with Device Update for IoT Hub enabled](create-device-update-account.md). It is recommended that you use a S1 (Standard) tier or above for your IoT Hub. * [At least one update has been successfully imported for the provisioned device.](import-update.md) * An IoT device (or simulator) provisioned for Device Update within IoT Hub.
-* [A tag has been assigned to the IoT device you are trying to update. The device is part of at least one update group.](create-update-group.md)
+* [The device is part of at least one default group or user-created update group.](create-update-group.md)
* Supported browsers: * [Microsoft Edge](https://www.microsoft.com/edge) * Google Chrome
Learn how to deploy an update to an IoT device using Device Update for IoT Hub.
:::image type="content" source="media/deploy-update/device-update-iot-hub.png" alt-text="IoT Hub" lightbox="media/deploy-update/device-update-iot-hub.png":::
-3. Select the Groups tab at the top of the page. [Learn More](device-update-groups.md) about device groups.
+3. Select the Groups and Deployments tab at the top of the page. [Learn More](device-update-groups.md) about device groups.
- :::image type="content" source="media/deploy-update/updated-view.png" alt-text="Groups tab" lightbox="media/deploy-update/updated-view.png":::
+ :::image type="content" source="media/deploy-update/updated-view.png" alt-text="Groups and Deployments tab" lightbox="media/deploy-update/updated-view.png":::
-4. View the update compliance chart and groups list. You should see a new update available for your device group, with a link to the update under Pending Updates (you may need to Refresh once). [Learn More about update compliance.](device-update-compliance.md)
+4. View the update compliance chart and groups list. You should see a new update available for your device group listed under Best Update (you may need to Refresh once). [Learn More about update compliance.](device-update-compliance.md)
-5. Select the available update.
+5. Select the target group by clicking on the group name. You will be directed to the group details under Group basics.
-6. Confirm the correct group is selected as the target group. Schedule your deployment, then select Deploy update.
+ :::image type="content" source="media/deploy-update/group-basics.png" alt-text="Group details" lightbox="media/deploy-update/group-basics.png":::
- :::image type="content" source="media/deploy-update/select-update.png" alt-text="Select update" lightbox="media/deploy-update/select-update.png":::
+6. To initiate the deployment, go to the Current deployment tab. Click the deploy link next to the desired update from the Available updates section. The best available update for a given group will be denoted with a "Best" highlight.
-7. View the compliance chart. You should see the update is now in progress.
+ :::image type="content" source="media/deploy-update/select-update.png" alt-text="Select update" lightbox="media/deploy-update/select-update.png":::
+
+7. Schedule your deployment to start immediately or in the future, then select Create.
+ > [!TIP]
+ > By default, the Start date/time is 24 hours from your current time. Be sure to select a different date/time if you want the deployment to begin earlier.
+ :::image type="content" source="media/deploy-update/create-deployment.png" alt-text="Create deployment" lightbox="media/deploy-update/create-deployment.png":::
+
+8. The Status under Deployment details should turn to Active, and the deployed update should be marked with "(deploying)".
+
+ :::image type="content" source="media/deploy-update/deployment-active.png" alt-text="Deployment active" lightbox="media/deploy-update/deployment-active.png":::
+
+9. View the compliance chart. You should see the update is now in progress.
:::image type="content" source="media/deploy-update/update-in-progress.png" alt-text="Update in progress" lightbox="media/deploy-update/update-in-progress.png":::
-8. After your device is successfully updated, you should see your compliance chart and deployment details update to reflect the same.
+10. After your device is successfully updated, you should see your compliance chart and deployment details update to reflect the same.
:::image type="content" source="media/deploy-update/update-succeeded.png" alt-text="Update succeeded" lightbox="media/deploy-update/update-succeeded.png"::: ## Monitor an update deployment
-1. Select the Deployments tab at the top of the page.
+1. Select the Deployment history tab at the top of the page.
- :::image type="content" source="media/deploy-update/deployments-tab.png" alt-text="Deployments tab" lightbox="media/deploy-update/deployments-tab.png":::
+ :::image type="content" source="media/deploy-update/deployments-history.png" alt-text="Deployment History" lightbox="media/deploy-update/deployments-history.png":::
-2. Select the deployment you created to view the deployment details.
+2. Select the details link next to the deployment you created.
:::image type="content" source="media/deploy-update/deployment-details.png" alt-text="Deployment details" lightbox="media/deploy-update/deployment-details.png":::
-3. Select Refresh to view the latest status details. Continue this process until the status changes to Succeeded.
+3. Select Refresh to view the latest status details.
## Retry an update deployment If your deployment fails for some reason, you can retry the deployment for failed devices.
-1. Go to the Deployments tab, and select the deployment that has failed.
-
- :::image type="content" source="media/deploy-update/deployment-details.png" alt-text="Deployment details" lightbox="media/deploy-update/deployment-details.png":::
+1. Go to the Current deployment tab from the group details.
-2. Click on the "Failed" Device Status in the detailed Deployment information pane.
+ :::image type="content" source="media/deploy-update/deployment-active.png" alt-text="Deployment active" lightbox="media/deploy-update/deployment-active.png":::
-3. Click on "Retry failed devices" and acknowledge the confirmation notification.
+2. Click on "Retry failed devices" and acknowledge the confirmation notification.
## Next steps
iot-hub-device-update Device Update Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/device-update-agent-overview.md
allowing for messaging to flow between the Device Update Agent and Device Update
## The Interface Layer
-The Interface layer is made up of the [ADU Core Interface](https://github.com/Azure/iot-hub-device-update/tree/main/src/agent/adu_core_interface) and the [Device Information Interface](https://github.com/Azure/iot-hub-device-update/tree/main/src/agent/device_info_interface).
+The Interface layer is made up of the [Device Update Core Interface](https://github.com/Azure/iot-hub-device-update/tree/main/src/agent/adu_core_interface) and the [Device Information Interface](https://github.com/Azure/iot-hub-device-update/tree/main/src/agent/device_info_interface).
-These interfaces rely on a configuration file for default values. The default values include aduc_manufacturer and aduc_model for the AzureDeviceUpdateCore interface and model and manufacturer for the DeviceInformation interface. [Learn More](device-update-configuration-file.md) the configuration file.
+These interfaces rely on a configuration file for the device-specific values that need to be reported to the Device Update services. [Learn More](device-update-configuration-file.md) about the configuration file.
-### ADU Core Interface
+### Device Update Core Interface
-The 'ADU Core' interface is the primary communication channel between Device Update Agent and Services. [Learn More](device-update-plug-and-play.md#adu-core-interface) about this interface.
+The 'Device Update Core' interface is the primary communication channel between the Device Update Agent and services. [Learn More](device-update-plug-and-play.md#device-update-core-interface) about this interface.
### Device Information Interface
The Device Information Interface is used to implement the `Azure IoT PnP DeviceI
## The Platform Layer
-There are two implementations of the Platform Layer. The Simulator Platform
-Layer has a trivial implementation of the update actions and is used for quickly
-testing and evaluating Device Update for IoT Hub services and setup. When the Device Update Agent is built with
-the Simulator Platform Layer, we refer to it as the Device Update Simulator Agent or just
-simulator. [Learn More](https://github.com/Azure/iot-hub-device-update/blob/main/docs/agent-reference/how-to-run-agent.md) about how to use the simulator
-agent. The Linux Platform Layer integrates with [Delivery Optimization](https://github.com/microsoft/do-client) for
+The Linux Platform Layer integrates with [Delivery Optimization](https://github.com/microsoft/do-client) for
downloads and is used in our Raspberry Pi reference image, and all clients that run on Linux systems.
-### Simulator Platform Layer
-
-The Simulator Platform Layer implementation can be found in the
-`src/platform_layers/simulator_platform_layer` and can be used for
-testing and evaluating Device Update for IoT Hub. Many of the actions in the
-"simulator" implementation are mocked to reduce physical changes to experiment with Device Update for IoT Hub. An end to end
-"simulated" update can be performed using this Platform Layer. [Learn
-More](https://github.com/Azure/iot-hub-device-update/blob/main/docs/agent-reference/how-to-run-agent.md) about running the simulator agent.
- ### Linux Platform Layer The Linux Platform Layer implementation can be found in the
-`src/platform_layers/linux_platform_layer` and it integrates with the [Delivery Optimization Client](https://github.com/microsoft/do-client/releases) for downloads and is used in our Raspberry Pi reference image, and all clients that run on Linux systems.
+`src/platform_layers/linux_platform_layer` and it integrates with the [Delivery Optimization Client](https://github.com/microsoft/do-client/releases) for downloads.
This layer can integrate with different Update Handlers to implement the
-installer. For
-instance, the `SWUpdate` Update Handler invokes a shell script to call into the
-`SWUpdate` executable to perform an update.
+installers; for instance, the `SWUpdate`, 'Apt', and 'Script' update handlers.
## Update Handlers
-Update Handlers are components that handle content or installer-specific parts
-of the update. Update Handler implementations are in `src/content_handlers`.
-
-### Simulator Update Handler
+Update Handlers are used to invoke installers or commands to perform an over-the-air update. You can either use [existing update content handlers](https://github.com/Azure/iot-hub-device-update/tree/main/src/content_handlers) or [implement a custom Content Handler](https://github.com/Azure/iot-hub-device-update/tree/main/docs/agent-reference/how-to-implement-custom-update-handler.md) that can invoke any installer and execute the over-the-air update needed for your use case.
-The Simulator Update Handler is used by the Simulator Platform Layer and can
-be used with the Linux Platform Layer to fake interactions with a Content
-Handler. The Simulator Update Handler implements the Update Handler APIs with
-mostly no-ops. The implementation of the Simulator Update Handler can be found below:
-* [Image update simulator](https://github.com/Azure/iot-hub-device-update/blob/main/src/content_handlers/swupdate_handler/inc/aduc/swupdate_simulator_handler.hpp)
-* [Package update apt simulator](https://github.com/Azure/iot-hub-device-update/blob/main/src/content_handlers/apt_handler/inc/aduc/apt_simulator_handler.hpp)
+## Updating to the latest Device Update agent
->[!Note]
->The InstalledCriteria field in the AzureDeviceUpdateCore PnP interface should be the sha256 hash of the content. This is the same hash that is present in the [Import Manifest
-Object](import-update.md#create-a-device-update-import-manifest). [Learn More](device-update-plug-and-play.md) about `installedCriteria` and the `AzureDeviceUpdateCore` interface.
+We have added many new capabilities to the Device Update agent in the latest Public Preview Refresh agent (version 0.8.0). See [list of new capabilities](https://github.com/Azure/iot-hub-device-update/blob/main/docs/agent-reference/whats-new.md) for details.
-### `SWUpdate` Update Handler
+If you are using Device Update agent version 0.6.0 or 0.7.0, please migrate to the latest agent version 0.8.0. See the [Public Preview Refresh agent changes and how to upgrade](migration-pp-to-ppr.md).
-The `SWUpdate` Update Handler integrates with the `SWUpdate` command-line
-executable and other shell commands to implement A/B updates specifically for
-the Raspberry Pi reference image. Find the latest Raspberry Pi reference image [here](https://github.com/Azure/iot-hub-device-update/releases). The `SWUpdate` Update Handler is implemented in [src/content_handlers/swupdate_content_handler](https://github.com/Azure/iot-hub-device-update/tree/main/src/content_handlers/swupdate_handler).
+You can check the installed versions of the Device Update agent and the Delivery Optimization agent in the Device Properties section of your [IoT device twin](../iot-hub/iot-hub-devguide-device-twins.md). [Learn more about device properties under Device Update Core Interface](device-update-plug-and-play.md#device-properties).
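+
+If you prefer the command line, the same device twin can be retrieved with the Azure CLI (a sketch; it assumes the `azure-iot` CLI extension is installed, and `<hub-name>` and `<device-id>` are placeholders):
+
+```powershell
+# Fetch the device twin; agent versions appear under the reported device properties.
+az iot hub device-twin show --hub-name <hub-name> --device-id <device-id>
+```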
-### APT Update Handler
-
-The APT Update Handler processes an APT-specific Update Manifest and invokes APT to
-install or update the specified Debian package(s).
+## Next Steps
+[Understand Device Update agent configuration file](device-update-configuration-file.md)
-## Self-update Device update agent
+You can use the following tutorials for a simple demonstration of Device Update for IoT Hub:
-The device update agent and its dependencies can be updated through the Device Update for IoT Hub pipeline. If you are using an image-based update, include the latest device update agent in your new image. If you are using a package-based update, include the device update agent and its desired version in the apt manifest like any other package. [Learn more](device-update-apt-manifest.md) about apt manifest. You can check the installed version of the Device Update agent and the Delivery Optimization agent in the Device Properties section of your [IoT device twin](../iot-hub/iot-hub-devguide-device-twins.md). [Learn more about device properties under ADU Core Interface](device-update-plug-and-play.md#device-properties).
+- [Image Update: Getting Started with Raspberry Pi 3 B+ Reference Yocto Image](device-update-raspberry-pi.md), extensible via open source to build your own images for other architectures as needed.
+
+- [Package Update: Getting Started using Ubuntu Server 18.04 x64 Package agent](device-update-ubuntu-agent.md)
+
+- [Proxy Update: Getting Started using Device Update binary agent for downstream devices](device-update-howto-proxy-updates.md)
+
+- [Getting Started Using Ubuntu (18.04 x64) Simulator Reference Agent](device-update-simulator.md)
-## Next steps
-[Understand Device Update agent configuration file](device-update-configuration-file.md)
+- [Device Update for Azure IoT Hub tutorial for Azure-Real-Time-Operating-System](device-update-azure-real-time-operating-system.md)
iot-hub-device-update Device Update Agent Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/device-update-agent-provisioning.md
Title: Provisioning Device Update for Azure IoT Hub Agent| Microsoft Docs
description: Provisioning Device Update for Azure IoT Hub Agent Previously updated : 2/16/2021 Last updated : 1/26/2022
The Device Update Module agent can run alongside other system processes and [IoT Edge modules](../iot-edge/iot-edge-modules.md) that connect to your IoT Hub as part of the same logical device. This section describes how to provision the Device Update agent as a module identity.
+## Changes to Device Update agent at Public Preview Refresh
+
+We have added many new capabilities to the Device Update agent in the latest Public Preview Refresh agent (version 0.8.0). See [list of new capabilities](https://github.com/Azure/iot-hub-device-update/blob/main/docs/agent-reference/whats-new.md) for details.
+
+If you are using Device Update agent version 0.6.0 or 0.7.0, please migrate to the latest agent version 0.8.0. See the [Public Preview Refresh agent changes and how to upgrade](migration-pp-to-ppr.md).
+
+You can check the installed versions of the Device Update agent and the Delivery Optimization agent in the Device Properties section of your [IoT device twin](../iot-hub/iot-hub-devguide-device-twins.md). [Learn more about device properties under ADU Core Interface](device-update-plug-and-play.md#device-properties).
## Module identity vs device identity
If you are migrating from a device level agent to adding the agent as a Module i
## Support for Device Update
-The following IoT device types are currently supported with Device Update:
+The following over-the-air update types are currently supported with Device Update:
* Linux devices (IoT Edge and Non-IoT Edge devices):
- * Image A/B update:
- - Yocto - ARM64 (reference image), extensible via open source to [build your own images](device-update-agent-provisioning.md#how-to-build-and-run-device-update-agent) for other architecture as needed.
- - Ubuntu 18.04 simulator
-
- * Package Agent supported builds for the following platforms/architectures:
- - Ubuntu Server 18.04 x64 Package Agent
- - Debian 9
-
+ * [Image update](device-update-raspberry-pi.md)
+ * [Package update](device-update-ubuntu-agent.md)
+ * [Proxy update for downstream devices](device-update-howto-proxy-updates.md)
+
* Constrained devices: * AzureRTOS Device Update agent samples: [Device Update for Azure IoT Hub tutorial for Azure-Real-Time-Operating-System](device-update-azure-real-time-operating-system.md)
This section describes how to provision the Device Update agent as a module iden
* IoT Edge enabled devices, or * Non-Edge IoT devices, or * Other IoT devices. +
+To check if you have IoT Edge enabled on your device, please refer to the [IoT Edge installation instructions](../iot-edge/how-to-provision-single-device-linux-symmetric.md?preserve-view=true&view=iotedge-2020-11).
Follow all or any of the below sections to add the Device update agent based on the type of IoT device you are managing.
Follow all or any of the below sections to add the Device update agent based on
Follow these instructions to provision the Device Update agent on [IoT Edge enabled devices](../iot-edge/index.yml).
-1. Follow the instructions to [Manually provision a single Linux IoT Edge device](../iot-edge/how-to-provision-single-device-linux-symmetric.md?preserve-view=true&view=iotedge-2020-11).
+1. Follow the instructions to [Manually provision a single Linux IoT Edge device](../iot-edge/how-to-provision-single-device-linux-symmetric.md?preserve-view=true&view=iotedge-2020-11#install-iot-edge).
1. Install the Device Update image update agent.
- We provide sample images in the [Artifacts](https://github.com/Azure/iot-hub-device-update/releases) repository. The swUpdate file is the base image that you can flash onto a Raspberry Pi B3+ board. The .gz file is the update you would import through Device Update for IoT Hub. For an example, see [How to flash the image to your IoT Hub device](./device-update-raspberry-pi.md#flash-sd-card-with-image).
+ We provide sample images in [Assets here](https://github.com/Azure/iot-hub-device-update/releases). The swUpdate file is the base image that you can flash onto a Raspberry Pi B3+ board. The .gz file is the update you would import through Device Update for IoT Hub. For an example, see [How to flash the image to your IoT Hub device](./device-update-raspberry-pi.md#flash-sd-card-with-image).
1. Install the Device Update package update agent.
Follow these instructions to provision the Device Update agent on [IoT Edge enab
1. You are now ready to start the Device Update agent on your IoT Edge device.
-### On non-Edge IoT Linux devices
+### On IoT Linux devices without IoT Edge installed
Follow these instructions to provision the Device Update agent on your IoT Linux devices.
-1. Install the IoT Identity Service and add the latest version to your IoT device.
- 1. Log onto the machine or IoT device.
- 1. Open a terminal window.
- 1. Install the latest [IoT Identity Service](https://github.com/Azure/iot-identity-service/blob/main/docs-dev/packaging.md#installing-and-configuring-the-package) on your IoT device using this command:
- > [!Note]
- > The IoT Identity service registers module identities with IoT Hub by using symmetric keys currently.
- ```shell
- sudo apt-get install aziot-identity-service
- ```
-
-1. Provisioning IoT Identity service to get the IoT device information.
-
- Create a custom copy of the configuration template so we can add the provisioning information. In a terminal, enter the following command:
-
- ```shell
- sudo cp /etc/aziot/config.toml.template /etc/aziot/config.toml
- ```
-
-1. Next edit the configuration file to include the connection string of the device you wish to act as the provisioner for this device or machine. In a terminal, enter the below command.
-
- ```shell
- sudo nano /etc/aziot/config.toml
- ```
-
-1. You should see a message like the following example:
-
- :::image type="content" source="media/understand-device-update/config.png" alt-text="Diagram of IoT Identity Service config file." lightbox="media/understand-device-update/config.png":::
+1. Install the IoT Identity Service and add the latest version to your IoT device by following the instructions in [Installing the Azure IoT Identity Service](https://azure.github.io/iot-identity-service/installation.html#install-from-packagesmicrosoftcom).
- 1. In the same nano window, find the block with "Manual provisioning with connection string".
- 1. In the window, delete the "#" symbol ahead of 'provisioning'
- 1. In the window, delete the "#" symbol ahead of 'source'
- 1. In the window, delete the "#" symbol ahead of 'connection_string'
- 1. In the window, delete the string within the quotes to the right of 'connection_string' and then add your connection string there
- 1. Save your changes to the file with 'Ctrl+X' and then 'Y' and hit the 'enter' key to save your changes.
+2. Configure the IoT Identity Service by following the instructions in [Configuring the Azure IoT Identity Service](https://azure.github.io/iot-identity-service/configuration.html).
-1. Now apply and restart the IoT Identity service with the command below. You should now see a "Done!" printout that means you have successfully configured the IoT Identity Service.
+3. Finally install the Device Update agent. We provide sample images in [Assets here](https://github.com/Azure/iot-hub-device-update/releases); the swUpdate file is the base image that you can flash onto a Raspberry Pi B3+ board, and the .gz file is the update you would import through Device Update for IoT Hub. See the example of [how to flash the image to your IoT Hub device](./device-update-raspberry-pi.md#flash-sd-card-with-image).
+
+4. After you've installed the device update agent, you will need to edit the configuration file for Device Update by running the command below.
- > [!Note]
- > The IoT Identity service registers module identities with IoT Hub by using symmetric keys currently.
-
```shell
- sudo aziotctl config apply
+ sudo nano /etc/adu/du-config.json
```
-
-1. Finally install the Device Update agent. We provide sample images in [Artifacts](https://github.com/Azure/iot-hub-device-update/releases). The swUpdate file is the base image that you can flash onto a Raspberry Pi B3+ board, and the .gz file is the update you would import through Device Update for IoT Hub. See example of [how to flash the image to your IoT Hub device](./device-update-raspberry-pi.md#flash-sd-card-with-image).
+ Change the connectionType to "AIS" for agents that will use the IoT Identity Service for provisioning. The connectionData field must be an empty string.
-1. You are now ready to start the Device Update agent on your IoT device.
+5. You are now ready to start the Device Update agent on your IoT device.
### Other IoT devices The Device Update agent can also be configured without the IoT Identity service for testing or on constrained devices. Follow the below steps to provision the Device Update agent using a connection string (from the Module or Device).
-1. We provide sample images in the [Artifacts](https://github.com/Azure/iot-hub-device-update/releases) repository. The swUpdate file is the base image that you can flash onto a Raspberry Pi B3+ board. The .gz file is the update you would import through Device Update for IoT Hub. For an example, see [How to flash the image to your IoT Hub device](./device-update-raspberry-pi.md#flash-sd-card-with-image).
+1. We provide sample images in [Assets here](https://github.com/Azure/iot-hub-device-update/releases). The swUpdate file is the base image that you can flash onto a Raspberry Pi B3+ board. The .gz file is the update you would import through Device Update for IoT Hub. For an example, see [How to flash the image to your IoT Hub device](./device-update-raspberry-pi.md#flash-sd-card-with-image).
1. Log onto the machine or IoT Edge device/IoT device.
The Device Update agent can also be configured without the IoT Identity service
1. Enter the below in the terminal window:
- - [For Package updates](device-update-ubuntu-agent.md) use: sudo nano /etc/adu/adu-conf.txt
- - [For Image updates](device-update-raspberry-pi.md) use: sudo nano /adu/adu-conf.txt
+ - [For Ubuntu agent](device-update-ubuntu-agent.md) use: sudo nano /etc/adu/du-config.json
+ - [For Yocto reference image](device-update-raspberry-pi.md) use: sudo nano /adu/du-config.json
- 1. You should see a window open with some text in it. Delete the entire string following 'connection_String=' the first time you provision the Device Update agent on the IoT device. It is just place holder text.
+ 1. Copy the primary connection string
- 1. In the terminal, replace \<your-connection-string\> with the connection string of the device for your instance of Device Update agent. Select Enter and then **Save.** It should look like this example:
-
- ```text
- connection_string=<ADD CONNECTION STRING HERE>
- ```
-
- > [!Important]
- > Do not add quotes around the connection string.
-
+ - If the Device Update agent is configured as a module, copy the module's primary connection string.
+ - Otherwise, copy the device's primary connection string.
+
+ 3. Enter the copied primary connection string as the 'connectionData' field's value in the du-config.json file. Then save and close the file. If you don't have the connection string handy, see the sketch below.
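+
+    The connection string can also be retrieved with the Azure CLI (a sketch; assumes the `azure-iot` CLI extension is installed, with placeholder names):
+
+    ```powershell
+    # Device-level primary connection string.
+    az iot hub device-identity connection-string show --hub-name <hub-name> --device-id <device-id>
+    # Module-level primary connection string (if the agent runs as a module).
+    az iot hub module-identity connection-string show --hub-name <hub-name> --device-id <device-id> --module-id <module-id>
+    ```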
+
1. Now you are ready to start the Device Update agent on your IoT device. ## How to start the Device Update Agent
If you run into issues, review the Device Update for IoT Hub [Troubleshooting Gu
## Next steps
-You can use the following pre-built images and binaries for a simple demonstration of Device Update for IoT Hub:
+You can use the following tutorials for a simple demonstration of Device Update for IoT Hub:
- [Image Update: Getting Started with Raspberry Pi 3 B+ Reference Yocto Image](device-update-raspberry-pi.md) extensible via open source to build you own images for other architecture as needed.--- [Getting Started Using Ubuntu (18.04 x64) Simulator Reference Agent](device-update-simulator.md)-
+
- [Package Update: Getting Started using Ubuntu Server 18.04 x64 Package agent](device-update-ubuntu-agent.md)
+
+- [Proxy Update: Getting Started using Device Update binary agent for downstream devices](device-update-howto-proxy-updates.md)
+
+- [Getting Started Using Ubuntu (18.04 x64) Simulator Reference Agent](device-update-simulator.md)
- [Device Update for Azure IoT Hub tutorial for Azure-Real-Time-Operating-System](device-update-azure-real-time-operating-system.md)
iot-hub-device-update Device Update Azure Real Time Operating System https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/device-update-azure-real-time-operating-system.md
In this tutorial you learn how to:
> * Deploy an image update > * Monitor the update deployment
-If you donΓÇÖt have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
## Prerequisites * Access to an IoT Hub. It is recommended that you use a S1 (Standard) tier or higher.
Learn more about [Azure RTOS](/azure/rtos/).
} ``` + ## Create update group
-1. Go to the IoT Hub you previously connected to your Device Update instance.
-2. Select the Updates option under "Device management" from the left-hand navigation bar.
-3. Select the Groups tab at the top of the page.
-4. Select the Add button to create a new group.
-5. Select the IoT Hub tag that you created in the previous step from the list. Select Create update group.
+1. Go to the Groups and Deployments tab at the top of the page.
+ :::image type="content" source="media/create-update-group/ungrouped-devices.png" alt-text="Screenshot of ungrouped devices." lightbox="media/create-update-group/ungrouped-devices.png":::
+
+2. Select the "Add group" button to create a new group.
+ :::image type="content" source="media/create-update-group/add-group.png" alt-text="Screenshot of device group addition." lightbox="media/create-update-group/add-group.png":::
+
+3. Select an IoT Hub tag and Device Class from the list and then select Create group.
+ :::image type="content" source="media/create-update-group/select-tag.png" alt-text="Screenshot of tag selection." lightbox="media/create-update-group/select-tag.png":::
- :::image type="content" source="media/create-update-group/select-tag.PNG" alt-text="Screenshot showing tag selection." lightbox="media/create-update-group/select-tag.PNG":::
+4. Once the group is created, you will see that the update compliance chart and groups list are updated. The update compliance chart shows the count of devices in various states of compliance: On latest update, New updates available, and Updates in Progress. [Learn about update compliance.](device-update-compliance.md)
+ :::image type="content" source="media/create-update-group/updated-view.png" alt-text="Screenshot of update compliance view." lightbox="media/create-update-group/updated-view.png":::
+
+5. You should see your newly created group and any available updates for the devices in the new group. If there are devices that don't meet the device class requirements of the group, they will show up in a corresponding invalid group. You can deploy the best available update to the new user-defined group from this view by clicking on the "Deploy" button next to the group.
[Learn more](create-update-group.md) about adding tags and creating update groups + ## Deploy new firmware
-1. Once the group is created, you should see a new update available for your device group, with a link to the update under Pending Updates. You might need to Refresh once.
-2. Click on the available update.
-3. Confirm that the correct group is selected as the target group. Schedule your deployment, then select Deploy update.
+1. Once the group is created, you should see a new update available for your device group, with a link to the update under Best Update (you may need to Refresh once). [Learn More about update compliance.](device-update-compliance.md)
+
+2. Select the target group by clicking on the group name. You will be directed to the group details under Group basics.
- :::image type="content" source="media/deploy-update/select-update.png" alt-text="Select update" lightbox="media/deploy-update/select-update.png":::
+ :::image type="content" source="media/deploy-update/group-basics.png" alt-text="Group details" lightbox="media/deploy-update/group-basics.png":::
-4. View the compliance chart. You should see the update is now in progress.
+3. To initiate the deployment, go to the Current deployment tab. Click the deploy link next to the desired update from the Available updates section. The best available update for a given group will be denoted with a "Best" highlight.
- :::image type="content" source="media/deploy-update/update-in-progress.png" alt-text="Update in progress" lightbox="media/deploy-update/update-in-progress.png":::
+ :::image type="content" source="media/deploy-update/select-update.png" alt-text="Select update" lightbox="media/deploy-update/select-update.png":::
-5. After your device is successfully updated, you should see your compliance chart and deployment details update to reflect the same.
+4. Schedule your deployment to start immediately or in the future, then select Create.
+ > [!TIP]
+ > By default, the Start date/time is 24 hours from your current time. Be sure to select a different date/time if you want the deployment to begin earlier.
+ :::image type="content" source="media/deploy-update/create-deployment.png" alt-text="Create deployment" lightbox="media/deploy-update/create-deployment.png":::
+
+5. The Status under Deployment details should turn to Active, and the deployed update should be marked with "(deploying)".
+
+ :::image type="content" source="media/deploy-update/deployment-active.png" alt-text="Deployment active" lightbox="media/deploy-update/deployment-active.png":::
+
+6. View the compliance chart. You should see the update is now in progress.
+
+7. After your device is successfully updated, you should see your compliance chart and deployment details update to reflect the same.
:::image type="content" source="media/deploy-update/update-succeeded.png" alt-text="Update succeeded" lightbox="media/deploy-update/update-succeeded.png"::: ## Monitor an update deployment
-1. Select the Deployments tab at the top of the page.
+1. Select the Deployment history tab at the top of the page.
- :::image type="content" source="media/deploy-update/deployments-tab.png" alt-text="Deployments tab" lightbox="media/deploy-update/deployments-tab.png":::
+ :::image type="content" source="media/deploy-update/deployments-history.png" alt-text="Deployment History" lightbox="media/deploy-update/deployments-history.png":::
-2. Select the deployment that you created to view the deployment details.
+2. Select the details link next to the deployment you created.
:::image type="content" source="media/deploy-update/deployment-details.png" alt-text="Deployment details" lightbox="media/deploy-update/deployment-details.png":::
-3. Select Refresh to view the latest status details. Continue this process until the status changes to Succeeded.
+3. Select Refresh to view the latest status details.
+ You have now completed a successful end-to-end image update using Device Update for IoT Hub on an Azure RTOS embedded device.
When no longer needed, clean up your device update account, instance, IoT Hub, a
## Next steps
-To learn more about Azure RTOS and how it works with Azure IoT, view https://azure.com/rtos.
+To learn more about Azure RTOS and how it works with Azure IoT, view https://azure.com/rtos.
iot-hub-device-update Device Update Configuration File https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/device-update-configuration-file.md
Title: Understand Device Update for Azure IoT Hub Configuration File| Microsoft
description: Understand Device Update for Azure IoT Hub Configuration File. Previously updated : 2/12/2021 Last updated : 12/13/2021 # Device Update for IoT Hub Configuration File
-The "adu-conf.txt" is an optional file that can be created to manage the following configurations.
+The "du-config.json" is a file that contains the below configurations for the Device Update agent. The Device Update Agent will then read these values and report them to the Device Update Service.
* AzureDeviceUpdateCore:4.ClientMetadata:4.deviceProperties["manufacturer"] * AzureDeviceUpdateCore:4.ClientMetadata:4.deviceProperties["model"] * DeviceInformation.manufacturer * DeviceInformation.model
-* Device Connection String (if it is not known by the Device Update Agent).
-
-## Purpose
-The Device Update Agent will first try to get the `manufacturer` and `model` values from the device to use for the [Interface Layer](device-update-agent-overview.md#the-interface-layer). If that fails, the Device Update Agent will next look for the "adu-conf.txt" file and use the values from there. If both attempts are not successful, the Device Update Agent will use [default](https://github.com/Azure/iot-hub-device-update/blob/main/CMakeLists.txt) values.
-
-Learn more about [ADU Core Interface](https://github.com/Azure/iot-hub-device-update/tree/main/src/agent/adu_core_interface) and [Device Information Interface](https://github.com/Azure/iot-hub-device-update/tree/main/src/agent/device_info_interface).
-
+* connectionData
+* connectionType
+
## File location
-Within Linux system, in the partition or disk called `adu`, create a text file called "adu-conf.txt" with the following fields.
+When installing the Debian agent on an IoT device with a Linux OS, modify the '/etc/adu/du-config.json' file to update values. For a Yocto build system, create a JSON file called '/adu/du-config.json' in the partition or disk called 'adu'.
## List of fields |Name|Description| |--|--|
-|connection_string|Pre-provisioned connection string the device can use to connect to the IoT Hub. Note: Not required if you are provisioning Device Update Agent through the [Azure IoT Identity Service](https://azure.github.io/iot-identity-service/)|
+|SchemaVersion|The schema version that maps the current configuration file format version|
+|aduShellTrustedUsers|The list of users that can launch the 'adu-shell' program. Note, 'adu-shell' is a "broker" program that performs various update actions as 'root'. The Device Update default content update handlers invoke 'adu-shell' to do tasks that require "super user" privilege. Examples of tasks that require this privilege are running "apt-get install" or executing privileged scripts.|
|aduc_manufacturer|Reported by the `AzureDeviceUpdateCore:4.ClientMetadata:4` interface to classify the device for targeting the update deployment.| |aduc_model|Reported by the `AzureDeviceUpdateCore:4.ClientMetadata:4` interface to classify the device for targeting the update deployment.|
+|connectionType|Use the value "string" when connecting the device to IoT Hub manually for testing purposes. For production scenarios, use the value "AIS" when using the IoT Identity Service to connect the device to IoT Hub. See [understand IoT Identity Service configurations](https://azure.github.io/iot-identity-service/configuration.html).|
+|connectionData|If connectionType = "string", add the value from your IoT Device's, device or module connection string here. If connectionType = "AIS", set the connectionData to empty string("connectionData": "").
|manufacturer|Reported by the Device Update Agent as part of the `DeviceInformation` interface.|
|model|Reported by the Device Update Agent as part of the `DeviceInformation` interface.|
-## Example "adu-conf.txt" file contents
+
+## Example "du-config.json" file contents
```json
-connection_string = `HostName=<yourIoTHubName>;DeviceId=<yourDeviceId>;SharedAccessKey=<yourSharedAccessKey>`
-aduc_manufacturer = <value to send through `AzureDeviceUpdateCore:4.ClientMetadata:4.deviceProperties["manufacturer"]`
-aduc_model = <value to send through `AzureDeviceUpdateCore:4.ClientMetadata:4.deviceProperties["model"]`
-manufacturer = <value to send through `DeviceInformation.manufacturer`
-model = <value to send through `DeviceInformation.manufacturer`
+{
+ "schemaVersion": "1.1",
+ "aduShellTrustedUsers": [
+ "adu",
+ "do"
+ ],
+ "manufacturer": <Place your device info manufacturer here>,
+ "model": <Place your device info model here>,
+ "agents": [
+ {
+ "name": <Place your agent name here>,
+ "runas": "adu",
+ "connectionSource": {
+ "connectionType": "string", //or ΓÇ£AISΓÇ¥
+ "connectionData": <Place your Azure IoT device connection string here>
+ },
+ "manufacturer": <Place your device property manufacturer here>,
+ "model": <Place your device property model here>
+ }
+ ]
+}
+ ```
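+
+Note that the example above is a template: the `<...>` placeholders must be replaced and the inline comment removed before the file is valid JSON. Once filled in, you can sanity-check the syntax (a minimal sketch, assuming Python 3 is available on the device):
+
+```sh
+# Fails with a parse error if the file is not valid JSON
+python3 -m json.tool /etc/adu/du-config.json
+```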
iot-hub-device-update Device Update Control Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/device-update-control-access.md
Device Update uses Azure RBAC to provide authentication and authorization for us
## Configure access control roles
-In order for other users and applications to have access to Device Update, users or applications must be granted access to this resource. Here are the roles that are supported by Device Update
+In order for other users and applications to have access to Device Update, users or applications must be granted access to this resource. Here are the roles that are supported by Device Update:
| Role Name | Description |
| :-- | :-- |
In order for other users and applications to have access to Device Update, users
A combination of roles can be used to provide the right level of access. For example, a developer can import and manage updates using the Device Update Content Administrator role, but needs a Device Update Deployments Reader role to view the progress of an update. Conversely, a solution operator with the Device Update Reader role can view all updates, but needs to use the Device Update Deployments Administrator role to deploy a specific update to devices.
+## Authenticate to Device Update REST APIs
-## Authenticate to Device Update REST APIs for Publishing and Management
-
-Device Update also uses Azure AD for authentication to publish and manage content via service APIs. To get started, you need to create and configure a client application.
+Device Update uses Azure Active Directory (Azure AD) for authentication to its REST APIs. To get started, you need to create and configure a client application.
### Create client Azure AD App
-To integrate an application or service with Azure AD, [first register](../active-directory/develop/quickstart-register-app.md) an application with Azure AD. Client application setup varies depending on the authorization flow used. Configuration below is for guidance when using the Device Update REST APIs.
+To integrate an application or service with Azure AD, [first register](../active-directory/develop/quickstart-register-app.md) a client application with Azure AD. Client application setup will vary depending on the authorization flow you need (users, applications, or managed identities). For example, to call Device Update from:
+
+* A mobile or desktop application: add the `Mobile and desktop applications` platform, with https://login.microsoftonline.com/common/oauth2/nativeclient as the redirect URI.
+* A website with implicit sign-on: add the `Web` platform and select `Access tokens (used for implicit flows)`.
+
+### Configure permissions
+
+Next, add permissions for calling Device Update to your app:
+1. Go to the `API permissions` page of your app and select `Add a permission`.
+2. Go to `APIs my organization uses` and search for `Azure Device Update`.
+3. Select the `user_impersonation` permission and select `Add permissions`.
+
+### Requesting authorization token
+
+The Device Update REST API requires an OAuth 2.0 authorization token in the request header. The following are some examples of ways to request an authorization token.
+
+#### Using Azure CLI
+
+```azurecli
+az login
+az account get-access-token --resource 'https://api.adu.microsoft.com/'
+```
+
+#### Using PowerShell MSAL Library
+
+The [MSAL.PS](https://github.com/AzureAD/MSAL.PS) PowerShell module is a wrapper over the [Microsoft Authentication Library for .NET (MSAL .NET)](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet). It supports various authentication methods.
+
+_Using user credentials_:
+
+```powershell
+$clientId = '<app_id>'
+$tenantId = '<tenant_id>'
+$authority = "https://login.microsoftonline.com/$tenantId/v2.0"
+$Scope = 'https://api.adu.microsoft.com/user_impersonation'
+
+Get-MsalToken -ClientId $clientId -TenantId $tenantId -Authority $authority -Scopes $Scope
+```
+
+_Using user credentials with device code_:
+
+```powershell
+$clientId = '<app_id>'
+$tenantId = '<tenant_id>'
+$authority = "https://login.microsoftonline.com/$tenantId/v2.0"
+$Scope = 'https://api.adu.microsoft.com/user_impersonation'
+
+Get-MsalToken -ClientId $clientId -TenantId $tenantId -Authority $authority -Scopes $Scope -Interactive -DeviceCode
+```
+
+_Using app credentials_:
+
+```powershell
+$clientId = '<app_id>'
+$tenantId = '<tenant_id>'
+$cert = '<client_certificate>'
+$authority = "https://login.microsoftonline.com/$tenantId/v2.0"
+$Scope = 'https://api.adu.microsoft.com/.default'
-* Set client authentication: 'redirect URIs for native or web client'.
-* Set API Permissions - Device Update for IoT Hub exposes:
- * Delegated permissions: 'user_impersonation'
- * **Optional**, grant admin consent
+Get-MsalToken -ClientId $clientId -TenantId $tenantId -Authority $authority -Scopes $Scope -ClientCertificate $cert
+```
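+
+Once you have a token, pass it as a bearer token in the `Authorization` header of each REST call. Here is a minimal sketch using the Azure CLI token from above; the account endpoint, instance name, and API version are placeholders you must replace for your environment:
+
+```sh
+# Fetch a token for the Device Update resource and store it
+TOKEN=$(az account get-access-token --resource 'https://api.adu.microsoft.com/' --query accessToken -o tsv)
+
+# Example call: list the updates in a Device Update instance
+curl -H "Authorization: Bearer $TOKEN" \
+  "https://<account-endpoint>/deviceUpdate/<instance>/updates?api-version=<api-version>"
+```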
-[Next steps: Create device update resources and configure access control roles](./create-device-update-account.md)
+## Next steps
+* [Create device update resources and configure access control roles](./create-device-update-account.md)
iot-hub-device-update Device Update Deployments https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/device-update-deployments.md
+
+ Title: Understand Device Update for Azure IoT Hub deployments | Microsoft Docs
+description: Understand how updates are deployed.
++ Last updated : 12/07/2021++++
+# Update deployments
+
+A deployment is how updates are delivered to one or more devices. Deployments are always associated with a device group. A deployment can be initiated from the API or the UI.
+A device group can only have one active deployment associated with it at any given time. A deployment can be scheduled to begin in the future or start immediately.
+
+## Dynamic deployments
+
+Deployments in Device Update for IoT Hub are dynamic in nature. Dynamic deployments empower users to move toward a set-and-forget management model by automatically deploying
+updates to newly provisioned, applicable devices. Any device that is provisioned or changes its group membership after a deployment is initiated automatically receives
+the update deployment, with no further action on the part of the user, as long as the deployment remains active.
+
+## Deployment life cycle
+
+Due to their dynamic nature, deployments remain active and in progress until they are explicitly canceled. A deployment is considered inactive and superseded if a new deployment
+is created targeting the same device group. A deployment can be retried for devices that fail. Once a deployment is canceled, it cannot be reactivated.
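+
+As an illustration of initiating a deployment from the API side, here is a hedged sketch using the Azure CLI; it assumes the `azure-iot` CLI extension's Device Update commands are installed, and all names and IDs below are placeholders:
+
+```sh
+# Create a deployment of a given update to a device group
+# (starts immediately unless a start time is specified)
+az iot du device deployment create \
+  --account <du-account> --instance <du-instance> \
+  --group-id <group-id> --deployment-id <new-deployment-id> \
+  --update-provider <provider> --update-name <name> --update-version <version>
+```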
++
+## Next steps
+
+[Deploy an update](./deploy-update.md)
iot-hub-device-update Device Update Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/device-update-diagnostics.md
+
+ Title: Understand Device Update for Azure IoT Hub diagnostic features | Microsoft Docs
+description: Understand what diagnostic features Device Update for IoT Hub has, including deployment error codes in UX and remote log collection.
++ Last updated : 1/26/2021++++
+# Device Update for IoT Hub diagnostics overview
+
+Device Update for IoT Hub has several features focused on making it easier to diagnose and troubleshoot device-side errors. With the release of the v0.8.0 agent, there are two diagnostic features available:
+
+* **Deployment error codes** can now be viewed directly in the latest preview version of the Device Update user interface
+
+* **Remote log collection** enables the creation of log operations, which instruct targeted devices to upload on-device diagnostic logs to a linked Azure Blob storage account
+
+## Deployment error codes in UI
+
+When a device reports a deployment failure to the Device Update service, the Device Update user interface displays the device's reported resultCode and extendedResultCode. You can view these codes using the following steps:
+
+1. Navigate to the **Groups and Deployments** tab.
+
+2. Click on the name of a group with an active deployment to get to the **Group details** page.
+
+3. Click on any device name in the **Device list** to open the device details panel. Here you can see the Result code the device has reported.
+
+4. The DU reference agent follows standard HTTP status code convention for the Result code field (e.g. "200" indicates success). For more information on how to parse Result codes, see the [Device Update client error codes](device-update-error-codes.md) page.
+
+ > [!NOTE]
+ > If you have modified your DU Agent to report customized Result codes, the numerical codes will still be passed through to the Device Update user interface. You may then refer to any documentation you have created to parse these numerical codes.
+
+## Remote log collection
+
+When more information from the device is necessary to diagnose and troubleshoot an error, you can use the log collection feature to instruct targeted devices to upload on-device diagnostic logs to a linked Azure Blob storage account. You can start using this feature by following these [instructions](device-update-log-collection.md).
+
+Device Update's remote log collection is a service-driven, operation-based feature. To take advantage of log collection, a device only needs to implement the Diagnostics interface and configuration file, and be able to upload files to Azure Blob storage via the SDK.
+
+From a high level, the log collection feature works as follows:
+
+1. The user creates a new log operation using the Device Update user interface or APIs, targeting up to 100 devices that have implemented the Diagnostics Interface.
+
+2. The DU service sends a log collection start message to the targeted devices using the Diagnostics Interface. This start message includes the log operation ID and a SAS token for uploading to the associated Azure Storage account.
+
+3. Upon receiving the start message, the DU agent of the targeted device will attempt to collect and upload the files in the pre-defined filepath(s) specified in the on-device agent configuration file. The DU reference agent is configured to upload the DU Agent diagnostic log ("aduc.log"), and the DO Agent diagnostic log ("do-agent.log") by default.
+
+4. The DU agent then reports the state of the operation ("Succeeded" / "Failed") back to the service, including the log operation ID, a ResultCode, and an ExtendedResultCode. If the DU Agent fails a log operation, it will automatically attempt to retry three times, reporting only the final state back to the service.
+
+5. Once all targeted devices have reported their terminal state back to the DU service, the DU service marks the log operation as "Succeeded" or "Failed." "Succeeded" indicates that all targeted devices successfully completed the log operation. "Failed" indicates that at least one targeted device failed the log operation.
+
+ > [!NOTE]
+ > Since the log operation is carried out in parallel by the targeted devices, it is possible that some targeted devices successfully uploaded logs, but the overall log operation is marked as "Failed." You can see which devices succeeded and which failed by viewing the log operation details through the user interface or APIs.
+## Next steps
+
+Learn how to use Device Update's remote log collection feature:
+
+ - [Remotely collect diagnostic logs from devices using Device Update for IoT Hub](device-update-log-collection.md)
+
iot-hub-device-update Device Update Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/device-update-error-codes.md
Title: Client error codes for Device Update for Azure IoT Hub | Microsoft Docs
-description: This document provides a table of client error codes for various Device Update components.
+ Title: Error codes for Device Update for Azure IoT Hub | Microsoft Docs
+description: This document provides a table of error codes for various Device Update components.
Previously updated : 2/18/2021 Last updated : 1/26/2022 # Device Update for IoT Hub Error Codes
-This document provides a table of error codes for various Device Update components. This is meant to be used as a reference for users who want to try parsing their own error codes to diagnose and troubleshoot issues.
+This document provides a table of error codes for various Device Update components. It is meant to be used as a reference for users who want to parse their own error codes to diagnose and troubleshoot issues.
-There are two primary client-side components that may throw error codes: the Device Update agent, and the Delivery Optimization agent.
+There are two primary client-side components that may throw error codes: the Device Update agent and the Delivery Optimization agent. Error codes also come from the Device Update content service.
## Device Update agent ### ResultCode and ExtendedResultCode
-The Device Update for IoT Hub Core PnP interface reports `ResultCode` and
-`ExtendedResultCode`, which can be used to diagnose failures. [Learn
-More](device-update-plug-and-play.md) about the Device Update Core PnP interface.
+The Device Update for IoT Hub Core PnP interface reports `ResultCode` and `ExtendedResultCode`, which can be used to diagnose failures. [Learn More](device-update-plug-and-play.md) about the Device Update Core PnP interface.
-#### ResultCode
+`ResultCode` is a general status code and `ExtendedResultCode` is an integer with encoded error information.
-`ResultCode` is a general status code and follows http status code convention.
-[Learn More](https://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html) about http
-status codes.
-
-#### ExtendedResultCode
-
-`ExtendedResultCode` is an integer with encoded error information.
-
-You will most likely see the `ExtendedResultCode` as a signed integer in the PnP
-interface. To decode the `ExtendedResultCode`, convert the signed integer to
-unsigned hex. Only the first 4 bytes of the `ExtendedResultCode` are used and
-are of the form `F` `FFFFFFF` where the first nibble is the **Facility Code** and
+You will most likely see the `ExtendedResultCode` as a signed integer in the PnP interface. To decode the `ExtendedResultCode`, convert the signed integer to
+unsigned hex. Only the first 4 bytes of the `ExtendedResultCode` are used and are of the form `F` `FF` `FFFFF`, where the first nibble is the **Facility Code**, the next byte is the **Component/Area Code**, and
the rest of the bits are the **Error Code**.
-**Facility Codes**
-
-| Facility Code | Description |
-|-|--|
-| D | Error raised from the DO SDK|
-| E | Error code is an errno |
--
-For example:
-
-`ExtendedResultCode` is `-536870781`
-
-The unsigned hex representation of `-536870781` is `FFFFFFFF E0000083`.
-
-| Ignore | Facility Code | Error Code |
-|--|-|--|
-| FFFFFFFF | E | 0000083 |
-
-`0x83` in hex is `131` in decimal, which is the errno value for `ENOLCK`.
-
-## Delivery Optimization agent
-The following table lists error codes pertaining to the Delivery Optimization (DO) component of the Device Update client. The DO component is responsible for downloading update content onto the IoT device.
-
-The DO error code can be obtained by examining the exceptions thrown in response to an API call. All DO error codes can be identified by the 0x80D0 prefix.
+```text
+ 0 00 00000    Total 4 bytes (32 bits)
+ - -- -----
+ | |    |
+ | |    +----- Error code (20 bits)
+ | |
+ | +---------- Component/Area code (8 bits)
+ |
+ +------------ Facility code (4 bits)
+ ```
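+
+For example, decoding an `ExtendedResultCode` of `-536870781` (a worked sketch; the value is illustrative):
+
+```sh
+# Convert the signed integer to unsigned 32-bit hex: prints E0000083
+printf '%08X\n' $(( -536870781 & 0xFFFFFFFF ))
+# Facility code: E, component/area code: 00, error code: 00083.
+# 0x83 is 131 in decimal, which is the errno value for ENOLCK.
+```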
-| Error Code | String Error | Type | Description |
-|-||-|-|
-| 0x80D01001L | DO_E_NO_SERVICE | n/a | Delivery Optimization was unable to provide the service |
-| 0x80D02002L | DO_E_DOWNLOAD_NO_PROGRESS | Download Job | Download of a file saw no progress within the defined period |
-| 0x80D02011L | DO_E_UNKNOWN_PROPERTY_ID | Download Job | SetProperty() or GetProperty() called with an unknown property ID |
-| 0x80D02012L | DO_E_READ_ONLY_PROPERTY | Download Job | Unable to call SetProperty() on a read-only property |
-| 0x80D02013L | DO_E_INVALID_STATE | Download Job | The requested action is not allowed in the current job state. The job might have been canceled or completed transferring. It is in a read-only state now. |
-| 0x80D02018L | DO_E_FILE_DOWNLOADSINK_UNSPECIFIED | Download Job | Unable to start a download because no download sink (either local file or stream interface) was specified |
-| 0x80D02200L | DO_E_DOWNLOAD_NO_URI | IDODownload Interface| The download was started without providing a URI |
-| 0x80D03805L | DO_E_BLOCKED_BY_NO_NETWORK | Transient conditions | Download paused due to loss of network connectivity |
+Please refer to [Device Update Agent result codes and extended result codes](https://github.com/Azure/iot-hub-device-update/tree/main/docs/agent-reference/device-update-agent-extended-result-codes.md) or [implement a custom Content Handler](https://github.com/Azure/iot-hub-device-update/tree/main/src/content_handlers) for details.
## Device Update content service
-The following table lists error codes pertaining to the content service component of the Device Update service. The content service component is responsible for handling importing of update content.
+The following table lists error codes pertaining to the content service component of the Device Update service. The content service component is responsible for handling importing of update content. Additional troubleshooting information is also available for [importing proxy updates](device-update-proxy-update-troubleshooting.md).
| Error Code | String Error | Next steps |
|-|-|-|
| "UpdateAlreadyExists" | Update with the same identity already exists. | Make sure you are importing an update that hasn't already been imported into this instance of Device Update for IoT Hub. |
| "DuplicateContentImport" | Identical content imported simultaneously multiple times. | Same as for UpdateAlreadyExists. |
-| "CannotProcessImportManifest" | Error processing import manifest. | Refer to [import concepts](./import-concepts.md) and [import update](./import-update.md) documentation for proper import manifest formatting. |
+| "CannotProcessImportManifest" | Error processing import manifest. | Refer to [import concepts](./import-concepts.md) and [import update](./create-update.md) documentation for proper import manifest formatting. |
| "CannotDownload" | Cannot download import manifest. | Check to make sure the URL for the import manifest file is still valid. |
-| "CannotParse" | Cannot parse import manifest. | Check your import manifest for accuracy against the schema defined in the [import update](./import-update.md) documentation. |
-| "UnsupportedVersion" | Import manifest schema version is not supported. | Make sure your import manifest is using the latest schema defined in the [import update](./import-update.md) documentation. |
+| "CannotParse" | Cannot parse import manifest. | Check your import manifest for accuracy against the schema defined in the [import update](./create-update.md) documentation. |
+| "UnsupportedVersion" | Import manifest schema version is not supported. | Make sure your import manifest is using the latest schema defined in the [import update](./create-update.md) documentation. |
| "UpdateLimitExceeded" | Error importing update due to exceeded limit. | You have reached a limit on the number of different Providers, Names or Versions allowed in your instance of Device Update for IoT Hub. Delete some updates from your instance and try again. | | "UpdateProvider" | Cannot import a new update provider. | You have reached a limit on the number of different __Providers__ allowed in your instance of Device Update for IoT Hub. Delete some updates from your instance and try again. | | "UpdateName" | Cannot import a new update name for the specified provider. | You have reached a limit on the number of different __Names__ allowed under one Provider in your instance of Device Update for IoT Hub. Delete some updates from your instance and try again. |
iot-hub-device-update Device Update Groups https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/device-update-groups.md
Tags enable users to group devices. Devices need to have a ADUGroup key and a va
} ```
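+
+A common way to add the ADUGroup tag shown above is through the Azure CLI (a minimal sketch; the hub and device names are placeholders):
+
+```sh
+# Tag a device twin so the device joins the "Group1" update group
+az iot hub device-twin update \
+  --hub-name <your-hub> --device-id <your-device> \
+  --set tags='{"ADUGroup": "Group1"}'
+```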
+## Default device group
-## Uncategorized device group
-
-Uncategorized is a reserved word that is used to group devices that:
-- Don't have the ADUGroup device or module twin tag.-- Have ADUGroup device or module twin tag but a group is not created with this group name.
+Any device that has the Device Update agent installed and provisioned, but does not have an ADUGroup tag added to its device or module twin, is added to a default group. Default groups (system-assigned groups) help reduce the overhead of tagging and grouping devices, so customers can easily deploy updates to them. Default groups cannot be deleted or re-created by customers. Customers cannot change the definition of a default group or manually add or remove its devices. Devices with the same device class are grouped together in a default group. Default group names are reserved within an IoT solution and follow the format "Default-(deviceClassID)". All deployment features that are available for user-defined groups are also available for default, system-assigned groups.
For example consider the devices with their device twin tags below:
Below are the devices and the possible groups that can be created for them.
|Device1 |Group1|
|Device2 |Group1|
|Device3 |Group2|
-|Device4 |Uncategorized|
+|Device4 |Default-(deviceClassId)|
++
+## Invalid group
+A corresponding invalid group is created for every user-defined group. A device is added to the invalid group if it doesn't meet the compatibility requirements of the user-defined group. This can be resolved by either re-tagging and regrouping the device under a new group, or by modifying its compatibility properties through the agent configuration file.
+An invalid group exists only for diagnostic purposes. Updates cannot be deployed to invalid groups.
## Next steps
iot-hub-device-update Device Update Howto Proxy Updates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/device-update-howto-proxy-updates.md
+
+ Title: Complete a proxy update by using Device Update for Azure IoT Hub | Microsoft Docs
+description: Get started with Device Update for Azure IoT Hub by using the Device Update binary agent for proxy updates.
++ Last updated : 1/26/2022++++
+# Tutorial: Complete a proxy update by using Device Update for Azure IoT Hub
+
+If you haven't already done so, review [Using proxy updates with Device Update for Azure IoT Hub](device-update-proxy-updates.md).
+
+## Set up a test device or virtual machine
+
+This tutorial uses an Ubuntu Server 18.04 LTS virtual machine (VM) as an example.
+
+### Install the Device Update Agent and dependencies
+
+1. Register *packages.microsoft.com* in an APT package repository:
+
+ ```sh
+ sudo apt-get update
+
+ sudo apt install curl
+
+ curl https://packages.microsoft.com/config/ubuntu/18.04/multiarch/prod.list > ~/microsoft-prod.list
+
+ sudo cp ~/microsoft-prod.list /etc/apt/sources.list.d/
+
+ curl https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > ~/microsoft.gpg
+
+ sudo cp ~/microsoft.gpg /etc/apt/trusted.gpg.d/
+
+ sudo apt-get update
+ ```
+
+2. Install the **deviceupdate-agent** on the IoT device. Download the latest Device Update Debian file from *packages.microsoft.com*:
+
+ ```sh
+ sudo apt-get install deviceupdate-agent
+ ```
+
+ Alternatively, copy the downloaded Debian file to the test VM. If you're using PowerShell on your computer, run the following shell command:
+
+ ```sh
+ scp <path to the .deb file> tester@<your vm's ip address>:~
+ ```
+
+ Then remote into your VM and run the following shell command in the *home* folder:
+
+ ```sh
+ #go to home folder
+ cd ~
+ #install latest Device Update agent
+ sudo apt-get install ./<debian file name from the previous step>
+ ```
+
+3. Go to Azure IoT Hub and copy the primary connection string for your IoT device's Device Update module. Replace any default value for the `connectionData` field with the primary connection string in the *du-config.json* file:
+
+ ```sh
+ sudo nano /etc/adu/du-config.json
+ ```
+
+ > [!NOTE]
+ > You can copy the primary connection string for the device instead, but we recommend that you use the string for the Device Update module. For information about setting up the module, see [Device Update Agent provisioning](device-update-agent-provisioning.md).
+
+4. Ensure that */etc/adu/du-diagnostics-config.json* contains the correct settings for log collection. For example:
+
+ ```sh
+ {
+ "logComponents":[
+ {
+ "componentName":"adu",
+ "logPath":"/var/log/adu/"
+ },
+ {
+ "componentName":"do",
+ "logPath":"/var/log/deliveryoptimization-agent/"
+ }
+ ],
+ "maxKilobytesToUploadPerLogPath":50
+ }
+ ```
+
+5. Restart the Device Update agent:
+
+ ```sh
+ sudo systemctl restart adu-agent
+ ```
+
+### Set up mock components
+
+For testing and demonstration purposes, we'll create the following mock components on the device:
+
+- Three motors
+- Two cameras
+- "hostfs"
+- "rootfs"
+
+> [!IMPORTANT]
+> The preceding component configuration is based on the implementation of an example component enumerator extension called *libcontoso-component-enumerator.so*. It also requires this mock component inventory data file: */usr/local/contoso-devices/components-inventory.json*.
+
+1. Copy the [demo](https://github.com/Azure/iot-hub-device-update/tree/main/src/extensions/component-enumerators/examples/contoso-component-enumerator/demo) folder to your home directory on the test VM. Then, run the following command to copy required files to the right locations:
+
+ ```sh
+ ~/demo/tools/reset-demo-components.sh
+ ```
+
+ The `reset-demo-components.sh` command takes the following steps on your behalf:
+
+ 1. It copies [components-inventory.json](https://github.com/Azure/iot-hub-device-update/tree/main/src/extensions/component-enumerators/examples/contoso-component-enumerator/demo/demo-devices/contoso-devices/components-inventory.json) and adds it to the */usr/local/contoso-devices* folder.
+
+ 2. It copies the Contoso component enumerator extension (*libcontoso-component-enumerator.so*) from the [Assets folder](https://github.com/Azure/iot-hub-device-update/releases) and adds it to the */var/lib/adu/extensions/sources* folder.
+
+ 3. It registers the extension:
+
+ ```sh
+ sudo /usr/bin/AducIotAgent -E /var/lib/adu/extensions/sources/libcontoso-component-enumerator.so
+ ```
+
+2. To verify that the VM is set up to support proxy updates, view and record the current software version of each component by using the following command:
+
+ ```sh
+ ~/demo/show-demo-components.sh
+ ```
+
+## Import an example update
+
+If you haven't already done so, create a [Device Update account and instance](create-device-update-account.md), including configuring an IoT hub. Then start the following procedure.
+
+1. From the [latest Device Update release](https://github.com/Azure/iot-hub-device-update/releases), under **Assets**, download the import manifests and images for proxy updates.
+2. Sign in to the [Azure portal](https://portal.azure.com/) and go to your IoT hub with Device Update. On the left pane, select **Device Management** > **Updates**.
+3. Select the **Updates** tab.
+4. Select **+ Import New Update**.
+5. Select **+ Select from storage container**, and then choose your storage account and container.
+
+ :::image type="content" source="media/understand-device-update/one-import.png" alt-text="Screenshot that shows the button for selecting to import from a storage container." lightbox="media/understand-device-update/one-import.png":::
+6. Select **Upload** to add the files that you downloaded in step 1.
+7. Upload the parent import manifest, child import manifest, and payload files to your container.
+
+ The following example shows sample files uploaded to update cameras connected to a smart vacuum cleaner device. It also includes a pre-installation script to turn off the cameras before the over-the-air update.
+
+ In the example, the parent import manifest is *contoso.Virtual-Vacuum-virtual-camera.1.4.importmanifest.json*. The child import manifest with details for updating the camera is *Contoso.Virtual-Vacuum.3.3.importmanifest.json*. Note that both manifest file names follow the required format and end with *.importmanifest.json*.
+
+ :::image type="content" source="media/understand-device-update/two-containers.png" alt-text="Screenshot that shows sample files uploaded to update cameras connected to a smart vacuum cleaner device." lightbox="media/understand-device-update/two-containers.png":::
+
+8. Choose **Select**.
+9. The UI now shows the list of files that will be imported to Device Update. Select **Import update**.
+
+ :::image type="content" source="media/understand-device-update/three-confirm-import.png" alt-text="Screenshot that shows listed files and the button for importing an update." lightbox="media/understand-device-update/three-confirm-import.png":::
+
+10. The import process begins, and the screen changes to the **Import History** section. Select **Refresh** to view progress until the import process finishes. Depending on the size of the update, the import might finish in a few minutes or take longer.
+11. When the **Status** column indicates that the import has succeeded, select the **Available Updates** tab. You should see your imported update in the list now.
+
+ :::image type="content" source="media/understand-device-update/four-update-added.png" alt-text="Screenshot that shows the imported update added to the list." lightbox="media/understand-device-update/four-update-added.png":::
+
+[Learn more](import-update.md) about importing updates.
+
+## Create update group
+
+1. Go to the Groups and Deployments tab at the top of the page.
+ :::image type="content" source="media/create-update-group/ungrouped-devices.png" alt-text="Screenshot of ungrouped devices." lightbox="media/create-update-group/ungrouped-devices.png":::
+
+2. Select the "Add group" button to create a new group.
+ :::image type="content" source="media/create-update-group/add-group.png" alt-text="Screenshot of device group addition." lightbox="media/create-update-group/add-group.png":::
+
+3. Select an IoT Hub tag and Device Class from the list and then select Create group.
+ :::image type="content" source="media/create-update-group/select-tag.png" alt-text="Screenshot of tag selection." lightbox="media/create-update-group/select-tag.png":::
+
+4. Once the group is created, you will see that the update compliance chart and groups list are updated. The update compliance chart shows the count of devices in various states of compliance: On latest update, New updates available, and Updates in Progress. [Learn about update compliance.](device-update-compliance.md)
+ :::image type="content" source="media/create-update-group/updated-view.png" alt-text="Screenshot of update compliance view." lightbox="media/create-update-group/updated-view.png":::
+
+5. You should see your newly created group and any available updates for the devices in the new group. If there are devices that don't meet the device class requirements of the group, they will show up in a corresponding invalid group. You can deploy the best available update to the new user-defined group from this view by clicking on the "Deploy" button next to the group.
+
+[Learn more](create-update-group.md) about adding tags and creating update groups.
++
+## Deploy update
+
+1. Once the group is created, you should see a new update available for your device group, with a link to the update under Best Update (you may need to Refresh once). [Learn More about update compliance.](device-update-compliance.md)
+
+2. Select the target group by clicking on the group name. You will be directed to the group details under Group basics.
+
+ :::image type="content" source="media/deploy-update/group-basics.png" alt-text="Group details" lightbox="media/deploy-update/group-basics.png":::
+
+3. To initiate the deployment, go to the Current deployment tab. Click the deploy link next to the desired update from the Available updates section. The best available update for a given group is denoted with a "Best" highlight.
+
+ :::image type="content" source="media/deploy-update/select-update.png" alt-text="Select update" lightbox="media/deploy-update/select-update.png":::
+
+4. Schedule your deployment to start immediately or in the future, then select Create.
+
+ :::image type="content" source="media/deploy-update/create-deployment.png" alt-text="Create deployment" lightbox="media/deploy-update/create-deployment.png":::
+
+5. The Status under Deployment details should turn to Active, and the deployed update should be marked with "(deploying)".
+
+ :::image type="content" source="media/deploy-update/deployment-active.png" alt-text="Deployment active" lightbox="media/deploy-update/deployment-active.png":::
+
+6. View the compliance chart. You should see the update is now in progress.
+
+7. After your device is successfully updated, you should see your compliance chart and deployment details update to reflect the same.
+
+ :::image type="content" source="media/deploy-update/update-succeeded.png" alt-text="Update succeeded" lightbox="media/deploy-update/update-succeeded.png":::
+
+## Monitor an update deployment
+
+1. Select the Deployment history tab at the top of the page.
+
+ :::image type="content" source="media/deploy-update/deployments-history.png" alt-text="Deployment History" lightbox="media/deploy-update/deployments-history.png":::
+
+2. Select the details link next to the deployment you created.
+
+ :::image type="content" source="media/deploy-update/deployment-details.png" alt-text="Deployment details" lightbox="media/deploy-update/deployment-details.png":::
+
+3. Select Refresh to view the latest status details.
+
+You've now completed a successful end-to-end proxy update by using Device Update for IoT Hub.
+
+## Clean up resources
+
+When you no longer need them, clean up your Device Update account, instance, IoT hub, and IoT device.
+
+## Next steps
+
+You can use the following tutorials for a simple demonstration of Device Update for IoT Hub:
+
+- [Device Update for Azure IoT Hub tutorial using the Raspberry Pi 3 B+ reference image](device-update-raspberry-pi.md) (extensible via open source to build your own images for other architectures as needed)
+
+- [Device Update for Azure IoT Hub tutorial using the package agent on Ubuntu Server 18.04 x64](device-update-ubuntu-agent.md)
+
+- [Device Update for Azure IoT Hub tutorial using the Ubuntu (18.04 x64) Simulator Reference Agent](device-update-simulator.md)
+
+- [Device Update for Azure IoT Hub tutorial using the Azure real-time operating system](device-update-azure-real-time-operating-system.md)
iot-hub-device-update Device Update Log Collection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/device-update-log-collection.md
+
+ Title: Device Update for Azure IoT Hub log collection | Microsoft Docs
+description: Device Update for IoT Hub enables remote collection of diagnostic logs from connected IoT devices.
++ Last updated : 12/22/2021++++
+# Remotely collect diagnostic logs from devices using Device Update for IoT Hub
+Learn how to initiate a Device Update for IoT Hub log operation and view collected logs within Azure blob storage.
+
+## Prerequisites
+* [Access to an IoT Hub with Device Update for IoT Hub enabled](create-device-update-account.md).
+* An IoT device (or simulator) [provisioned for Device Update](device-update-agent-provisioning.md) within IoT Hub and implementing the Diagnostic Interface.
+* An [Azure Blob storage account](../storage/common/storage-account-create.md) under the same subscription as your Device Update for IoT Hub account.
+
+> [!NOTE]
+> The remote log collection feature is currently compatible only with devices that implement the Diagnostic Interface and are able to upload files to Azure Blob storage. The reference agent implementation also expects the device to write log files to a user-specified file path on the device.
+
+## Link your Azure Blob storage account to your Device Update instance
+
+In order to use the remote log collection feature, you must first link an Azure Blob storage account with your Device Update instance. This Azure Blob storage account is where your devices will upload diagnostic logs to.
+
+1. Navigate to your Device Update for IoT Hub resource.
+
+2. Select "Instance" under the "Instance Management" section of the navigation pane.
+
+3. Select your Device Update instance from the list, then "Configure Diagnostics."
+
+4. Select the "Customer Diagnostics" tab, then "Select Azure Storage Account."
+
+5. Choose your desired storage account from the list and select "Save."
+
+6. Once back on the instance list, select "Refresh" periodically until the instance's Provisioning State shows "Succeeded." This usually takes 2-3 minutes.
+
+## Configure which log files are collected from your device
+
+The Device Update agent on a device will collect files from specific file paths on the device when it receives a log upload start signal from the Device Update service. These file paths are defined by a configuration file on the device, located at **/etc/adu/du-diagnostics-config.json** in the reference agent.
+
+Within the configuration file, each log file to be collected and uploaded is represented as a "logComponent" object with componentName and logPath properties. This can be modified as desired.
+
+## Configure max log file size
+
+The Device Update agent will only collect log files under a certain file size. This max file size is defined by a configuration file on the device, located at **/etc/adu/du-diagnostics-config.json** in the reference agent.
+
+The relevant parameter "maxKilobytesToUploadPerLogPath" will apply to each logComponent object, and can be modified as desired.
+
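+For example, to raise the per-path limit to 100 KB (a hedged sketch; it assumes `jq` is installed on the device, and you can just as well edit the file by hand):
+
+```sh
+# Rewrite the config with a larger upload limit, then restart the agent
+sudo sh -c "jq '.maxKilobytesToUploadPerLogPath = 100' /etc/adu/du-diagnostics-config.json > /tmp/du-diag.json && mv /tmp/du-diag.json /etc/adu/du-diagnostics-config.json"
+sudo systemctl restart adu-agent
+```
+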
+## Create a new log operation within Device Update for IoT Hub
+
+A log operation is a new service-driven action that you can instruct your IoT devices to perform through the Device Update service. For a more detailed explanation of how log operations function, please see the [Device update diagnostics](device-update-diagnostics.md) concept page.
+
+1. Navigate to your IoT Hub and select the **Updates** tab under the **Device Management** section of the navigation pane.
+
+2. Select the **Diagnostics** tab in the UI. If you don't see a Diagnostics tab, make sure you're using the newest version of the Device Update for IoT Hub user interface. If you see "Diagnostics must be enabled for this Device Update instance," make sure you've linked an Azure Blob storage account with your Device Update instance.
+
+3. Select **Add log upload operation** to navigate to the log operation creation page.
+
+4. Enter a name (ID) and description for your new log operation, then select **Add devices** to select which IoT devices you want to collect diagnostic logs from.
+
+5. Select **Add**.
+
+6. Once back on the Diagnostics tab, select **Refresh** until you see your log operation listed in the Operation Table.
+
+7. Once the operation status is **Succeeded** or **Failed**, select the operation name to view its details. An operation will be marked "Succeeded" only if all targeted devices successfully completed the log upload. If some targeted devices succeeded and some failed, the log operation will be marked "Failed." You can use the log operation details blade to see which devices succeeded and which failed.
+
+8. In the log operation details, you can view the device-specific status and see the log location path. This path corresponds to the virtual directory path within your Azure Blob storage account where the diagnostic logs have been uploaded.
+
+## View and export collected diagnostic logs
+
+1. Once your log operation has succeeded, navigate to your Azure Blob storage account.
+
+2. Select **Containers** under the **Data storage** section of the navigation pane.
+
+3. Select the container with the same name as your Device Update instance.
+
+4. Use the log location path from the log operation details to navigate to the correct directory containing the logs. By default, the remote log collection feature instructs targeted devices to upload diagnostic logs using the following directory path model: **Blob storage container/Target device ID/Log operation ID/On-device log path** (a listing sketch follows these steps).
+
+5. If you haven't modified the diagnostic component of the DU Agent, the device will respond to any log operation by attempting to upload two plaintext log files: the DU Agent diagnostic log ("aduc.log"), and the DO Agent diagnostic log ("do-agent.log"). You can learn more about which log files the DU reference agent collects by reading the [Device update diagnostics](device-update-diagnostics.md) concept page.
+
+6. You can view the log file's contents by selecting the file name, then selecting the menu element (ellipsis) and clicking **View/edit**. You can also download or delete the log file by selecting the respectively labeled options.
+ :::image type="content" source="media/device-update-log-collection/blob-storage-log.png" alt-text="Screenshot of log file within Azure Blob storage." lightbox="media/device-update-log-collection/blob-storage-log.png":::
+
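+You can also list the uploaded log blobs from the command line (a minimal sketch; the account, container, and ID values are placeholders, and it assumes you have read access to the storage account):
+
+```sh
+# List the logs uploaded by one device for one log operation
+az storage blob list \
+  --account-name <storage-account> \
+  --container-name <du-instance> \
+  --prefix "<target-device-id>/<log-operation-id>/" \
+  --query '[].name' -o tsv
+```
+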
+## Next steps
+
+Learn more about Device Update's diagnostic capabilities:
+
+ - [Device update diagnostic feature overview](device-update-diagnostics.md)
+
iot-hub-device-update Device Update Multi Step Updates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/device-update-multi-step-updates.md
+
+ Title: Using multiple steps for Updates with Device Update for Azure IoT Hub| Microsoft Docs
+description: Using multiple steps for Updates with Device Update for Azure IoT Hub
++ Last updated : 11/12/2021++++
+# Multi-Step Ordered Execution
+Based on customer requests, we have added the ability to run pre-install and post-install tasks when deploying an over-the-air update. This capability is called Multi-Step Ordered Execution (MSOE) and is part of the Public Preview Refresh Update Manifest v4 schema.
+
+See the [Update Manifest](update-manifest.md) documentation before reviewing the following changes as part of the Public Preview Refresh release.
+
+With MSOE, we have introduced two types of steps:
+
+- Inline Step (Default)
+- Reference Step
+
+Example Update Manifest with one Inline Step:
+
+```json
+{
+ "updateId": {...},
+ "isDeployable": true,
+ "compatibility": [
+ {
+ "deviceManufacturer": "du-device",
+ "deviceModel": "e2e-test"
+ }
+ ],
+ "instructions": {
+ "steps": [
+ {
+ "description": "Example APT update that install libcurl4-doc on a host device.",
+ "handler": "microsoft/apt:1",
+ "files": [
+ "apt-manifest-1.0.json"
+ ],
+ "handlerProperties": {
+ "installedCriteria": "apt-update-test-1.0"
+ }
+ }
+ ]
+ },
+ "manifestVersion": "4.0",
+ "importedDateTime": "2021-11-16T14:54:55.8858676Z",
+ "createdDateTime": "2021-11-16T14:50:47.3511877Z"
+}
+```
+
+Example Update Manifest with two Inline Steps:
+
+```json
+{
+ "updateId": {...},
+ "isDeployable": true,
+ "compatibility": [
+ {
+ "deviceManufacturer": "du-device",
+ "deviceModel": "e2e-test"
+ }
+ ],
+ "instructions": {
+ "steps": [
+ {
+ "description": "Install libcurl4-doc on host device",
+ "handler": "microsoft/apt:1",
+ "files": [
+ "apt-manifest-1.0.json"
+ ],
+ "handlerProperties": {
+ "installedCriteria": "apt-update-test-2.2"
+ }
+ },
+ {
+ "description": "Install tree on host device",
+ "handler": "microsoft/apt:1",
+ "files": [
+ "apt-manifest-tree-1.0.json"
+ ],
+ "handlerProperties": {
+ "installedCriteria": "apt-update-test-tree-2.2"
+ }
+ }
+ ]
+ },
+ "manifestVersion": "4.0",
+ "importedDateTime": "2021-11-16T20:21:33.6514738Z",
+ "createdDateTime": "2021-11-16T20:19:29.4019035Z"
+}
+```
+
+Example Update Manifest with one Reference Step:
+
+- Parent Update
+
+```json
+{
+ "updateId": {...},
+ "isDeployable": true,
+ "compatibility": [
+ {
+ "deviceManufacturer": "du-device",
+ "deviceModel": "e2e-test"
+ }
+ ],
+ "instructions": {
+ "steps": [
+ {
+ "type": "reference",
+ "description": "Cameras Firmware Update",
+ "updateId": {
+ "provider": "contoso",
+ "name": "virtual-camera",
+ "version": "1.2"
+ }
+ }
+ ]
+ },
+ "manifestVersion": "4.0",
+ "importedDateTime": "2021-11-17T07:26:14.7484389Z",
+ "createdDateTime": "2021-11-17T07:22:10.6014567Z"
+}
+```
+
+- Child Update
+
+```json
+{
+ "updateId": {
+ "provider": "contoso",
+ "name": "virtual-camera",
+ "version": "1.2"
+ },
+ "isDeployable": false,
+ "compatibility": [
+ {
+ "group": "cameras"
+ }
+ ],
+ "instructions": {
+ "steps": [
+ {
+ "description": "Cameras Update - pre-install step",
+ "handler": "microsoft/script:1",
+ "files": [
+ "contoso-camera-installscript.sh"
+ ],
+ "handlerProperties": {
+ "scriptFileName": "contoso-camera-installscript.sh",
+ "arguments": "--pre-install-sim-success --component-name --component-name-val --component-group --component-group-val --component-prop path --component-prop-val path",
+ "installedCriteria": "contoso-virtual-camera-1.2-step-0"
+ }
+ },
+ {
+ "description": "Cameras Update - firmware installation (failure - missing file)",
+ "handler": "microsoft/script:1",
+ "files": [
+ "contoso-camera-installscript.sh",
+ "camera-firmware-1.1.json"
+ ],
+ "handlerProperties": {
+ "scriptFileName": "missing-contoso-camera-installscript.sh",
+ "arguments": "--firmware-file camera-firmware-1.1.json --component-name --component-name-val --component-group --component-group-val --component-prop path --component-prop-val path",
+ "installedCriteria": "contoso-virtual-camera-1.2-step-1"
+ }
+ },
+ {
+ "description": "Cameras Update - post-install step",
+ "handler": "microsoft/script:1",
+ "files": [
+ "contoso-camera-installscript.sh"
+ ],
+ "handlerProperties": {
+ "scriptFileName": "contoso-camera-installscript.sh",
+ "arguments": "--post-install-sim-success --component-name --component-name-val --component-group --component-group-val --component-prop path --component-prop-val path",
+ "installedCriteria": "contoso-virtual-camera-1.2-stop-2"
+ }
+ }
+ ]
+ },
+ "referencedBy": [
+ {
+ "provider": "DU-Client-Eng",
+ "name": "MSOE-Update-Demo",
+ "version": "3.1"
+ }
+ ],
+ "manifestVersion": "4.0",
+ "importedDateTime": "2021-11-17T07:26:14.7376536Z",
+ "createdDateTime": "2021-11-17T07:22:09.2232968Z",
+ "etag": "\"ad7a553d-24a8-492b-9885-9af424d44d58\""
+}
+```
+
+## Parent Update vs. Child Update
+
+For Public Preview Refresh, we will refer to the top-level Update Manifest as `Parent Update` and refer to an Update Manifest specified in a Reference Step as `Child Update`.
+
+Currently, a `Child Update` must not contain any reference steps. This restriction is validated at import time; if it is not followed, the import will fail.
+
+### Inline Step In Parent Update
+
+Inline steps specified in the `Parent Update` are applied to the Host Device. In this case, the ADUC_WorkflowData object that is passed to a Step Handler (also known as an Update Content Handler) will not contain the `Selected Components` data. The handler for this type of step should *not* be a `Component-Aware` handler.
+
+> [!NOTE]
+> See [Steps Content Handler](https://github.com/Azure/iot-hub-device-update/tree/main/src/content_handlers/steps_handler/README.md) and [Implementing a custom component-Aware Content Handler](https://github.com/Azure/iot-hub-device-update/tree/main/docs/agent-reference/how-to-implement-custom-update-handler.md) for more details.
+
+### Reference Step In Parent Update
+
+Reference steps specified in the `Parent Update` are applied to components on, or connected to, the Host Device. A **Reference Step** is a step that contains the update identifier of another update, called a `Child Update`. When processing a Reference Step, the Steps Handler downloads the Detached Update Manifest file specified in the Reference Step data, then validates the file's integrity.
+
+Next, the Steps Handler parses the Child Update Manifest and creates an ADUC_Workflow object (also known as Child Workflow Data) by combining the data from the Child Update Manifest with the File URLs information from the Parent Update Manifest. This Child Workflow Data also has a 'level' property set to '1'.
+
+> [!NOTE]
+> For Update Manifest version v4, the Child Update cannot contain any Reference Steps.
+
+## Detached Update Manifest
+
+To avoid deployment failures caused by IoT Hub twin data size limits, any large Update Manifest is delivered in the form of a JSON data file, also called a 'Detached Update Manifest'.
+
+If an update with large content is imported into Device Update for IoT Hub, the generated Update Manifest will contain another payload file called `Detached Update Manifest`, which contains the full data of the Update Manifest.
+
+The `UpdateManifest` property in the Device or Module Twin will contain the Detached Update Manifest file information.
+
+When processing a PnP Property Changed event, the Device Update agent automatically downloads the Detached Update Manifest file and creates an ADUC_WorkflowData object that contains the full Update Manifest data.
+
+
iot-hub-device-update Device Update Plug And Play https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/device-update-plug-and-play.md
Title: Understand how Device Update for IoT Hub uses IoT Plug and Play | Microso
description: Device Update for IoT Hub uses to discover and manage devices that are over-the-air update capable. Previously updated : 2/14/2021 Last updated : 1/26/2022 # Device Update for IoT Hub and IoT Plug and Play
-Device Update for IoT Hub uses [IoT Plug and Play](../iot-develop/index.yml) to discover and manage devices that are over-the-air update capable. The Device Update service will send and receive properties and messages to and from devices using IoT Plug and Play interfaces. Device Update for IoT Hub requires IoT devices to implement the following interfaces and model-id as described below.
+Device Update for IoT Hub uses [IoT Plug and Play](../iot-develop/index.yml) to discover and manage devices that are over-the-air update capable. The Device Update service sends and receives properties and messages to and from devices using IoT Plug and Play interfaces. Device Update for IoT Hub requires IoT devices to implement the following interfaces and model-id.
-Concepts:
-* Understand the [IoT Plug and Play device client](../iot-develop/concepts-developer-guide-device.md?pivots=programming-language-csharp).
+Concepts:
+* Understand the [IoT Plug and Play device client](../iot-develop/concepts-developer-guide-device.md?pivots=programming-language-csharp).
* See how the [Device Update agent is implemented](https://github.com/Azure/iot-hub-device-update/blob/main/docs/agent-reference/how-to-build-agent-code.md).
-## ADU Core Interface
+## Device Update Core Interface
-The 'ADUCoreInterface' interface is used to send update actions and metadata to devices and receive update status from devices. The 'ADU Core' interface is split into two Object properties.
+The 'DeviceUpdateCore' interface is used to send update actions and metadata to devices and receive update status from devices. The 'DeviceUpdateCore' interface is split into two Object properties.
-The expected component name in your model is **"deviceUpdate"** when implementing this interface. [Learn more about Azure IoT Plug and Play Components](../iot-develop/concepts-modeling-guide.md)
+The expected component name in your model is **"deviceUpdate"** when this interface is implemented. [Learn more about Azure IoT Plug and Play Components](../iot-develop/concepts-modeling-guide.md)
### Agent Metadata
-Agent Metadata contains fields that the device or Device Update agent uses to send
-information and status to Device Update services.
+The Device Update agent uses Agent Metadata fields to send
+information to Device Update services.
|Name|Schema|Direction|Description|Example|
|-|-|-|-|-|
-|resultCode|integer|device to cloud|A code that contains information about the result of the last update action. Can be populated for either success or failure and should follow [http status code specification](https://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html).|500|
+|deviceProperties|Map|device to cloud|The set of properties that contain the manufacturer, model, and other device information.|See other examples for details|
+|compatPropertyNames|String (Comma separated)|device to cloud|The device reported properties that are used to check for compatibility of the device to target the update deployment. Limited to five device properties|"compatPropertyNames": "manufacturer,model"|
+|lastInstallResult|Map|device to cloud|The result reported by the agent. It contains result code, extended result code, and result details for main update and other step updates||
+|resultCode|integer|device to cloud|A code that contains information about the result of the last update action. Can be populated for either success or failure.|700|
|extendedResultCode|integer|device to cloud|A code that contains additional information about the result. Can be populated for either success or failure.|0x80004005|
-|state|integer|device to cloud|It is an integer that indicates the current state of the Device Update Agent. See below for details |Idle|
-|installedUpdateId|string|device to cloud|An ID of the update that is currently installed (through Device Update). This value will be a string capturing the Update Id JSON or null for a device that has never taken an update through Device Update.|"{\"provider\":\"contoso\",\"name\":\"image-update\",\"version\":\"1.0.0\"}"|
-|`deviceProperties`|Map|device to cloud|The set of properties that contain the manufacturer and model.|See below for details
+|resultDetails|string|device to cloud|Customer-defined free form string to provide additional result details. Returned to the twin without parsing||
+|stepResults|map|device to cloud|The result reported by the agent containing result code, extended result code, and result details for step updates | "step_1": { "resultCode": 0,"extendedResultCode": 0, "resultDetails": ""}|
+|state|integer|device to cloud|It is an integer that indicates the current state of the Device Update agent. See State section for details |0|
+|workflow|complex|device to cloud|A set of values that indicates which deployment the agent is currently working on, the ID of the current deployment, and acknowledgment of any retry request sent from service to agent.|"workflow": {"action": 3,"ID": "11b6a7c3-6956-4b33-b5a9-87fdd79d2f01","retryTimestamp": "2022-01-26T11:33:29.9680598Z"}|
+|installedUpdateId|string|device to cloud|An ID of the update that is currently installed (through Device Update). This value will be a string capturing the Update ID JSON, or null for a device that has never taken an update through Device Update.|"{\"provider\":\"contoso\",\"name\":\"image-update\",\"version\":\"1.0.0\"}"|
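+
+You can inspect what an agent is reporting through this interface from the command line (a minimal sketch; the component name "deviceUpdate" is the one stated above, and the hub and device names are placeholders):
+
+```sh
+# Show the reported Device Update agent metadata from the device twin
+az iot hub device-twin show \
+  --hub-name <your-hub> --device-id <your-device> \
+  --query 'properties.reported.deviceUpdate'
+```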
+ #### State
-It is the status reported by the Device Update Agent after receiving an action from the Device Update Service. `State` is reported in response to an `Action` (see `Actions` below) sent to the Device Update Agent from the Device Update Service. See the [overview workflow](understand-device-update.md#device-update-agent) for requests that flow between the Device Update Service and the Device Update Agent.
+It is the status reported by the Device Update (DU) agent after receiving an action from the Device Update service. `State` is reported in response to an `Action` (see `Actions` section) sent to the Device Update agent from the Device Update service. See the [overview workflow](understand-device-update.md#device-update-agent) for requests that flow between the Device Update service and the Device Update agent.
|Name|Value|Description|
|-|-|-|
-|Idle|0|The device is ready to receive an action from the Device Update Service. After a successful update, state is returned to the `Idle` state.|
-|DownloadSucceeded|2|A successful download.|
-|InstallSucceeded|4|A successful install.|
+|Idle|0|The device is ready to receive an action from the Device Update service. After a successful update, state is returned to the `Idle` state.|
+|DeploymentInProgress|6|A deployment is in progress.|
|Failed|255|A failure occurred during updating.|
+|DownloadSucceeded|2|A successful download. This status is only reported by devices with agent version 0.7.0 or older.|
+|InstallSucceeded|4|A successful install. This status is only reported by devices with agent version 0.7.0 or older.|
#### Device Properties
It is the set of properties that contain the manufacturer, model, and other device information.
|Name|Schema|Direction|Description|
|-|-|-|-|
-|manufacturer|string|device to cloud|The device manufacturer of the device, reported through `deviceProperties`. This property is read from one of two places-the 'AzureDeviceUpdateCore' interface will first attempt to read the 'aduc_manufacturer' value from the [Configuration file](device-update-configuration-file.md) file. If the value is not populated in the configuration file, it will default to reporting the compile-time definition for ADUC_DEVICEPROPERTIES_MANUFACTURER. This property will only be reported at boot time. Default value 'Contoso'|
-|model|string|device to cloud|The device model of the device, reported through `deviceProperties`. This property is read from one of two places-the AzureDeviceUpdateCore interface will first attempt to read the 'aduc_model' value from the [Configuration file](device-update-configuration-file.md) file. If the value is not populated in the configuration file, it will default to reporting the compile-time definition for ADUC_DEVICEPROPERTIES_MODEL. This property will only be reported at boot time. Default value 'Video'|
+|manufacturer|string|device to cloud|The device manufacturer of the device, reported through `deviceProperties`. This property is read from one of two places - the 'DeviceUpdateCore' interface will first attempt to read the 'aduc_manufacturer' value from the [configuration file](device-update-configuration-file.md). If the value is not populated in the configuration file, it will default to reporting the compile-time definition for ADUC_DEVICEPROPERTIES_MANUFACTURER. This property will only be reported at boot time. Default value: 'Contoso'.|
+|model|string|device to cloud|The device model of the device, reported through `deviceProperties`. This property is read from one of two places - the DeviceUpdateCore interface will first attempt to read the 'aduc_model' value from the [configuration file](device-update-configuration-file.md). If the value is not populated in the configuration file, it will default to reporting the compile-time definition for ADUC_DEVICEPROPERTIES_MODEL. This property will only be reported at boot time. Default value: 'Video'.|
+|interfaceId|string|device to cloud|This property is used by the service to identify the interface version being used by the Device Update agent. It is required by the Device Update service to manage and communicate with the agent. This property is set to 'dtmi:azure:iot:deviceUpdate;1' for devices using DU agent version 0.8.0.|
|aduVer|string|device to cloud|Version of the Device Update agent running on the device. This value is read from the build only if during compile time ENABLE_ADU_TELEMETRY_REPORTING is set to 1 (true). Customers can choose to opt out of version reporting by setting the value to 0 (false). [How to customize Device Update agent properties](https://github.com/Azure/iot-hub-device-update/blob/main/docs/agent-reference/how-to-build-agent-code.md).|
|doVer|string|device to cloud|Version of the Delivery Optimization agent running on the device. The value is read from the build only if during compile time ENABLE_ADU_TELEMETRY_REPORTING is set to 1 (true). Customers can choose to opt out of version reporting by setting the value to 0 (false). [How to customize Delivery Optimization agent properties](https://github.com/microsoft/do-client/blob/main/README.md#building-do-client-components).|
+|Custom compatibility properties|User-defined|device to cloud|Implementers can define other device properties to be used for the compatibility check when targeting the update deployment.|
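+
+For example, a device builder who wants deployments targeted by an extra attribute could report a custom property alongside the defaults and include it in `compatPropertyNames`. The following is a hypothetical sketch; `batteryType` is an illustrative name, not a built-in property:
+
+```json
+"deviceProperties": {
+    "manufacturer": "contoso",
+    "model": "virtual-vacuum-v1",
+    "batteryType": "lithium-ion"
+},
+"compatPropertyNames": "manufacturer,model,batteryType"
+```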
IoT Hub Device Twin sample:
```json
- "azureDeviceUpdateAgent": {
- "__t": "c",
- "client": {
- "state": 0,
- "resultCode": 200,
- "extendedResultCode": 0,
- "deviceProperties": {
- "manufacturer": "Contoso",
- "model": "Video",
- "aduVer": "DU;agent/0.6.0",
- "doVer": "DU;lib/v0.4.0,DU;agent/v0.4.0,DU;plugin-apt/v0.2.0"
- },
- "installedUpdateId": "{\"provider\":\"Contoso\",\"name\":\"SampleUpdate1\",\"version\":\"1.0.4\"}"
- },
+"deviceUpdate": {
+ "__t": "c",
+ "agent": {
+ "deviceProperties": {
+ "manufacturer": "contoso",
+ "model": "virtual-vacuum-v1",
+ "interfaceId": "dtmi:azure:iot:deviceUpdate;1",
+ "aduVer": "DU;agent/0.8.0-rc1-public-preview",
+ "doVer": "DU;lib/v0.6.0+20211001.174458.c8c4051,DU;agent/v0.6.0+20211001.174418.c8c4051"
+ },
+ "compatPropertyNames": "manufacturer,model",
+ "lastInstallResult": {
+ "resultCode": 700,
+ "extendedResultCode": 0,
+ "resultDetails": "",
+ "stepResults": {
+ "step_0": {
+ "resultCode": 700,
+ "extendedResultCode": 0,
+ "resultDetails": ""
}
+ }
+ },
+ "state": 0,
+ "workflow": {
+ "action": 3,
+    "id": "11b6a7c3-6956-4b33-b5a9-87fdd79d2f01",
+ "retryTimestamp": "2022-01-26T11:33:29.9680598Z"
+ },
+ "installedUpdateId": "{\"provider\":\"Contoso\",\"name\":\"Virtual-Vacuum\",\"version\":\"5.0\"}"
+ },
```
-Note:
-The device or module must add the {"__t": "c"} marker to indicate that the element refers to a component, learn more [here](../iot-develop/concepts-convention.md#sample-multiple-components-writable-property).
+>[!NOTE]
+>The device or module must add the `{"__t": "c"}` marker to indicate that the element refers to a component. Learn more in the [IoT Plug and Play conventions](../iot-develop/concepts-convention.md#sample-multiple-components-writable-property).
### Service Metadata
Service Metadata contains fields that the Device Update service uses to communicate with the Device Update agent.
|Name|Schema|Direction|Description|
|-|-|-|-|
-|action|integer|cloud to device|It is an integer that corresponds to an action the agent should perform. Values listed below.|
-|updateManifest|string|cloud to device|Used to describe the content of an update. Generated from the [Import Manifest](import-update.md#create-a-device-update-import-manifest)|
+|action|integer|cloud to device|An integer that corresponds to an action the agent should perform. Values are listed in the Action section.|
+|updateManifest|string|cloud to device|Used to describe the content of an update. Generated from the [Import Manifest](create-update.md)|
|updateManifestSignature|JSON Object|cloud to device|A JSON Web Signature (JWS) with JSON Web Keys used for source verification.|
-|fileUrls|Map|cloud to device|Map of `FileHash` to `DownloadUri`. Tells the agent, which files to download and the hash to use to verify the files were downloaded correctly.|
+|fileUrls|Map|cloud to device|Map of `FileID` to `DownloadUrl`. Tells the agent which files to download and the hash to use to verify that the files were downloaded correctly.|
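+
+As a rough illustration, these service-to-device fields might appear in the twin's desired section as in the following sketch. All values are placeholders; the manifest, signature, file ID, and URL are invented stand-ins, not real service output:
+
+```json
+"deviceUpdate": {
+    "__t": "c",
+    "service": {
+        "workflow": {
+            "action": 3,
+            "id": "<deployment-id>"
+        },
+        "updateManifest": "<serialized-update-manifest-json>",
+        "updateManifestSignature": "<jws-signature>",
+        "fileUrls": {
+            "<file-id>": "<download-url>"
+        }
+    }
+}
+```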
#### Action
-`Actions` below represents the actions taken by the Device Update Agent as instructed by the Device Update Service. The Device Update Agent will report a `State` (see `State` section above) processing the `Action` received. See the [overview workflow](understand-device-update.md#device-update-agent) for requests that flow between the Device Update Service and the Device Update Agent.
+`Actions` in this section represent the actions taken by the Device Update agent as instructed by the Device Update service. The Device Update agent will report a `State` (see the `State` section) after processing the `Action` received. See the [overview workflow](understand-device-update.md#device-update-agent) for requests that flow between the Device Update service and the Device Update agent.
|Name|Value|Description|
|-|-|-|
-|Download|0|Download published content or update and any other content needed|
-|Install|1|Install the content or update. Typically this means calling the installer for the content or update.|
-|Apply|2|Finalize the update. It signals the system to reboot if necessary.|
-|Cancel|255|Stop processing the current action and go back to `Idle`. Will also be used to tell the agent in the `Failed` state to go back to `Idle`.|
+|ApplyDeployment|3|Apply the update. It signals the device to apply the deployed update.|
+|Cancel|255|Stop processing the current action and go back to `Idle`. It is also used to tell the agent in the `Failed` state to go back to `Idle`.|
+|Download|0|Download published content or update and any other content needed. This action is only sent to devices with agent version 0.7.0 or older.|
+|Install|1|Install the content or update. Typically, this action means calling the installer for the content or update. This action is only sent to devices with agent version 0.7.0 or older.|
+|Apply|2|Finalize the update. It signals the system to reboot if necessary. This action is only sent to devices with agent version 0.7.0 or older.|
## Device Information Interface
-The Device Information Interface is a concept used within [IoT Plug and Play architecture](../iot-develop/overview-iot-plug-and-play.md). It contains device to cloud properties that provide information about the hardware and operating system of the device. Device Update for IoT Hub uses the DeviceInformation.manufacturer and DeviceInformation.model properties for telemetry and diagnostics. To learn more about Device Information interface, see this [example](https://devicemodels.azure.com/dtmi/azure/devicemanagement/deviceinformation-1.json).
+The Device Information interface is a concept used within [IoT Plug and Play architecture](../iot-develop/overview-iot-plug-and-play.md). It contains device to cloud properties that provide information about the hardware and operating system of the device. Device Update for IoT Hub uses the DeviceInformation.manufacturer and DeviceInformation.model properties for telemetry and diagnostics. To learn more about the Device Information interface, see this [example](https://devicemodels.azure.com/dtmi/azure/devicemanagement/deviceinformation-1.json).
-The expected component name in your model is **deviceInformation** when implementing this interface. [Learn about Azure IoT Plug and Play Components](../iot-develop/concepts-modeling-guide.md)
+The expected component name in your model is **deviceInformation** when this interface is implemented. [Learn about Azure IoT Plug and Play Components](../iot-develop/concepts-modeling-guide.md)
|Name|Type|Schema|Direction|Description|Example|
|-|-|-|-|-|-|
-|manufacturer|Property|string|device to cloud|Company name of the device manufacturer. This could be the same as the name of the original equipment manufacturer (OEM).|Contoso|
+|manufacturer|Property|string|device to cloud|Company name of the device manufacturer. This property could be the same as the name of the original equipment manufacturer (OEM).|Contoso|
|model|Property|string|device to cloud|Device model name or ID.|IoT Edge Device|
|swVersion|Property|string|device to cloud|Version of the software on your device. swVersion could be the version of your firmware.|4.15.0-122|
|osName|Property|string|device to cloud|Name of the operating system on the device.|Ubuntu Server 18.04|
The expected component name in your model is **deviceInformation** when implementing this interface.
|totalStorage|Property|string|device to cloud|Total available storage on the device in kilobytes.|2048|
|totalMemory|Property|string|device to cloud|Total available memory on the device in kilobytes.|256|
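
Assembled from the example values in the table above, a reported `deviceInformation` component might look like the following sketch (all values are illustrative):

```json
"deviceInformation": {
    "__t": "c",
    "manufacturer": "Contoso",
    "model": "IoT Edge Device",
    "swVersion": "4.15.0-122",
    "osName": "Ubuntu Server 18.04",
    "totalStorage": "2048",
    "totalMemory": "256"
}
```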
-## Model ID
+## Model ID
Model ID is how smart devices advertise their capabilities to Azure IoT applications with IoT Plug and Play. To learn how to build smart devices that advertise their capabilities to Azure IoT applications, see the [IoT Plug and Play device developer guide](../iot-develop/concepts-developer-guide-device.md).
-Device Update for IoT Hub requires the IoT Plug and Play smart device to announce a model ID with a value of **"dtmi:AzureDeviceUpdate;1"** as part of the device connection. [Learn how to announce a model ID](../iot-develop/concepts-developer-guide-device.md#model-id-announcement).
+Device Update for IoT Hub requires the IoT Plug and Play smart device to announce a model ID with a value of **"dtmi:azure:iot:deviceUpdate;1"** as part of the device connection. [Learn how to announce a model ID](../iot-develop/concepts-developer-guide-device.md#model-id-announcement).
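+
+For example, a device that provisions through the Device Provisioning Service typically announces the model ID in its DPS registration payload. A minimal sketch (exact mechanics vary by SDK and transport):
+
+```json
+{
+    "modelId": "dtmi:azure:iot:deviceUpdate;1"
+}
+```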
iot-hub-device-update Device Update Proxy Update Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/device-update-proxy-update-troubleshooting.md
+
+ Title: Troubleshooting for importing proxy updates to Device Update for Azure IoT Hub | Microsoft Docs
+description: This document provides troubleshooting steps for error messages that may occur when importing proxy updates to Device Update for IoT Hub.
+Last updated: 1/5/2022
+# Device Update for IoT troubleshooting guide for importing proxy updates
+
+This document provides troubleshooting steps and a table of error messages that you may encounter when importing [proxy updates](device-update-proxy-updates.md) into Device Update for IoT Hub.
++
+## Error messages
+
+| Error message | How to troubleshoot |
+|-|-|
+|**No import manifest was found in this upload. The file extension for import manifests is _.importmanifest.json_** | At least one import manifest is required for any update to be imported into Device Update for IoT Hub. A proxy update may have both a parent import manifest and also some number of child import manifests referenced from the parent. <br><br> A possible reason for this error is that you have valid import manifest(s) but they don't have the _.importmanifest.json_ extension at the end of the file name. This extension is required for the import manifests to be recognized by the import process in the Azure portal. If the extension is correct, you should review the [schema](import-schema.md) of each import manifest in your update for any issues. [Learn more about import manifests.](create-update.md) |
+|**This upload is missing a required parent manifest. The file extension for import manifests is _.importmanifest.json_** | A proxy update may have a parent import manifest and also some number of child import manifests referenced from the parent. A parent manifest must be included when any child updates are being imported, as it contains required information about those child updates. <br><br> A possible reason for this error is that you have a valid parent import manifest but it doesn't have the _.importmanifest.json_ extension at the end of the file name. This extension is required for the import manifests to be recognized by the import process in the Azure portal. If the extension is correct, you should review the [schema](import-schema.md) of the parent import manifest for any issues. [Learn more about import manifests.](create-update.md) |
+|**This upload contains _[n]_ parent manifests. Only one is allowed. Delete the manifests you don't want to use and try uploading again.** | A proxy update may have a parent import manifest and also some number of child import manifests referenced from the parent. Only one parent manifest can be included for a given update, though there can be any number of child import manifests. If you see this error along with a list of import manifest files, _each_ of those files has information indicating it's a parent import manifest. <br><br> To address this issue, first determine which parent import manifest matches the update you're importing, and then remove any others so there's just one parent import manifest. [Learn more about import manifests.](create-update.md) |
+|**Upload is missing one or more of the child manifests listed in the parent** _('parentimportfile.importmanifest.json')_**. Add the required child manifests for these update IDs** | A proxy update may have a parent import manifest and also some number of child import manifests. The parent import manifest includes references to all the child import manifests in your update. If you see this error, your parent import manifest references a child import manifest for each of the listed update IDs, but those child import manifest(s) aren't part of your update. <br><br> To address this issue, you'll need to add each of those child import manifests to your update, or else remove the references in the parent import manifest. [Learn more about import manifests.](create-update.md) |
+|**Upload contains child manifest file(s)** _('childmanifest.importmanifest.json')_ **that aren't listed in the parent. Delete it and try again.** | A proxy update may have a parent import manifest and also some number of child import manifests. The parent import manifest includes references to all the child import manifests in your update. If you see this error, each listed manifest is a child import manifest which is present in the update but isn't referenced in the parent import manifest. <br><br> To address this issue, you'll need to remove those child import manifests. Or, you can add references to them in your parent import manifest. [Learn more about import manifests.](create-update.md) |
+|**Some required update files were missing. Include them, and try your upload again.** | A proxy update may have multiple import manifests, each referencing multiple update files. If any of the files referenced aren't included when you import your update, you'll see this error. <br><br> To address this issue, you'll need to add the files that are missing, or else remove the references to those files from the import manifest that includes them. [Learn more about import manifests.](create-update.md) |
+|**Upload contains one or more files that aren't listed in the manifest. Delete the extra files and try your upload again.** | A proxy update may have multiple import manifests, each referencing multiple update files. You'll see this error if you try to import any update files that aren't referenced in an import manifest. <br><br> To address this issue, remove the files listed in the error message. Or, add a reference for each file to one of your import manifests. [Learn more about import manifests.](create-update.md) |
+|**Upload contains duplicate file names. Delete or rename files so that each name is unique.** | An update can contain multiple files, but each file must have a unique file name. If you try to import any update files that have the same name, you'll see this error. <br><br> To address this issue, remove or rename the files listed in the error message. If you rename any files, be sure to also change the associated reference for each file in the appropriate import manifest. [Learn more about import manifests.](create-update.md) |
+|**One or more import manifest wasn't formatted correctly. Delete the file or adjust its syntax, and try again.** | If you see this error, there's an issue with how your import manifest(s) were created. To resolve this issue, review each listed import manifest and check that there are no [schema](import-schema.md) issues. [Learn more about import manifests.](create-update.md) |
++
+<!-- Make sections visible when content is available --
+## Troubleshooting
+
+## FAQs
+-->
+
+## Next steps
+
+- [Troubleshoot other issues with Device Update](troubleshoot-device-update.md)
iot-hub-device-update Device Update Proxy Updates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/device-update-proxy-updates.md
+
+ Title: Using Proxy Updates with Device Update for Azure IoT Hub | Microsoft Docs
+description: Using Proxy Updates with Device Update for Azure IoT Hub
+Last updated: 11/12/2021
+# Proxy Updates and multi-component updating
+
+Proxy updates support updating multiple **components** on a target IoT device connected to IoT Hub. With proxy updates, you can (1) target over-the-air updates to multiple components on the IoT device or (2) target over-the-air updates to multiple sensors connected to the IoT device. Use cases where proxy updates are applicable include:
+
+* Targeting specific update files to different partitions on the device.
+* Targeting specific update files to different apps or components on the device.
+* Targeting specific update files to sensors connected to an IoT device. These sensors could be connected to the IoT device over a network protocol (for example, USB or CAN bus).
+
+## Prerequisite
+To update a component or components connected to a target IoT device, the device builder must register a custom **Component Enumerator Extension** that is built specifically for their IoT devices. The Component Enumerator Extension is required so that the Device Update agent can map a **'child update'** to the specific component, or group of components, that the update is intended for. See [Contoso Component Enumerator](components-enumerator.md) for an example of how to implement and register a custom Component Enumerator extension.
+
+> [!NOTE]
+> Device Update *service* does not know anything about **component(s)** on the target device. Only the Device Update agent does the above mapping.
+
+## Example Proxy update
+In the following example, we demonstrate how to do a proxy update using the multi-step ordered execution feature introduced in the Public Preview Refresh release. The multi-step ordered execution feature allows granular update control, including an install order and pre-install, install, and post-install steps. For example, a deployment might require a pre-install check that validates the device state before an update starts. Learn more about [multi-step ordered execution](device-update-multi-step-updates.md). A sketch of a parent import manifest that uses these concepts follows.
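+
+Under stated assumptions, such a parent import manifest might look like the sketch below. The update IDs, file name, and step contents are hypothetical and are only meant to show the multi-step structure (an inline pre-install step followed by a reference to a child update); see the linked articles for the authoritative schema:
+
+```json
+{
+    "updateId": { "provider": "Contoso", "name": "Virtual-Vacuum", "version": "5.0" },
+    "compatibility": [ { "manufacturer": "contoso", "model": "virtual-vacuum-v1" } ],
+    "instructions": {
+        "steps": [
+            {
+                "type": "inline",
+                "description": "pre-install check",
+                "handler": "microsoft/script:1",
+                "files": [ "preinstall-check.sh" ]
+            },
+            {
+                "type": "reference",
+                "description": "update a connected component",
+                "updateId": { "provider": "Contoso", "name": "Virtual-Vacuum-Motors", "version": "1.0" }
+            }
+        ]
+    },
+    "createdDateTime": "2022-01-26T11:33:29Z",
+    "manifestVersion": "4.0"
+}
+```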
+
+See this tutorial on how to do a [Proxy update using the Device Update agent](device-update-howto-proxy-updates.md) with sample updates for components connected to a Contoso Virtual Vacuum device.
iot-hub-device-update Device Update Raspberry Pi https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/device-update-raspberry-pi.md
Title: Device Update for Azure IoT Hub tutorial using the Raspberry Pi 3 B+ Reference Yocto Image
description: Get started with Device Update for Azure IoT Hub using the Raspberry Pi 3 B+ Reference Yocto Image.
Previously updated: 2/11/2021
Last updated: 1/26/2022
# Device Update for Azure IoT Hub tutorial using the Raspberry Pi 3 B+ Reference Image
-Device Update for IoT Hub supports two forms of updates – image-based
-and package-based.
+Device Update for IoT Hub supports image-based, package-based, and script-based updates.
Image updates provide a higher level of confidence in the end-state of the device. It is typically easier to replicate the results of an image update between a pre-production environment and a production environment, since it doesn't pose the same challenges as packages and their dependencies. Due to their atomic nature, one can also adopt an A/B failover model easily.
We provide sample images in "Assets" on the [Device Update GitHub releases page]
## Flash SD card with image
Using your favorite OS flashing tool, install the Device Update base image
-(adu-base-image) on the SD Card that will be used in the Raspberry Pi 3 B+
+(adu-base-image) on the SD card that will be used in the Raspberry Pi 3 B+
device. ### Using bmaptool to flash SD card
Device Update for Azure IoT Hub software is subject to the following license terms:
* [Device update for IoT Hub license](https://github.com/Azure/iot-hub-device-update/blob/main/LICENSE.md)
* [Delivery optimization client license](https://github.com/microsoft/do-client/blob/main/LICENSE)
-Read the license terms prior to using the agent. Your installation and use constitutes your acceptance of these terms. If you do not agree with the license terms, do not use the Device update for IoT Hub agent.
+Read the license terms prior to using the agent. Your installation and use constitutes your acceptance of these terms. If you do not agree with the license terms, do not use the Device Update for IoT Hub agent.
## Create device or module in IoT Hub and get connection string
Now, the device needs to be added to the Azure IoT Hub. From within Azure
IoT Hub, a connection string will be generated for the device.
1. From the Azure portal, launch the Azure IoT Hub.
-2. Create a new device.
-3. On the left-hand side of the page, navigate to 'IoT Devices' >
- Select "New".
-4. Provide a name for the device under 'Device ID'--Ensure that "Autogenerate
- keys" is checkbox is selected.
-5. Select 'Save'.
-6. Now you will be returned to the 'Devices' page and the device you created should be in the list.
-7. Get the device connection string:
+
+2. Create a new device.
+
+3. On the left-hand side of the page, navigate to 'IoT Devices' and select 'New'.
+
+4. Provide a name for the device under 'Device ID'. Ensure that the 'Autogenerate keys' checkbox is selected.
+
+5. Select 'Save'. You will be returned to the 'Devices' page, and the device you created should be in the list.
+
+6. Get the device connection string:
   - Option 1: Using the Device Update agent with a module identity: From the same 'Devices' page, click '+ Add Module Identity' at the top. Create a new Device Update module with the name 'IoTHubDeviceUpdate', choose other options as they apply to your use case, and then click 'Save'. Click the newly created module and, in the module view, select the 'Copy' icon next to 'Primary Connection String'.
   - Option 2: Using the Device Update agent with the device identity: In the device view, select the 'Copy' icon next to 'Primary Connection String'.
+
7. Paste the copied characters somewhere for later use in the steps below. **This copied string is your device connection string**.
-## Provision connection string on SD card
+## Prepare on-device configurations for Device Update for IoT Hub
+
+Two configuration files must be present on the device so that Device Update for IoT Hub is configured properly. The first is the `du-config.json` file, which must exist at `/adu/du-config.json`. The second is the `du-diagnostics-config.json` file, which must exist at `/adu/du-diagnostics-config.json`.
+
+Here are two examples for the `du-config.json` and the `du-diagnostics-config.json` files:
+
+### Example du-config.json
+```JSON
+ {
+ "schemaVersion": "1.0",
+ "aduShellTrustedUsers": [
+ "adu",
+ "do"
+ ],
+ "manufacturer": "fabrikam",
+ "model": "vacuum",
+ "agents": [
+ {
+ "name": "main",
+ "runas": "adu",
+ "connectionSource": {
+ "connectionType": "string",
+ "connectionData": "HostName=example-connection-string.azure-devices.net;DeviceId=example-device;SharedAccessKey=M5oK/rOP12aB5678YMWv5vFWHFGJFwE8YU6u0uTnrmU="
+ },
+ "manufacturer": "fabrikam",
+ "model": "vacuum"
+ }
+ ]
+ }
+```
+
+### Example du-diagnostics-config.json
+```JSON
+ {
+ "logComponents":[
+ {
+ "componentName":"adu",
+ "logPath":"/adu/logs/"
+ },
+ {
+ "componentName":"do",
+ "logPath":"/var/log/deliveryoptimization-agent/"
+ }
+ ],
+ "maxKilobytesToUploadPerLogPath":50
+ }
+```
+## Instructions for configuring the Device Update agent on the Raspberry Pi
1. Make sure that the Raspberry Pi 3 is connected to the network.
-2. In PowerShell, use the below command to ssh into the device
- ```markdown
- ssh raspberrypi3 -l root
- ```
-4. Enter login as 'root', and password should be left as empty.
-5. After you successfully ssh into the device, run the below commands
+
+2. Follow the instructions below to add the configuration details:
+
+   1. First, SSH into the machine using the following command in a PowerShell window:
+
+ ```shell
+ ssh raspberrypi3 -l root
+ ```
+   2. Once logged into the device, create or open the `du-config.json` file for editing:
+
+ ```bash
+ nano /adu/du-config.json
+ ```
+   3. After running the command, you should see an open editor with the file. If you have never created the file, it will be empty. Copy the example `du-config.json` contents above and substitute the configurations required for your device. You will also need to replace the example connection string with the one for the device you created in the steps above.
+
+   4. Once you have completed your changes, press `Ctrl+X` to exit the editor, and then enter `y` when prompted to save the changes.
+
+   5. Next, create the `du-diagnostics-config.json` file using similar commands. Start by creating or opening the file for editing:
+ ```bash
+ nano /adu/du-diagnostics-config.json
+ ```
+   6. Copy the example `du-diagnostics-config.json` contents above and substitute any configurations that differ from the default build. The example file represents the default log locations for Device Update for IoT Hub; you only need to change them if your implementation differs.
+
+   7. Once you have completed your changes, press `Ctrl+X` to exit the editor, and then enter `y` when prompted to save the changes.
-Replace `<device connection string>` with your connection string
- ```markdown
- echo "connection_string=<device connection string>" > /adu/adu-conf.txt
- echo "aduc_manufacturer=ADUTeam" >> /adu/adu-conf.txt
- echo "aduc_model=RefDevice" >> /adu/adu-conf.txt
- ```
+   8. Now use the following command to list the files in the `/adu/` directory. You should see both of your configuration files:
+
+ ```bash
+ ls -la /adu/
+ ```
+
+3. Restart the Device Update system daemon to make sure that the configurations are applied. Use the following command in the terminal logged into the Raspberry Pi:
+
+```bash
+  systemctl restart adu-agent
+```
+
+4. You now need to check that the agent is live using the following command:
+
+```bash
+  systemctl status adu-agent
+```
+ You should see the status reported as active (running), shown in green.
## Connect the device in Device Update IoT Hub
1. On the left-hand side of the page, select 'IoT Devices'.
2. Select the link with your device name.
3. At the top of the page, select 'Device Twin' if directly connecting to Device Update using the IoT device identity. Otherwise select the module you created above and click on its 'Module Twin'.
-4. Under the 'reported' section of the device twin properties, look for the Linux kernel version.
+4. Under the 'reported' section of the Device Twin properties, look for the Linux kernel version.
For a new device, which hasn't received an update from Device Update, the [DeviceManagement:DeviceInformation:1.swVersion](device-update-plug-and-play.md) value will represent the firmware version running on the device. Once an update has been applied to a device, Device Update will
Use that version number in the Import Update step below.
## Import update
-1. Download the [sample import manifest](https://github.com/Azure/iot-hub-device-update/releases/download/0.7.0/TutorialImportManifest_Pi.json) and [sample image update](https://github.com/Azure/iot-hub-device-update/releases/download/0.7.0-rc1/adu-update-image-raspberrypi3-0.6.5073.1.swu).
-2. Log in to the [Azure portal](https://portal.azure.com/) and navigate to your IoT Hub with Device Update. Then, select the Device Updates option under Automatic Device Management from the left-hand navigation bar.
+1. Download the sample import manifest (TutorialImportManifest_Pi.json) and sample update (adu-update-image-raspberrypi3-0.6.5073.1.swu) from [Release Assets](https://github.com/Azure/iot-hub-device-update/releases) for the latest agent.
+
+2. Log in to the [Azure portal](https://portal.azure.com/) and navigate to your IoT Hub with Device Update. Then, select the Updates option under Automatic Device Management from the left-hand navigation bar.
3. Select the Updates tab.
4. Select "+ Import New Update".
-5. Select the folder icon or text box under "Select an Import Manifest File". You will see a file picker dialog. Select the _sample import manifest_ you downloaded in step 1 above. Next, select the folder icon or text box under "Select one or more update files". You will see a file picker dialog. Select the _sample update file_ that you downloaded in step 1 above.
+
+5. Select "+ Select from storage container". Select an existing account or create a new account using "+ Storage account". Then select an existing container or create a new container using "+ Container". This container will be used to stage your update files for importing.
+ > [!NOTE]
+ > We recommend using a new container each time you import an update to avoid accidentally importing files from previous updates. If you don't use a new container, be sure to delete any files from the existing container before completing this step.
- :::image type="content" source="media/import-update/select-update-files.png" alt-text="Screenshot showing update file selection." lightbox="media/import-update/select-update-files.png":::
+ :::image type="content" source="media/import-update/storage-account-ppr.png" alt-text="Storage Account" lightbox="media/import-update/storage-account-ppr.png":::
-6. Select the folder icon or text box under "Select a storage container". Then select the appropriate storage account.
+6. In your container, select "Upload" and navigate to the files downloaded in **Step 1**. When you've selected all your update files, select "Upload". Then click the "Select" button to return to the "Import update" page.
-7. If you've already created a container, you can reuse it. (Otherwise, select "+ Container" to create a new storage container for updates.) Select the container you wish to use and click "Select".
-
- :::image type="content" source="media/import-update/container.png" alt-text="Screenshot showing container selection." lightbox="media/import-update/container.png":::
+ :::image type="content" source="media/import-update/import-select-ppr.png" alt-text="Select Uploaded Files" lightbox="media/import-update/import-select-ppr.png":::
+ _This screenshot shows the import step; file names may not match the ones used in the example._
-8. Select "Submit" to start the import process.
+7. On the Import update page, review the files to be imported. Then select "Import update" to start the import process.
-9. The import process begins, and the screen changes to the "Import History" section. Select "Refresh" to view progress until the import process completes. Depending on the size of the update, this may complete in a few minutes but could take longer.
-
- :::image type="content" source="media/import-update/update-publishing-sequence-2.png" alt-text="Screenshot showing update import sequence." lightbox="media/import-update/update-publishing-sequence-2.png":::
+ :::image type="content" source="media/import-update/import-start-2-ppr.png" alt-text="Import Start" lightbox="media/import-update/import-start-2-ppr.png":::
-10. When the Status column indicates the import has succeeded, select the "Ready to Deploy" header. You should see your imported update in the list now.
+8. The import process begins, and the screen switches to the "Import History" section. When the `Status` column indicates the import has succeeded, select the "Available Updates" header. You should see your imported update in the list now.
+ :::image type="content" source="media/import-update/update-ready-ppr.png" alt-text="Job Status" lightbox="media/import-update/update-ready-ppr.png":::
+
[Learn more](import-update.md) about importing updates.
## Create update group
-1. Go to the IoT Hub you previously connected to your Device Update instance.
-
-2. Select the Updates option under Device Management from the left-hand navigation bar.
+1. Go to the Groups and Deployments tab at the top of the page.
+ :::image type="content" source="media/create-update-group/ungrouped-devices.png" alt-text="Screenshot of ungrouped devices." lightbox="media/create-update-group/ungrouped-devices.png":::
-3. Select the Groups tab at the top of the page.
+2. Select the "Add group" button to create a new group.
+ :::image type="content" source="media/create-update-group/add-group.png" alt-text="Screenshot of device group addition." lightbox="media/create-update-group/add-group.png":::
-4. Select the Add button to create a new group.
+3. Select an IoT Hub tag and Device Class from the list and then select Create group.
+ :::image type="content" source="media/create-update-group/select-tag.png" alt-text="Screenshot of tag selection." lightbox="media/create-update-group/select-tag.png":::
-5. Select the IoT Hub tag you created in the previous step from the list. Select Create group.
+4. Once the group is created, you will see that the update compliance chart and groups list are updated. The update compliance chart shows the count of devices in various states of compliance: On latest update, New updates available, and Updates in progress. [Learn about update compliance.](device-update-compliance.md)
+ :::image type="content" source="media/create-update-group/updated-view.png" alt-text="Screenshot of update compliance view." lightbox="media/create-update-group/updated-view.png":::
- :::image type="content" source="media/create-update-group/select-tag.PNG" alt-text="Screenshot showing tag selection." lightbox="media/create-update-group/select-tag.PNG":::
+5. You should see your newly created group and any available updates for the devices in the new group. If there are devices that don't meet the device class requirements of the group, they will show up in a corresponding invalid group. You can deploy the best available update to the new user-defined group from this view by clicking on the "Deploy" button next to the group.
[Learn more](create-update-group.md) about adding tags and creating update groups.
## Deploy update
-1. Once the group is created, you should see a new update available for your device group, with a link to the update under Pending Updates. You may need to Refresh once.
+1. Once the group is created, you should see a new update available for your device group, with a link to the update under Best Update (you may need to refresh once). [Learn more about update compliance.](device-update-compliance.md)
-2. Click on the available update.
+2. Select the target group by clicking on the group name. You will be directed to the group details under Group basics.
-3. Confirm the correct group is selected as the target group. Schedule your deployment, then select Deploy update.
+ :::image type="content" source="media/deploy-update/group-basics.png" alt-text="Group details" lightbox="media/deploy-update/group-basics.png":::
- :::image type="content" source="media/deploy-update/select-update.png" alt-text="Select update" lightbox="media/deploy-update/select-update.png":::
+3. To initiate the deployment, go to the Current deployment tab. Click the deploy link next to the desired update from the Available updates section. The best available update for a given group is denoted with a "Best" highlight.
-4. View the compliance chart. You should see the update is now in progress.
+ :::image type="content" source="media/deploy-update/select-update.png" alt-text="Select update" lightbox="media/deploy-update/select-update.png":::
- :::image type="content" source="media/deploy-update/update-in-progress.png" alt-text="Update in progress" lightbox="media/deploy-update/update-in-progress.png":::
+4. Schedule your deployment to start immediately or in the future, then select Create.
-5. After your device is successfully updated, you should see your compliance chart and deployment details update to reflect the same.
+ :::image type="content" source="media/deploy-update/create-deployment.png" alt-text="Create deployment" lightbox="media/deploy-update/create-deployment.png":::
+
+5. The Status under Deployment details should turn to Active, and the deployed update should be marked with "(deploying)".
+
+ :::image type="content" source="media/deploy-update/deployment-active.png" alt-text="Deployment active" lightbox="media/deploy-update/deployment-active.png":::
+
+6. View the compliance chart. You should see the update is now in progress.
+
+7. After your device is successfully updated, you should see your compliance chart and deployment details update to reflect the same.
:::image type="content" source="media/deploy-update/update-succeeded.png" alt-text="Update succeeded" lightbox="media/deploy-update/update-succeeded.png"::: ## Monitor an update deployment
-1. Select the Deployments tab at the top of the page.
+1. Select the Deployment history tab at the top of the page.
- :::image type="content" source="media/deploy-update/deployments-tab.png" alt-text="Deployments tab" lightbox="media/deploy-update/deployments-tab.png":::
+ :::image type="content" source="media/deploy-update/deployments-history.png" alt-text="Deployment History" lightbox="media/deploy-update/deployments-history.png":::
-2. Select the deployment you created to view the deployment details.
+2. Select the details link next to the deployment you created.
:::image type="content" source="media/deploy-update/deployment-details.png" alt-text="Deployment details" lightbox="media/deploy-update/deployment-details.png":::
-3. Select Refresh to view the latest status details. Continue this process until the status changes to Succeeded.
+3. Select Refresh to view the latest status details.
+ You have now completed a successful end-to-end image update using Device Update for IoT Hub on a Raspberry Pi 3 B+ device. ## Clean up resources
-When no longer needed, clean up your device update account, instance, IoT Hub and IoT device.
+When no longer needed, clean up your Device Update account, instance, IoT Hub and IoT device.
## Next steps
iot-hub-device-update Device Update Simulator https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/device-update-simulator.md
Title: Device Update for Azure IoT Hub tutorial using the Ubuntu (18.04 x64) Simulator Reference Agent
description: Get started with Device Update for Azure IoT Hub using the Ubuntu (18.04 x64) Simulator Reference Agent.
Previously updated: 2/11/2021
Last updated: 1/26/2022
# Device Update for Azure IoT Hub tutorial using the Ubuntu (18.04 x64) Simulator Reference Agent
-Device Update for IoT Hub supports two forms of updates – image-based
-and package-based.
+Device Update for IoT Hub supports image-based, package-based, and script-based updates.
Image updates provide a higher level of confidence in the end-state of the device. It is typically easier to replicate the results of an image update between a pre-production environment and a production environment, since it doesn't pose the same challenges as packages and their dependencies. Due to their atomic nature, one can also adopt an A/B failover model easily.
In this tutorial you will learn how to:
## Prerequisites
* If you haven't already done so, create a [Device Update account and instance](create-device-update-account.md), including configuring an IoT Hub.
## Prerequisites * If you haven't already done so, create a [Device Update account and instance](create-device-update-account.md), including configuring an IoT Hub.
-### Download and install
-
-* Az (Azure CLI) cmdlets for PowerShell:
- * Open PowerShell > Install Azure CLI ("Y" for prompts to install from "untrusted" source)
-
-```powershell
-PS> Install-Module Az -Scope CurrentUser
-```
-
-### Enable WSL on your Windows device (Windows Subsystem for Linux)
-
-1. Open PowerShell as Administrator on your machine and run the following command (you might be asked to restart after each step; restart when asked):
-
-```powershell
-PS> Enable-WindowsOptionalFeature -Online -FeatureName VirtualMachinePlatform
-PS> Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux
-```
-
- (*You may be prompted to restart after this step*)
-
-2. Go to the Microsoft Store on the web and install [Ubuntu 18.04 LTS](https://www.microsoft.com/p/ubuntu-1804-lts/9n9tngvndl3q?activetab=pivot:overviewtab`).
-
-3. Start "Ubuntu 18.04 LTS" and install.
-
-4. When installed, you'll be asked to set root name (username) and password. Be sure to use a memorable root name password.
-
-5. In PowerShell, run the following command to set Ubuntu to be the default Linux distribution:
-
-```powershell
-PS> wsl --setdefault Ubuntu-18.04
-```
-
-6. List all Linux distributions, making sure that Ubuntu is the default one.
-
-```powershell
-PS> wsl --list
-```
-
-7. You should see: **Ubuntu-18.04 (Default)**
-
-## Download Device Update Ubuntu (18.04 x64) Simulator Reference Agent
-
-The Ubuntu reference agent can be downloaded from the *Assets* section from release notes [here](https://github.com/Azure/iot-hub-device-update/releases).
-
-There are two versions of the agent. For this tutorial, since you're exercising the image-based scenario, use AducIotAgentSim-microsoft-swupdate. If you were going to exercise the package-based scenario instead, you would use AducIotAgentSim-microsoft-apt.
-
-## Install Device Update Agent simulator
-
-1. Start Ubuntu WSL and enter the following command (note that extra space and dot at the end).
-
-```shell
-explorer.exe .
-```
-
-2. Copy AducIotAgentSim-microsoft-swupdate (or AducIotAgentSim-microsoft-apt) from your local folder where it was downloaded under /mnt to your home folder in WSL.
-
-3. Run the following command to make the binaries executable.
-
-```shell
-sudo chmod u+x AducIotAgentSim-microsoft-swupdate
-```
-
- or
-
-```shell
-sudo chmod u+x AducIotAgentSim-microsoft-apt
-```
-Device Update for Azure IoT Hub software is subject to the following license terms:
- * [Device update for IoT Hub license](https://github.com/Azure/iot-hub-device-update/blob/main/LICENSE.md)
- * [Delivery optimization client license](https://github.com/microsoft/do-client/blob/main/LICENSE)
-
-Read the license terms prior to using the agent. Your installation and use constitutes your acceptance of these terms. If you do not agree with the license terms, do not use the Device update for IoT Hub agent.
- ## Add device to Azure IoT Hub
-Once the Device Update Agent is running on an IoT device, the device needs to be added to the Azure IoT Hub. From within Azure IoT Hub, a connection string will be generated for a particular device.
+Once the Device Update agent is running on an IoT device, the device needs to be added to the Azure IoT Hub. From within Azure IoT Hub, a connection string will be generated for a particular device.
1. From the Azure portal, launch the Device Update IoT Hub.
2. Create a new device.
Once the Device Update Agent is running on an IoT device, the device needs to be
7. In the device view, select the 'Copy' icon next to 'Primary Connection String'.
8. Paste the copied characters somewhere for later use in the steps below. **This copied string is your device connection string**.
-## Add connection string to simulator
+## Install Device Update agent to test it as a simulator
-Start Device Update Agent on your new Software Devices.
+1. Follow the instructions to [Install the Azure IoT Edge runtime](../iot-edge/how-to-provision-single-device-linux-symmetric.md?view=iotedge-2020-11&preserve-view=true).
+ > [!NOTE]
+ > The Device Update agent doesn't depend on IoT Edge. But, it does rely on the IoT Identity Service daemon that is installed with IoT Edge (1.2.0 and higher) to obtain an identity and connect to IoT Hub.
+ >
+ > Although not covered in this tutorial, the [IoT Identity Service daemon can be installed standalone on Linux-based IoT devices](https://azure.github.io/iot-identity-service/installation.html). The sequence of installation matters. The Device Update package agent must be installed _after_ the IoT Identity Service. Otherwise, the package agent will not be registered as an authorized component to establish a connection to IoT Hub.
+2. Then, install the Device Update agent .deb packages.
-1. Start Ubuntu.
-2. Run the Device Update Agent and specify the device connection string from the previous section wrapped with apostrophes:
+ ```bash
+ sudo apt-get install deviceupdate-agent deliveryoptimization-plugin-apt
+ ```
+
+3. Enter your IoT device's module (or device, depending on how you [provisioned the device with Device Update](device-update-agent-provisioning.md)) primary connection string in the configuration file by running the command below. A sketch of the relevant section follows this step.
-Replace `<device connection string>` with your connection string
-```shell
-sudo ./AducIotAgentSim-microsoft-swupdate "<device connection string>"
-```
+ ```bash
+ sudo nano /etc/adu/du-config.json
+ ```
+
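+    Within `du-config.json`, the connection string belongs in the agent's `connectionSource` section. A minimal, illustrative sketch (the hostname, device ID, and key are placeholders, not real values):
+
+    ```JSON
+    "connectionSource": {
+        "connectionType": "string",
+        "connectionData": "HostName=<your-hub>.azure-devices.net;DeviceId=<your-device>;SharedAccessKey=<your-key>"
+    }
+    ```
+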
+4. Set up the agent to run as a simulator. Run the following command on the IoT device so that the Device Update agent invokes the simulator handler to process a package update with APT ('microsoft/apt:1').
-or
+ ```sh
+ sudo /usr/bin/AducIotAgent --register-content-handler /var/lib/adu/extensions/sources/libmicrosoft_simulator_1.so --update-type 'microsoft/apt:1'
+ ```
+
+    To register and invoke the simulator handler, the command must follow this format:
+
+    sudo /usr/bin/AducIotAgent --register-content-handler <full path to the handler file> --update-type <update type name>
-```shell
-./AducIotAgentSim-microsoft-apt -c '<device connection string>'
-```
+5. Download the sample-du-simulator-data.json from [Release Assets](https://github.com/Azure/iot-hub-device-update/releases). Run the command below to create and edit the `du-simulator-data.json` file in the `/tmp` folder.
+
+ ```sh
+ sudo nano /tmp/du-simulator-data.json
+ sudo chown adu:adu /tmp/du-simulator-data.json
+ sudo chmod 664 /tmp/du-simulator-data.json
+ ```
+    Copy the contents of the downloaded file into `du-simulator-data.json`. Press `Ctrl+X` to exit the editor and enter `y` to save the changes.
+
+    If `/tmp` doesn't exist, create it:
-3. Scroll up and look for the string indicating that the device is in "Idle" state. An "Idle" state signifies that the device is ready for service commands:
+ ```sh
+    sudo mkdir /tmp
+    sudo chown root:root /tmp
+    sudo chmod 1777 /tmp
+ ```
+
+6. Restart the Device Update agent by running the command below.
-```markdown
-Agent running. [main]
+ ```bash
+ sudo systemctl restart adu-agent
+ ```
+
+Device Update for Azure IoT Hub software is subject to the following license terms:
+ * [Device Update for IoT Hub license](https://github.com/Azure/iot-hub-device-update/blob/main/LICENSE.md)
+ * [Delivery optimization client license](https://github.com/microsoft/do-client/blob/main/LICENSE)
+
+Read the license terms prior to using the agent. Your installation and use constitutes your acceptance of these terms. If you do not agree with the license terms, do not use the Device Update for IoT Hub agent.
+
+> [!NOTE]
+> After you finish testing with the simulator, run the command below to invoke the APT handler and [deploy over-the-air package updates](device-update-ubuntu-agent.md):
+```sh
+sudo /usr/bin/AducIotAgent --register-content-handler /var/lib/adu/extensions/sources/libmicrosoft_apt_1.so --update-type 'microsoft/apt:1'
```
+
## Add a tag to your device
1. Log into [Azure portal](https://portal.azure.com) and navigate to the IoT Hub.
Agent running. [main]
4. Add a new Device Update tag value as shown below.
-```JSON
- "tags": {
- "ADUGroup": "<CustomTagValue>"
- }
-```
+ ```JSON
+ "tags": {
+ "ADUGroup": "<CustomTagValue>"
+ }
+ ```
## Import update
-1. Download the [sample import manifest](https://github.com/Azure/iot-hub-device-update/releases/download/0.7.0/TutorialImportManifest_Sim.json) and [sample image update](https://github.com/Azure/iot-hub-device-update/releases/download/0.7.0-rc1/adu-update-image-raspberrypi3-0.6.5073.1.swu). _Note_: these are re-used update files from the Raspberry Pi tutorial, because the update in this tutorial will be simulated and therefore the specific file content doesn't matter.
-2. Log in to the [Azure portal](https://portal.azure.com/) and navigate to your IoT Hub with Device Update. Then, select the Device Updates option under Automatic Device Management from the left-hand navigation bar.
+1. Download the sample import manifest (TutorialImportManifest_Sim.json) and sample update (adu-update-image-raspberrypi3-0.6.5073.1.swu) from [Release Assets](https://github.com/Azure/iot-hub-device-update/releases) for the latest agent. _Note_: the update file is re-used from the Raspberry Pi tutorial, because the update in this tutorial will be simulated and therefore the specific file content doesn't matter.
+
+2. Log in to the [Azure portal](https://portal.azure.com/) and navigate to your IoT Hub with Device Update. Then, select the Updates option under Automatic Device Management from the left-hand navigation bar.
3. Select the Updates tab.
4. Select "+ Import New Update".
-5. Select the folder icon or text box under "Select an Import Manifest File". You will see a file picker dialog. Select the _sample import manifest_ you downloaded in step 1 above. Next, select the folder icon or text box under "Select one or more update files". You will see a file picker dialog. Select the _sample image update_ that you downloaded in step 1 above.
+5. Select "+ Select from storage container". Select an existing account or create a new account using "+ Storage account". Then select an existing container or create a new container using "+ Container". This container will be used to stage your update files for importing.
+ > [!NOTE]
+ > We recommend using a new container each time you import an update to avoid accidentally importing files from previous updates. If you don't use a new container, be sure to delete any files from the existing container before completing this step.
+
+ :::image type="content" source="media/import-update/storage-account-ppr.png" alt-text="Storage Account" lightbox="media/import-update/storage-account-ppr.png":::
- :::image type="content" source="media/import-update/select-update-files.png" alt-text="Screenshot showing update file selection." lightbox="media/import-update/select-update-files.png":::
+6. In your container, select "Upload" and navigate to the files downloaded in **Step 1**. When you've selected all your update files, select "Upload". Then click the "Select" button to return to the "Import update" page.
-6. Select the folder icon or text box under "Select a storage container". Then select the appropriate storage account.
+ :::image type="content" source="media/import-update/import-select-ppr.png" alt-text="Select Uploaded Files" lightbox="media/import-update/import-select-ppr.png":::
+ _This screenshot shows the import step; file names may not match the ones used in the example._
-7. If you've already created a container, you can reuse it. (Otherwise, select "+ Container" to create a new storage container for updates.) Select the container you wish to use and click "Select".
-
- :::image type="content" source="media/import-update/container.png" alt-text="Screenshot showing container selection." lightbox="media/import-update/container.png":::
+7. On the Import update page, review the files to be imported. Then select "Import update" to start the import process.
-8. Select "Submit" to start the import process.
+ :::image type="content" source="media/import-update/import-start-2-ppr.png" alt-text="Import Start" lightbox="media/import-update/import-start-2-ppr.png":::
-9. The import process begins, and the screen changes to the "Import History" section. Select "Refresh" to view progress until the import process completes. Depending on the size of the update, this may complete in a few minutes but could take longer.
-
- :::image type="content" source="media/import-update/update-publishing-sequence-2.png" alt-text="Screenshot showing update import sequence." lightbox="media/import-update/update-publishing-sequence-2.png":::
-
-10. When the Status column indicates the import has succeeded, select the "Ready to Deploy" header. You should see your imported update in the list now.
+8. The import process begins, and the screen switches to the "Import History" section. When the `Status` column indicates the import has succeeded, select the "Available Updates" header. You should see your imported update in the list now.
+ :::image type="content" source="media/import-update/update-ready-ppr.png" alt-text="Job Status" lightbox="media/import-update/update-ready-ppr.png":::
+
[Learn more](import-update.md) about importing updates. ## Create update group
-1. Go to the IoT Hub you previously connected to your Device Update instance.
+1. Go to the Groups and Deployments tab at the top of the page.
+ :::image type="content" source="media/create-update-group/ungrouped-devices.png" alt-text="Screenshot of ungrouped devices." lightbox="media/create-update-group/ungrouped-devices.png":::
-2. Select the Device Updates option under Automatic Device Management from the left-hand navigation bar.
+2. Select the "Add group" button to create a new group.
+ :::image type="content" source="media/create-update-group/add-group.png" alt-text="Screenshot of device group addition." lightbox="media/create-update-group/add-group.png":::
-3. Select the Groups tab at the top of the page.
+3. Select an IoT Hub tag and Device Class from the list and then select Create group.
+ :::image type="content" source="media/create-update-group/select-tag.png" alt-text="Screenshot of tag selection." lightbox="media/create-update-group/select-tag.png":::
-4. Select the Add button to create a new group.
+4. Once the group is created, you will see that the update compliance chart and groups list are updated. The update compliance chart shows the count of devices in various states of compliance: On latest update, New updates available, and Updates in Progress. [Learn about update compliance.](device-update-compliance.md)
+ :::image type="content" source="media/create-update-group/updated-view.png" alt-text="Screenshot of update compliance view." lightbox="media/create-update-group/updated-view.png":::
-5. Select the IoT Hub tag you created in the previous step from the list. Select Create update group.
-
- :::image type="content" source="media/create-update-group/select-tag.PNG" alt-text="Screenshot showing tag selection." lightbox="media/create-update-group/select-tag.PNG":::
+5. You should see your newly created group and any available updates for the devices in the new group. If there are devices that don't meet the device class requirements of the group, they will show up in a corresponding invalid group. You can deploy the best available update to the new user-defined group from this view by clicking on the "Deploy" button next to the group.
[Learn more](create-update-group.md) about adding tags and creating update groups ## Deploy update
-1. Once the group is created, you should see a new update available for your device group, with a link to the update under Pending Updates. You may need to Refresh once.
+1. Once the group is created, you should see a new update available for your device group, with a link to the update under Best Update (you may need to Refresh once). [Learn more about update compliance.](device-update-compliance.md)
+
+2. Select the target group by clicking on the group name. You will be directed to the group details under Group basics.
+
+ :::image type="content" source="media/deploy-update/group-basics.png" alt-text="Group details" lightbox="media/deploy-update/group-basics.png":::
+
+3. To initiate the deployment, go to the Current deployment tab. Click the deploy link next to the desired update from the Available updates section. The best available update for a given group will be denoted with a "Best" highlight.
+
+ :::image type="content" source="media/deploy-update/select-update.png" alt-text="Select update" lightbox="media/deploy-update/select-update.png":::
-2. Click on the available update.
+4. Schedule your deployment to start immediately or in the future, then select Create.
-3. Confirm the correct group is selected as the target group. Schedule your deployment, then select Deploy update.
+ :::image type="content" source="media/deploy-update/create-deployment.png" alt-text="Create deployment" lightbox="media/deploy-update/create-deployment.png":::
- :::image type="content" source="media/deploy-update/select-update.png" alt-text="Select update" lightbox="media/deploy-update/select-update.png":::
+5. The Status under Deployment details should turn to Active, and the deployed update should be marked with "(deploying)".
-4. View the compliance chart. You should see the update is now in progress.
+ :::image type="content" source="media/deploy-update/deployment-active.png" alt-text="Deployment active" lightbox="media/deploy-update/deployment-active.png":::
- :::image type="content" source="media/deploy-update/update-in-progress.png" alt-text="Update in progress" lightbox="media/deploy-update/update-in-progress.png":::
+6. View the compliance chart. You should see the update is now in progress.
-5. After your device is successfully updated, you should see your compliance chart and deployment details update to reflect the same.
+7. After your device is successfully updated, you should see your compliance chart and deployment details update to reflect the same.
:::image type="content" source="media/deploy-update/update-succeeded.png" alt-text="Update succeeded" lightbox="media/deploy-update/update-succeeded.png"::: ## Monitor an update deployment
-1. Select the Deployments tab at the top of the page.
+1. Select the Deployment history tab at the top of the page.
- :::image type="content" source="media/deploy-update/deployments-tab.png" alt-text="Deployments tab" lightbox="media/deploy-update/deployments-tab.png":::
+ :::image type="content" source="media/deploy-update/deployments-history.png" alt-text="Deployment History" lightbox="media/deploy-update/deployments-history.png":::
-2. Select the deployment you created to view the deployment details.
+2. Select the details link next to the deployment you created.
:::image type="content" source="media/deploy-update/deployment-details.png" alt-text="Deployment details" lightbox="media/deploy-update/deployment-details.png":::
-3. Select Refresh to view the latest status details. Continue this process until the status changes to Succeeded.
+3. Select Refresh to view the latest status details.
You have now completed a successful end-to-end image update using Device Update for IoT Hub using the Ubuntu (18.04 x64) Simulator Reference Agent. ## Clean up resources
-When no longer needed, clean up your device update account, instance, IoT Hub and IoT device.
+When no longer needed, clean up your Device Update account, instance, IoT Hub, and IoT device.
## Next steps
iot-hub-device-update Device Update Ubuntu Agent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/device-update-ubuntu-agent.md
Title: Device Update for Azure IoT Hub tutorial using the Ubuntu Server 18.04 x6
description: Get started with Device Update for Azure IoT Hub using the Ubuntu Server 18.04 x64 Package agent. Previously updated : 2/16/2021 Last updated : 1/26/2022 # Device Update for Azure IoT Hub tutorial using the package agent on Ubuntu Server 18.04 x64
-Device Update for IoT Hub supports two forms of updates – image-based and package-based.
+Device Update for IoT Hub supports image-based, package-based, and script-based updates.
Package-based updates are targeted updates that alter only a specific component or application on the device. They lead to lower consumption of bandwidth and help reduce the time to download and install the update. Package-based updates also typically allow for less downtime of devices when applying an update and avoid the overhead of creating images. They use an [APT manifest](device-update-apt-manifest.md) which provides the Device Update Agent with the information it needs to download and install the packages specified in the APT Manifest file (as well as their dependencies) from a designated repository.
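
For reference, an APT manifest is a small JSON file that names the packages (and optionally the versions) to install. A minimal, illustrative sketch, with placeholder names and versions:

```json
{
  "name": "contoso-iot-edge",
  "version": "1.0.0",
  "packages": [
    {
      "name": "aziot-edge"
    }
  ]
}
```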
In this tutorial you will learn how to:
* If you haven't already done so, create a [Device Update account and instance](create-device-update-account.md), including configuring an IoT Hub. * The [connection string for an IoT Edge device](../iot-edge/how-to-provision-single-device-linux-symmetric.md?view=iotedge-2020-11&preserve-view=true#view-registered-devices-and-retrieve-provisioning-information).
+* If you used the [Simulator agent tutorial](device-update-simulator.md) for testing before this one, run the command below to re-register the APT handler so that you can deploy over-the-air package updates in this tutorial:
+
+```sh
+sudo /usr/bin/AducIotAgent --register-content-handler /var/lib/adu/extensions/sources/libmicrosoft_apt_1.so --update-type 'microsoft/apt:1'
+```
## Prepare a device ### Using the Automated Deploy to Azure Button
For convenience, this tutorial uses a [cloud-init](../virtual-machines/linux/usi
> [!TIP] > If you want to SSH into this VM after setup, use the associated **DNS Name** with the command: `ssh <adminUsername>@<DNS_Name>`
-### (Optional) Manually prepare a device
+### Manually prepare a device
Similar to the steps automated by the [cloud-init script](https://github.com/Azure/iotedge-vm-deploy/blob/1.2.0-rc4/cloud-init.txt), following are manual steps to install and configure the device. These steps can be used to prepare a physical device. 1. Follow the instructions to [Install the Azure IoT Edge runtime](../iot-edge/how-to-provision-single-device-linux-symmetric.md?view=iotedge-2020-11&preserve-view=true). > [!NOTE]
- > The Device Update package agent doesn't depend on IoT Edge. But, it does rely on the IoT Identity Service daemon that is installed with IoT Edge (1.2.0 and higher) to obtain an identity and connect to IoT Hub.
+ > The Device Update agent doesn't depend on IoT Edge. But, it does rely on the IoT Identity Service daemon that is installed with IoT Edge (1.2.0 and higher) to obtain an identity and connect to IoT Hub.
> > Although not covered in this tutorial, the [IoT Identity Service daemon can be installed standalone on Linux-based IoT devices](https://azure.github.io/iot-identity-service/installation.html). The sequence of installation matters. The Device Update package agent must be installed _after_ the IoT Identity Service. Otherwise, the package agent will not be registered as an authorized component to establish a connection to IoT Hub. 1. Then, install the Device Update agent .deb packages.
Similar to the steps automated by the [cloud-init script](https://github.com/Azu
```bash sudo apt-get install deviceupdate-agent deliveryoptimization-plugin-apt ```
+
+1. Enter your IoT device's module (or device, depending on how you [provisioned the device with Device Update](device-update-agent-provisioning.md)) primary connection string in the configuration file. Open the file for editing, for example:
+
+    ```bash
+    # Open the Device Update configuration file in an editor of your choice, e.g. nano
+    sudo nano /etc/adu/du-config.json
+    ```
+
+1. Finally, restart the Device Update agent by running the command below.
+
+    ```bash
+    sudo systemctl restart adu-agent
+    ```
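+
+For reference, the connection string goes in the `connectionData` field of `du-config.json`. A trimmed, illustrative sketch (the values are placeholders, and the exact fields may vary by agent version):
+
+```json
+{
+  "schemaVersion": "1.0",
+  "manufacturer": "contoso",
+  "model": "toaster",
+  "agents": [
+    {
+      "name": "main",
+      "runas": "adu",
+      "connectionSource": {
+        "connectionType": "string",
+        "connectionData": "HostName=<your-hub>.azure-devices.net;DeviceId=<device-id>;SharedAccessKey=<key>"
+      },
+      "manufacturer": "contoso",
+      "model": "toaster"
+    }
+  ]
+}
+```
+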
Device Update for Azure IoT Hub software packages are subject to the following license terms: * [Device update for IoT Hub license](https://github.com/Azure/iot-hub-device-update/blob/main/LICENSE.md)
Read the license terms prior to using a package. Your installation and use of a
## Import update
-1. Go to [Device Update releases](https://github.com/Azure/iot-hub-device-update/releases) in GitHub and click the "Assets" drop-down.
-
-3. Download the `Edge.package.update.samples.zip` by clicking on it.
+1. Go to [Device Update releases](https://github.com/Azure/iot-hub-device-update/releases) in GitHub and click the "Assets" drop-down. Download the `Edge.package.update.samples.zip` by clicking on it. Extract the contents of the folder to find a sample APT manifest (sample-1.0.1-aziot-edge-apt-manifest.json) and its corresponding import manifest (sample-1.0.1-aziot-edge-importManifest.json).
-5. Extract the contents of the folder to discover a sample [APT manifest](device-update-apt-manifest.md) and its corresponding [import manifest](import-concepts.md).
-
-2. In Azure portal, select the Device Updates option under Automatic Device Management from the left-hand navigation bar in your IoT Hub.
+2. Log in to the [Azure portal](https://portal.azure.com/) and navigate to your IoT Hub with Device Update. Then, select the Updates option under Automatic Device Management from the left-hand navigation bar.
3. Select the Updates tab. 4. Select "+ Import New Update".
-5. Select the folder icon or text box under "Select an Import Manifest File". You will see a file picker dialog. Select the `sample-1.0.1-aziot-edge-importManifest.json` import manifest from the folder you downloaded previously. Next, select the folder icon or text box under "Select one or more update files". You will see a file picker dialog. Select the `sample-1.0.1-aziot-edge-apt-manifest.json` apt manifest update file from the folder you downloaded previously.
-This update will update the `aziot-identity-service` and the `aziot-edge` packages to version 1.2.0~rc4-1 on your device.
-
- :::image type="content" source="media/import-update/select-update-files.png" alt-text="Screenshot showing update file selection." lightbox="media/import-update/select-update-files.png":::
-
-6. Select the folder icon or text box under "Select a storage container". Then select the appropriate storage account.
-
-7. If you've already created a container, you can reuse it. (Otherwise, select "+ Container" to create a new storage container for updates.). Select the container you wish to use and click "Select".
+5. Select "+ Select from storage container". Select an existing account or create a new account using "+ Storage account". Then select an existing container or create a new container using "+ Container". This container will be used to stage your update files for importing.
+ > [!NOTE]
+ > We recommend using a new container each time you import an update to avoid accidentally importing files from previous updates. If you don't use a new container, be sure to delete any files from the existing container before completing this step.
+
+ :::image type="content" source="media/import-update/storage-account-ppr.png" alt-text="Storage Account" lightbox="media/import-update/storage-account-ppr.png":::
- :::image type="content" source="media/import-update/container.png" alt-text="Screenshot showing container selection." lightbox="media/import-update/container.png":::
+6. In your container, select "Upload" and navigate to the files you downloaded in **Step 1**. When you've selected all your update files, select "Upload". Then click the "Select" button to return to the "Import update" page.
-8. Select "Submit" to start the import process.
+ :::image type="content" source="media/import-update/import-select-ppr.png" alt-text="Select Uploaded Files" lightbox="media/import-update/import-select-ppr.png":::
+   _This screenshot shows the import step; the file names may not match the ones used in this example._
-9. The import process begins, and the screen changes to the "Import History" section. Select "Refresh" to view progress until the import process completes. Depending on the size of the update, the import process may complete in a few minutes but could take longer.
+8. On the Import update page, review the files to be imported. Then select "Import update" to start the import process.
- :::image type="content" source="media/import-update/update-publishing-sequence-2.png" alt-text="Screenshot showing update import sequence." lightbox="media/import-update/update-publishing-sequence-2.png":::
+ :::image type="content" source="media/import-update/import-start-2-ppr.png" alt-text="Import Start" lightbox="media/import-update/import-start-2-ppr.png":::
-10. When the Status column indicates the import has succeeded, select the "Ready to Deploy" header. You should see your imported update in the list now.
+9. The import process begins, and the screen switches to the "Import History" section. When the `Status` column indicates the import has succeeded, select the "Available Updates" header. You should see your imported update in the list now.
+ :::image type="content" source="media/import-update/update-ready-ppr.png" alt-text="Job Status" lightbox="media/import-update/update-ready-ppr.png":::
+
[Learn more](import-update.md) about importing updates. ## Create update group
-1. Go to the IoT Hub you previously connected to your Device Update instance.
+1. Go to the Groups and Deployments tab at the top of the page.
+ :::image type="content" source="media/create-update-group/ungrouped-devices.png" alt-text="Screenshot of ungrouped devices." lightbox="media/create-update-group/ungrouped-devices.png":::
-1. Select the Device Updates option under Automatic Device Management from the left-hand navigation bar.
+2. Select the "Add group" button to create a new group.
+ :::image type="content" source="media/create-update-group/add-group.png" alt-text="Screenshot of device group addition." lightbox="media/create-update-group/add-group.png":::
-1. Select the Groups tab at the top of the page.
+3. Select an IoT Hub tag and Device Class from the list and then select Create group.
+ :::image type="content" source="media/create-update-group/select-tag.png" alt-text="Screenshot of tag selection." lightbox="media/create-update-group/select-tag.png":::
-1. Select the Add button to create a new group.
+4. Once the group is created, you will see that the update compliance chart and groups list are updated. The update compliance chart shows the count of devices in various states of compliance: On latest update, New updates available, and Updates in Progress. [Learn about update compliance.](device-update-compliance.md)
+ :::image type="content" source="media/create-update-group/updated-view.png" alt-text="Screenshot of update compliance view." lightbox="media/create-update-group/updated-view.png":::
-1. Select the IoT Hub tag you created in the previous step from the list. Select Create update group.
-
- :::image type="content" source="media/create-update-group/select-tag.PNG" alt-text="Screenshot showing tag selection." lightbox="media/create-update-group/select-tag.PNG":::
+5. You should see your newly created group and any available updates for the devices in the new group. If there are devices that don't meet the device class requirements of the group, they will show up in a corresponding invalid group. You can deploy the best available update to the new user-defined group from this view by clicking on the "Deploy" button next to the group.
[Learn more](create-update-group.md) about adding tags and creating update groups ## Deploy update
-1. Once the group is created, you should see a new update available for your device group, with a link to the update in the _Available updates_ column. You may need to Refresh once.
+1. Once the group is created, you should see a new update available for your device group, with a link to the update under Best Update (you may need to Refresh once). [Learn more about update compliance.](device-update-compliance.md)
+
+2. Select the target group by clicking on the group name. You will be directed to the group details under Group basics.
-1. Click on the link to the available update.
+ :::image type="content" source="media/deploy-update/group-basics.png" alt-text="Group details" lightbox="media/deploy-update/group-basics.png":::
-1. Confirm the correct group is selected as the target group and schedule your deployment
+3. To initiate the deployment, go to the Current deployment tab. Click the deploy link next to the desired update from the Available updates section. The best available update for a given group will be denoted with a "Best" highlight.
- :::image type="content" source="media/deploy-update/select-update.png" alt-text="Select update" lightbox="media/deploy-update/select-update.png":::
+ :::image type="content" source="media/deploy-update/select-update.png" alt-text="Select update" lightbox="media/deploy-update/select-update.png":::
+4. Schedule your deployment to start immediately or in the future, then select Create.
> [!TIP] > By default the Start date/time is 24 hrs from your current time. Be sure to select a different date/time if you want the deployment to begin earlier.
-1. Select Deploy update.
+ :::image type="content" source="media/deploy-update/create-deployment.png" alt-text="Create deployment" lightbox="media/deploy-update/create-deployment.png":::
-1. View the compliance chart. You should see the update is now in progress.
+5. The Status under Deployment details should turn to Active, and the deployed update should be marked with "(deploying)".
- :::image type="content" source="media/deploy-update/update-in-progress.png" alt-text="Update in progress" lightbox="media/deploy-update/update-in-progress.png":::
+ :::image type="content" source="media/deploy-update/deployment-active.png" alt-text="Deployment active" lightbox="media/deploy-update/deployment-active.png":::
-1. After your device is successfully updated, you should see your compliance chart and deployment details update to reflect the same.
+6. View the compliance chart. You should see the update is now in progress.
+
+7. After your device is successfully updated, you should see your compliance chart and deployment details update to reflect the same.
:::image type="content" source="media/deploy-update/update-succeeded.png" alt-text="Update succeeded" lightbox="media/deploy-update/update-succeeded.png"::: ## Monitor an update deployment
-1. Select the Deployments tab at the top of the page.
+1. Select the Deployment history tab at the top of the page.
- :::image type="content" source="media/deploy-update/deployments-tab.png" alt-text="Deployments tab" lightbox="media/deploy-update/deployments-tab.png":::
+ :::image type="content" source="media/deploy-update/deployments-history.png" alt-text="Deployment History" lightbox="media/deploy-update/deployments-history.png":::
-1. Select the deployment you created to view the deployment details.
+2. Select the details link next to the deployment you created.
:::image type="content" source="media/deploy-update/deployment-details.png" alt-text="Deployment details" lightbox="media/deploy-update/deployment-details.png":::
-1. Select Refresh to view the latest status details. Continue this process until the status changes to Succeeded.
+3. Select Refresh to view the latest status details.
+ You have now completed a successful end-to-end package update using Device Update for IoT Hub on an Ubuntu Server 18.04 x64 device.
When no longer needed, clean up your device update account, instance, IoT Hub, a
## Next steps
-> [!div class="nextstepaction"]
-> [Image Update on Raspberry Pi 3 B+ tutorial](device-update-raspberry-pi.md)
+You can use the following tutorials for a simple demonstration of Device Update for IoT Hub:
+
+- [Image Update: Getting Started with Raspberry Pi 3 B+ Reference Yocto Image](device-update-raspberry-pi.md), extensible via open source to build your own images for other architectures as needed.
+
+- [Proxy Update: Getting Started using Device Update binary agent for downstream devices](device-update-howto-proxy-updates.md)
+
+- [Getting Started Using Ubuntu (18.04 x64) Simulator Reference Agent](device-update-simulator.md)
+
+- [Device Update for Azure IoT Hub tutorial for Azure-Real-Time-Operating-System](device-update-azure-real-time-operating-system.md)
iot-hub-device-update Import Concepts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/import-concepts.md
# Importing updates into Device Update for IoT Hub
-In order to deploy an update to devices from Device Update for IoT Hub, you first have to _import_ that update into the Device Update service. Here is an overview of some important concepts to understand when it comes to importing updates.
-## Limits on importing updates
-Certain limits are enforced for each Device Update for IoT Hub instance. If you haven't already reviewed them, please see [Device Update limits](./device-update-limits.md).
+In order to deploy an update to devices from Device Update for IoT Hub, you first have to _import_ that update into the Device Update service. Here is an overview of some important concepts to understand when it comes to importing updates.
## Import manifest An import manifest is a JSON file that defines important information about the update that you are importing. You will submit both your import manifest and associated update file or files (such as a firmware update package) as part of the import process. The metadata that is defined in the import manifest is used to ingest the update. Some of the metadata is also used at deployment time - for example, to validate if an update was installed correctly.
-The import manifest contains several items which represent important Device Update for IoT Hub concepts. These concepts are outlined below.
-
-### Update identity (Update ID)
-
-The update identity represents the unique identifier of an update. It defines important properties about an update that is being imported. The update identity is composed of three parts:
-* Provider: this is the entity who is creating or directly responsible for the update. It will often be a company name.
-* Name: an identifier for a class of updates. The class can be anything you choose. It will often be a device or model name.
-* Version: this is a version number distinguishing this update from others that have the same Provider and Name. This version is used by the Device Update for IoT Hub service, and may or may not match a version of an individual software component on the device.
+**Example**
+
+```json
+{
+ "updateId": {
+ "provider": "Contoso",
+ "name": "Toaster",
+ "version": "1.0"
+ },
+ "isDeployable": false,
+ "compatibility": [
+ {
+ "deviceManufacturer": "Contoso",
+ "deviceModel": "Toaster"
+ }
+ ],
+ "instructions": {
+ "steps": [
+ {
+ "handler": "microsoft/swupdate:1",
+ "files": [
+ "firmware.swu"
+ ],
+ "handlerProperties": {
+ "installedCriteria": "1.0"
+ }
+ }
+ ]
+ },
+ "files": [
+ {
+ "filename": "firmware.swu",
+ "sizeInBytes": 7558,
+ "hashes": {
+ "sha256": "/CD7Sn6fiknWa3NgcFjGlJ+ccA81s1QAXX4oo5GHiFA="
+ }
+ }
+ ],
+ "createdDateTime": "2022-01-19T06:23:52.6996916Z",
+ "manifestVersion": "4.0"
+}
+```
+
+The import manifest contains several items which represent important Device Update for IoT Hub concepts. These are outlined in this section. The full schema is documented [here](./import-schema.md).
+
+### Update identity (updateId)
+
+*Update identity* is the unique identifier for an update in Device Update for IoT Hub. It is composed of three parts:
+- **Provider**: entity who is creating or directly responsible for the update. It will often be a company name.
+- **Name**: identifier for a class of updates. It will often be a device class or model name.
+- **Version**: a version number distinguishing this update from others that have the same Provider and Name.
+
+> [!NOTE]
+> UpdateId is used by the Device Update for IoT Hub service only, and may differ from the identity of the actual software component on the device.
### Compatibility
-To simplify update deployments, Device Update for IoT Hub compares compatibility properties for an update, which are defined in the import manifest, with corresponding device properties. Only updates which have matching properties will be returned and available for deployment.
+*Compatibility* defines the criteria a device must meet to install the update. It contains device properties - a set of arbitrary key-value pairs that are reported from a device. Only devices with matching properties are eligible for deployment. An update may be compatible with multiple device classes by having more than one set of device properties.
+
+Here is an example of an update that can only be deployed to a device that reports *Contoso* and *Toaster* as its device manufacturer and model.
+
+```json
+{
+ "compatibility": [
+ {
+ "deviceManufacturer": "Contoso",
+ "deviceModel": "Toaster"
+ }
+ ]
+}
+```
+
+### Instructions
+
+The *Instructions* part contains the necessary information, or *steps*, for the device agent to install the update. The simplest update contains a single *inline* step. That step executes the included payload file using a *handler* registered with the device agent:
+
+```json
+{
+ "instructions": {
+ "steps": [
+ {
+ "handler": "microsoft/swupdate:1",
+ "files": [
+ "contoso.toaster.1.0.swu"
+ ]
+ }
+ ]
+ }
+}
+```
+
+> [!TIP]
+> `handler` is equivalent to `updateType` in import manifest version 3.0 or older.
+
+An update may contain more than one step:
+
+```json
+{
+ "instructions": {
+ "steps": [
+ {
+ "description": "pre-install script",
+ "handler": "microsoft/script:1",
+ "handlerProperties": {
+ "arguments": "--pre-install"
+ },
+ "files": [
+ "configure.sh"
+ ]
+ },
+ {
+ "description": "firmware package",
+ "handler": "microsoft/swupdate:1",
+ "files": [
+ "contoso.toaster.1.0.swu"
+ ]
+ }
+ ]
+ }
+}
+```
+
+An update may contain a *reference* step, which instructs the device agent to install another update that has its own import manifest, establishing a *parent* and *child* update relationship. For example, an update for a toaster may contain two child updates:
+
+```json
+{
+ "instructions": {
+ "steps": [
+ {
+ "type": "reference",
+ "updateId": {
+ "provider": "Contoso",
+ "name": "Toaster.HeatingElement",
+ "version": "1.0"
+ }
+ },
+ {
+ "type": "reference",
+ "updateId": {
+ "provider": "Contoso",
+ "name": "Toaster.Sensors",
+ "version": "1.0"
+ }
+ }
+ ]
+ }
+}
+```
+
+> [!NOTE]
+> An update may contain any combination of *inline* and *reference* steps.
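+
+For example, a parent update might combine an inline pre-install script with a reference to a child update. A sketch, where the file name and child update identity are illustrative:
+
+```json
+{
+  "instructions": {
+    "steps": [
+      {
+        "description": "pre-install script",
+        "handler": "microsoft/script:1",
+        "handlerProperties": {
+          "arguments": "--pre-install"
+        },
+        "files": [
+          "configure.sh"
+        ]
+      },
+      {
+        "type": "reference",
+        "updateId": {
+          "provider": "Contoso",
+          "name": "Toaster.HeatingElement",
+          "version": "1.0"
+        }
+      }
+    ]
+  }
+}
+```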
+
+### Files
+
+The *Files* part contains the metadata of the update payload files, such as their names, sizes, and hashes. Device Update for IoT Hub uses this metadata for integrity validation during the import process. The same information is then forwarded to the device agent, which repeats the integrity validation prior to installation.
+
+> [!NOTE]
+> An update that contains only *reference* steps will not have any update payload files in the parent update.
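+
+For example, an update with a single payload file declares it as follows (the size and hash values are illustrative):
+
+```json
+{
+  "files": [
+    {
+      "filename": "firmware.swu",
+      "sizeInBytes": 7558,
+      "hashes": {
+        "sha256": "/CD7Sn6fiknWa3NgcFjGlJ+ccA81s1QAXX4oo5GHiFA="
+      }
+    }
+  ]
+}
+```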
+
+## Create an import manifest
+
+You may use any text editor to create an import manifest JSON file. There are also sample scripts for creating an import manifest programmatically in [Azure/iot-hub-device-update](https://github.com/Azure/iot-hub-device-update/tree/main/tools/AduCmdlets) on GitHub.
+
+> [!IMPORTANT]
+> The import manifest JSON filename must end with `.importmanifest.json` when imported through the Azure portal.
+
+> [!TIP]
+> Use [Visual Studio Code](https://code.visualstudio.com) to enable autocomplete and JSON schema validation when creating an import manifest.
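+
+For example, referencing the published schema at the top of the file enables autocomplete and validation in editors that understand JSON schema (the other required fields are omitted here for brevity):
+
+```json
+{
+  "$schema": "https://json.schemastore.org/azure-deviceupdate-import-manifest-4.0.json",
+  "updateId": {
+    "provider": "Contoso",
+    "name": "Toaster",
+    "version": "1.0"
+  },
+  "manifestVersion": "4.0"
+}
+```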
-### InstalledCriteria
-
-The InstalledCriteria is used by the update agent on a device to determine if an update has been installed successfully.
+## Limits on importing updates
+Certain limits are enforced for each Device Update for IoT Hub instance. If you have not already reviewed them, please see [Device Update limits](./device-update-limits.md).
## Next steps
-If you're ready, try out the [Import How-To guide](./import-update.md), which will walk you through the import process step by step.
--
+- Try out the [Import How-To guide](./create-update.md), which will walk you through the import process step by step.
+- Review [Import Manifest Schema](./import-schema.md).
iot-hub-device-update Import Schema https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/import-schema.md
# Importing updates into Device Update for IoT Hub - schema and other information
-If you want to import an update into Device Update for IoT Hub, be sure you've reviewed the [concepts](import-concepts.md) and [How-To guide](import-update.md) first. If you're interested in the details of the schema used when constructing an import manifest, or information about related objects, see below.
+If you want to import an update into Device Update for IoT Hub, be sure you've reviewed the [concepts](import-concepts.md) and [How-To guide](import-update.md) first. If you're interested in the details of import manifest schema, or information about API permissions, see below.
-## Import manifest schema
+## Import manifest JSON schema version 4.0
-| Name | Type | Description | Restrictions |
-| | | | |
-| UpdateId | `UpdateId` object | Update identity. |
-| UpdateType | string | Update type: <br/><br/> * Specify `microsoft/apt:1` when performing a package-based update using reference agent.<br/> * Specify `microsoft/swupdate:1` when performing an image-based update using reference agent.<br/> * Specify `microsoft/simulator:1` when using sample agent simulator.<br/> * Specify a custom type if developing a custom agent. | Format: <br/> `{provider}/{type}:{typeVersion}`<br/><br/> Maximum of 32 characters total |
-| InstalledCriteria | string | String interpreted by the agent to determine whether the update was applied successfully: <br/> * Specify **value** of SWVersion for update type `microsoft/swupdate:1`.<br/> * Specify `{name}-{version}` for update type `microsoft/apt:1`, of which name and version are obtained from the APT file.<br/> * Specify a custom string if developing a custom agent.<br/> | Maximum of 64 characters |
-| Compatibility | Array of `CompatibilityInfo` [objects](#compatibilityinfo-object) | Compatibility information of device compatible with this update. | Maximum of 10 items |
-| CreatedDateTime | date/time | Date and time at which the update was created. | Delimited ISO 8601 date and time format, in UTC |
-| ManifestVersion | string | Import manifest schema version. Specify `2.0`, which will be compatible with `urn:azureiot:AzureDeviceUpdateCore:1` interface and `urn:azureiot:AzureDeviceUpdateCore:4` interface. | Must be `2.0` |
-| Files | Array of `File` objects | Update payload files | Maximum of five files |
+Import manifest JSON schema is hosted at [SchemaStore.org](https://json.schemastore.org/azure-deviceupdate-import-manifest-4.0.json).
-## UpdateId Object
+### Schema
-| Name | Type | Description | Restrictions |
-| | | | |
-| Provider | string | Provider part of the update identity. | 1-64 characters, alphanumeric, dot, and dash. |
-| Name | string | Name part of the update identity. | 1-64 characters, alphanumeric, dot, and dash. |
-| Version | version | Version part of the update identity. | 2 to 4 part, dot-separated version number. The total number of _each_ dot-separated part can be between 0 and 2147483647. Leading zeroes are not supported.
+**Properties**
-## File Object
+|Name|Type|Description|Required|
+|||||
+|**$schema**|`string`|JSON schema reference.|No|
+|**updateId**|`updateId`|Unique update identifier.|Yes|
+|**description**|`string`|Optional update description.|No|
+|**compatibility**|`compatibility`|List of device property sets this update is compatible with.|Yes|
+|**instructions**|`instructions`|Update installation instructions.|Yes|
+|**files**|`file` `[0-10]`|List of update payload files. Sum of all file sizes may not exceed 2 GB. May be empty or null if all instruction steps are reference steps.|No|
+|**manifestVersion**|`string`|Import manifest schema version. Must be 4.0.|Yes|
+|**createdDateTime**|`string`|Date & time import manifest was created in ISO 8601 format.|Yes|
-| Name | Type | Description | Restrictions |
-| | | | |
-| Filename | string | Name of file | Must be no more than 255 characters. Must be unique within an update |
-| SizeInBytes | Int64 | Size of file in bytes. | See [Device Update limits](./device-update-limits.md) for maximum size per individual file and collectively per update |
-| Hashes | `Hashes` object | JSON object containing hash(es) of the file |
+Additional properties are not allowed.
-## CompatibilityInfo Object
+#### $schema
-| Name | Type | Description | Restrictions |
-| | | | |
-| DeviceManufacturer | string | Manufacturer of the device the update is compatible with. | 1-64 characters, alphanumeric, dot and dash. |
-| DeviceModel | string | Model of the device the update is compatible with. | 1-64 characters, alphanumeric, dot and dash. |
+JSON schema reference.
-## Hashes Object
+* **Type**: `string`
+* **Required**: No
-| Name | Required | Type | Description |
-| | | | |
-| Sha256 | True | string | Base64-encoded hash of the file using the SHA-256 algorithm. See the relevant sections of the import manifest generation [PowerShell](https://github.com/Azure/iot-hub-device-update/blob/release/2021-q2/tools/AduCmdlets/AduUpdate.psm1#L81) and [bash](https://github.com/Azure/iot-hub-device-update/blob/release/2021-q2/tools/AduCmdlets/create-adu-import-manifest.sh#L266) scripts.|
+#### updateId
-## Example import request body
+Unique update identifier.
-If you are using the sample import manifest output from the [How to add a new update](./import-update.md#review-the-generated-import-manifest) page, and want to call the Device Update [REST API](/rest/api/deviceupdate/updates) directly to perform the import, the corresponding request body should look like this:
+* **Type**: `updateId`
+* **Required**: Yes
-```json
-{
- "importManifest": {
- "url": "http://<your Azure Storage location file path>/importManifest.json",
- "sizeInBytes": <size of import manifest file>,
- "hashes": {
- "sha256": "<hash of import manifest file>"
- }
- },
- "files": [
- {
- "filename": "file1.json",
- "url": "http://<your Azure Storage location file path>/file1.json"
- },
- {
- "filename": "file2.zip",
- "url": "http://<your Azure Storage location file path>/file2.zip"
- },
- ]
-}
-```
+#### description
-## OAuth authorization when calling Device Update APIs
+Optional update description.
-**azure_auth**
+* **Type**: `string`
+* **Required**: No
+* **Minimum Length**: `>= 1`
+* **Maximum Length**: `<= 512`
-Azure Active Directory OAuth2 Flow
-Type: oauth2
-Flow: any
+#### compatibility
-Authorization URL: https://login.microsoftonline.com/common/oauth2/authorize
+List of device property sets this update is compatible with.
-**Scopes**
+* **Type**: `compatibility`
+* **Required**: Yes
-| Name | Description |
-| | |
-| `https://api.adu.microsoft.com/user_impersonation` | Impersonate your user account |
-| `https://api.adu.microsoft.com/.default` | Client credential flows |
+#### instructions
+Update installation instructions.
-**Permissions**
+* **Type**: `instructions`
+* **Required**: Yes
-If an Azure AD application is used to sign the user in, the scope needs to have /user_impersonation.
+#### files
-You will need to add permissions to your Azure AD app (in the API permissions tab in Azure AD Application view) to use Azure Device Update API. Request API permission to Azure Device Update (located in "APIs my organization uses") and grant the delegated user_impersonation permission.
+List of update payload files. Sum of all file sizes may not exceed 2 GB. May be empty or null if all instruction steps are reference steps.
-ADU accepts tokens acquired using any of the Azure AD supported flows for users, applications, or managed identities. However, some of the flows require extra Azure AD application setup:
+* **Type**: `file` `[0-10]`
+* **Required**: No
-* For public client flows, make sure to enable mobile and desktop flows.
-* For implicit flows, make sure to add a Web platform and select "Access tokens" for the authorization endpoint.
+#### manifestVersion
-**Example using Azure CLI:**
+Import manifest schema version. Must be `4.0`.
-```azurecli
-az login
+* **Type**: `string`
+* **Required**: Yes
-az account get-access-token --resource 'https://api.adu.microsoft.com/'
-```
+#### createdDateTime
-**Examples to acquire a token using PowerShell MSAL library:**
+Date & time import manifest was created in ISO 8601 format.
-_Using user credentials_
+* **Type**: `string`
+* **Required**: Yes
+* **Examples**:
+ * `"2020-10-02T22:18:04.9446744Z"`
-```powershell
-$clientId = '<app_id>'
-$tenantId = '<tenant_id>'
-$authority = "https://login.microsoftonline.com/$tenantId/v2.0"
-$Scope = 'https://api.adu.microsoft.com/user_impersonation'
+### updateId object
-Get-MsalToken -ClientId $clientId -TenantId $tenantId -Authority $authority -Scopes $Scope
-```
+Unique update identifier.
-_Using user credentials with device code_
+**`Update identity` Properties**
-```powershell
-$clientId = '<app_id>'
-$tenantId = '<tenant_id>'
-$authority = "https://login.microsoftonline.com/$tenantId/v2.0"
-$Scope = 'https://api.adu.microsoft.com/user_impersonation'
+|Name|Type|Description|Required|
+|||||
+|**provider**|`string`|Entity who is creating or directly responsible for the update. It can be a company name.|Yes|
+|**name**|`string`|Identifier for a class of update. It can be a device class or model name.|Yes|
+|**version**|`string`|Two to four part dot separated numerical version numbers. Each part must be a number between 0 and 2147483647 and leading zeroes will be dropped.|Yes|
-Get-MsalToken -ClientId $clientId -TenantId $tenantId -Authority $authority -Scopes $Scope -Interactive -DeviceCode
-```
+Additional properties are not allowed.
-_Using app credentials_
+#### updateId.provider
-```powershell
-$clientId = '<app_id>'
-$tenantId = '<tenant_id>'
-$cert = '<client_certificate>'
-$authority = "https://login.microsoftonline.com/$tenantId/v2.0"
-$Scope = 'https://api.adu.microsoft.com/.default'
+Entity who is creating or directly responsible for the update. It can be a company name.
-Get-MsalToken -ClientId $clientId -TenantId $tenantId -Authority $authority -Scopes $Scope -ClientCertificate $cert
-```
+* **Type**: `string`
+* **Required**: Yes
+* **Pattern**: `^[a-zA-Z0-9.-]+$`
+* **Minimum Length**: `>= 1`
+* **Maximum Length**: `<= 64`
+
+#### updateId.name
+
+Identifier for a class of update. It can be a device class or model name.
+
+* **Type**: `string`
+* **Required**: Yes
+* **Pattern**: `^[a-zA-Z0-9.-]+$`
+* **Minimum Length**: `>= 1`
+* **Maximum Length**: `<= 64`
+
+#### updateId.version
+
+Two to four part dot separated numerical version numbers. Each part must be a number between 0 and 2147483647 and leading zeroes will be dropped.
+
+* **Type**: `string`
+* **Required**: Yes
+* **Pattern**: `^\d+(?:\.\d+)+$`
+* **Examples**:
+ * `"1.0"`
+ * `"2021.11.8"`
+
+### compatibilityInfo object
+
+Properties of a device this update is compatible with.
+
+* **Type**: `object`
+* **Minimum Properties**: `1`
+* **Maximum Properties**: `5`
+
+Each property is a name-value pair of type string.
+
+* **Minimum Property Name Length**: `1`
+* **Maximum Property Name Length**: `32`
+* **Minimum Property Value Length**: `1`
+* **Maximum Property Value Length**: `64`
+
+_Note that the same exact set of compatibility properties cannot be re-used with a different Provider and Name combination._
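+
+For example, an update compatible with two device models declares two property sets (the values are illustrative):
+
+```json
+{
+  "compatibility": [
+    {
+      "deviceManufacturer": "Contoso",
+      "deviceModel": "Toaster"
+    },
+    {
+      "deviceManufacturer": "Contoso",
+      "deviceModel": "Oven"
+    }
+  ]
+}
+```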
+
+### instructions object
+
+Update installation instructions.
+
+**Properties**
+
+|Name|Type|Description|Required|
+|||||
+|**steps**|`array[1-10]`|Installation instruction steps.|Yes|
+
+Additional properties are not allowed.
+
+#### instructions.steps
+
+* **Type**: `array[1-10]`
+ * Each element in the array must be one of the following values:
+ * `inlineStep` object
+ * `referenceStep` object
+* **Required**: Yes
+
+### inlineStep object
+
+Installation instruction step that performs code execution.
+
+**Properties**
+
+|Name|Type|Description|Required|
+|||||
+|**type**|`string`|Instruction step type that performs code execution.|No|
+|**description**|`string`|Optional instruction step description.|No|
+|**handler**|`string`|Identity of handler on device that can execute this step.|Yes|
+|**files**|`string` `[1-10]`|Names of update files that agent will pass to handler.|Yes|
+|**handlerProperties**|`inlineStepHandlerProperties`|JSON object that agent will pass to handler as arguments.|No|
+
+Additional properties are not allowed.
+
+#### inlineStep.type
+
+Instruction step type that performs code execution. Must be `inline`.
+
+* **Type**: `string`
+* **Required**: No
+
+#### inlineStep.description
+
+Optional instruction step description.
+
+* **Type**: `string`
+* **Required**: No
+* **Minimum Length**: `>= 1`
+* **Maximum Length**: `<= 64`
+
+#### inlineStep.handler
+
+Identity of handler on device that can execute this step.
+
+* **Type**: `string`
+* **Required**: Yes
+* **Pattern**: `^\S+/\S+:\d{1,5}$`
+* **Minimum Length**: `>= 5`
+* **Maximum Length**: `<= 32`
+* **Examples**:
+ * `microsoft/script:1`
+ * `microsoft/swupdate:1`
+ * `microsoft/apt:1`
+
+#### inlineStep.files
+
+Names of update files that agent will pass to handler.
+
+* **Type**: `string` `[1-10]`
+ * Each element in the array must have length between `1` and `255`.
+* **Required**: Yes
+
+#### inlineStep.handlerProperties
+
+JSON object that agent will pass to handler as arguments.
+
+* **Type**: `object`
+* **Required**: No
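+
+Putting these properties together, a complete inline step might look like the following sketch (the file name and handler properties are illustrative):
+
+```json
+{
+  "type": "inline",
+  "description": "firmware package",
+  "handler": "microsoft/swupdate:1",
+  "files": [
+    "contoso.toaster.1.0.swu"
+  ],
+  "handlerProperties": {
+    "installedCriteria": "1.0"
+  }
+}
+```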
+
+### referenceStep object
+
+Installation instruction step that installs another update.
+
+**Properties**
+
+|Name|Type|Description|Required|
+|||||
+|**type**|`referenceStepType`|Instruction step type that installs another update.|Yes|
+|**description**|`stepDescription`|Optional instruction step description.|No|
+|**updateId**|`updateId`|Unique update identifier.|Yes|
+
+Additional properties are not allowed.
+
+#### referenceStep.type
+
+Instruction step type that installs another update. Must be `reference`.
+
+* **Type**: `string`
+* **Required**: Yes
+
+#### referenceStep.description
+
+Optional instruction step description.
+
+* **Type**: `string`
+* **Required**: No
+* **Minimum Length**: `>= 1`
+* **Maximum Length**: `<= 64`
+
+#### referenceStep.updateId
+
+Unique update identifier.
+
+* **Type**: `updateId`
+* **Required**: Yes
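+
+A complete reference step simply names the child update to install, for example (the update identity is illustrative):
+
+```json
+{
+  "type": "reference",
+  "description": "heating element update",
+  "updateId": {
+    "provider": "Contoso",
+    "name": "Toaster.HeatingElement",
+    "version": "1.0"
+  }
+}
+```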
+
+### file object
+
+Update payload file, for example a binary, firmware image, or script. Must be unique within an update.
+
+**Properties**
+
+|Name|Type|Description|Required|
+|||||
+|**filename**|`string`|Update payload file name.|Yes|
+|**sizeInBytes**|`number`|File size in number of bytes.|Yes|
+|**hashes**|`fileHashes`|Base64-encoded file hashes with algorithm name as key. At least SHA-256 algorithm must be specified, and additional algorithm may be specified if supported by agent.|Yes|
+
+Additional properties are not allowed.
+
+#### file.filename
+
+Update payload file name.
+
+* **Type**: `string`
+* **Required**: Yes
+* **Minimum Length**: `>= 1`
+* **Maximum Length**: `<= 255`
+
+#### file.sizeInBytes
+
+File size in number of bytes.
+
+* **Type**: `number`
+* **Required**: Yes
+* **Minimum**: ` >= 1`
+* **Maximum**: ` <= 2147483648`
+
+#### file.hashes
+
+File hashes.
+
+* **Type**: `fileHashes`
+* **Required**: Yes
+* **Type of each property**: `string`
+
+### fileHashes object
+
+Base64-encoded file hashes with algorithm name as key. At least SHA-256 algorithm must be specified, and additional algorithm may be specified if supported by agent.
+
+**Properties**
+
+|Name|Type|Description|Required|
+|||||
+|**sha256**|`string`|Base64-encoded file hash value using SHA-256 algorithm.|Yes|
+
+Additional properties are allowed.
+
+#### fileHashes.sha256
+
+Base64-encoded file hash value using SHA-256 algorithm.
+
+* **Type**: `string`
+* **Required**: Yes
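+
+Together, a complete file entry with its hashes might look like this (the name, size, and hash are placeholders):
+
+```json
+{
+  "filename": "configure.sh",
+  "sizeInBytes": 718,
+  "hashes": {
+    "sha256": "<base64-encoded SHA-256 hash of the file>"
+  }
+}
+```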
## Next steps
iot-hub-device-update Import Update https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/import-update.md
Title: How to add a new update | Microsoft Docs
-description: How-To guide for adding a new update into Device Update for IoT Hub.
+ Title: Add an update to Device Update for IoT Hub | Microsoft Docs
+description: How-To guide to add an update into Device Update for IoT Hub.
Previously updated : 4/19/2021 Last updated : 1/31/2022
-# Add an update to Device Update for IoT Hub
-Learn how to obtain a new update and import it into Device Update for IoT Hub.
+# Import an update to Device Update for IoT Hub
+
+Learn how to obtain a new update and import it into Device Update for IoT Hub. If you haven't already, be sure to review the key [import concepts](import-concepts.md) and [how to prepare an update to be imported](create-update.md).
## Prerequisites
-* [Access to an IoT Hub with Device Update for IoT Hub enabled](create-device-update-account.md).
+* [Access to an IoT Hub with Device Update for IoT Hub enabled](create-device-update-account.md).
* An IoT device (or simulator) [provisioned for Device Update](device-update-agent-provisioning.md) within IoT Hub.
-* [PowerShell 5](/powershell/scripting/install/installing-powershell) or later (includes Linux, macOS and Windows installs)
+* [PowerShell 5](/powershell/scripting/install/installing-powershell) or later (includes Linux, macOS, and Windows installs)
* Supported browsers: * [Microsoft Edge](https://www.microsoft.com/edge) * Google Chrome
-> [!NOTE]
-> Some data submitted to this service might be processed in a region outside the region this instance was created in.
-
-## Obtain an update for your devices
-
-Now that you've set up Device Update and provisioned your devices, you will need the update file(s) that you will be deploying to those devices.
-
-If you've purchased devices from an OEM or solution integrator, that organization will most likely provide update files for you, without you needing to create the updates. Contact the OEM or solution integrator to find out how they make updates available.
-
-If your organization already creates software for the devices you use, that same group will be the ones to create the updates for that software. When creating an update to be deployed using Device Update for IoT Hub, start with either the [image-based or package-based approach](understand-device-update.md#support-for-a-wide-range-of-update-artifacts) depending on your scenario. Note: if you want to create your own updates but are just starting out, GitHub is an excellent option to manage your development. You can store and manage your source code, and do Continuous Integration (CI) and Continuous Deployment (CD) using [GitHub Actions](https://docs.github.com/en/actions/guides/about-continuous-integration).
-
-## Create a Device Update import manifest
-
-If you haven't already done so, be sure to familiarize yourself with the basic [import concepts](import-concepts.md), and try out an [image-based](device-update-raspberry-pi.md) or [package-based](device-update-ubuntu-agent.md) tutorial first.
-
-1. Ensure that your update file(s) are located in a directory accessible from PowerShell.
-
-2. Create a text file named **AduUpdate.psm1** in the directory where your update image file or APT Manifest file is located. Then open the [AduUpdate.psm1](https://github.com/Azure/iot-hub-device-update/tree/main/tools/AduCmdlets) PowerShell cmdlet, copy the contents to your text file, and then save the text file.
-
-3. In PowerShell, navigate to the directory where you created your PowerShell cmdlet from step 2. Use the Copy option below and then paste into PowerShell to run the commands:
-
- ```powershell
- Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope Process
- Import-Module .\AduUpdate.psm1
- ```
-
-4. Run the following commands by replacing the sample parameter values to generate an import manifest, a JSON file that describes the update:
- ```powershell
- $compat = New-AduUpdateCompatibility -DeviceManufacturer 'deviceManufacturer' -DeviceModel 'deviceModel'
-
- $importManifest = New-AduImportManifest -Provider 'updateProvider' -Name 'updateName' -Version 'updateVersion' `
- -UpdateType 'updateType' -InstalledCriteria 'installedCriteria' `
-Compatibility $compat -Files 'updateFilePath(s)'
-
- $importManifest | Out-File '.\importManifest.json' -Encoding UTF8
- ```
-
- The following table is a quick reference for how to populate the above parameters. If you need more information, you can also view the complete [import manifest schema](import-schema.md).
-
- | Parameter | Description |
- | | -- |
- | deviceManufacturer | Manufacturer of the device the update is compatible with, for example, Contoso. Must match _manufacturer_ [device property](./device-update-plug-and-play.md#device-properties).
- | deviceModel | Model of the device the update is compatible with, for example, Toaster. Must match _model_ [device property](./device-update-plug-and-play.md#device-properties).
- | updateProvider | Entity who is creating or directly responsible for the update. It will often be a company name.
- | updateName | Identifier for a class of updates. The class can be anything you choose. It will often be a device or model name.
 | updateVersion | Version number distinguishing this update from others that have the same Provider and Name. Does not have to match a version of an individual software component on the device (but can if you choose).
- | updateType | <ul><li>Specify `microsoft/swupdate:1` for image update</li><li>Specify `microsoft/apt:1` for package update</li></ul>
 | installedCriteria | Used during deployment to compare the version already on the device with the version of the update. Deploying the update to the device will return a "failed" result if the installedCriteria value doesn't match the version that is on the device.<ul><li>For `microsoft/swupdate:1` update type, specify value of SWVersion </li><li>For `microsoft/apt:1` update type, specify **name-version**, where _name_ is the name of the APT Manifest and _version_ is the version of the APT Manifest. For example, contoso-iot-edge-1.0.0.0.
- | updateFilePath(s) | Path to the update file(s) on your computer.
--
-## Review the generated import manifest
-
-An example manifest output is below. For this example, there are two files that comprise this update: a .json file and a .zip file. If you have questions about any of the items, view the complete [import manifest schema](import-schema.md).
-```json
-{
- "updateId": {
- "provider": "Microsoft",
- "name": "Toaster",
- "version": "2.0"
- },
- "updateType": "microsoft/swupdate:1",
- "installedCriteria": "5",
- "compatibility": [
- {
- "deviceManufacturer": "Fabrikam",
- "deviceModel": "Toaster"
- },
- {
- "deviceManufacturer": "Contoso",
- "deviceModel": "Toaster"
- }
- ],
- "files": [
- {
- "filename": "file1.json",
- "sizeInBytes": 7,
- "hashes": {
- "sha256": "K2mn97qWmKSaSaM9SFdhC0QIEJ/wluXV7CoTlM8zMUo="
- }
- },
- {
- "filename": "file2.zip",
- "sizeInBytes": 11,
- "hashes": {
- "sha256": "gbG9pxCr9RMH2Pv57vBxKjm89uhUstD06wvQSioLMgU="
- }
- }
- ],
- "createdDateTime": "2020-10-08T03:32:52.477Z",
- "manifestVersion": "2.0"
-}
-```
- ## Import an update > [!NOTE]
-> The instructions below show how to import an update via the Azure portal UI. You can also use the [Device Update for IoT Hub APIs](#if-youre-importing-via-apis-instead) to import an update instead.
+> The following instructions show how to import an update via the Azure portal UI. You can also use the [Device Update for IoT Hub APIs](#if-youre-importing-via-apis-instead) to import an update instead.
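+
+If you use the APIs, the import request body pairs the uploaded import manifest with the URLs of the update files. A sketch, with placeholder URLs, size, and hash:
+
+```json
+{
+  "importManifest": {
+    "url": "https://<your-storage-account>.blob.core.windows.net/<container>/importManifest.json",
+    "sizeInBytes": 1234,
+    "hashes": {
+      "sha256": "<base64-encoded SHA-256 hash of the import manifest file>"
+    }
+  },
+  "files": [
+    {
+      "filename": "file1.json",
+      "url": "https://<your-storage-account>.blob.core.windows.net/<container>/file1.json"
+    }
+  ]
+}
+```
+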
1. Log in to the [Azure portal](https://portal.azure.com) and navigate to your IoT Hub with Device Update.
-2. On the left-hand side of the page, select "Device Updates" under "Automatic Device Management".
+2. On the left-hand side of the page, select `Updates` under `Device Management`.
- :::image type="content" source="media/import-update/import-updates-3.png" alt-text="Import Updates" lightbox="media/import-update/import-updates-3.png":::
+ :::image type="content" source="media/import-update/import-updates-3-ppr.png" alt-text="Import Updates" lightbox="media/import-update/import-updates-3-ppr.png":::
-3. You will see several tabs across the top of the screen. Select the Updates tab.
+3. Select the `Updates` tab from the list of tabs across the top of the screen.
- :::image type="content" source="media/import-update/updates-tab.png" alt-text="Updates" lightbox="media/import-update/updates-tab.png":::
+ :::image type="content" source="media/import-update/updates-tab-ppr.png" alt-text="Updates" lightbox="media/import-update/updates-tab-ppr.png":::
-4. Select "+ Import New Update" below the "Ready to Deploy" header.
+4. Select `+ Import a new update` below the `Available Updates` header.
- :::image type="content" source="media/import-update/import-new-update-2.png" alt-text="Import New Update" lightbox="media/import-update/import-new-update-2.png":::
+ :::image type="content" source="media/import-update/import-new-update-2-ppr.png" alt-text="Import New Update" lightbox="media/import-update/import-new-update-2-ppr.png":::
-5. Select the folder icon or text box under "Select an Import Manifest File". You will see a file picker dialog. Select the Import Manifest you created previously using the PowerShell cmdlet. Next, select the folder icon or text box under "Select one or more update files". You will see a file picker dialog. Select the same update file(s) that you included when you created your import manifest.
+5. Select `+ Select from storage container`. The Storage accounts UI is shown. Select an existing account, or create an account using `+ Storage account`. This account contains the container used to stage your updates for import.
- :::image type="content" source="media/import-update/select-update-files.png" alt-text="Select Update Files" lightbox="media/import-update/select-update-files.png":::
+ :::image type="content" source="media/import-update/select-update-files-ppr.png" alt-text="Select Update Files" lightbox="media/import-update/select-update-files-ppr.png":::
-6. Select the folder icon or text box under "Select a storage container". Then select the appropriate storage account. The storage container is used to stage the update files temporarily.
+6. Once you've selected a Storage account, the Containers UI is shown. Select an existing container, or create a container using `+ Container`. This container is used to stage your update files for importing. _Recommendation: use a new container each time you import an update to avoid accidentally importing files from previous updates. If you don't use a new container, be sure to delete any files from the existing container before you complete this step._
- :::image type="content" source="media/import-update/storage-account.png" alt-text="Storage Account" lightbox="media/import-update/storage-account.png":::
+ :::image type="content" source="media/import-update/storage-account-ppr.png" alt-text="Storage Account" lightbox="media/import-update/storage-account-ppr.png":::
-7. If you've already created a container, you can reuse it. (Otherwise, select "+ Container" to create a new storage container for updates.) Select the container you wish to use and click "Select".
+7. In your container, select `Upload`. The Upload UI is shown.
- :::image type="content" source="media/import-update/container.png" alt-text="Select Container" lightbox="media/import-update/container.png":::
+ :::image type="content" source="media/import-update/container-ppr.png" alt-text="Select Container" lightbox="media/import-update/container-ppr.png":::
-8. Select "Submit" to start the import process.
+8. Select the folder icon on the right side of the `Files` section under the `Upload blob` header. Use the file picker to navigate to the location of your update files and import manifest, select all of the files, then select `Open`. _You can hold the Shift key and click to multi-select files._
- :::image type="content" source="media/import-update/publish-update.png" alt-text="Publish Update" lightbox="media/import-update/publish-update.png":::
+ :::image type="content" source="media/import-update/container-picker-ppr.png" alt-text="Publish Update" lightbox="media/import-update/container-picker-ppr.png":::
-9. The import process begins, and the screen switches to to the "Import History" section. Select "Refresh" to view progress until the import process completes (depending on the size of the update, this may complete in a few minutes but could take longer).
+9. When you've selected all your update files, select `Upload`.
- :::image type="content" source="media/import-update/update-publishing-sequence-2.png" alt-text="Update Import Sequencing" lightbox="media/import-update/update-publishing-sequence-2.png":::
+ :::image type="content" source="media/import-update/container-upload-ppr.png" alt-text="Container Upload" lightbox="media/import-update/container-upload-ppr.png":::
-10. When the Status column indicates the import has succeeded, select the "Ready to Deploy" header. You should see your imported update in the list now.
+10. Select the uploaded files to designate them to be imported. Then click the `Select` button to return to the `Import update` page.
- :::image type="content" source="media/import-update/update-ready.png" alt-text="Job Status" lightbox="media/import-update/update-ready.png":::
+ :::image type="content" source="media/import-update/import-select-ppr.png" alt-text="Select Uploaded Files" lightbox="media/import-update/import-select-ppr.png":::
-## Next Steps
+11. On the Import update page, review the files to be imported. Then select `Import update` to start the import process. _To resolve any errors, see the [Proxy Update Troubleshooting](device-update-proxy-update-troubleshooting.md) page._
+
+ :::image type="content" source="media/import-update/import-start-2-ppr.png" alt-text="Import Start" lightbox="media/import-update/import-start-2-ppr.png":::
+
+12. The import process begins, and the screen switches to the `Import History` section. Select `Refresh` to view progress until the import process completes (depending on the size of the update, the process might complete in a few minutes but could take longer).
+
+ :::image type="content" source="media/import-update/update-publishing-sequence-2-ppr.png" alt-text="Update Import Sequencing" lightbox="media/import-update/update-publishing-sequence-2-ppr.png":::
-[Create Groups](create-update-group.md)
+13. When the `Status` column indicates that the import has succeeded, select the `Available Updates` header. You should see your imported update in the list now.
-[Learn about import concepts](import-concepts.md)
+ :::image type="content" source="media/import-update/update-ready-ppr.png" alt-text="Job Status" lightbox="media/import-update/update-ready-ppr.png":::
## If you're importing via APIs instead
-If you want to use the [Device Update for IoT Hub Update APIs](/rest/api/deviceupdate/updates) to import an update instead of importing via the Azure portal, note the following:
- - You will need to upload your update file(s) to an Azure Blob Storage location before you call the Update APIs.
- - You can reference this [sample API call](import-schema.md#example-import-request-body) which uses the import manifest you created above.
- - If you re-use the same SAS URL while testing, you may encounter errors when the token expires. This is the case when submitting the import manifest as well as the update content itself.
+In addition to importing via the Azure portal, you can import an update programmatically by:
+* Using the Azure SDK for [.NET](https://docs.microsoft.com/dotnet/api/azure.iot.deviceupdate), [Java](https://docs.microsoft.com/java/api/com.azure.iot.deviceupdate), [JavaScript](https://docs.microsoft.com/javascript/api/@azure/iot-device-update), or [Python](https://docs.microsoft.com/python/api/azure-mgmt-deviceupdate/azure.mgmt.deviceupdate)
+* Using [Import Update REST API](https://docs.microsoft.com/rest/api/deviceupdate/updates/import-update)
+* Using [sample PowerShell modules](https://github.com/Azure/iot-hub-device-update/tree/main/tools/AduCmdlets)
+
+> [!NOTE]
+> Refer to [Device update user roles and access](device-update-control-access.md) for the required API permissions.
+
+Update files and the import manifest must be uploaded to an Azure Storage blob container for staging. To import the staged files, provide the blob URL, or a shared access signature (SAS) for private blobs, to the Device Update API. If you use a SAS, be sure to provide an expiration window of three hours or more.
+
+> [!TIP]
+> To upload large update files to Azure Storage Blob container, you may use one of the following for better performance:
+> - [AzCopy](https://docs.microsoft.com/azure/storage/common/storage-use-azcopy-v10)
+> - [Azure Storage Explorer](https://azure.microsoft.com/features/storage-explorer)
+
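As a minimal sketch of this staging flow using the Azure CLI and AzCopy (the storage account, container, and file names below are placeholders, and the SAS is given a four-hour window to satisfy the three-hour minimum):

```bash
# Generate a container SAS with read, write, and list permissions.
# Write is needed for the upload; read and list let the import service fetch the files.
expiry=$(date -u -d "+4 hours" '+%Y-%m-%dT%H:%MZ')
sas=$(az storage container generate-sas \
  --account-name mystorageaccount \
  --name updates-staging \
  --permissions rwl \
  --expiry "$expiry" \
  --output tsv)

# Stage the update file and import manifest in the container.
azcopy copy "./contoso.toaster.1.0.swu" \
  "https://mystorageaccount.blob.core.windows.net/updates-staging?$sas"
azcopy copy "./contoso.toaster.1.0.importmanifest.json" \
  "https://mystorageaccount.blob.core.windows.net/updates-staging?$sas"
```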
+## Next Steps
+
+* [Create Groups](create-update-group.md)
+* [Learn about import concepts](import-concepts.md)
iot-hub-device-update Migration Pp To Ppr https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/migration-pp-to-ppr.md
+
+ Title: Migrating to the latest Device Update for Azure IoT Hub release | Microsoft Docs
+description: Understand how to migrate to the latest Device Update for Azure IoT Hub release
++ Last updated : 1/14/2022++++
+# Migrate devices and groups from Public Preview to Public Preview Refresh
+
+As the Device Update for IoT Hub service releases new versions, you'll want to update your devices for the latest features and security improvements. This article provides information about how to migrate from the Public Preview release to the current, Public Preview Refresh (PPR) release. This article also explains the group and UX behavior across these releases. If you do not have devices, groups, and deployments that use the Public Preview release, you can ignore this page.
+
+To migrate successfully, you'll have to upgrade the Device Update (DU) agent running on your devices and create new device groups to deploy and manage updates. Because the PPR release introduces major changes, we recommend that you follow the instructions closely to avoid errors.
+
+## Update the device update agent
+
+For the Public Preview Refresh release, the Device Update agent needs to be updated manually as described below. Updating the agent through a Device Update deployment is not supported because of major changes across the Public Preview and PPR releases.
+
+1. To view devices using older agents (versions 0.7.0/0.6.0) and groups created before 02/03/2022, navigate to the public preview portal, which can be accessed through the banner.
+
+ :::image type="content" source="media/migration/switch-banner.png" alt-text="Screenshot of banner." lightbox="media/migration/switch-banner.png":::
+
+2. Create a new IoT/IoT Edge device on the Azure portal. Copy the primary connection string for the device from the device view for later. For more details, refer to the [Add Device to IoT Hub](device-update-simulator.md#add-device-to-azure-iot-hub) section.
+
+3. Then, SSH into your device and remove any old Device Update agent.
+ ```bash
+ sudo apt remove deviceupdate-agent
+ sudo apt remove adu-agent
+ ```
+
+4. Remove the old configuration file.
+ ```bash
+ sudo rm -f /etc/adu/adu-conf.txt
+ ```
+
+5. Install the new agent.
+ ```bash
+ sudo apt-get install deviceupdate-agent
+ ```
+ Alternatively, you can get the .deb asset from [GitHub](https://github.com/Azure/iot-hub-device-update) and install the agent:
+
+ ```bash
+ sudo apt install <file>.deb
+ ```
+
+ Trying to upgrade the Device Update agent without removing the old agent and configuration files will result in the error shown below.
+
+ :::image type="content" source="media/migration/update-error.png" alt-text="Screenshot of update error." lightbox="media/migration/update-error.png":::
+
+
+6. Enter your IoT device's device (or module, depending on how you [provisioned the device with Device Update](device-update-agent-provisioning.md)) primary connection string in the configuration file located at the path below.

+    ```text
+    /etc/adu/du-config.json
+    ```
+7. Add your model and manufacturer details in the configuration file. (A sketch of a completed file appears after the note below.)
+
+8. Delete the old IoT/IoT Edge device from the public preview portal.
+
+> [!NOTE]
+> Attempting to update the agent through a DU deployment will lead to the device no longer being manageable by Device Update. The device will have to be re-provisioned to be managed from Device Update.
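For reference, here's a minimal sketch of what the completed `/etc/adu/du-config.json` might look like for the 0.8.0 agent. The field names are assumptions based on the agent's published configuration schema; substitute your own manufacturer, model, and connection string.

```bash
# A sketch of a minimal /etc/adu/du-config.json for the 0.8.0 agent.
# The field names are assumptions based on the agent's published configuration
# schema; replace manufacturer, model, and connectionData with your own values.
sudo tee /etc/adu/du-config.json > /dev/null <<'EOF'
{
  "schemaVersion": "1.0",
  "aduShellTrustedUsers": [ "adu", "do" ],
  "manufacturer": "contoso",
  "model": "toaster",
  "agents": [
    {
      "name": "main",
      "runas": "adu",
      "connectionSource": {
        "connectionType": "string",
        "connectionData": "HostName=<hub-name>.azure-devices.net;DeviceId=<device-id>;SharedAccessKey=<key>"
      },
      "manufacturer": "contoso",
      "model": "toaster"
    }
  ]
}
EOF
```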
+
+## Migrate groups to Public Preview Refresh
+
+1. If your devices are using Device Update agent versions 0.6.0 or 0.7.0, upgrade to the latest agent version 0.8.0 following the steps above.
+
+2. Delete the existing groups in the public preview portal by navigating through the banner.
+
+3. Add a group tag to the device twin for the updated devices, as shown in the sketch after this list. For more details, refer to the [Add a tag to your device](device-update-simulator.md#add-device-to-azure-iot-hub) section.
+
+4. Recreate the groups in the PPR portal by going to 'Add Groups' and selecting the corresponding group tag from the drop-down list.
+
+5. A group with the same name can't be created in the PPR portal until the corresponding group in the public preview portal is deleted.
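As a sketch of step 3, assuming a recent azure-iot extension for the Azure CLI (the hub, device, and group names are placeholders; `ADUGroup` is the twin tag that Device Update reads):

```bash
# Add a Device Update group tag to the device twin.
az iot hub device-twin update \
  --hub-name my-iot-hub \
  --device-id my-device \
  --tags '{ "ADUGroup": "my-ppr-group" }'
```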
+
+## Group and deployment behavior across releases
+
+- Groups created in the Public Preview Refresh release portal will only allow addition of devices with the latest Device Update Agent (0.8.0). Devices with older agents (0.7.0/0.6.0) cannot be added to these groups.
+
+- Any new devices using the latest agent will automatically be added to a Default DeviceClass Group in the 'Groups and Deployments' tab. If a group tag is added to the device properties, then the device will be added to that group if a group for that tag exists.
+
+- For a device using the latest agent, if a group tag is added to the device properties but the corresponding group is not yet created, the device will not be visible in the 'Groups and Deployments' tab.
+
+- Devices using the older agents will show up as ungrouped in the old portal if the group tag is not added.
+
+## Next steps
+[Understand Device Update agent configuration file](device-update-configuration-file.md)
+
+You can use the following tutorials for a simple demonstration of Device Update for IoT Hub:
+
+- [Image Update: Getting Started with Raspberry Pi 3 B+ Reference Yocto Image](device-update-raspberry-pi.md) extensible via open source to build your own images for other architectures as needed.
+
+- [Package Update: Getting Started using Ubuntu Server 18.04 x64 Package agent](device-update-ubuntu-agent.md)
+
+- [Proxy Update: Getting Started using Device Update binary agent for downstream devices](device-update-howto-proxy-updates.md)
+
+- [Getting Started Using Ubuntu (18.04 x64) Simulator Reference Agent](device-update-simulator.md)
+
+- [Device Update for Azure IoT Hub tutorial for Azure-Real-Time-Operating-System](device-update-azure-real-time-operating-system.md)
iot-hub-device-update Troubleshoot Device Update https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/troubleshoot-device-update.md
_You may not have access permissions configured correctly. Please ensure you hav
### Q: I'm encountering a 500-type error when importing content to the Device Update service.
_An error code in the 500 range may indicate an issue with the Device Update service. Please wait 5 minutes, then try again. If the same error persists, please follow the instructions in the [Contacting Microsoft Support](#contact) section to file a support request with Microsoft._
-### Q: I'm encountering an error code when importing content and would like to parse it.
-_Please refer to the [Device Update Error Codes](./device-update-error-codes.md) documentation for information on parsing error codes._
+### Q: I'm encountering an error message when importing content and would like to understand more about it.
+_Please refer to the [Device Update Error Codes](./device-update-error-codes.md#device-update-content-service) documentation for more detailed information on import-related error messages._
## <a name="device-failure"></a>Device failures
iot-hub-device-update Update Manifest https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/update-manifest.md
## Overview
-Device Update for IoT Hub uses an _update manifest_ to communicate actions and metadata that supports those actions through the
- [AzureDeviceUpdateCore.OrchestratorMetadata:4](./device-update-plug-and-play.md)interface properties.
- This document describes the fundamentals of how the `updateManifest` property, in the
- `AzureDeviceUpdateCore.OrchestratorMetadata:4` interface, is used by the Device Update Agent. The
- `AzureDeviceUpdateCore.OrchestratorMetadata:4` interface properties are sent from the Device Update for IoT Hub service
- to the Device Update Agent. The `updateManifest` is a serialized JSON Object that is parsed by the Device Update Agent.
-
-The update manifest is auto generated after creation of an import manifest. For more information on how to generate an import manifest, [see here](./import-update.md).
-
-### An example update manifest
-
-```JSON
-{
- "manifestVersion": "1",
- "updateId": {
- "provider": "DuTest",
- "name": "DuTestUser",
- "version": "2020.611.534.16"
- },
- "updateType": "microsoft/swupdate:1",
- "installedCriteria": "1.0",
- "files": {
- "00000": {
- "fileName": "image.swu",
- "sizeInBytes": 256000,
- "hashes": {
- "sha256": "IhIIxBJpLfazQOk/PVi6SzR7BM0jf4HDqw+6gdZ3vp8="
- }
- }
- },
- "createdDateTime": "2020-06-12T00:38:13.9350278"
-}
-```
-
-The purpose of the update manifest is to describe the contents of an update, namely its identity, type,
-installed criteria, and update file metadata. In addition, the update manifest is cryptographically signed to
-allow the Device Update Agent to verify its authenticity. For more information, see the document on [Device Update security](./device-update-security.md).
+Device Update for IoT Hub uses [IoT Plug and Play](./device-update-plug-and-play.md) to send data to devices during deployment. One such piece of data is the _update manifest_, a serialized JSON object string containing metadata about the update to install. The update manifest is also cryptographically signed so that the Device Update Agent can verify its authenticity. Refer to [Device Update security](./device-update-security.md) for more information on how the update manifest is used to securely install content.
## Import manifest vs update manifest
-It is important to understand the differences between the import manifest and the update manifest.
-* The [import manifest](./import-concepts.md) is created by whoever creates the corresponding update. It describes the contents of the update that will be imported into Device Update for IoT Hub.
-* The update manifest is automatically generated by the Device Update for IoT Hub service, using some of the properties that were defined in the import manifest. It is used to communicate relevant information to the Device Update Agent during the update process.
+It is important to understand the differences between the import manifest and the update manifest concepts in Device Update for IoT Hub:
-Each manifest type has its own schema and schema version.
-
-## Update manifest properties
+* The [import manifest](./import-concepts.md) is created by whoever creates the corresponding update. It describes the contents of the update that will be imported into Device Update for IoT Hub.
+* The update manifest is automatically generated by the Device Update for IoT Hub service, using some of the properties that were defined in the import manifest. It is used to communicate relevant information to the Device Update Agent during the update process.
-The high-level definitions of the update manifest properties can be found in the interface definitions found
-[here](./device-update-plug-and-play.md). To provide deeper context, let's take a closer look
-at the properties and how they are used in the system.
-
-### updateId
+Each manifest type has its own schema and schema version.
-Contains the `provider`, `name`, and `version`, which represents the exact Device Update for IoT Hub update identity used
-to determine compatible devices for the update.
+## Update manifest schema
-### updateType
+> [!IMPORTANT]
+> Update manifest JSON schema version 4 is hosted at [SchemaStore.org](https://json.schemastore.org/azure-deviceupdate-update-manifest-4.json).
-Represents the type of update that is handled by a specific type of update handler. It follows the form
-of `microsoft/swupdate:1` for an image-based update and `microsoft/apt:1` for a package-based update (see `Update Handler Types` section below).
+### Example update manifest
-### installedCriteria
+```JSON
+{
+ "manifestVersion": "4",
+ "updateId": {
+ "provider": "Contoso",
+ "name": "Toaster",
+ "version": "1.0"
+ },
+ "compatibility": [
+ {
+ "deviceManufacturer": "Contoso",
+ "deviceModel": "Toaster"
+ }
+ ],
+ "instructions": {
+ "steps": [
+ {
+ "handler": "microsoft/swupdate:1",
+ "handlerProperties": {
+ "installedCriteria": "1.0"
+ },
+ "files": [
+ "fileId0"
+ ]
+ }
+ ]
+ },
+ "files": {
+ "fileId0": {
+ "filename": "contoso.toaster.1.0.swu",
+ "sizeInBytes": 718,
+ "hashes": {
+ "sha256": "mcB5SexMU4JOOzqmlJqKbue9qMskWY3EI/iVjJxCtAs="
+ }
+ }
+ },
+ "createdDateTime": "2021-09-28T18:32:01.8404544Z"
+}
+```
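The `hashes` values are base64-encoded SHA-256 digests of the referenced files. As a quick sketch, you can reproduce one with OpenSSL (the file name is a placeholder):

```bash
# Compute a base64-encoded SHA-256 digest, matching the format of the
# "hashes" entries in the update manifest.
openssl dgst -binary -sha256 contoso.toaster.1.0.swu | openssl base64
```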
-A string that contains information needed by Device Update Agent's Update Handler to determine whether the update is
-installed on the device. The `Update Handler Types` section documents the format of the `installedCriteria`,
-for each update type supported by Device Update for IoT Hub.
+### Full vs mini update manifest
-### files
+When an update manifest exceeds a size that prevents it from being communicated efficiently, Device Update for IoT Hub sends it to the device in _detached_ format, also known as a _mini update manifest_. A mini manifest is technically _metadata for the update manifest_: it contains the information the Device Update Agent needs to download the _full_ update manifest and verify its authenticity.
-Tells the Device Update Agent which files to download, and the hash that will be used to verify that the files downloaded correctly.
-Here's a closer look at the `files` property contents:
+Example mini update manifest:
```json
-"files":{
- <FILE_ID_STRING>:{
- "fileName":<STRING>,
- "sizeInBytes":<INTEGER>,
- "hashes":{
- <HASH-TYPE>:<HASH-STRING>
- }
- }
+{
+ "manifestVersion": "4",
+ "updateId": {
+ "provider": "Contoso",
+ "name": "Toaster",
+ "version": "1.0"
+ },
+ "detachedManifestFileId": "fileId1",
+ "files": {
+ "fileId1": {
+ "filename": "contoso.toaster.1.0.updatemanifest.json",
+ "sizeInBytes": 2048,
+ "hashes": {
+ "sha256": "789s9PDfX4uA9wFUubyC30BWkLFbgmpkpmz1fEdqo2U="
+ }
}
+ }
+}
```
-
-Outside of the `updateManifest` is the `fileUrls` array of JSON Object.
-
-```json
-"fileUrls":{
- <FILE_ID_STRING>: <URL-in-String-Format>
- }
-```
-
-Both the `FILE_ID_STRING`, within `fileUrls`, and `files` are the same (for example, "0000" in `files` has the url
-at "0000" within `fileUrls`).
-
-### manifestVersion
-
-A string that represents the schema version.
-
-## Update Handler Types
-
-|Update Method|Update Handler Type|Update Type|Installed Criteria|Expected Files for Publishing|
-|-|-|-|--|--|
-|Image-based|SWUpdate|"microsoft/swupdate:version"|The reference image saves the hint of its version in the /etc/adu-version file. |.swu file that contains SWUpdate image|
-|Package-based|APT|"microsoft/apt:version"|`<name>` + "-" + `<version>` (defined properties in the APT Manifest file)|`<APT Update Manifest>`.json that contains the APT configuration and package list|
-
key-vault About Certificates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/certificates/about-certificates.md
The addressable key becomes more relevant with non-exportable KV certificates. T
The types of key pairs supported for certificates
+ - Supported keytypes: RSA, RSA-HSM, EC, EC-HSM, oct (listed [here](/rest/api/keyvault/certificates/create-certificate/create-certificate#jsonwebkeytype))
Exportable is only allowed with RSA and EC. HSM keys are non-exportable.

|Key type|About|Security|
At a high level, a certificate policy contains the following information (their
- X509 certificate properties: Contains subject name, subject alternate names, and other properties used to create an x509 certificate request.
- Key Properties: contains key type, key length, exportable, and ReuseKeyOnRenewal fields. These fields instruct key vault on how to generate a key.
- - Supported keytypes: RSA, RSA-HSM, EC, EC-HSM, oct (listed [here](/rest/api/keyvault/createcertificate/createcertificate#jsonwebkeytype))
+ - Supported keytypes: RSA, RSA-HSM, EC, EC-HSM, oct (listed [here](/rest/api/keyvault/certificates/create-certificate/create-certificate#jsonwebkeytype))
- Secret properties: contains secret properties such as the content type of the addressable secret, used to generate the secret value for retrieving the certificate as a secret.
- Lifetime Actions: contains lifetime actions for the KV Certificate. Each lifetime action contains:
key-vault Certificate Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/certificates/certificate-scenarios.md
Certificates are composed of three interrelated resources linked together as a K
**Step 3** - A Contoso admin, along with a Contoso employee (Key Vault user) who owns certificates, depending on the CA, can get a certificate from the admin or directly from the account with the CA. -- Begin an add credential operation to a key vault by [setting a certificate issuer](/rest/api/keyvault/setcertificateissuer/setcertificateissuer) resource. A certificate issuer is an entity represented in Azure Key Vault (KV) as a CertificateIssuer resource. It is used to provide information about the source of a KV certificate; issuer name, provider, credentials, and other administrative details.
+- Begin an add credential operation to a key vault by [setting a certificate issuer](/rest/api/keyvault/certificates/set-certificate-issuer/set-certificate-issuer) resource. A certificate issuer is an entity represented in Azure Key Vault (KV) as a CertificateIssuer resource. It is used to provide information about the source of a KV certificate; issuer name, provider, credentials, and other administrative details.
- Ex. MyDigiCertIssuer - Provider - Credentials ΓÇô CA account credentials. Each CA has its own specific data. For more information on creating accounts with CA Providers, see the related post on the [Key Vault blog](/archive/blogs/kv/manage-certificates-via-azure-key-vault).
-**Step 3.1** - Set up [certificate contacts](/rest/api/keyvault/setcertificatecontacts/setcertificatecontacts) for notifications. This is the contact for the Key Vault user. Key Vault does not enforce this step.
+**Step 3.1** - Set up [certificate contacts](/rest/api/keyvault/certificates/set-certificate-contacts/set-certificate-contacts) for notifications. This is the contact for the Key Vault user. Key Vault does not enforce this step.
Note - This process, through step 3.1, is a onetime operation.
- Renewal information - > ex. 90 days before expiry - A certificate creation process is usually an asynchronous process and involves polling your key vault for the state of the create certificate operation.
-[Get certificate operation](/rest/api/keyvault/getcertificateoperation/getcertificateoperation)
+[Get certificate operation](/rest/api/keyvault/certificates/get-certificate-operation/get-certificate-operation)
- Status: completed, failed with error information, or canceled
- Because of the delay to create, a cancel operation can be initiated. The cancel may or may not be effective.
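As a sketch of that polling pattern with the Azure CLI (the vault and certificate names are placeholders):

```bash
# Start certificate creation with the default policy, then poll the pending
# operation until its status is no longer "inProgress".
az keyvault certificate create \
  --vault-name my-vault \
  --name my-cert \
  --policy "$(az keyvault certificate get-default-policy)"

az keyvault certificate pending show --vault-name my-vault --name my-cert
```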
When you are importing the certificate, you need to ensure that the key is inclu
### Formats of Merge CSR we support AKV supports 2 PEM based formats. You can either merge a single PKCS#8 encoded certificate or a base64 encoded P7B (chain of certificates signed by CA).
-If you need to covert the P7B's format to the supported one, you can use [certutil -encode](https://docs.microsoft.com/windows-server/administration/windows-commands/certutil#-encode)
+If you need to convert the P7B's format to the supported one, you can use [certutil -encode](/windows-server/administration/windows-commands/certutil#-encode)
-----BEGIN CERTIFICATE-----
-----END CERTIFICATE-----
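On platforms without certutil, OpenSSL offers a comparable conversion; a sketch, assuming a DER-encoded .p7b (the file names are placeholders):

```bash
# Convert a DER-encoded P7B chain into base64 PEM certificates.
openssl pkcs7 -inform DER -in chain.p7b -print_certs -out chain.pem
```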
key-vault Howto Logging https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/general/howto-logging.md
To configure diagnostic settings in the Azure portal, follow these steps:
:::image type="content" source="../media/diagnostics-portal-2.png" alt-text="Screenshot that shows adding a diagnostic setting.":::
-1. Select a name for your diagnostic setting. To configure logging for Azure Monitor for Key Vault, select **AuditEvent** and **Send to Log Analytics workspace**. Then choose the subscription and Log Analytics workspace to which you want to send your logs.
+1. Select a name for your diagnostic setting. To configure logging for Azure Monitor for Key Vault, select **AuditEvent** and **Send to Log Analytics workspace**. Then choose the subscription and Log Analytics workspace to which you want to send your logs. You can also select the option to **Archive to a storage account**.
:::image type="content" source="../media/diagnostics-portal-3.png" alt-text="Screenshot of diagnostic settings options.":::
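If you prefer scripting over the portal, a roughly equivalent diagnostic setting can be created with the Azure CLI; a sketch, with the resource IDs as placeholders:

```bash
# Send Key Vault AuditEvent logs to a Log Analytics workspace.
az monitor diagnostic-settings create \
  --name KeyVault-Diagnostics \
  --resource "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.KeyVault/vaults/<vault>" \
  --logs '[{ "category": "AuditEvent", "enabled": true }]' \
  --workspace "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace>"
```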
key-vault Logging https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/general/logging.md
The following table lists the **operationName** values and corresponding REST AP
| operationName | REST API command |
| --- | --- |
| **Authentication** |Authenticate via Azure Active Directory endpoint |
-| **VaultGet** |[Get information about a key vault](/rest/api/keyvault/vaults) |
-| **VaultPut** |[Create or update a key vault](/rest/api/keyvault/vaults) |
-| **VaultDelete** |[Delete a key vault](/rest/api/keyvault/vaults) |
-| **VaultPatch** |[Update a key vault](/rest/api/keyvault/vaults) |
-| **VaultList** |[List all key vaults in a resource group](/rest/api/keyvault/vaults) |
-| **VaultPurge** |[Purge deleted vault](/rest/api/keyvault/vaults/purgedeleted) |
+| **VaultGet** |[Get information about a key vault](/rest/api/keyvault/keyvault/vaults) |
+| **VaultPut** |[Create or update a key vault](/rest/api/keyvault/keyvault/vaults) |
+| **VaultDelete** |[Delete a key vault](/rest/api/keyvault/keyvault/vaults) |
+| **VaultPatch** |[Update a key vault](/rest/api/keyvault/keyvault/vaults) |
+| **VaultList** |[List all key vaults in a resource group](/rest/api/keyvault/keyvault/vaults) |
+| **VaultPurge** |[Purge deleted vault](/rest/api/keyvault/keyvault/vaults/purge-deleted) |
| **VaultRecover** |Recover deleted vault|
-| **VaultGetDeleted** |[Get deleted vault](/rest/api/keyvault/vaults/getdeleted) |
-| **VaultListDeleted** |[List deleted vaults](/rest/api/keyvault/vaults/listdeleted) |
+| **VaultGetDeleted** |[Get deleted vault](/rest/api/keyvault/keyvault/vaults/get-deleted) |
+| **VaultListDeleted** |[List deleted vaults](/rest/api/keyvault/keyvault/vaults/list-deleted) |
| **VaultAccessPolicyChangedEventGridNotification** | Vault access policy changed event published |

# [Keys](#tab/Keys)

| operationName | REST API command |
| --- | --- |
-| **KeyCreate** |[Create a key](/rest/api/keyvault/createkey) |
-| **KeyGet** |[Get information about a key](/rest/api/keyvault/getkey) |
-| **KeyImport** |[Import a key into a vault](/rest/api/keyvault/vaults) |
-| **KeyDelete** |[Delete a key](/rest/api/keyvault/deletekey) |
-| **KeySign** |[Sign with a key](/rest/api/keyvault/sign) |
-| **KeyVerify** |[Verify with a key](/rest/api/keyvault/vaults) |
-| **KeyWrap** |[Wrap a key](/rest/api/keyvault/wrapkey) |
-| **KeyUnwrap** |[Unwrap a key](/rest/api/keyvault/unwrapkey) |
-| **KeyEncrypt** |[Encrypt with a key](/rest/api/keyvault/encrypt) |
-| **KeyDecrypt** |[Decrypt with a key](/rest/api/keyvault/decrypt) |
-| **KeyUpdate** |[Update a key](/rest/api/keyvault/updatekey) |
-| **KeyList** |[List the keys in a vault](/rest/api/keyvault/getkeys) |
-| **KeyListVersions** |[List the versions of a key](/rest/api/keyvault/getkeyversions) |
-| **KeyPurge** |[Purge a key](/rest/api/keyvault/purgedeletedkey) |
-| **KeyBackup** |[Backup a key](/rest/api/keyvault/backupkey) |
-| **KeyRestore** |[Restore a key](/rest/api/keyvault/restorekey) |
-| **KeyRecover** |[Recover a key](/rest/api/keyvault/recoverdeletedkey) |
-| **KeyGetDeleted** |[Get deleted key](/rest/api/keyvault/getdeletedkey) |
-| **KeyListDeleted** |[List the deleted keys in a vault](/rest/api/keyvault/getdeletedkeys) |
+| **KeyCreate** |[Create a key](/rest/api/keyvault/keys/create-key) |
+| **KeyGet** |[Get information about a key](/rest/api/keyvault/keys/get-key) |
+| **KeyImport** |[Import a key into a vault](/rest/api/keyvault/keys/import-key) |
+| **KeyDelete** |[Delete a key](/rest/api/keyvault/keys/delete-key) |
+| **KeySign** |[Sign with a key](/rest/api/keyvault/keys/sign) |
+| **KeyVerify** |[Verify with a key](/rest/api/keyvault/keys/verify) |
+| **KeyWrap** |[Wrap a key](/rest/api/keyvault/keys/wrap-key) |
+| **KeyUnwrap** |[Unwrap a key](/rest/api/keyvault/keys/unwrap-key) |
+| **KeyEncrypt** |[Encrypt with a key](/rest/api/keyvault/keys/encrypt) |
+| **KeyDecrypt** |[Decrypt with a key](/rest/api/keyvault/keys/decrypt) |
+| **KeyUpdate** |[Update a key](/rest/api/keyvault/keys/update-key) |
+| **KeyList** |[List the keys in a vault](/rest/api/keyvault/keys/get-keys) |
+| **KeyListVersions** |[List the versions of a key](/rest/api/keyvault/keys/get-key-versions) |
+| **KeyPurge** |[Purge a key](/rest/api/keyvault/keys/purge-deleted-key) |
+| **KeyBackup** |[Backup a key](/rest/api/keyvault/keys/backup-key) |
+| **KeyRestore** |[Restore a key](/rest/api/keyvault/keys/restore-key) |
+| **KeyRecover** |[Recover a key](/rest/api/keyvault/keys/recover-deleted-key) |
+| **KeyGetDeleted** |[Get deleted key](/rest/api/keyvault/keys/get-deleted-key) |
+| **KeyListDeleted** |[List the deleted keys in a vault](/rest/api/keyvault/keys/get-deleted-keys) |
| **KeyNearExpiryEventGridNotification** |Key near expiry event published |
| **KeyExpiredEventGridNotification** |Key expired event published |
The following table lists the **operationName** values and corresponding REST AP
| operationName | REST API command |
| --- | --- |
-| **SecretSet** |[Create a secret](/rest/api/keyvault/updatecertificate) |
-| **SecretGet** |[Get a secret](/rest/api/keyvault/getsecret) |
-| **SecretUpdate** |[Update a secret](/rest/api/keyvault/updatesecret) |
-| **SecretDelete** |[Delete a secret](/rest/api/keyvault/deletesecret) |
-| **SecretList** |[List secrets in a vault](/rest/api/keyvault/getsecrets) |
-| **SecretListVersions** |[List versions of a secret](/rest/api/keyvault/getsecretversions) |
-| **SecretPurge** |[Purge a secret](/rest/api/keyvault/purgedeletedsecret) |
-| **SecretBackup** |[Backup a secret](/rest/api/keyvault/backupsecret) |
-| **SecretRestore** |[Restore a secret](/rest/api/keyvault/restoresecret) |
-| **SecretRecover** |[Recover a secret](/rest/api/keyvault/recoverdeletedsecret) |
-| **SecretGetDeleted** |[Get deleted secret](/rest/api/keyvault/getdeletedsecret) |
-| **SecretListDeleted** |[List the deleted secrets in a vault](/rest/api/keyvault/getdeletedsecrets) |
+| **SecretSet** |[Create a secret](/rest/api/keyvault/secrets/set-secret) |
+| **SecretGet** |[Get a secret](/rest/api/keyvault/secrets/get-secret) |
+| **SecretUpdate** |[Update a secret](/rest/api/keyvault/secrets/update-secret) |
+| **SecretDelete** |[Delete a secret](/rest/api/keyvault/secrets/delete-secret) |
+| **SecretList** |[List secrets in a vault](/rest/api/keyvault/secrets/get-secrets) |
+| **SecretListVersions** |[List versions of a secret](/rest/api/keyvault/secrets/get-secret-versions) |
+| **SecretPurge** |[Purge a secret](/rest/api/keyvault/secrets/purge-deleted-secret) |
+| **SecretBackup** |[Backup a secret](/rest/api/keyvault/secrets/backup-secret) |
+| **SecretRestore** |[Restore a secret](/rest/api/keyvault/secrets/restore-secret) |
+| **SecretRecover** |[Recover a secret](/rest/api/keyvault/secrets/recover-deleted-secret) |
+| **SecretGetDeleted** |[Get deleted secret](/rest/api/keyvault/secrets/get-deleted-secret) |
+| **SecretListDeleted** |[List the deleted secrets in a vault](/rest/api/keyvault/secrets/get-deleted-secrets) |
| **SecretNearExpiryEventGridNotification** |Secret near expiry event published |
| **SecretExpiredEventGridNotification** |Secret expired event published |
The following table lists the **operationName** values and corresponding REST AP
| operationName | REST API command |
| --- | --- |
-| **CertificateGet** |[Get information about a certificate](/rest/api/keyvault/getcertificate) |
-| **CertificateCreate** |[Create a certificate](/rest/api/keyvault/createcertificate) |
-| **CertificateImport** |[Import a certificate into a vault](/rest/api/keyvault/importcertificate) |
-| **CertificateUpdate** |[Update a certificate](/rest/api/keyvault/updatecertificate) |
-| **CertificateList** |[List the certificates in a vault](/rest/api/keyvault/getcertificates) |
-| **CertificateListVersions** |[List the versions of a certificate](/rest/api/keyvault/getcertificateversions) |
-| **CertificateDelete** |[Delete a certificate](/rest/api/keyvault/deletecertificate) |
-| **CertificatePurge** |[Purge a certificate](/rest/api/keyvault/purgedeletedcertificate) |
-| **CertificateBackup** |[Backup a certificate](/rest/api/keyvault/backupcertificate) |
-| **CertificateRestore** |[Restore a certificate](/rest/api/keyvault/restorecertificate) |
-| **CertificateRecover** |[Recover a certificate](/rest/api/keyvault/recoverdeletedcertificate) |
-| **CertificateGetDeleted** |[Get deleted certificate](/rest/api/keyvault/getdeletedcertificate) |
-| **CertificateListDeleted** |[List the deleted certificates in a vault](/rest/api/keyvault/getdeletedcertificates) |
-| **CertificatePolicyGet** |[Get certificate policy](/rest/api/keyvault/getcertificatepolicy) |
-| **CertificatePolicyUpdate** |[Update certificate policy](/rest/api/keyvault/updatecertificatepolicy) |
-| **CertificatePolicySet** |[Create certificate policy](/rest/api/keyvault/createcertificate) |
-| **CertificateContactsGet** |[Get certificate contacts](/rest/api/keyvault/getcertificatecontacts) |
-| **CertificateContactsSet** |[Set certificate contacts](/rest/api/keyvault/setcertificatecontacts) |
-| **CertificateContactsDelete** |[Delete certificate contacts](/rest/api/keyvault/deletecertificatecontacts) |
-| **CertificateIssuerGet** |[Get certificate issuer](/rest/api/keyvault/getcertificateissuer) |
-| **CertificateIssuerSet** |[Set certificate issuer](/rest/api/keyvault/setcertificateissuer) |
-| **CertificateIssuerUpdate** |[Update certificate issuer](/rest/api/keyvault/updatecertificateissuer) |
-| **CertificateIssuerDelete** |[Delete certificate issuer](/rest/api/keyvault/deletecertificateissuer) |
-| **CertificateIssuersList** |[List the certificate issuers](/rest/api/keyvault/getcertificateissuers) |
+| **CertificateGet** |[Get information about a certificate](/rest/api/keyvault/certificates/get-certificate) |
+| **CertificateCreate** |[Create a certificate](/rest/api/keyvault/certificates/create-certificate) |
+| **CertificateImport** |[Import a certificate into a vault](/rest/api/keyvault/certificates/import-certificate) |
+| **CertificateUpdate** |[Update a certificate](/rest/api/keyvault/certificates/update-certificate) |
+| **CertificateList** |[List the certificates in a vault](/rest/api/keyvault/certificates/get-certificates) |
+| **CertificateListVersions** |[List the versions of a certificate](/rest/api/keyvault/certificates/get-certificate-versions) |
+| **CertificateDelete** |[Delete a certificate](/rest/api/keyvault/certificates/delete-certificate) |
+| **CertificatePurge** |[Purge a certificate](/rest/api/keyvault/certificates/purge-deleted-certificate) |
+| **CertificateBackup** |[Backup a certificate](/rest/api/keyvault/certificates/backup-certificate) |
+| **CertificateRestore** |[Restore a certificate](/rest/api/keyvault/certificates/restore-certificate) |
+| **CertificateRecover** |[Recover a certificate](/rest/api/keyvault/certificates/recover-deleted-certificate) |
+| **CertificateGetDeleted** |[Get deleted certificate](/rest/api/keyvault/certificates/get-deleted-certificate) |
+| **CertificateListDeleted** |[List the deleted certificates in a vault](/rest/api/keyvault/certificates/get-deleted-certificates) |
+| **CertificatePolicyGet** |[Get certificate policy](/rest/api/keyvault/certificates/get-certificate-policy) |
+| **CertificatePolicyUpdate** |[Update certificate policy](/rest/api/keyvault/certificates/update-certificate-policy) |
+| **CertificatePolicySet** |[Create certificate policy](/rest/api/keyvault/certificates/create-certificate) |
+| **CertificateContactsGet** |[Get certificate contacts](/rest/api/keyvault/certificates/get-certificate-contacts) |
+| **CertificateContactsSet** |[Set certificate contacts](/rest/api/keyvault/certificates/set-certificate-contacts) |
+| **CertificateContactsDelete** |[Delete certificate contacts](/rest/api/keyvault/certificates/delete-certificate-contacts) |
+| **CertificateIssuerGet** |[Get certificate issuer](/rest/api/keyvault/certificates/get-certificate-issuer) |
+| **CertificateIssuerSet** |[Set certificate issuer](/rest/api/keyvault/certificates/set-certificate-issuer) |
+| **CertificateIssuerUpdate** |[Update certificate issuer](/rest/api/keyvault/certificates/update-certificate-issuer) |
+| **CertificateIssuerDelete** |[Delete certificate issuer](/rest/api/keyvault/certificates/delete-certificate-issuer) |
+| **CertificateIssuersList** |[List the certificate issuers](/rest/api/keyvault/certificates/get-certificate-issuers) |
| **CertificateEnroll** |Enroll a certificate |
| **CertificateRenew** |Renew a certificate |
| **CertificatePendingGet** |Retrieve pending certificate |
key-vault Security Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/general/security-features.md
Azure Private Link Service enables you to access Azure Key Vault and Azure hoste
- Despite known vulnerabilities in TLS protocol, there is no known attack that would allow a malicious agent to extract any information from your key vault when the attacker initiates a connection with a TLS version that has vulnerabilities. The attacker would still need to authenticate and authorize itself, and as long as legitimate clients always connect with recent TLS versions, there is no way that credentials could have been leaked from vulnerabilities at old TLS versions. > [!NOTE]
-> For Azure Key Vault, ensure that the application accessing the Keyvault service should be running on a platform that supports TLS 1.2 or recent version. If the application is dependent on .Net framework, it should be updated as well. You can also make the registry changes mentioned in [this article](/troubleshoot/azure/active-directory/enable-support-tls-environment) to explicitly enable the use of TLS 1.2 at OS level and for .Net framework.
+> For Azure Key Vault, ensure that the application accessing the Key Vault service runs on a platform that supports TLS 1.2 or a more recent version. If the application depends on the .NET Framework, it should be updated as well. You can also make the registry changes mentioned in [this article](/troubleshoot/azure/active-directory/enable-support-tls-environment) to explicitly enable the use of TLS 1.2 at the OS level and for the .NET Framework. To meet compliance obligations and to improve security posture, Key Vault will deprecate support for TLS 1.0 and 1.1 starting May 2022.
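To spot-check that a client platform can negotiate TLS 1.2 against your vault endpoint, a quick sketch with OpenSSL (the vault name is a placeholder):

```bash
# Succeeds only if a TLS 1.2 handshake with the vault endpoint completes.
openssl s_client -connect my-vault.vault.azure.net:443 -tls1_2 < /dev/null
```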
## Key Vault authentication options
key-vault About Keys Details https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/keys/about-keys-details.md
The following permissions can be granted, on a per user / service principal basi
- Permissions for privileged operations - *purge*: Purge (permanently delete) a deleted key
-For more information on working with keys, see [Key operations in the Key Vault REST API reference](/rest/api/keyvault). For information on establishing permissions, see [Vaults - Create or Update](/rest/api/keyvault/vaults/createorupdate) and [Vaults - Update Access Policy](/rest/api/keyvault/vaults/updateaccesspolicy).
+For more information on working with keys, see [Key operations in the Key Vault REST API reference](/rest/api/keyvault). For information on establishing permissions, see [Vaults - Create or Update](/rest/api/keyvault/keyvault/vaults/create-or-update) and [Vaults - Update Access Policy](/rest/api/keyvault/keyvault/vaults/update-access-policy).
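As a sketch of granting key permissions, including the privileged *purge* permission, through an access policy with the Azure CLI (the vault name and user principal name are placeholders):

```bash
# Grant a user common key permissions plus the privileged purge permission.
az keyvault set-policy \
  --name my-vault \
  --upn user@contoso.com \
  --key-permissions get list create delete purge
```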
## Next steps - [About Key Vault](../general/overview.md)
key-vault About Managed Storage Account Keys https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/secrets/about-managed-storage-account-keys.md
The following permissions can be used when authorizing a user or application pri
- Permissions for privileged operations - *purge*: Purge (permanently delete) a managed storage account
-For more information, see the [Storage account operations in the Key Vault REST API reference](/rest/api/keyvault). For information on establishing permissions, see [Vaults - Create or Update](/rest/api/keyvault/vaults/createorupdate) and [Vaults - Update Access Policy](/rest/api/keyvault/vaults/updateaccesspolicy).
+For more information, see the [Storage account operations in the Key Vault REST API reference](/rest/api/keyvault). For information on establishing permissions, see [Vaults - Create or Update](/rest/api/keyvault/keyvault/vaults/create-or-update) and [Vaults - Update Access Policy](/rest/api/keyvault/keyvault/vaults/update-access-policy).
## Next steps
key-vault Quick Create Node https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/secrets/quick-create-node.md
Title: Quickstart - Azure Key Vault secret client library for JavaScript (versi
description: Learn how to create, retrieve, and delete secrets from an Azure key vault using the JavaScript client library Previously updated : 12/13/2021 Last updated : 02/03/2022
The code samples below will show you how to create a client, set a secret, retri
const client = new SecretClient(url, credential); // Create a secret
+ // The secret can be a string of any kind. For example,
+ // a multiline text block such as an RSA private key with newline characters,
+ // or a stringified JSON object, like `JSON.stringify({ mySecret: 'MySecretValue'})`.
  const uniqueString = new Date().getTime();
  const secretName = `secret${uniqueString}`;
  const result = await client.setSecret(secretName, "MySecretValue");
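  // Illustrative addition (not part of the original sample): read the secret
  // back with getSecret, which returns the latest version of the secret.
  const retrievedSecret = await client.getSecret(secretName);
  console.log(retrievedSecret.value);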
load-testing Reference Test Config Yaml https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-testing/reference-test-config-yaml.md
A test configuration uses the following keys:
| `testName` | string | *Required*. Name of the test to run. The results of various test runs will be collected under this test name in the Azure portal. |
| `testPlan` | string | *Required*. Relative path to the Apache JMeter test script to run. |
| `engineInstances` | integer | *Required*. Number of parallel instances of the test engine to execute the provided test plan. You can update this property to increase the amount of load that the service can generate. |
-| `configurationFiles` | array | List of relevant configuration files or other files that you reference in the Apache JMeter script. For example, a CSV data set file, images, or any other data file. These files will be uploaded to the Azure Load Testing resource alongside the test script. If the files are in a subfolder on your local machine, use file paths that are relative to the location of the test script. <BR><BR>Azure Load Testing currently doesn't support the use of file paths in the JMX file. When you reference an external file in the test script, make sure to only specify the file name.<BR><BR>By default, the wildcard `*.csv` is generated to reference all *.csv* files in the test plan's folder. |
+| `configurationFiles` | array | List of relevant configuration files or other files that you reference in the Apache JMeter script. For example, a CSV data set file, images, or any other data file. These files will be uploaded to the Azure Load Testing resource alongside the test script. If the files are in a subfolder on your local machine, use file paths that are relative to the location of the test script. <BR><BR>Azure Load Testing currently doesn't support the use of file paths in the JMX file. When you reference an external file in the test script, make sure to only specify the file name. |
| `description` | string | Short description of the test run. |
| `failureCriteria` | object | Criteria that indicate failure of the test. Each criterion is in the form of:<BR>`[Aggregate_function] ([client_metric]) > [value]`<BR><BR>- `[Aggregate function] ([client_metric])` is either `avg(response_time_ms)` or `percentage(error)`.<BR>- `value` is an integer number. |
| `secrets` | object | List of secrets that the Apache JMeter script references. |
testPlan: SampleTest.jmx
description: Load test website home page
engineInstances: 1
configurationFiles:
- - '*.csv'
+ - 'SampleData.csv'
failureCriteria:
  - avg(response_time_ms) > 300
  - percentage(error) > 50
machine-learning Concept Mlflow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-mlflow.md
Title: MLflow and Azure Machine Learning
description: Learn about MLflow with Azure Machine Learning to log metrics and artifacts from ML models, and deploy your ML models as a web service. --++ - Last updated 10/21/2021
machine-learning How To Monitor Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-monitor-online-endpoints.md
description: Monitor managed online endpoints and create alerts with Application Insights. ++ Last updated 10/21/2021
machine-learning How To Setup Authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-setup-authentication.md
Title: Set up authentication
description: Learn how to set up and configure authentication for various resources and workflows in Azure Machine Learning. ---++ Last updated 02/02/2022
machine-learning How To Train Mlflow Projects https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-train-mlflow-projects.md
Title: Train with MLflow Projects
description: Set up MLflow with Azure Machine Learning to log metrics and artifacts from ML models --++ - Last updated 06/16/2021
machine-learning How To Understand Automated Ml https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-understand-automated-ml.md
Title: Evaluate AutoML experiment results
description: Learn how to view and evaluate charts and metrics for each of your automated machine learning experiment runs. -++ Last updated 10/21/2021
machine-learning How To Use Mlflow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-use-mlflow.md
Title: MLflow Tracking for ML experiments
+ Title: MLflow Tracking for models
description: Set up MLflow Tracking with Azure Machine Learning to log metrics and artifacts from ML models. --++ - Last updated 10/21/2021
machine-learning How To View Online Endpoints Costs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-view-online-endpoints-costs.md
description: 'Learn to how view costs for a managed online endpoint in Azure Machine Learning.' ++ Last updated 05/03/2021
machine-learning Migrate Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/migrate-overview.md
Last updated 10/21/2021
-# Migrate to Azure Machine Learning
+# Migrate to Azure Machine Learning from ML Studio (classic)
> [!IMPORTANT] > Support for Machine Learning Studio (classic) will end on 31 August 2024. We recommend you transition to [Azure Machine Learning](./overview-what-is-azure-machine-learning.md) by that date.
In this article, you learned the high-level requirements for migrating to Azure
1. [Integrate an Azure Machine Learning web service with client apps](migrate-rebuild-integrate-with-client-app.md). 1. [Migrate Execute R Script](migrate-execute-r-script.md).
-See the [Azure Machine Learning Adoption Framework](https://aka.ms/mlstudio-classic-migration-repo) for additional migration resources.
+See the [Azure Machine Learning Adoption Framework](https://aka.ms/mlstudio-classic-migration-repo) for additional migration resources.
machine-learning Migrate Register Dataset https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/migrate-register-dataset.md
To download datasets directly:
1. Select the dataset(s) you want to download.
1. In the bottom action bar, select **Download**.
- ![Screenshot showing how to download a dataset in Studio (classic)](./media/migrate-register-dataset/download-dataset.png)
+ :::image type="content" source="./media/migrate-register-dataset/download-dataset.png" alt-text="Screenshot showing how to download a dataset in Studio (classic)." lightbox = "./media/migrate-register-dataset/download-dataset.png":::
For the following data types, you must use the **Convert to CSV** module to download datasets.
To convert your dataset to a CSV and download the results:
1. Right-click the **Convert to CSV** module.
1. Select **Results dataset** > **Download**.
- ![Screenshot showing how to setup a convert to CSV pipeline](./media/migrate-register-dataset/csv-download-dataset.png)
+ :::image type="content" source="./media/migrate-register-dataset/csv-download-dataset.png" alt-text="Screenshot showing how to setup a convert to CSV pipeline." lightbox = "./media/migrate-register-dataset/csv-download-dataset.png":::
### Upload your dataset to Azure Machine Learning
After you download the data file, you can register the dataset in Azure Machine
1. Go to Azure Machine Learning studio ([ml.azure.com](https://ml.azure.com)).
1. In the left navigation pane, select the **Datasets** tab.
1. Select **Create dataset** > **From local files**.
- ![Screenshot showing the datasets tab and the button for creating a local file](./media/migrate-register-dataset/register-dataset.png)
+
+ :::image type="content" source="./media/migrate-register-dataset/register-dataset.png" alt-text="Screenshot showing the datasets tab and the button for creating a local file.":::
1. Enter a name and description.
1. For **Dataset type**, select **Tabular**.
machine-learning Overview What Is Machine Learning Studio https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/overview-what-is-machine-learning-studio.md
The studio offers multiple authoring experiences depending on the type project a
Write and run your own code in managed [Jupyter Notebook servers](how-to-run-jupyter-notebooks.md) that are directly integrated in the studio. + **Azure Machine Learning designer** Use the designer to train and deploy machine learning models without writing any code. Drag and drop datasets and components to create ML pipelines. Try out the [designer tutorial](tutorial-designer-automobile-price-train-score.md).
- ![Azure Machine Learning designer example](media/concept-designer/designer-drag-and-drop.gif)
+ :::image type="content" source="media/concept-designer/designer-drag-and-drop.gif" alt-text="Azure Machine Learning designer example.":::
+ **Automated machine learning UI** Learn how to create [automated ML experiments](tutorial-first-experiment-automated-ml.md) with an easy-to-use interface.
- ![AutoML in the Azure Machine Learning studio navigation pane](./media/overview-what-is-azure-ml-studio/azure-machine-learning-automated-ml-ui.jpg)
+ :::image type="content" source="./media/overview-what-is-azure-ml-studio/azure-machine-learning-automated-ml-ui.jpg" alt-text="AutoML in the Azure Machine Learning studio navigation pane." lightbox = "./media/overview-what-is-azure-ml-studio/azure-machine-learning-automated-ml-ui.jpg":::
+ **Data labeling**
Start with [Quickstart: Get started with Azure Machine Learning](quickstart-crea
+ [Use a Jupyter notebook to train image classification models](tutorial-train-deploy-notebook.md) + [Use automated machine learning to train & deploy models](tutorial-first-experiment-automated-ml.md) + [Use the designer to train & deploy models](tutorial-designer-automobile-price-train-score.md)
- + [Use studio in a secured virtual network](how-to-enable-studio-virtual-network.md)
+ + [Use studio in a secured virtual network](how-to-enable-studio-virtual-network.md)
marketplace Plans Pricing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/plans-pricing.md
Previously updated : 12/03/2021 Last updated : 02/03/2022 # Plans and pricing for commercial marketplace offers
Plans are not supported for the following offer types:
- Consulting service
- Dynamics 365 Business Central
-- Dynamics 365 apps on Dataverse and Power Apps
- Dynamics 365 Operations Apps
- Power BI app
+- Power BI Visual
## Plan information
The following screenshot shows two draft offers.
The commercial marketplace operates on an agency model, whereby publishers set prices, Microsoft bills customers, and Microsoft pays revenue to publishers while withholding an agency fee. You define your offer's markets, visibility, and pricing (when applicable) on the **Pricing and availability** or **Availability** tab.

- **Markets**: Every plan must be available in at least one market. You have the option to select only "Tax Remitted" countries, in which Microsoft remits sales and use tax on your behalf.
-- **Pricing**: Pricing models only apply to plans for Azure managed application, SaaS, and Azure virtual machine offers. All plans for the same offer must use the same pricing model.
+- **Pricing**: Pricing models only apply to plans for Azure managed application, SaaS, and Azure virtual machine offers. An offer can have only one pricing model. For example, a SaaS offer cannot have one plan that's flat rate and another plan that's per user.
- **Plan visibility**: Depending on the offer type, you can define a private audience or hide the offer or plan from Azure Marketplace. This is explained in more detail in [Plan visibility](#plan-visibility) later in this article. > [!TIP]
You must associate a pricing model with each plan for the following offer types.
- **Software as a service**: flat rate (monthly or annual), per user, and usage-based pricing (metering service dimensions).
- **Azure virtual machine**: Bring your own license (BYOL) and usage-based pricing. For a usage-based pricing model, you can charge per core, per core size, or per market and core size. A BYOL license model does not allow for additional, usage-based charges. (BYOL virtual machine offers do not require a pricing model.)
-All plans for the same offer must use the same pricing model. For example, a SaaS offer cannot have one plan that's flat rate and another plan that's per user. See specific offer documentation for detailed information.
+An offer can have only one pricing model. For example, a SaaS offer cannot have one plan that's flat rate and another plan that's per user. However, a SaaS offer can have some flat rate plans with metered billing and other flat rate plans without it. See specific offer documentation for detailed information.
If you have already set prices for your plan in United States Dollars (USD) and add another market location, the price for the new market will be calculated according to the current exchange rates. After saving your changes, you will see an **Export prices (xlsx)** link that you can use to review and change the price for each market before publishing.
mysql Tutorial Deploy Wordpress On Aks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/flexible-server/tutorial-deploy-wordpress-on-aks.md
# Tutorial: Deploy WordPress app on AKS with Azure Database for MySQL - Flexible Server
-[[!INCLUDE[applies-to-mysql-flexible-server](../includes/applies-to-mysql-flexible-server.md)]
In this quickstart, you deploy a WordPress application on an Azure Kubernetes Service (AKS) cluster with Azure Database for MySQL - Flexible Server using the Azure CLI. **[AKS](../../aks/intro-kubernetes.md)** is a managed Kubernetes service that lets you quickly deploy and manage clusters. **[Azure Database for MySQL - Flexible Server](overview.md)** is a fully managed database service designed to provide more granular control and flexibility over database management functions and configuration settings.
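
For orientation, here is a minimal Azure CLI sketch of the resources involved; the names, region, and sizes are placeholder assumptions rather than the tutorial's actual values:

```bash
# Create a resource group to hold everything (all names are placeholders).
az group create --name wordpress-rg --location eastus

# Create the AKS cluster that will host WordPress.
az aks create --resource-group wordpress-rg --name wordpress-aks \
    --node-count 2 --generate-ssh-keys

# Create the MySQL flexible server that WordPress will connect to.
az mysql flexible-server create --resource-group wordpress-rg \
    --name wordpress-mysql --admin-user dbadmin
```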
mysql Tutorial Php Database App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/flexible-server/tutorial-php-database-app.md
# Tutorial: Build a PHP (Laravel) and MySQL Flexible Server app in Azure App Service
-[[!INCLUDE[applies-to-mysql-flexible-server](../includes/applies-to-mysql-flexible-server.md)]
:::image type="content" source="media/tutorial-php-database-app/complete-checkbox-published.png" alt-text="PHP Web App in Azure with Flexible Server":::
postgresql Concepts Configuration Options https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/concepts-configuration-options.md
Previously updated : 01/12/2022 Last updated : 02/02/2022

# Azure Database for PostgreSQL – Hyperscale (Citus) configuration options
Hyperscale (Citus) server groups are available in the following Azure regions:
* East US * East US 2 * North Central US
+ * South Central US
* West Central US * West US * West US 2
postgresql Concepts Read Replicas https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/concepts-read-replicas.md
Previously updated : 08/03/2021 Last updated : 02/03/2022

# Read replicas in Azure Database for PostgreSQL - Hyperscale (Citus)
The feature is meant for scenarios where replication lag is acceptable, and is
meant for offloading queries. It isn't meant for synchronous replication scenarios where replica data is expected to be up to date. There will be a measurable delay between the primary and the replica. The delay can be minutes
-or even hours depending on the workload and the latency between the primary and
-the replica. The data on the replica eventually becomes consistent with the
+or even hours, depending on the workload and the latency between primary and
+replica. The data on the replica eventually becomes consistent with the
data on the primary. Use this feature for workloads that can accommodate this delay.
portal](howto-read-replicas-portal.md).
When you create a replica, it doesn't inherit firewall rules of the primary server group. These rules must be set up independently for the replica.
-The replica inherits the admin ("citus") account from the primary server group.
+The replica inherits the admin (`citus`) account from the primary server group.
All user accounts are replicated to the read replicas. You can only connect to a read replica by using the user accounts that are available on the primary server. You can connect to the replica's coordinator node by using its hostname and a valid user account, as you would on a regular Hyperscale (Citus) server group.
-For a server named **my replica** with the admin username **citus**, you can
-connect to the coordinator node of the replica by using psql:
+For instance, given a server named **my replica** with the admin username
+**citus**, you can connect to the coordinator node of the replica by using
+psql:
```bash
psql -h c.myreplica.postgres.database.azure.com -U citus@myreplica -d postgres
```
another read replica.
### Replica configuration
-A replica is created by using the same compute, storage, and worker node
-settings as the primary. After a replica is created, several settings can be
-changed, including storage and backup retention period. Other settings can't be
-changed in replicas, such as storage size and number of worker nodes.
+Replicas inherit compute, storage, and worker node settings from their
+primaries. You can change some, but not all, settings on a replica. For
+instance, you can change compute, firewall rules for public access, and private
+endpoints for private access. You can't change the storage size or number of
+worker nodes.
Remember to keep replicas strong enough to keep up with changes arriving from the primary. For instance, be sure to upscale compute power in replicas if you upscale it on the primary.
-Firewall rules and parameter settings are not inherited from the primary server
+Firewall rules and parameter settings aren't inherited from the primary server
to the replica when the replica is created or afterwards. ### Regions
private-link Private Endpoint Dns https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/private-link/private-endpoint-dns.md
For Azure services, use the recommended zone names as described in the following
||||
| Azure Automation / (Microsoft.Automation/automationAccounts) / Webhook, DSCAndHybridWorker | privatelink.azure-automation.net | azure-automation.net |
| Azure SQL Database (Microsoft.Sql/servers) / sqlServer | privatelink.database.windows.net | database.windows.net |
+| **Azure SQL Managed Instance** (Microsoft.Sql/managedInstances) | privatelink.{dnsPrefix}.database.windows.net | {instanceName}.{dnsPrefix}.database.windows.net |
| Azure Synapse Analytics (Microsoft.Synapse/workspaces) / Sql | privatelink.sql.azuresynapse.net | sql.azuresynapse.net |
| Azure Synapse Analytics (Microsoft.Synapse/workspaces) / SqlOnDemand | privatelink.sql.azuresynapse.net | sqlondemand.azuresynapse.net |
| Azure Synapse Analytics (Microsoft.Synapse/workspaces) / Dev | privatelink.dev.azuresynapse.net | dev.azuresynapse.net |
private-link Private Endpoint Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/private-link/private-endpoint-overview.md
The table below lists the available resources that support a private endpoint:
| **SignalR** | Microsoft.SignalRService/SignalR | signalr |
| **SignalR** | Microsoft.SignalRService/webPubSub | webpubsub |
| **Azure SQL Database** | Microsoft.Sql/servers | Sql Server (sqlServer) |
+| **Azure SQL Managed Instance** | Microsoft.Sql/managedInstances | Sql Managed Instance (managedInstance) |
| **Azure Storage** | Microsoft.Storage/storageAccounts | Blob (blob, blob_secondary)<BR> Table (table, table_secondary)<BR> Queue (queue, queue_secondary)<BR> File (file, file_secondary)<BR> Web (web, web_secondary) |
| **Azure File Sync** | Microsoft.StorageSync/storageSyncServices | File Sync Service |
| **Azure Synapse** | Microsoft.Synapse/privateLinkHubs | synapse |
purview Catalog Conditional Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/catalog-conditional-access.md
Last updated 01/14/2022
# Conditional Access with Azure Purview
-[Azure Purview](/overview.md) supports Microsoft Conditional Access.
+[Azure Purview](./overview.md) supports Microsoft Conditional Access.
The following steps show how to configure Azure Purview to enforce a Conditional Access policy.
The following steps show how to configure Azure Purview to enforce a Conditional
## Next steps -- [Use Azure Purview Studio](use-azure-purview-studio.md)
+- [Use Azure Purview Studio](./use-purview-studio.md)
purview Concept Best Practices Asset Lifecycle https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/concept-best-practices-asset-lifecycle.md
+
+ Title: Azure Purview asset management processes
+description: This article provides process and best practice guidance to effectively manage the lifecycle of assets in the Azure Purview catalog
++++ Last updated : 01/06/2022++
+# Business processes for managing data effectively
+
+As data and content have a lifecycle that requires active management (for example, acquisition, processing, and disposal), assets in the Azure Purview data catalog need active management in a similar way. "Assets" in the catalog include the technical metadata that describes collection, lineage, and scan information. Metadata describing the business structure of data, such as glossary, classifications, and ownership, also needs to be managed.
+
+To manage data assets, responsible people in the organization must understand how and when to apply data governance processes and manage workflows.
+
+## Why do you need business processes for managing assets in Azure Purview?
+
+An organization employing Azure Purview should define processes and a people structure to manage the lifecycle of assets and ensure data is valuable to users of the catalog. Metadata in the catalog must be maintained so that data can be managed at scale for discovery, quality, security, and privacy.
+
+### Benefits
+
+- Agreed definition and structure of data is required for the Azure Purview data catalog to provide effective data search and protection functionality at scale across organizations' data estates.
+
+- Defining and using processes for asset lifecycle management is key to maintaining accurate asset metadata, which will improve usability of the catalog and the ability to protect relevant data.
+
+- Business users looking for data will be more likely to use the catalog to search for data when it is maintained using data governance processes.
+
+### Best practice processes to consider when starting the data governance journey with Azure Purview
+
+- **Capture and maintain assets** - Understand how to initially structure and record assets in the catalog for management
+- **Glossary and Classification management** - Understand how to effectively manage the catalog metadata needed to apply and maintain a business glossary
+- **Moving and deleting assets** – Managing collections and assets by understanding how to move assets from one collection to another or delete asset metadata from Azure Purview
+
+## Data curator organizational personas
+
+The [Data Curator](catalog-permissions.md) role in Azure Purview controls read/write permission to assets within a collection group. To support the data governance processes, the Data Curator role can be granted to separate data governance personas in the organization:
+
+> [!Note]
+> The 4 **personas** listed are suggested read/write users, and would all be assigned the Data Curator role in Azure Purview.
+
+- Data Owner or Data Expert:
+
+ - A Data Owner is typically a senior business stakeholder with authority and budget who is accountable for overseeing the quality and protection of a data subject area. This person is accountable for making decisions on who has the right to access data and how it is used.
+
+ - A Data Expert is an individual who is an authority in the business process, data manufacturing process or data consumption patterns.
+
+- Data Steward or Data Custodian
+
+ - A Data Steward is typically a business professional responsible for overseeing the definition, quality and management of a data subject area or data entity. They are typically experts in the data domain and work with other data stewards to make decisions on how to apply all aspects of data management.
+
+ - A Data Custodian is an individual responsible for performing one or more data controls.
+
+## 1. Capture and maintain assets
+
+This process describes the high-level steps and suggested roles to capture and maintain assets in the Azure Purview data catalog.
++
+### Process Guidance
+
+| Process Step | Guidance |
+| --- | --- |
+| 1 | [Azure Purview collections architecture and best practices](concept-best-practices-collections.md) |
+| 2 | [How to create and manage collections](how-to-create-and-manage-collections.md) |
+| 3 & 4 | [Understand Azure Purview access and permissions](catalog-permissions.md) |
+| 5 | [Azure Purview connector overview](purview-connector-overview.md) <br> [Azure Purview private endpoint networking](catalog-private-link.md) |
+| 6 | [How to manage multi-cloud data sources](manage-data-sources.md) |
+| 7 | [Best practices for scanning data sources in Azure Purview](concept-best-practices-scanning.md) |
+| 8, 9 & 10 | [Search the data catalog](how-to-search-catalog.md) <br> [Browse the data catalog](how-to-browse-catalog.md) |
+
+## 2. Glossary and classification maintenance
+
+This process describes the high-level steps and roles to manage and define the business glossary and classifications metadata to enrich the Azure Purview data catalog.
++
+### Process Guidance
+
+| Process Step | Guidance |
+| --- | --- |
+| 1 & 2 | [Understand Azure Purview access and permissions](catalog-permissions.md) |
+| 3 | [Create custom classifications and classification rules](create-a-custom-classification-and-classification-rule.md) |
+| 4 | [Create a scan rule set](create-a-scan-rule-set.md) |
+| 5 & 6 | [Apply classifications to assets](apply-classifications.md) |
+| 7 & 8 | [Understand business glossary features](concept-business-glossary.md) |
+| 9 & 10 | [Create, import and export glossary terms](how-to-create-import-export-glossary.md) |
+| 11 | [Search the Data Catalog](how-to-search-catalog.md) |
+| 12 & 13 | [Browse the Data Catalog](how-to-browse-catalog.md) |
+
+> [!Note]
+> It is not currently possible to edit glossary term attributes (for example, Status) in bulk using the Azure Purview UI, but it is possible to export the glossary in bulk, edit in Excel and re-import with amendments.
+
+## 3. Moving assets between collections
+
+This process describes the high-level steps and roles to move assets between collections using the Azure Purview portal.
++
+### Process Guidance
+
+| Process Step | Guidance |
+| --- | --- |
+| 1 & 2 | [Azure Purview collections architecture and best practice](concept-best-practices-collections.md) |
+| 3 | [Create a collection](quickstart-create-collection.md) |
+| 4 | [Understand access and permissions](catalog-permissions.md) |
+| 5 | [How to manage collections](how-to-create-and-manage-collections.md#add-assets-to-collections) |
+| 6 | [Check collection permissions](how-to-create-and-manage-collections.md#prerequisites) |
+| 7 | [Browse the Azure Purview Catalog](how-to-browse-catalog.md) |
+
+> [!Note]
+> It is not currently possible to bulk move assets from one collection to another using the Azure Purview portal.
+
+## 4. Deleting asset metadata
+
+This process describes the high-level steps and roles to delete asset metadata from the data catalog using the Azure Purview portal.
+
+Asset metadata may need to be deleted manually for many reasons:
+
+- To remove asset metadata where the data is deleted (if a full re-scan is not performed)
+- To remove asset metadata where the data is purged according to its retention period
+- To reduce/manage the size of the data map
++
+> [!Note]
+> Before deleting assets, please refer to the how-to guide to review considerations: [How to delete assets](catalog-asset-details.md#deleting-assets)
++
+### Process Guidance
+
+| Process Step | Guidance |
+| --- | --- |
+| 1 & 2 | Manual steps |
+| 3 | [Data catalog lineage user guide](catalog-lineage-user-guide.md) |
+| 4 | Manual step |
+| 5 | [How to view, edit and delete assets](catalog-asset-details.md#deleting-assets) |
+| 6 | [Scanning best practices](concept-best-practices-scanning.md) |
+
+> [!Note]
+> - Deleting a collection, registered source or scan from Azure Purview does not delete all associated asset metadata.
+> - It is not possible to bulk delete asset metadata using the Azure Purview portal.
+> - Deleting the asset metadata does not delete all associated lineage or other relationship data (for example, glossary or classification assignments) about the asset from the data map. The asset information and relationships will no longer be visible in the portal.
+
+## Next steps
+- [Azure Purview accounts architectures and best practices](concept-best-practices-accounts.md)
+- [Azure Purview collections architectures and best practices](concept-best-practices-collections.md)
+- [Azure Purview glossary best practices](concept-best-practices-glossary.md)
+- [Azure Purview classifications best practices](concept-best-practices-classification.md)
purview How To Data Owner Policy Authoring Generic https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/how-to-data-owner-policy-authoring-generic.md
Title: Authoring and publishing data owner access policies description: Step-by-step guide on how a data owner can author and publish access policies in Azure Purview-+
purview Tutorial Data Owner Policies Resource Group https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/tutorial-data-owner-policies-resource-group.md
Title: Resource group and subscription access provisioning by data owner description: Step-by-step guide showing how a data owner can create access policies to resource groups or subscriptions.-+ Previously updated : 2/2/2022 Last updated : 2/3/2022
The limit for Azure Purview policies that can be enforced by Storage accounts is
## Next steps Check blog, demo and related tutorials
-* [What's New in Azure Purview at Microsoft Ignite 2021](https://techcommunity.microsoft.com/t5/azure-purview/what-s-new-in-azure-purview-at-microsoft-ignite-2021/ba-p/2915954)
+* [Blog: resource group-level governance can significantly reduce effort](https://techcommunity.microsoft.com/t5/azure-purview-blog/data-policy-features-resource-group-level-governance-can/ba-p/3096314)
* [Demo of data owner access policies for Azure Storage](https://www.youtube.com/watch?v=CFE8ltT19Ss)
-* [Enable Azure Purview data owner policies on an Azure Storage account](./tutorial-data-owner-policies-storage.md)
+* [Fine-grain data owner policies on an Azure Storage account](./tutorial-data-owner-policies-storage.md)
purview Tutorial Data Owner Policies Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/tutorial-data-owner-policies-storage.md
Title: Access provisioning by data owner to Azure Storage datasets description: Step-by-step guide showing how data owners can create access policies to datasets in Azure Storage-+
search Cognitive Search How To Debug Skillset https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-how-to-debug-skillset.md
A debug session is a cached indexer and skillset execution, scoped to a single d
1. Select **+ New Debug Session**.
-1. Provide a name for the session, for example *cog-search-debug-sessions*.
+ :::image type="content" source="media/cognitive-search-debug/new-debug-session.png" alt-text="Screenshot of the debug sessions commands in the portal page." border="true":::
-1. Specify a general-purpose storage account that will be used to cache the skill executions. You'll be prompted to select and optionally create a blob container in Blob Storage or Azure Data Lake Storage Gen2. You can reuse the same container for all subsequent debug sessions you create.
+1. In **Debug session name**, provide a name that will help you remember which skillset, indexer, and data source the debug session is about.
-1. Select the indexer that drives the skillset you want to debug. Copies of both the indexer and skillset are used to create the session.
+1. In **Storage connection**, find a general-purpose storage account for caching the debug session. You'll be prompted to select and optionally create a blob container in Blob Storage or Azure Data Lake Storage Gen2. You can reuse the same container for all subsequent debug sessions you create. A helpful container name might be "cognitive-search-debug-sessions".
-1. Choose a document. The session will default to the first document in the data source, but you can also choose which document to step through by providing its URL.
+1. In **Indexer template**, select the indexer that drives the skillset you want to debug. Copies of both the indexer and skillset are used to initialize the session.
- If your document resides in a blob container in the same storage account used to cache your debug session, you can copy the document URL from the blob property page in the portal.
+1. In **Document to debug**, choose the first document in the index or select a specific document. If you select a specific document, depending on the data source, you'll be asked for a URI or a row ID.
+
+ If your specific document is a blob, you'll be asked for the blob URI. You can find the URL in the blob property page in the portal.
:::image type="content" source="media/cognitive-search-debug/copy-blob-url.png" alt-text="Screenshot of the URI property in blob storage." border="true":::
-1. Optionally, specify any indexer execution settings that should be used to create the session. The settings should mimic the settings used by the actual indexer. Any indexer options that you specify in a debug session have no effect on the indexer itself.
+1. Optionally, in **Indexer settings**, specify any indexer execution settings used to create the session. The settings should mimic the settings used by the actual indexer. Any indexer options that you specify in a debug session have no effect on the indexer itself.
-1. Select **Save Session** to get started.
+1. Your configuration should look similar to this screenshot. Select **Save Session** to get started.
:::image type="content" source="media/cognitive-search-debug/debug-session-new.png" alt-text="Screenshot of a debug session page." border="true":::
To prove whether a modification resolves an error, follow these steps:
## View content of enrichment nodes
-AI enrichment pipelines extract or infer information and structure from source documents, creating an enriched document in the process. An enriched document is first created during document cracking and populated with a root node (`/document`), plus nodes for any content that is lifted directly from the data source, such as metadata and the document key. Additional nodes are created by skills during skill execution, where each skill output adds a new node to the enrichment tree.
+AI enrichment pipelines extract or infer information and structure from source documents, creating an enriched document in the process. An enriched document is first created during document cracking and populated with a root node (`/document`), plus nodes for any content that is lifted directly from the data source, such as metadata and the document key. More nodes are created by skills during skill execution, where each skill output adds a new node to the enrichment tree.
Enriched documents are internal, but a debug session gives you access to the content produced during skill execution. To view the content or output of each skill, follow these steps:
The following steps show you how to get information about a skill.
## Check field mappings
-If skills produce output but the search index is empty, check the field mappings that specify how content moves out of the pipeline and into a search index.
+If skills produce output but the search index is empty, check the field mappings. Field mappings specify how content moves out of the pipeline and into a search index.
1. Start with the default views: **AI enrichment > Skill Graph**, with the graph type set to **Dependency Graph**.
-1. Select **Field Mappings** near the top. You should find at least the document key that uniquely identifies and associates each search document in the search index with it's source document in the data source.
+1. Select **Field Mappings** near the top. You should find at least the document key that uniquely identifies and associates each search document in the search index with its source document in the data source.
If you're importing raw content straight from the data source, bypassing enrichment, you should find those fields in **Field Mappings**.
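
If you prefer to inspect field mappings outside the portal, one option is to retrieve the indexer definition through the Search REST API and review its `fieldMappings` and `outputFieldMappings` sections. This is a sketch; the service name, indexer name, and key are placeholders:

```bash
# Fetch the indexer definition; the response JSON includes "fieldMappings"
# and "outputFieldMappings" (service, indexer, and key are placeholders).
curl -s -H "api-key: <admin-api-key>" \
    "https://<search-service-name>.search.windows.net/indexers/<indexer-name>?api-version=2020-06-30"
```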
search Search Indexer Howto Access Ip Restricted https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-indexer-howto-access-ip-restricted.md
Title: Allow access to indexer IP ranges
+ Title: Connect through firewalls
description: Configure IP firewall rules to allow data access by an Azure Cognitive Search indexer.
Previously updated : 11/11/2021 Last updated : 02/02/2022
-# Configure IP firewall rules to allow indexer connections in Azure Cognitive Search
+# Configure IP firewall rules to allow indexer connections from Azure Cognitive Search
On behalf of an indexer, a search service will issue outbound calls to an external Azure resource to pull in data during indexing. If your Azure resource uses IP firewall rules to filter incoming calls, you'll need to create an inbound rule in your firewall that admits indexer requests.
-This article explains how to find the IP address of your search service, and then use Azure portal to configure an inbound IP rule on an Azure Storage account. While specific to Azure Storage, this approach also works for other Azure resources that use IP firewall rules for data access, such as Cosmos DB and Azure SQL.
+This article explains how to find the IP address of your search service and configure an inbound IP rule on an Azure Storage account. While specific to Azure Storage, this approach also works for other Azure resources that use IP firewall rules for data access, such as Cosmos DB and Azure SQL.
> [!NOTE]
> IP firewall rules for a storage account are only effective if the storage account and the search service are in different regions. If your setup does not permit this, we recommend utilizing the [trusted service exception option](search-indexer-howto-access-trusted-service-exception.md) as an alternative.

## Get a search service IP address
-1. Determine the fully qualified domain name (FQDN) of your search service. This will look like `<search-service-name>.search.windows.net`. You can find out the FQDN by looking up your search service on the Azure portal.
+1. Determine the fully qualified domain name (FQDN) of your search service. This will look like `<search-service-name>.search.windows.net`. You can find the FQDN by looking up your search service on the Azure portal.
- ![Obtain service FQDN](media\search-indexer-howto-secure-access\search-service-portal.png "Obtain service FQDN")
+ :::image type="content" source="media\search-indexer-howto-secure-access\search-service-portal.png" alt-text="Screenshot of the search service Overview page." border="true":::
-1. Look up the IP address of the search service by performing a `nslookup` (or a `ping`) of the FQDN on a command prompt.
+1. Look up the IP address of the search service by performing an `nslookup` (or a `ping`) of the FQDN on a command prompt. Make sure you remove the "https://" prefix from the FQDN.
1. Copy the IP address so that you can specify it on an inbound rule in the next step. In the example below, the IP address that you should copy is "150.0.0.1".
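
   As a quick illustration (the service name here is hypothetical), the lookup might look like this:

   ```bash
   # Resolve the search service FQDN; the "Address" value in the output is
   # the IP to allow in the inbound rule.
   nslookup contoso-search.search.windows.net
   ```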
This article explains how to find the IP address of your search service, and the
## Get IP addresses for "AzureCognitiveSearch" service tag
-Depending on your search service configuration, you might also need to create an inbound rule that admits requests from a range of IP addresses. Specifically, additional IP addresses are used for requests that originate from the indexer's [multi-tenant execution environment](search-indexer-securing-resources.md#indexer-execution-environment).
+If your search service workloads include skillset execution, create an inbound rule that allows requests from the [multi-tenant execution environment](search-indexer-securing-resources.md#indexer-execution-environment). This step explains how to get the range of IP addresses needed for this inbound rule.
-You can get this IP address range from the `AzureCognitiveSearch` service tag.
+An IP address range is defined for each region that supports Azure Cognitive Search. You can get this IP address range from the `AzureCognitiveSearch` service tag.
1. Get the IP address ranges for the `AzureCognitiveSearch` service tag using either the [discovery API](../virtual-network/service-tags-overview.md#use-the-service-tag-discovery-api) or the [downloadable JSON file](../virtual-network/service-tags-overview.md#discover-service-tags-by-using-downloadable-json-files).
-1. If the search service is the Azure Public cloud, the [Azure Public JSON file](https://www.microsoft.com/download/details.aspx?id=56519) should be downloaded.
-
- ![Download JSON file](media\search-indexer-howto-secure-access\service-tag.png "Download JSON file")
-
-1. From the JSON file, assuming the search service is in West Central US, the list of IP addresses for the multi-tenant indexer execution environment are listed below.
-
-```json
-{
-"name": "AzureCognitiveSearch.WestCentralUS",
-"id": "AzureCognitiveSearch.WestCentralUS",
-"properties": {
- "changeNumber": 1,
- "region": "westcentralus",
- "platform": "Azure",
- "systemService": "AzureCognitiveSearch",
- "addressPrefixes": [
- "52.150.139.0/26",
- "52.253.133.74/32"
- ]
-}
-}
-```
-
-For `/32` IP addresses, drop the "/32" (52.253.133.74/32 becomes 52.253.133.74 in the rule definition). All other IP addresses can be used verbatim.
+1. If the search service is the Azure Public cloud, download the [Azure Public JSON file](https://www.microsoft.com/download/details.aspx?id=56519).
+
+1. Open the JSON file and search for "AzureCognitiveSearch". For a search service in WestUS2, the IP addresses for the multi-tenant indexer execution environment are:
+
+ ```json
+ {
+ "name": "AzureCognitiveSearch.WestUS2",
+ "id": "AzureCognitiveSearch.WestUS2",
+ "properties": {
+ "changeNumber": 1,
+ "region": "westus2",
+ "regionId": 38,
+ "platform": "Azure",
+ "systemService": "AzureCognitiveSearch",
+ "addressPrefixes": [
+ "20.42.129.192/26",
+ "40.91.93.84/32",
+ "40.91.127.116/32",
+ "40.91.127.241/32",
+ "51.143.104.54/32",
+ "51.143.104.90/32",
+ "2603:1030:c06:1::180/121"
+ ],
+ "networkFeatures": null
+ }
+ },
+ ```
+
+1. For IP addresses that have the "/32" suffix, drop the "/32" (40.91.93.84/32 becomes 40.91.93.84 in the rule definition). All other IP addresses can be used verbatim. If you prefer to script this step, see the sketch after this list.
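+
+   The same data is also exposed through the Azure CLI's service tag discovery support. This is an illustrative sketch, not part of the original walkthrough; the region values are assumptions to replace with your own:
+
+   ```bash
+   # List the address prefixes for the AzureCognitiveSearch tag in your
+   # region (replace westus2/WestUS2 with your search service's region).
+   az network list-service-tags --location westus2 \
+       --query "values[?name=='AzureCognitiveSearch.WestUS2'].properties.addressPrefixes" \
+       --output json
+   ```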
## Add IP addresses to IP firewall rules
-Once you have the IP addresses, you are ready to set up the rule. The easiest way to add IP address ranges to a storage account's firewall rule is via the Azure portal.
+Now that you have the necessary IP addresses, you can set up the inbound rule. The easiest way to add IP address ranges to a storage account's firewall rule is through the Azure portal.
+
+1. Locate the storage account on the portal and open **Networking** on the left navigation pane.
-1. Locate the storage account on the portal and navigate to the **Firewalls and virtual networks** tab.
+1. In the **Firewall and virtual networks** tab, choose **Selected networks**.
- ![Firewall and virtual networks](media\search-indexer-howto-secure-access\storage-firewall.png "Firewall and virtual networks")
+ :::image type="content" source="media\search-indexer-howto-secure-access\storage-firewall.png" alt-text="Screenshot of Azure Storage Firewall and virtual networks page" border="true":::
-1. Add the three IP addresses obtained previously (one for the search service IP, two for the `AzureCognitiveSearch` service tag) in the address range and select **Save**.
+1. Add the IP addresses obtained previously (one for the search service IP, plus all of the IP ranges for the "AzureCognitiveSearch" service tag) in the address range and select **Save**.
- ![Firewall IP rules](media\search-indexer-howto-secure-access\storage-firewall-ip.png "Firewall IP rules")
+ :::image type="content" source="media\search-indexer-howto-secure-access\storage-firewall-ip.png" alt-text="Screenshot of the IP address section of the page." border="true":::
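+
+   If you'd rather script the firewall update, a sketch with the Azure CLI follows; the resource group and account names are placeholders, and the addresses are the examples from earlier in this article:
+
+   ```bash
+   # Add the search service IP and each service-tag range to the storage
+   # account's firewall (run once per address or range).
+   az storage account network-rule add \
+       --resource-group <resource-group> \
+       --account-name <storage-account> \
+       --ip-address 150.0.0.1
+
+   az storage account network-rule add \
+       --resource-group <resource-group> \
+       --account-name <storage-account> \
+       --ip-address 20.42.129.192/26
+   ```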
It can take five to ten minutes for the firewall rules to be updated, after which indexers should be able to access the data in the storage account.
search Search Indexer Howto Access Private https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-indexer-howto-access-private.md
Title: Indexer connections through a private endpoint
+ Title: Connect through a private endpoint
description: Configure indexer connections to access content from other Azure resources that are protected through a private endpoint.
-+ Last updated 08/13/2021
search Search Indexer Howto Access Trusted Service Exception https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-indexer-howto-access-trusted-service-exception.md
Title: Indexer access to Azure Storage using trusted service exception
+ Title: Connect as trusted service
description: Enable data access by an indexer in Azure Cognitive Search to data stored securely in Azure Storage.
-+ Last updated 05/11/2021
-# Indexer access to Azure Storage using the trusted service exception (Azure Cognitive Search)
+# Make indexer connections to Azure Storage as a trusted service
Indexers in an Azure Cognitive Search service that access data in Azure Storage accounts can make use of the [trusted service exception](../storage/common/storage-network-security.md#exceptions) capability to securely access data. This mechanism offers customers who are unable to grant [indexer access using IP firewall rules](search-indexer-howto-access-ip-restricted.md) a simple, secure, and free alternative for accessing data in storage accounts.
search Search Indexer Securing Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-indexer-securing-resources.md
For any given indexer run, Azure Cognitive Search determines the best environmen
## Granting access to indexer IP ranges
-If the resource that your indexer pulls data from exists behind a firewall, make sure that the IP ranges in inbound rules include all of the IPs from which an indexer request can originate. As stated above, there are two possible environments in which indexers run and from which access requests can originate. You will need to add the IP addresses of **both** environments for indexer access to work.
+If the resource that your indexer pulls data from exists behind a firewall, you'll need [inbound rules that admit indexer connections](search-indexer-howto-access-ip-restricted.md). Make sure that the IP ranges in inbound rules include all of the IPs from which an indexer request can originate. As stated above, there are two possible environments in which indexers run and from which access requests can originate. You will need to add the IP addresses of **both** environments for indexer access to work.
-- To obtain the IP address of the search service specific private environment, use `nslookup` (or `ping`) the fully qualified domain name (FQDN) of your search service. For example, the FQDN of a search service in the public cloud would be `<service-name>.search.windows.net`. This information is available on the Azure portal.
+- To obtain the IP address of the search service private environment, use `nslookup` (or `ping`) on the fully qualified domain name (FQDN) of your search service. The FQDN of a search service in the public cloud would be `<service-name>.search.windows.net`.
- To obtain the IP addresses of the multi-tenant environments within which an indexer might run, use the `AzureCognitiveSearch` service tag. [Azure service tags](../virtual-network/service-tags-overview.md) have a published range of IP addresses for each service. You can find these IPs using the [discovery API](../virtual-network/service-tags-overview.md#use-the-service-tag-discovery-api) or a [downloadable JSON file](../virtual-network/service-tags-overview.md#discover-service-tags-by-using-downloadable-json-files). In either case, IP ranges are broken down by region. You should specify only those IP ranges assigned to the region in which your search service is provisioned.
For certain data sources, the service tag itself can be used directly instead of
- [SQL Managed Instances](./search-howto-connecting-azure-sql-mi-to-azure-search-using-indexers.md#verify-nsg-rules)
-For more information about this connectivity option, see [Indexer connections through an IP firewall](search-indexer-howto-access-ip-restricted.md).
- ## Granting access via private endpoints Indexers can use [private endpoints](../private-link/private-endpoint-overview.md) on connections to resources that are locked down (running on a protected virtual network, or just not available over a public connection).
This functionality is only available in billable search services (Basic and abov
Customers should call the search management operation, [CreateOrUpdate API](/rest/api/searchmanagement/2021-04-01-preview/shared-private-link-resources/create-or-update) on a **shared private link resource**, in order to create a private endpoint connection to their secure resource (for example, a storage account). Traffic that goes over this (outbound) private endpoint connection will originate only from the virtual network that's in the search service specific "private" indexer execution environment.
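
For illustration only, an equivalent operation is available through the Azure CLI's `az search shared-private-link-resource` command group; this is a sketch with placeholder names, not a substitute for the management API reference:

```bash
# A sketch: create a shared private link resource so indexer traffic to a
# storage account flows over a private outbound connection (all names are
# placeholders).
az search shared-private-link-resource create \
    --service-name <search-service> \
    --resource-group <resource-group> \
    --name <link-name> \
    --group-id blob \
    --resource-id "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>"
```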
-Azure Cognitive Search will validate that callers of this API have Azure RBAC permissions to approve private endpoint connection requests to the secure resource. For example, if you request a private endpoint connection to a storage account with read-only permissions, this call will be rejected.
+Azure Cognitive Search will validate that callers of this API have Azure RBAC role permissions to approve private endpoint connection requests to the secure resource. For example, if you request a private endpoint connection to a storage account with read-only permissions, this call will be rejected.
### Step 2: Approve the private endpoint connection
security Feature Availability https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security/fundamentals/feature-availability.md
The following tables display the current Microsoft Sentinel feature availability
| - [Anomalous Windows File Share Access Detection](../../sentinel/fusion.md) | Public Preview | Not Available |
| - [Anomalous RDP Login Detection](../../sentinel/data-connectors-reference.md#configure-the-security-events--windows-security-events-connector-for-anomalous-rdp-login-detection)<br>Built-in ML detection | Public Preview | Not Available |
| - [Anomalous SSH login detection](../../sentinel/connect-syslog.md#configure-the-syslog-connector-for-anomalous-ssh-login-detection)<br>Built-in ML detection | Public Preview | Not Available |
+ | **Domain solution content** | | |
+| - [Apache Log4j Vulnerability Detection](../../sentinel/sentinel-solutions-catalog.md#domain-solutions) | Public Preview | Not Available |
+| - [Cybersecurity Maturity Model Certification (CMMC)](../../sentinel/sentinel-solutions-catalog.md#domain-solutions) | Public Preview | Not Available |
+| - [IoT/OT Threat Monitoring with Defender for IoT](../../sentinel/sentinel-solutions-catalog.md#domain-solutions) | Public Preview | Not Available |
+| - [Maturity Model for Event Log Management M2131](../../sentinel/sentinel-solutions-catalog.md#domain-solutions) | Public Preview | Not Available |
+| - [Microsoft Insider Risk Management (IRM)](../../sentinel/sentinel-solutions-catalog.md#domain-solutions) | Public Preview | Not Available |
+| - [Microsoft Sentinel Deception](../../sentinel/sentinel-solutions-catalog.md#domain-solutions) | Public Preview | Not Available |
+| - [Zero Trust (TIC3.0)](../../sentinel/sentinel-solutions-catalog.md#domain-solutions) | Public Preview | Not Available |
| **Azure service connectors** | | |
| - [Azure Activity Logs](../../sentinel/data-connectors-reference.md#azure-activity) | GA | GA |
| - [Azure Active Directory](../../sentinel/connect-azure-active-directory.md) | GA | GA |
The following tables display the current Microsoft Sentinel feature availability
| - [Microsoft Defender for IoT](../../sentinel/data-connectors-reference.md#microsoft-defender-for-iot) | Public Preview | Not Available |
| - [Microsoft Insider Risk Management](../../sentinel/sentinel-solutions-catalog.md#domain-solutions) | Public Preview | Not Available |
| - [Azure Firewall ](../../sentinel/data-connectors-reference.md#azure-firewall) | GA | GA |
-| - [Azure Information Protection](../../sentinel/data-connectors-reference.md#azure-information-protection) | Public Preview | Not Available |
+| - [Azure Information Protection](../../sentinel/data-connectors-reference.md#azure-information-protection-preview) | Public Preview | Not Available |
| - [Azure Key Vault ](../../sentinel/data-connectors-reference.md#azure-key-vault) | Public Preview | Not Available |
| - [Azure Kubernetes Services (AKS)](../../sentinel/data-connectors-reference.md#azure-kubernetes-service-aks) | Public Preview | Not Available |
| - [Azure SQL Databases](../../sentinel/data-connectors-reference.md#azure-sql-databases) | GA | GA |
security Steps Secure Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security/fundamentals/steps-secure-identity.md
Title: Secure your Azure AD identity infrastructure description: This document outlines a list of important actions administrators should implement to help them secure their organization using Azure AD capabilities--
-tags: azuread
+ - Previously updated : 01/29/2020+ Last updated : 06/15/2021+ -+++
+tags: azuread
+ # Five steps to securing your identity infrastructure
-If you're reading this document, you're aware of the significance of security. You likely already carry the responsibility for securing your organization. If you need to convince others of the importance of security, send them to read the latest [Microsoft Security Intelligence report](https://www.microsoft.com/security/business/security-intelligence-report).
+If you're reading this document, you're aware of the significance of security. You likely already carry the responsibility for securing your organization. If you need to convince others of the importance of security, send them to read the latest [Microsoft Digital Defense Report](https://www.microsoft.com/security/business/microsoft-digital-defense-report).
-This document will help you get a more secure posture using the capabilities of Azure Active Directory by using a five-step checklist to inoculate your organization against cyber-attacks.
+This document will help you get a more secure posture using the capabilities of Azure Active Directory by using a five-step checklist to improve your organization's protection against cyber-attacks.
This checklist will help you quickly deploy critical recommended actions to protect your organization immediately by explaining how to:
-* Strengthen your credentials.
-* Reduce your attack surface area.
-* Automate threat response.
-* Utilize cloud intelligence.
-* Enable end-user self-service.
-
-Make sure you keep track of which features and steps are complete while reading this checklist.
+- Strengthen your credentials
+- Reduce your attack surface area
+- Automate threat response
+- Utilize cloud intelligence
+- Enable end-user self-service
> [!NOTE]
-> Many of the recommendations in this document apply only to applications that are configured to use Azure Active Directory as their identity provider. Configuring apps for Single Sign-On assures the benefits of credential policies, threat detection, auditing, logging, and other features add to those applications. [Azure AD Application Management](../../active-directory/manage-apps/what-is-application-management.md) is the foundation - on which all these recommendations are based.
+> Many of the recommendations in this document apply only to applications that are configured to use Azure Active Directory as their identity provider. Configuring apps for Single Sign-On assures the benefits of credential policies, threat detection, auditing, logging, and other features add to those applications. [Azure AD Application Management](../../active-directory/manage-apps/what-is-application-management.md) is the foundation on which all these recommendations are based.
-The recommendations in this document are aligned with the [Identity Secure Score](../../active-directory/fundamentals/identity-secure-score.md), an automated assessment of your Azure AD tenant's identity security configuration. Organizations can use the Identity Secure Score page in the Azure AD portal to find gaps in their current security configuration to ensure they follow current Microsoft [best practices](identity-management-best-practices.md) for security. Implementing each recommendation in the Secure Score page will increase your score and allow you to track your progress, plus help you compare your implementation against other similar size organizations or your industry.
+The recommendations in this document are aligned with the [Identity Secure Score](../../active-directory/fundamentals/identity-secure-score.md), an automated assessment of your Azure AD tenant's identity security configuration. Organizations can use the Identity Secure Score page in the Azure AD portal to find gaps in their current security configuration to ensure they follow current Microsoft best practices for security. Implementing each recommendation in the Secure Score page will increase your score and allow you to track your progress, plus help you compare your implementation against other similar size organizations.
-![Identity Secure Score](./media/steps-secure-identity/azure-ad-sec-steps0.png)
> [!NOTE]
-> Many of the features described here require an Azure AD Premium subscription, while some are free. Please review our [Azure Active Directory pricing](https://azure.microsoft.com/pricing/details/active-directory/) and [Azure AD Deployment checklist](../../active-directory/fundamentals/active-directory-deployment-checklist-p2.md) for more information.
+> Some of the functionality recommended here is available to all customers, while others require an Azure AD Premium subscription. Please review [Azure Active Directory pricing](https://azure.microsoft.com/pricing/details/active-directory/) and [Azure AD Deployment checklist](../../active-directory/fundamentals/active-directory-deployment-checklist-p2.md) for more information.
## Before you begin: Protect privileged accounts with MFA
-Before you begin this checklist, make sure you don't get compromised while you're reading this checklist. You first need to protect your privileged accounts.
+Before you begin this checklist, make sure you don't get compromised while you're reading this checklist. In Azure Active Directory we observe 50 million password attacks daily, yet only 20% of users and 30% of global admins are using strong authentication such as multi-factor authentication (MFA). These statistics are based on data as of August 2021. In Azure AD, users who have privileged roles, such as administrators, are the root of trust to build and manage the rest of the environment. Implement the following practices to minimize the effects of a compromise.
-Attackers who get control of privileged accounts can do tremendous damage, so it's critical to protect these accounts first. Enable and require [Azure AD Multi-Factor Authentication](../../active-directory/authentication/concept-mfa-howitworks.md) (MFA) for all administrators in your organization using [Azure AD Security Defaults](../../active-directory/fundamentals/concept-fundamentals-security-defaults.md) or [Conditional Access](../../active-directory/conditional-access/plan-conditional-access.md). If you haven't implemented MFA, do it now! It's that important.
+Attackers who get control of privileged accounts can do tremendous damage, so it's critical to [protect these accounts before proceeding](../../active-directory/authentication/how-to-authentication-find-coverage-gaps.md). Enable and require [Azure AD Multi-Factor Authentication (MFA)](../../active-directory/authentication/concept-mfa-howitworks.md) for all administrators in your organization using [Azure AD Security Defaults](../../active-directory/fundamentals/concept-fundamentals-security-defaults.md) or [Conditional Access](../../active-directory/conditional-access/howto-conditional-access-policy-admin-mfa.md). It's critical.
All set? Let's get started on the checklist. ## Step 1 - Strengthen your credentials
-Most enterprise security breaches originate with an account compromised with one of a handful of methods such as password spray, breach replay, or phishing. Learn more about these attacks in this video (45 min):
-> [!VIDEO https://www.youtube.com/embed/uy0j1_t5Hd4]
+Although other types of attacks are emerging, including consent phishing and attacks on nonhuman identities, password-based attacks on user identities are still the most prevalent vector of identity compromise. Well-established spear phishing and password spray campaigns by adversaries continue to be successful against organizations that haven't yet implemented multi-factor authentication (MFA) or other protections against this common tactic.
+
+As an organization you need to make sure that your identities are validated and secured with MFA everywhere. In 2020, the [FBI IC3 Report](https://www.ic3.gov/Medi).
### Make sure your organization uses strong authentication
-Given the frequency of passwords being guessed, phished, stolen with malware, or reused, it's critical to back the password with some form of strong credential – learn more about [Azure AD Multi-Factor Authentication](../../active-directory/authentication/concept-mfa-howitworks.md).
+To easily enable the basic level of identity security, you can use the one-click enablement with [Azure AD security defaults](../../active-directory/fundamentals/concept-fundamentals-security-defaults.md). Security defaults enforce Azure AD MFA for all users in a tenant and blocks sign-ins from legacy protocols tenant-wide.
-To easily enable the basic level of identity security, you can use the one-click enablement with [Azure AD Security Defaults](../../active-directory/fundamentals/concept-fundamentals-security-defaults.md). Security defaults enforce Azure AD MFA for all users in a tenant and blocks sign-ins from legacy protocols tenant-wide.
+If your organization has Azure AD P1 or P2 licenses, then you can also use the [Conditional Access insights and reporting workbook](../../active-directory/conditional-access/howto-conditional-access-insights-reporting.md) to help you discover gaps in your configuration and coverage. From these recommendations, you can easily close this gap by creating a policy using the new Conditional Access templates experience. [Conditional Access templates](../../active-directory/conditional-access/concept-conditional-access-policy-common.md) are designed to provide an easy method to deploy new policies that align with Microsoft recommended [best practices](identity-management-best-practices.md), making it easy to deploy common policies to protect your identities and devices.
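+
+For illustration only, the security defaults toggle mentioned above is also exposed through Microsoft Graph. This curl sketch assumes you already hold an access token with the appropriate policy write permissions; the portal's one-click enablement remains the documented path:
+
+```bash
+# A sketch: enable security defaults via Microsoft Graph (the bearer token
+# is a placeholder and must carry suitable policy permissions).
+curl -X PATCH "https://graph.microsoft.com/v1.0/policies/identitySecurityDefaultsEnforcementPolicy" \
+  -H "Authorization: Bearer <access-token>" \
+  -H "Content-Type: application/json" \
+  -d '{"isEnabled": true}'
+```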
### Start banning commonly attacked passwords and turn off traditional complexity and expiration rules
-Many organizations use the traditional complexity (requiring special characters, numbers, uppercase, and lowercase) and password expiration rules. [Microsoft's research](https://aka.ms/passwordguidance) has shown these policies cause users to choose passwords that are easier to guess.
+Many organizations use traditional complexity and password expiration rules. [Microsoft's research](https://www.microsoft.com/research/publication/password-guidance/) has shown, and [NIST guidance](https://pages.nist.gov/800-63-3/sp800-63b.html) states, that these policies cause users to choose passwords that are easier to guess. We recommend you use [Azure AD password protection](../../active-directory/authentication/concept-password-ban-bad.md), a dynamic banned password feature that uses current attacker behavior to prevent users from setting passwords that can easily be guessed. This capability is always on when users are created in the cloud, but is now also available for hybrid organizations when they deploy [Azure AD password protection for Windows Server Active Directory](../../active-directory/authentication/concept-password-ban-bad-on-premises.md). In addition, we recommend you remove expiration policies. Password change offers no containment benefits, as cyber criminals almost always use credentials as soon as they compromise them. Refer to the following article to [Set the password expiration policy for your organization](/microsoft-365/admin/manage/set-password-expiration-policy).
-Azure AD's [dynamic banned password](../../active-directory/authentication/concept-password-ban-bad.md) feature uses current attacker behavior to prevent users from setting passwords that can easily be guessed. This capability is always on when users are created in the cloud, but is now also available for hybrid organizations when they deploy [Azure AD password protection for Windows Server Active Directory](../../active-directory/authentication/concept-password-ban-bad-on-premises.md). Azure AD password protection blocks users from choosing these common passwords and can be extended to block passwords containing custom keywords you specify. For example, you can prevent your users from choosing passwords containing your company's product names or a local sport team.
+### Protect against leaked credentials and add resilience against outages
-Microsoft recommends adopting the following modern password policy based on [NIST guidance](https://pages.nist.gov/800-63-3/sp800-63b.html):
+The simplest and recommended method for enabling cloud authentication for on-premises directory objects in Azure AD is to enable [password hash synchronization (PHS)](../../active-directory/hybrid/how-to-connect-password-hash-synchronization.md). If your organization uses a hybrid identity solution with pass-through authentication or federation, then you should enable password hash sync for the following two reasons:
-1. Require passwords have at least 8 characters. Longer isn't necessarily better, as they cause users to choose predictable passwords, save passwords in files, or write them down.
-2. Disable expiration rules, which drive users to easily guessed passwords such as **Spring2019!**
-3. Disable character-composition requirements and prevent users from choosing commonly attacked passwords, as they cause users to choose predictable character substitutions in passwords.
+- The [Users with leaked credentials report](../../active-directory/identity-protection/overview-identity-protection.md) in Azure AD warns of username and password pairs that have been exposed publicly. An incredible volume of passwords is leaked via phishing, malware, and password reuse on third-party sites that are later breached. Microsoft finds many of these leaked credentials and will tell you, in this report, if they match credentials in your organization – but only if you enable [password hash sync](../../active-directory/hybrid/how-to-connect-password-hash-synchronization.md) or have cloud-only identities.
+- If an on-premises outage happens, like a ransomware attack, you can [switch over to using cloud authentication using password hash sync](../../active-directory/hybrid/choose-ad-authn.md). This backup authentication method will allow you to continue accessing apps configured for authentication with Azure Active Directory, including Microsoft 365. In this case, IT staff won't need to resort to shadow IT or personal email accounts to share data until the on-premises outage is resolved.
-You can use [PowerShell to prevent passwords from expiring](../../active-directory/authentication/concept-sspr-policy.md) for users if you create identities in Azure AD directly. Hybrid organizations should implement these policies using [domain group policy settings](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/hh994572(v%3dws.10)) or [Windows PowerShell](/powershell/module/activedirectory/set-addefaultdomainpasswordpolicy).
+Passwords are never stored in clear text or encrypted with a reversible algorithm in Azure AD. For more information on the actual process of password hash synchronization, see [Detailed description of how password hash synchronization works](../../active-directory/hybrid/how-to-connect-password-hash-synchronization.md#detailed-description-of-how-password-hash-synchronization-works).
-### Protect against leaked credentials and add resilience against outages
+### Implement AD FS extranet smart lockout
-If your organization uses a hybrid identity solution with pass-through authentication or federation, then you should enable password hash sync for the following two reasons:
+Smart lockout helps lock out bad actors that try to guess your users' passwords or use brute-force methods to get in. Smart lockout can recognize sign-ins that come from valid users and treat them differently from sign-ins by attackers and other unknown sources. Attackers get locked out, while your users continue to access their accounts and stay productive. Organizations that configure applications to authenticate directly to Azure AD benefit from Azure AD smart lockout. Federated deployments that use AD FS 2016 or AD FS 2019 can get similar benefits by using [AD FS Extranet Lockout and Extranet Smart Lockout](/windows-server/identity/ad-fs/operations/configure-ad-fs-extranet-smart-lockout-protection.md).
-* The [Users with leaked credentials](../../active-directory/identity-protection/overview-identity-protection.md) report in the Azure AD management warns you of username and password pairs, which have been exposed on the "dark web." An incredible volume of passwords is leaked via phishing, malware, and password reuse on third-party sites that are later breached. Microsoft finds many of these leaked credentials and will tell you, in this report, if they match credentials in your organization, but only if you [enable password hash sync](../../active-directory/hybrid/how-to-connect-password-hash-synchronization.md) or have cloud-only identities!
-* In the event of an on-premises outage (for example, in a ransomware attack) you can switch over to using [cloud authentication using password hash sync](../../active-directory/hybrid/choose-ad-authn.md). This backup authentication method will allow you to continue accessing apps configured for authentication with Azure Active Directory, including Microsoft 365. In this case, IT staff won't need to resort to personal email accounts to share data until the on-premises outage is resolved.
+## Step 2 - Reduce your attack surface area
-Learn more about how [password hash sync](../../active-directory/hybrid/how-to-connect-password-hash-synchronization.md) works.
+Given the pervasiveness of password compromise, minimizing the attack surface in your organization is critical. Disable the use of older, less secure protocols, limit access entry points, move to cloud authentication, exercise more significant control of administrative access to resources, and embrace Zero Trust security principles.
-> [!NOTE]
-> If you enable password hash sync and are using Azure AD Domain services, Kerberos (AES 256) hashes and optionally NTLM (RC4, no salt) hashes will also be encrypted and synchronized to Azure AD.
+### Use cloud authentication
-### Implement AD FS extranet smart lockout
+Credentials are a primary attack vector. The practices in this blog can reduce the attack surface by using cloud authentication, deploying MFA, and using passwordless authentication methods. You can deploy passwordless methods such as Windows Hello for Business, phone sign-in with the Microsoft Authenticator app, or FIDO.
-Organizations, which configure applications to authenticate directly to Azure AD benefit from [Azure AD smart lockout](../../active-directory/authentication/concept-sspr-howitworks.md). If you use AD FS in Windows Server 2012R2, implement AD FS [extranet lockout protection](/windows-server/identity/ad-fs/operations/configure-ad-fs-extranet-soft-lockout-protection). If you use AD FS on Windows Server 2016, implement [extranet smart lockout](https://support.microsoft.com/help/4096478/extranet-smart-lockout-feature-in-windows-server-2016). AD FS Smart Extranet lockout protects against brute force attacks, which target AD FS while preventing users from being locked out in Active Directory.
+### Block legacy authentication
-### Take advantage of intrinsically secure, easier to use credentials
+Apps using their own legacy methods to authenticate with Azure AD and access company data pose another risk for organizations. Examples of apps using legacy authentication are POP3, IMAP4, or SMTP clients. Legacy authentication apps authenticate on behalf of the user and prevent Azure AD from doing advanced security evaluations. The alternative, modern authentication, reduces your security risk because it supports multi-factor authentication and Conditional Access.
-Using [Windows Hello](/windows/security/identity-protection/hello-for-business/hello-identity-verification), you can replace passwords with strong two-factor authentication on PCs and mobile devices. This authentication consists of a new type of user credential that is tied securely to a device and uses a biometric or PIN.
+We recommend the following actions:
-## Step 2 - Reduce your attack surface
+1. Discover legacy authentication in your organization with Azure AD sign-in logs and Log Analytics workbooks (see the sketch after this list).
+1. Set up SharePoint Online and Exchange Online to use modern authentication.
+1. If you have Azure AD Premium licenses, use Conditional Access policies to block legacy authentication. For the Azure AD free tier, use Azure AD Security Defaults.
+1. Block legacy authentication if you use AD FS.
+1. Block legacy authentication with Exchange Server 2019.
+1. Disable legacy authentication in Exchange Online.
-Given the pervasiveness of password compromise, minimizing the attack surface in your organization is critical. Eliminating use of older, less secure protocols, limiting access entry points, and exercising more significant control of administrative access to resources can help reduce the attack surface area.
+For more information, see the article [Blocking legacy authentication protocols in Azure AD](../../active-directory/fundamentals/concept-fundamentals-block-legacy-authentication.md).
-### Block legacy authentication
+### Block invalid authentication entry points
-Apps using their own legacy methods to authenticate with Azure AD and access company data, pose another risk for organizations. Examples of apps using legacy authentication are POP3, IMAP4, or SMTP clients. Legacy authentication apps authenticate on behalf of the user and prevent Azure AD from doing advanced security evaluations. The alternative, modern authentication, will reduce your security risk, because it supports multi-factor authentication and Conditional Access. We recommend the following three actions:
+Using the verify explicitly principle, you should reduce the impact of compromised user credentials when they happen. For each app in your environment, consider the valid use cases: which groups, which networks, which devices, and other elements are authorized, then block the rest. With Azure AD Conditional Access, you can control how authorized users access their apps and resources based on specific conditions you define.
-1. Block [legacy authentication if you use AD FS](/windows-server/identity/ad-fs/operations/access-control-policies-w2k12).
-2. Setup [SharePoint Online and Exchange Online to use modern authentication](../../active-directory/conditional-access/block-legacy-authentication.md).
-3. If you have Azure AD Premium, use Conditional Access policies to [block legacy authentication](../../active-directory/conditional-access/howto-conditional-access-policy-block-legacy.md), otherwise use [Azure AD Security Defaults](../../active-directory/fundamentals/concept-fundamentals-security-defaults.md).
+### Review and govern admin roles
-### Block invalid authentication entry points
+Another Zero Trust pillar is the need to minimize the likelihood that a compromised account can operate with a privileged role. You can accomplish this control by assigning the least amount of privilege to an identity. If you're new to Azure AD roles, this article will help you understand them.
-Using the assume breach mentality, you should reduce the impact of compromised user credentials when they happen. For each app in your environment consider the valid use cases: which groups, which networks, which devices and other elements are authorized ΓÇô then block the rest. With [Azure AD Conditional Access](../../active-directory/conditional-access/overview.md), you can control how authorized users access their apps and resources based on specific conditions you define.
+Privileged roles in Azure AD should be assigned to cloud-only accounts to isolate them from any on-premises environments, and you shouldn't use on-premises password vaults to store the credentials of these accounts.
-### Restrict user consent operations
+### Implement Privileged Access Management
-It's important to understand the various [Azure AD application consent experiences](../../active-directory/develop/application-consent-experience.md), the [types of permissions and consent](../../active-directory/develop/v2-permissions-and-consent.md), and their implications on your organization's security posture. By default, all users in Azure AD can grant applications that leverage the Microsoft identity platform to access your organization's data. While allowing users to consent by themselves does allow users to easily acquire useful applications that integrate with Microsoft 365, Azure and other services, it can represent a risk if not used and monitored carefully.
+Privileged Identity Management (PIM) provides time-based and approval-based role activation to mitigate the risks of excessive, unnecessary, or misused access permissions to important resources. These resources include resources in Azure Active Directory (Azure AD), Azure, and other Microsoft Online Services such as Microsoft 365 or Microsoft Intune.
-Microsoft recommends [restricting user consent](../../active-directory/manage-apps/configure-user-consent.md) to allow end-user consent only for apps from verified publishers and only for permissions you select. If end-user consent is restricted, previous consent grants will still be honored but all future consent operations must be performed by an administrator. For restricted cases, admin consent can be requested by users through an integrated [admin consent request workflow](../../active-directory/manage-apps/configure-admin-consent-workflow.md) or through your own support processes. Before restricting end-user consent, use our [recommendations](../../active-directory/manage-apps/manage-consent-requests.md) to plan this change in your organization. For applications you wish to allow all users to access, consider [granting consent on behalf of all users](../../active-directory/develop/v2-admin-consent.md), making sure users who have not yet consented individually will be able to access the app. If you do not want these applications to be available to all users in all scenarios, use [application assignment](../../active-directory/manage-apps/assign-user-or-group-access-portal.md) and Conditional Access to restrict user access to [specific apps](../../active-directory/conditional-access/concept-conditional-access-cloud-apps.md).
+Azure AD PIM helps you minimize account privileges by helping you:
-Make sure users can request admin approval for new applications to reduce user friction, minimize support volume, and prevent users from signing up for applications using non-Azure AD credentials. Once you regulate your consent operations, administrators should audit app and consented permissions on a regular basis.
+- Identify and manage users assigned to administrative roles.
+- Understand unused or excessive privilege roles you should remove.
+- Establish rules to make sure privileged roles are protected by multi-factor authentication.
+- Establish rules to make sure privileged roles are granted only long enough to accomplish the privileged task.
+Enable Azure AD PIM, then view the users who are assigned administrative roles and remove unnecessary accounts in those roles. For remaining privileged users, move them from permanent to eligible. Finally, establish appropriate policies to make sure when they need to gain access to those privileged roles, they can do so securely, with the necessary change control.
-### Implement Azure AD Privileged Identity Management
+Azure AD built-in and custom roles operate on concepts similar to roles found in the role-based access control system for Azure resources (Azure roles). The difference between these two role-based access control systems is:
-Another impact of "assume breach" is the need to minimize the likelihood a compromised account can operate with a privileged role.
+- Azure AD roles control access to Azure AD resources such as users, groups, and applications using the Microsoft Graph API
+- Azure roles control access to Azure resources such as virtual machines or storage using Azure Resource Management
-[Azure AD Privileged Identity Management (PIM)](../../active-directory/privileged-identity-management/pim-configure.md) helps you minimized account privileges by helping you:
+Both systems contain similarly used role definitions and role assignments. However, Azure AD role permissions can't be used in Azure custom roles and vice versa. As part of deploying your privileged account process, follow the best practice to create at least two emergency accounts to make sure you still have access to Azure AD if you lock yourself out.
-* Identify and manage users assigned to administrative roles.
-* Understand unused or excessive privilege roles you should remove.
-* Establish rules to make sure privileged roles are protected by multi-factor authentication.
-* Establish rules to make sure privileged roles are granted only long enough to accomplish the privileged task.
+For more information, see the article [Plan a Privileged Identity Management deployment](../../active-directory/privileged-identity-management/pim-deployment-plan.md).
-Enable Azure AD PIM, then view the users who are assigned administrative roles and remove unnecessary accounts in those roles. For remaining privileged users, move them from permanent to eligible. Finally, establish appropriate policies to make sure when they need to gain access to those privileged roles, they can do so securely, with the necessary change control.
+### Restrict user consent operations
+
+It's important to understand the various Azure AD application consent experiences, the types of permissions and consent, and their implications on your organization's security posture. While allowing users to consent by themselves does allow users to easily acquire useful applications that integrate with Microsoft 365, Azure, and other services, it can represent a risk if not used and monitored carefully.
+
+Microsoft recommends restricting user consent to allow end-user consent only for apps from verified publishers and only for permissions you select. If end-user consent is restricted, previous consent grants will still be honored but all future consent operations must be performed by an administrator. For restricted cases, admin consent can be requested by users through an integrated admin consent request workflow or through your own support processes. Before restricting end-user consent, use our recommendations to plan this change in your organization. For applications you wish to allow all users to access, consider granting consent on behalf of all users, making sure users who haven't yet consented individually will be able to access the app. If you don't want these applications to be available to all users in all scenarios, use application assignment and Conditional Access to restrict user access to specific apps.
-As part of deploying your privileged account process, follow the [best practice to create at least two emergency accounts](../../active-directory/roles/security-planning.md) to make sure you still have access to Azure AD if you lock yourself out.
+Make sure users can request admin approval for new applications to reduce user friction, minimize support volume, and prevent users from signing up for applications using non-Azure AD credentials. Once you regulate your consent operations, administrators should audit app and consented permissions regularly.
+
+For more information, see the article [Azure Active Directory consent framework](../../active-directory/develop/consent-framework.md).
## Step 3 - Automate threat response

Azure Active Directory has many capabilities that automatically intercept attacks, to remove the latency between detection and response. You can reduce the costs and risks when you reduce the time criminals use to embed themselves into your environment. Here are the concrete steps you can take.
-### Implement user risk security policy using Azure AD Identity Protection
+For more information, see the article [How To: Configure and enable risk policies](../../active-directory/identity-protection/howto-identity-protection-configure-risk-policies.md).
+
+### Implement sign-in risk policy
+
+A sign-in risk represents the probability that a given authentication request isn't authorized by the identity owner. A sign-in risk-based policy can be implemented by adding a sign-in risk condition to your Conditional Access policies that evaluates the risk level for a specific user or group. Based on the risk level (high/medium/low), a policy can be configured to block access or force multi-factor authentication. We recommend that you force multi-factor authentication on sign-ins with a medium or higher risk level.
+
+
+### Implement user risk security policy
+
+User risk indicates the likelihood a user's identity has been compromised and is calculated based on the user risk detections that are associated with a user's identity. A user risk-based policy can be implemented by adding a user risk condition to your Conditional Access policies that evaluates the risk level for a specific user. Based on the risk level (high/medium/low), a policy can be configured to block access or require a secure password change using multi-factor authentication. Microsoft's recommendation is to require a secure password change for users at high risk.
+
-User risk indicates the likelihood a user's identity has been compromised and is calculated based on the [user risk detections](../../active-directory/identity-protection/overview-identity-protection.md) that are associated with a user's identity. A user risk policy is a Conditional Access policy that evaluates the risk level to a specific user or group. Based on Low, Medium, High risk-level, a policy can be configured to block access or require a secure password change using multi-factor authentication. Microsoft's recommendation is to require a secure password change for users on high risk.
+Included in the user risk detections is a check of whether the user's credentials match credentials leaked by cybercriminals. To function optimally, it's important to implement password hash synchronization with Azure AD Connect sync.
-![Screenshot shows Users flagged for risk, with a user selected.](./media/steps-secure-identity/azure-ad-sec-steps1.png)
+### Integrate Microsoft 365 Defender with Azure AD Identity Protection
-### Implement sign-in risk policy using Azure AD Identity Protection
+For Identity Protection to perform the best risk detection possible, it needs as many signals as possible. It's therefore important to integrate the complete suite of Microsoft 365 Defender services:
-Sign-in risk is the likelihood someone other than the account owner is attempting to sign on using the identity. A [sign-in risk policy](../../active-directory/identity-protection/overview-identity-protection.md) is a Conditional Access policy that evaluates the risk level to a specific user or group. Based on the risk level (high/medium/low), a policy can be configured to block access or force multi-factor authentication. Make sure you force multi-factor authentication on Medium or above risk sign-ins.
+- Microsoft Defender for Endpoint
+- Microsoft Defender for Office 365
+- Microsoft Defender for Identity
+- Microsoft Defender for Cloud Apps
-![Sign in from anonymous IPs](./media/steps-secure-identity/azure-ad-sec-steps2.png)
+Learn more about Microsoft 365 Defender and the importance of integrating different domains in the following short video.
+
+> [!VIDEO https://www.microsoft.com/videoplayer/embed/RE4Bzww]
+
+### Set up monitoring and alerting
+
+Monitoring and auditing your logs is important to detect suspicious behavior. The Azure portal has several ways to integrate Azure AD logs with other tools, like Microsoft Sentinel, Azure Monitor, and other SIEM tools. For more information, see the [Azure Active Directory security operations guide](../../active-directory/fundamentals/security-operations-introduction.md#data-sources).
## Step 4 - Utilize cloud intelligence
-Auditing and logging of security-related events and related alerts are essential components of an efficient protection strategy. Security logs and reports provide you with an electronic record of suspicious activities and help you detect patterns that may indicate attempted or successful external penetration of the network, and internal attacks. You can use auditing to monitor user activity, document regulatory compliance, do forensic analysis, and more. Alerts provide notifications of security events.
+Auditing and logging of security-related events and related alerts are essential components of an efficient protection strategy. Security logs and reports provide you with an electronic record of suspicious activities and help you detect patterns that may indicate attempted or successful external penetration of the network, and internal attacks. You can use auditing to monitor user activity, document regulatory compliance, do forensic analysis, and more. Alerts provide notifications of security events. Make sure you have a log retention policy in place for both your Azure AD sign-in logs and audit logs by exporting them into Azure Monitor or a SIEM tool.
### Monitor Azure AD
-Microsoft Azure services and features provide you with configurable security auditing and logging options to help you identify gaps in your security policies and mechanisms and address those gaps to help prevent breaches. You can use [Azure Logging and Auditing](log-audit.md) and use [Audit activity reports in the Azure Active Directory portal](../../active-directory/reports-monitoring/concept-audit-logs.md).
+Microsoft Azure services and features provide you with configurable security auditing and logging options to help you identify gaps in your security policies and mechanisms and address those gaps to help prevent breaches. You can use [Azure Logging and Auditing](log-audit.md) and use [Audit activity reports in the Azure Active Directory portal](../../active-directory/reports-monitoring/concept-audit-logs.md). See the [Azure AD Security Operations guide](../../active-directory/fundamentals/security-operations-introduction.md) for more details on monitoring user accounts, privileged accounts, apps, and devices.
### Monitor Azure AD Connect Health in hybrid environments
-[Monitoring AD FS with Azure AD Connect Health](../../active-directory/hybrid/how-to-connect-health-adfs.md) provides you with greater insight into potential issues and visibility of attacks on your AD FS infrastructure. Azure AD Connect Health delivers alerts with details, resolution steps, and links to related documentation; usage analytics for several metrics related to authentication traffic; performance monitoring and reports.
-
-![Azure AD Connect Health](./media/steps-secure-identity/azure-ad-sec-steps4.png)
+[Monitoring AD FS with Azure AD Connect Health](../../active-directory/hybrid/how-to-connect-health-adfs.md) provides you with greater insight into potential issues and visibility of attacks on your AD FS infrastructure. You can now view [AD FS sign-ins](../../active-directory/hybrid/how-to-connect-health-ad-fs-sign-in.md) to give greater depth for your monitoring. Azure AD Connect Health delivers alerts with details, resolution steps, and links to related documentation; usage analytics for several metrics related to authentication traffic; and performance monitoring and reports. Utilize the [Risky IP Workbook for AD FS](../../active-directory/hybrid/how-to-connect-health-adfs-risky-ip-workbook.md), which can help identify the norm for your environment and alert when there's a change. All hybrid infrastructure should be monitored as a Tier 0 asset. Detailed monitoring guidance for these assets can be found in the [Security Operations guide for Infrastructure](../../active-directory/fundamentals/security-operations-infrastructure.md).
### Monitor Azure AD Identity Protection events
-[Azure AD Identity Protection](../../active-directory/identity-protection/overview-identity-protection.md) is a notification, monitoring and reporting tool you can use to detect potential vulnerabilities affecting your organization's identities. It detects risk detections, such as leaked credentials, impossible travel, and sign-ins from infected devices, anonymous IP addresses, IP addresses associated with the suspicious activity, and unknown locations. Enable notification alerts to receive email of users at risk and/or a weekly digest email.
+[Azure AD Identity Protection](../../active-directory/identity-protection/overview-identity-protection.md) provides two important reports you should monitor daily; a scripted way to pull them is sketched after this list:
-Azure AD Identity Protection provides two important reports you should monitor daily:
-1. Risky sign-in reports will surface user sign-in activities you should investigate, the legitimate owner may not have performed the sign-in.
-2. Risky user reports will surface user accounts that may have been compromised, such as leaked credential that was detected or the user signed in from different locations causing an impossible travel event.
+1. Risky sign-in reports surface user sign-in activities you should investigate; the legitimate owner may not have performed the sign-in.
+1. Risky user reports surface user accounts that may have been compromised, such as when a leaked credential was detected or the user signed in from different locations, causing an impossible travel event.
-![Screenshot shows the Azure A D Identity Protection pane with users and their risk levels.](./media/steps-secure-identity/azure-ad-sec-steps3.png)
### Audit apps and consented permissions
-Users can be tricked into navigating to a compromised web site or apps that will gain access to their profile information and user data, such as their email. A malicious actor can use the consented permissions it received to encrypt their mailbox content and demand a ransom to regain your mailbox data. [Administrators should review and audit](/office365/securitycompliance/detect-and-remediate-illicit-consent-grants) the permissions given by users or disable the ability of users to give consent by default.
-
-In addition to auditing the permissions given by users, you can [locate risky or unwanted OAuth applications](/cloud-app-security/investigate-risky-oauth) in premium environments.
+Users can be tricked into navigating to a compromised web site or apps that will gain access to their profile information and user data, such as their email. A malicious actor can use the consented permissions it received to encrypt the mailbox content and demand a ransom to regain the data. [Administrators should review and audit](/office365/securitycompliance/detect-and-remediate-illicit-consent-grants) the permissions given by users. In addition to auditing the permissions given by users, you can [locate risky or unwanted OAuth applications](/cloud-app-security/investigate-risky-oauth) in premium environments.
## Step 5 - Enable end-user self-service
-As much as possible you'll want to balance security with productivity. Along the same lines of approaching your journey with the mindset that you're setting a foundation for security in the long run, you can remove friction from your organization by empowering your users while remaining vigilant.
+As much as possible, you'll want to balance security with productivity. Approach your journey with the mindset that you're setting a foundation for security: you can remove friction from your organization by empowering your users while remaining vigilant, and reduce your operational overhead.
### Implement self-service password reset
-Azure AD's [self-service password reset (SSPR)](../../active-directory/authentication/tutorial-enable-sspr.md) offers a simple means for IT administrators to allow users to reset or unlock their passwords or accounts without help desk or administrator intervention. The system includes detailed reporting that tracks when users have reset their passwords, along with notifications to alert you to misuse or abuse.
+Azure AD's [self-service password reset (SSPR)](../../active-directory/authentication/tutorial-enable-sspr.md) offers a simple means for IT administrators to allow users to reset or unlock their passwords or accounts without helpdesk or administrator intervention. The system includes detailed reporting that tracks when users have reset their passwords, along with notifications to alert you to misuse or abuse.
### Implement self-service group and application access
-Azure AD provides the ability to non-administrators to manage access to resources, using security groups, Microsoft 365 groups, application roles, and access package catalogs. [Self-service group management](../../active-directory/enterprise-users/groups-self-service-management.md) enables group owners to manage their own groups, without needing to be assigned an administrative role. Users can also create and manage Microsoft 365 groups without relying on administrators to handle their requests, and unused groups expire automatically. [Azure AD entitlement management](../../active-directory/governance/entitlement-management-overview.md) further enables delegation and visibility, with comprehensive access request workflows and automatic expiration. You can delegate to non-administrators the ability to configure their own access packages for groups, Teams, applications, and SharePoint Online sites they own, with custom policies for who is required to approve access, including configuring employee's managers and business partner sponsors as approvers.
+Azure AD can allow non-administrators to manage access to resources, using security groups, Microsoft 365 groups, application roles, and access package catalogs. [Self-service group management](../../active-directory/enterprise-users/groups-self-service-management.md) enables group owners to manage their own groups, without needing to be assigned an administrative role. Users can also create and manage Microsoft 365 groups without relying on administrators to handle their requests, and unused groups expire automatically. [Azure AD entitlement management](../../active-directory/governance/entitlement-management-overview.md) further enables delegation and visibility, with comprehensive access request workflows and automatic expiration. You can delegate to non-administrators the ability to configure their own access packages for groups, Teams, applications, and SharePoint Online sites they own, with custom policies for who is required to approve access, including configuring employee's managers and business partner sponsors as approvers.
### Implement Azure AD access reviews
-With [Azure AD access reviews](../../active-directory/governance/access-reviews-overview.md), you can manage access package and group memberships, access to enterprise applications, and privileged role assignments to make sure you maintain a security standard. Regular oversight by the users themselves, resource owners, and other reviewers ensure that users don't retain access for extended periods of time when they no longer need it.
+With [Azure AD access reviews](../../active-directory/governance/access-reviews-overview.md), you can manage access package and group memberships, access to enterprise applications, and privileged role assignments to make sure you maintain a security standard. Regular oversight by the users themselves, resource owners, and other reviewers ensures that users don't retain access for extended periods of time when they no longer need it.
+
+### Implement automatic user provisioning
+
+Provisioning and deprovisioning are the processes that ensure consistency of digital identities across multiple systems. These processes are typically applied as part of [identity lifecycle management](../../active-directory/governance/what-is-identity-lifecycle-management.md).
+
+Provisioning is the process of creating an identity in a target system based on certain conditions. Deprovisioning is the process of removing the identity from the target system when conditions are no longer met. Synchronization is the process of keeping the provisioned object up to date so that the source object and target object are similar.
+
+Azure AD currently provides three areas of automated provisioning. They are:
+
+- Provisioning from an external non-directory authoritative system of record to Azure AD, via [HR-driven provisioning](../../active-directory/governance/what-is-provisioning.md#hr-driven-provisioning)
+- Provisioning from Azure AD to applications, via [App provisioning](../../active-directory/governance/what-is-provisioning.md#app-provisioning)
+- Provisioning between Azure AD and Active Directory domain services, via [inter-directory provisioning](../../active-directory/governance/what-is-provisioning.md#inter-directory-provisioning)
+
+For more information, see [What is provisioning with Azure Active Directory?](../../active-directory/governance/what-is-provisioning.md)
## Summary

There are many aspects to a secure identity infrastructure, but this five-step checklist will help you quickly accomplish a safer and more secure identity infrastructure:
-* Strengthen your credentials.
-* Reduce your attack surface area.
-* Automate threat response.
-* Utilize cloud intelligence.
-* Enable more predictable and complete end-user security with self-help.
+- Strengthen your credentials
+- Reduce your attack surface area
+- Automate threat response
+- Utilize cloud intelligence
+- Enable end-user self-service
-We appreciate how seriously you take Identity Security and hope this document is a useful roadmap to a more secure posture for your organization.
+We appreciate how seriously you take security and hope this document is a useful roadmap to a more secure posture for your organization.
## Next steps
sentinel Data Connectors Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/data-connectors-reference.md
See [Microsoft Defender for Cloud](#microsoft-defender-for-cloud).
| **Supported by** | Microsoft | | | |
-## Azure Information Protection
+## Azure Information Protection (Preview)
| Connector attribute | Description | | | |
site-recovery Hyper V Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/hyper-v-azure-support-matrix.md
Hyper-V without Virtual Machine Manager | You can perform disaster recovery to A
**Server** | **Requirements** | **Details** | |
-Hyper-V (running without Virtual Machine Manager) | Windows Server 2019, Windows Server 2016, Windows Server 2012 R2 with latest updates <br/><br/> **Note:** Server core installation of these operating systems are also supported. | If you have already configured Windows Server 2012 R2 with/or SCVMM 2012 R2 with Azure Site Recovery and plan to upgrade the OS, please follow the guidance [documentation.](upgrade-2012R2-to-2016.md)
-Hyper-V (running with Virtual Machine Manager) | Virtual Machine Manager 2019, Virtual Machine Manager 2016, Virtual Machine Manager 2012 R2 <br/><br/> **Note:** Server core installation of these operating systems are also supported. | If Virtual Machine Manager is used, Windows Server 2019 hosts should be managed in Virtual Machine Manager 2019. Similarly, Windows Server 2016 hosts should be managed in Virtual Machine Manager 2016.
+Hyper-V (running without Virtual Machine Manager) | Windows Server 2022 (Server core not supported), Windows Server 2019, Windows Server 2016, Windows Server 2012 R2 with latest updates <br/><br/> **Note:** Server core installations of these operating systems are also supported, except for Windows Server 2022. | If you have already configured Windows Server 2012 R2 with/or SCVMM 2012 R2 with Azure Site Recovery and plan to upgrade the OS, please follow the guidance [documentation.](upgrade-2012R2-to-2016.md)
+Hyper-V (running with Virtual Machine Manager) | Virtual Machine Manager 2022 (Server core not supported), Virtual Machine Manager 2019, Virtual Machine Manager 2016, Virtual Machine Manager 2012 R2 <br/><br/> **Note:** Server core installations of these operating systems are also supported, except for Virtual Machine Manager 2022. | If Virtual Machine Manager is used, Windows Server 2019 hosts should be managed in Virtual Machine Manager 2019. Similarly, Windows Server 2016 hosts should be managed in Virtual Machine Manager 2016.
> [!NOTE]
> Ensure that .NET Framework 4.6.2 or higher is present on the on-premises server.
static-web-apps Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/static-web-apps/configuration.md
Previously updated : 12/30/2021 Last updated : 02/03/2022
Common use cases for wildcard routes include:

- Enforcing authentication and authorization rules
- Implementing specialized caching rules
-### Securing routes with roles
+### <a name="securing-routes-with-roles"></a>Securing routes with roles
Routes are secured by adding one or more role names into a rule's `allowedRoles` array. See the [example configuration file](#example-configuration-file) for usage examples.
+> [!IMPORTANT]
+> Routing rules can only secure HTTP requests to routes that are served from Static Web Apps. Many front-end frameworks use client-side routing that modifies routes in the browser without issuing requests to Static Web Apps. Routing rules don't secure client-side routes. Clients should call [HTTP APIs](apis.md) to retrieve sensitive data. Ensure APIs validate a [user's identity](user-information.md) before returning data.
+ By default, every user belongs to the built-in `anonymous` role, and all logged-in users are members of the `authenticated` role. Optionally, users are associated to custom roles via [invitations](./authentication-authorization.md). For instance, to restrict a route to only authenticated users, add the built-in `authenticated` role to the `allowedRoles` array.
In addition to IP address blocks, you can also specify [service tags](../virtual
* [Default authentication providers](authentication-authorization.md#login) don't require settings in the configuration file.
* [Custom authentication providers](authentication-custom.md) use the `auth` section of the settings file.
-## Disable cache for authenticated paths
+For details on how to restrict routes to authenticated users, see [Securing routes with roles](#securing-routes-with-roles).
+
+### Disable cache for authenticated paths
If you have enabled [enterprise-grade edge](enterprise-edge.md), or set up [manual integration with Azure Front Door](front-door-manual.md), you may want to disable caching for your secured routes.
storage Storage Quickstart Blobs Go https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/storage-quickstart-blobs-go.md
Title: Azure Quickstart - Create a blob in object storage using Go | Microsoft D
description: In this quickstart, you create a storage account and a container in object (Blob) storage. Then you use the storage client library for Go to upload a blob to Azure Storage, download a blob, and list the blobs in a container. Previously updated : 11/14/2018 Last updated : 12/10/2021
In this quickstart, you learn how to use the Go programming language to upload,
[!INCLUDE [storage-quickstart-prereq-include](../../../includes/storage-quickstart-prereq-include.md)]
-Make sure you have the following additional prerequisites installed:
+Make sure you have the following prerequisites installed:
-- [Go 1.8 or above](https://golang.org/dl/)
-- [Azure Storage Blob SDK for Go](https://github.com/azure/azure-storage-blob-go/), using the following command:
+- [Go 1.17 or above](https://go.dev/dl/)
+- [Azure Storage Blob SDK for Go](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/), using the following command:
- `go get -u github.com/Azure/azure-storage-blob-go/azblob`
+ `go get -u github.com/Azure/azure-sdk-for-go/sdk/storage/azblob`
> [!NOTE]
> Make sure that you capitalize `Azure` in the URL to avoid case-related import problems when working with the SDK. Also capitalize `Azure` in your import statements.
Use [git](https://git-scm.com/) to download a copy of the application to your de
git clone https://github.com/Azure-Samples/storage-blobs-go-quickstart ```
-This command clones the repository to your local git folder. To open the Go sample for Blob storage, look for storage-quickstart.go file.
+This command clones the repository to your local git folder. To open the Go sample for Blob storage, look for the `storage-quickstart.go` file.
+## Sign in with Azure CLI
-## Configure your storage connection string
+To support local development, the Azure Identity credential type `DefaultAzureCredential` authenticates users signed into the Azure CLI.
-This solution requires your storage account name and key to be securely stored in environment variables local to the machine running the sample. Follow one of the examples below depending on your operating System to create the environment variables.
+Run the following command to sign into the Azure CLI:
-# [Linux](#tab/linux)
-
-```bash
-export AZURE_STORAGE_ACCOUNT="<youraccountname>"
-export AZURE_STORAGE_ACCESS_KEY="<youraccountkey>"
+```azurecli
+az login
```
-# [Windows](#tab/windows)
+Azure CLI authentication isn't recommended for applications running in Azure.
-```shell
-setx AZURE_STORAGE_ACCOUNT "<youraccountname>"
-setx AZURE_STORAGE_ACCESS_KEY "<youraccountkey>"
-```
--
+To learn more about different authentication methods, check out [Azure authentication with the Azure SDK for Go](/azure/developer/go/azure-sdk-authentication).
## Run the sample
-This sample creates a test file in the current folder, uploads the test file to Blob storage, lists the blobs in the container, and downloads the file into a buffer.
+This sample creates an Azure storage container, uploads a blob, lists the blobs in the container, then downloads the blob data into a buffer.
+
+Before you run the sample, open the `storage-quickstart.go` file. Replace `<StorageAccountName>` with the name of your Azure storage account.
-To run the sample, issue the following command:
+Then run the application with the `go run` command:
-`go run storage-quickstart.go`
+```azurecli
+go run storage-quickstart.go
+```
```

The following output is an example of the output returned when running the application:

```output
Azure Blob storage quick start sample
-Creating a container named quickstart-5568059279520899415
+Creating a container named quickstart-4052363832531531139
Creating a dummy file to test the upload and download
-Uploading the file with blob name: 630910657703031215
-Blob name: 630910657703031215
-Downloaded the blob: hello world
-this is a blob
-Press the enter key to delete the sample files, example container, and exit the application.
+Listing the blobs in the container:
+blob-8721479556813186518
+
+hello world this is a blob
+
+Press enter key to delete the blob files, example container, and exit the application.
+
+Cleaning up.
+Deleting the blob blob-8721479556813186518
+Deleting the container quickstart-4052363832531531139
```

When you press the key to continue, the sample program deletes the storage container and the files.
Next, we walk through the sample code so that you can understand how it works.
First, create the references to the ContainerURL and BlobURL objects used to access and manage Blob storage. These objects offer low-level APIs such as Create, Upload, and Download to issue REST APIs.

-- Use [**SharedKeyCredential**](https://godoc.org/github.com/Azure/azure-storage-blob-go/azblob#SharedKeyCredential) struct to store your credentials.
+- Authenticate to Azure using the [**DefaultAzureCredential**](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity#NewDefaultAzureCredential).
-- Create a [**Pipeline**](https://godoc.org/github.com/Azure/azure-storage-blob-go/azblob#NewPipeline) using the credentials and options. The pipeline specifies things like retry policies, logging, deserialization of HTTP response payloads, and more.
+- Use the [**NewServiceClient**](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob#NewServiceClient) function to create a service client with your credentials.
-- Instantiate a new [**ContainerURL**](https://godoc.org/github.com/Azure/azure-storage-blob-go/azblob#ContainerURL), and a new [**BlobURL**](https://godoc.org/github.com/Azure/azure-storage-blob-go/azblob#BlobURL) object to run operations on container (Create) and blobs (Upload and Download).
+- Instantiate a new [**ContainerClient**](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob#ServiceClient.NewContainerClient), and a new [**BlockBlobClient**](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob#NewBlockBlobClient) object to run operations on the container (Create) and blobs (Upload and Download).
Once you have the ContainerURL, you can instantiate the **BlobURL** object that points to a blob, and perform operations such as upload, download, and copy.
Once you have the ContainerURL, you can instantiate the **BlobURL** object that
In this section, you create a new container. The container is called **quickstart-[random string]**.

```go
-// From the Azure portal, get your storage account name and key and set environment variables.
-accountName, accountKey := os.Getenv("AZURE_STORAGE_ACCOUNT"), os.Getenv("AZURE_STORAGE_ACCESS_KEY")
-if len(accountName) == 0 || len(accountKey) == 0 {
- log.Fatal("Either the AZURE_STORAGE_ACCOUNT or AZURE_STORAGE_ACCESS_KEY environment variable is not set")
+url := "https://storageblobsgo.blob.core.windows.net/"
+ctx := context.Background()
+
+// Create a default Azure credential
+credential, err := azidentity.NewDefaultAzureCredential(nil)
+if err != nil {
+ log.Fatal("Invalid credentials with error: " + err.Error())
}
-// Create a default request pipeline using your storage account name and account key.
-credential, err := azblob.NewSharedKeyCredential(accountName, accountKey)
+serviceClient, err := azblob.NewServiceClient(url, credential, nil)
if err != nil {
    log.Fatal("Invalid credentials with error: " + err.Error())
}
-p := azblob.NewPipeline(credential, azblob.PipelineOptions{})
-// Create a random string for the quick start container
+// Create the container
containerName := fmt.Sprintf("quickstart-%s", randomString())
+fmt.Printf("Creating a container named %s\n", containerName)
-// From the Azure portal, get your storage account blob service URL endpoint.
-URL, _ := url.Parse(
- fmt.Sprintf("https://%s.blob.core.windows.net/%s", accountName, containerName))
-
-// Create a ContainerURL object that wraps the container URL and a request
-// pipeline to make requests.
-containerURL := azblob.NewContainerURL(*URL, p)
+containerClient := serviceClient.NewContainerClient(containerName)
+_, err = containerClient.Create(ctx, nil)
-// Create the container
-fmt.Printf("Creating a container named %s\n", containerName)
-ctx := context.Background() // This example uses a never-expiring context
-_, err = containerURL.Create(ctx, azblob.Metadata{}, azblob.PublicAccessNone)
-handleErrors(err)
+if err != nil {
+ log.Fatal(err)
+}
```

### Upload blobs to the container

Blob storage supports block blobs, append blobs, and page blobs. Block blobs are the most commonly used, and that is what is used in this quickstart.
-To upload a file to a blob, open the file using **os.Open**. You can then upload the file to the specified path using one of the REST APIs: Upload (PutBlob), StageBlock/CommitBlockList (PutBlock/PutBlockList).
-
-Alternatively, the SDK offers [high-level APIs](https://github.com/Azure/azure-storage-blob-go/blob/master/azblob/highlevel.go) that are built on top of the low-level REST APIs. As an example, ***UploadFileToBlockBlob*** function uses StageBlock (PutBlock) operations to concurrently upload a file in chunks to optimize the throughput. If the file is less than 256 MB, it uses Upload (PutBlob) instead to complete the transfer in a single transaction.
+The SDK offers [high-level APIs](https://github.com/Azure/azure-sdk-for-go/blob/main/sdk/storage/azblob/highlevel.go) that are built on top of the low-level REST APIs. As an example, the ***UploadBufferToBlockBlob*** function uses StageBlock (PutBlock) operations to concurrently upload a file in chunks to optimize throughput. If the file is less than 256 MB, it uses Upload (PutBlob) instead to complete the transfer in a single transaction.
-The following example uploads the file to your container called **quickstartblobs-[randomstring]**.
+The following example uploads a buffer as a blob named **quickstartblob-[random string]** to your container.
```go
-// Create a file to test the upload and download.
-fmt.Printf("Creating a dummy file to test the upload and download\n")
-data := []byte("hello world this is a blob\n")
-fileName := randomString()
-err = ioutil.WriteFile(fileName, data, 0700)
-handleErrors(err)
-
-// Here's how to upload a blob.
-blobURL := containerURL.NewBlockBlobURL(fileName)
-file, err := os.Open(fileName)
-handleErrors(err)
-
-// You can use the low-level Upload (PutBlob) API to upload files. Low-level APIs are simple wrappers for the Azure Storage REST APIs.
-// Note that Upload can upload up to 256MB data in one shot. Details: https://docs.microsoft.com/rest/api/storageservices/put-blob
-// To upload more than 256MB, use StageBlock (PutBlock) and CommitBlockList (PutBlockList) functions.
-// Following is commented out intentionally because we will instead use UploadFileToBlockBlob API to upload the blob
-// _, err = blobURL.Upload(ctx, file, azblob.BlobHTTPHeaders{ContentType: "text/plain"}, azblob.Metadata{}, azblob.BlobAccessConditions{})
-// handleErrors(err)
-
-// The high-level API UploadFileToBlockBlob function uploads blocks in parallel for optimal performance, and can handle large files as well.
-// This function calls StageBlock/CommitBlockList for files larger 256 MBs, and calls Upload for any file smaller
-fmt.Printf("Uploading the file with blob name: %s\n", fileName)
-_, err = azblob.UploadFileToBlockBlob(ctx, file, blobURL, azblob.UploadToBlockBlobOptions{
- BlockSize: 4 * 1024 * 1024,
- Parallelism: 16})
-handleErrors(err)
+data := []byte("\nhello world this is a blob\n")
+blobName := "quickstartblob" + "-" + randomString()
+
+var blockOptions azblob.HighLevelUploadToBlockBlobOption
+
+blobClient, err := azblob.NewBlockBlobClient(url+containerName+"/"+blobName, credential, nil)
+if err != nil {
+ log.Fatal(err)
+}
+
+// Upload the data to blob storage
+_, err = blobClient.UploadBufferToBlockBlob(ctx, data, blockOptions)
+
+if err != nil {
+ log.Fatalf("Failure to upload to blob: %+v", err)
+}
```

### List the blobs in a container
-Get a list of files in the container using the **ListBlobs** method on a **ContainerURL**. ListBlobs returns a single segment of blobs (up to 5000) starting from the specified **Marker**. Use an empty Marker to start enumeration from the beginning. Blob names are returned in lexicographic order. After getting a segment, process it, and then call ListBlobs again passing the previously returned Marker.
+Get a list of files in the container using the **ListBlobsFlat** method on a **ContainerClient**. Blob names are returned in lexicographic order. After getting a page of results, process it, and then call **NextPage** to get the next page.
```go
-// List the container that we have created above
-fmt.Println("Listing the blobs in the container:")
-for marker := (azblob.Marker{}); marker.NotDone(); {
- // Get a result segment starting with the blob indicated by the current Marker.
- listBlob, err := containerURL.ListBlobsFlatSegment(ctx, marker, azblob.ListBlobsSegmentOptions{})
- handleErrors(err)
-
- // ListBlobs returns the start of the next segment; you MUST use this to get
- // the next segment (after processing the current result segment).
- marker = listBlob.NextMarker
-
- // Process the blobs returned in this result segment (if the segment is empty, the loop body won't execute)
- for _, blobInfo := range listBlob.Segment.BlobItems {
- fmt.Print(" Blob name: " + blobInfo.Name + "\n")
+ // List the blobs in the container
+pager := containerClient.ListBlobsFlat(nil)
+
+for pager.NextPage(ctx) {
+ resp := pager.PageResponse()
+
+ for _, v := range resp.ContainerListBlobFlatSegmentResult.Segment.BlobItems {
+ fmt.Println(*v.Name)
    }
}
+
+if err = pager.Err(); err != nil {
+ log.Fatalf("Failure to list blobs: %+v", err)
+}
```

### Download the blob
-Download blobs using the **Download** low-level function on a BlobURL. This will return a **DownloadResponse** struct. Run the function **Body** on the struct to get a **RetryReader** stream for reading data. If a connection fails while reading, it will make additional requests to re-establish a connection and continue reading. Specifying a RetryReaderOption's with MaxRetryRequests set to 0 (the default), returns the original response body and no retries will be performed. Alternatively, use the high-level APIs **DownloadBlobToBuffer** or **DownloadBlobToFile** to simplify your code.
+Downloading blobs using the **Download** low-level function on a BlobURL returns a **DownloadResponse** struct. Run the function **Body** on the struct to get a **RetryReader** stream for reading data. If a connection fails while reading, it makes further requests to re-establish the connection and continues reading. Specifying a **RetryReaderOptions** with MaxRetryRequests set to 0 (the default) returns the original response body, and no retries will be performed. Alternatively, use the high-level APIs **DownloadBlobToBuffer** or **DownloadBlobToFile** to simplify your code.
The following code downloads the blob using the **Download** function. The contents of the blob are written into a buffer and shown on the console.

```go
-// Here's how to download the blob
-downloadResponse, err := blobURL.Download(ctx, 0, azblob.CountToEnd, azblob.BlobAccessConditions{}, false)
+// Download the blob
+get, err := blobClient.Download(ctx, nil)
+if err != nil {
+ log.Fatal(err)
+}
-// NOTE: automatically retries are performed if the connection fails
-bodyStream := downloadResponse.Body(azblob.RetryReaderOptions{MaxRetryRequests: 20})
+downloadedData := &bytes.Buffer{}
+reader := get.Body(azblob.RetryReaderOptions{})
+_, err = downloadedData.ReadFrom(reader)
+if err != nil {
+ log.Fatal(err)
+}
+err = reader.Close()
+if err != nil {
+ log.Fatal(err)
+}
-// read the body into a buffer
-downloadedData := bytes.Buffer{}
-_, err = downloadedData.ReadFrom(bodyStream)
-handleErrors(err)
+fmt.Println(downloadedData.String())
```

### Clean up resources
handleErrors(err)
If you no longer need the blobs uploaded in this quickstart, you can delete the entire container using the **Delete** method. ```go
-// Cleaning up the quick start by deleting the container and the file created locally
-fmt.Printf("Press enter key to delete the sample files, example container, and exit the application.\n")
+// Cleaning up the quick start by deleting the blob and container
+fmt.Printf("Press enter key to delete the blob files, example container, and exit the application.\n")
bufio.NewReader(os.Stdin).ReadBytes('\n')
fmt.Printf("Cleaning up.\n")
-containerURL.Delete(ctx, azblob.ContainerAccessConditions{})
-file.Close()
-os.Remove(fileName)
+
+// Delete the blob
+fmt.Printf("Deleting the blob " + blobName + "\n")
+
+_, err = blobClient.Delete(ctx, nil)
+if err != nil {
+ log.Fatalf("Failure: %+v", err)
+}
+
+// Delete the container
+fmt.Printf("Deleting the blob " + containerName + "\n")
+_, err = containerClient.Delete(ctx, nil)
+
+if err != nil {
+ log.Fatalf("Failure: %+v", err)
+}
```

## Resources for developing Go applications with blobs

See these additional resources for Go development with Blob storage:

-- View and install the [Go client library source code](https://github.com/Azure/azure-storage-blob-go) for Azure Storage on GitHub.
-- Explore [Blob storage samples](https://godoc.org/github.com/Azure/azure-storage-blob-go/azblob#pkg-examples) written using the Go client library.
+- View and install the [Go client library source code](https://github.com/Azure/azure-sdk-for-go/tree/main/sdk/storage/azblob) for Azure Storage on GitHub.
+- Explore [Blob storage samples](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob#example-package) written using the Go client library.
## Next steps
-In this quickstart, you learned how to transfer files between a local disk and Azure blob storage using Go. For more information about the Azure Storage Blob SDK, view the [Source Code](https://github.com/Azure/azure-storage-blob-go/) and [API Reference](https://godoc.org/github.com/Azure/azure-storage-blob-go/azblob).
+In this quickstart, you learned how to transfer files between a local disk and Azure Blob storage using Go. For more information about the Azure Storage Blob SDK, view the [Source Code](https://github.com/Azure/azure-sdk-for-go/tree/main/sdk/storage/azblob) and [API Reference](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob).
synapse-analytics Continuous Integration Delivery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/cicd/continuous-integration-delivery.md
In your GitHub repository, go to **Actions**.
If you use automated CI/CD and want to change some properties during deployment, but the properties aren't parameterized by default, you can override the default parameter template.
-To override the default parameter template, create a custom parameter template named*template-parameters-definition.json* in the root folder of your Git collaboration branch. You must use this exact file name. When Azure Synapse workspace publishes from the collaboration branch, it reads this file and uses its configuration to generate the parameters. If Azure Synapse workspace doesn't find that file, is uses the default parameter template.
+To override the default parameter template, create a custom parameter template named *template-parameters-definition.json* in the root folder of your Git collaboration branch. You must use this exact file name. When the Azure Synapse workspace publishes from the collaboration branch, it reads this file and uses its configuration to generate the parameters. If the Azure Synapse workspace doesn't find that file, it uses the default parameter template.
### Custom parameter syntax
Here's an example of what a parameter template definition looks like. In this syntax, `=` keeps the current value as the parameter's default value, `-` doesn't keep the default value, and a suffix such as `::int` or `::object` sets the parameter's data type:
```json
{
-"Microsoft.Synapse/workspaces/notebooks": {
- "properties":{
- "bigDataPool":{
+ "Microsoft.Synapse/workspaces/notebooks": {
+ "properties": {
+ "bigDataPool": {
"referenceName": "=" } } }, "Microsoft.Synapse/workspaces/sqlscripts": {
- "properties": {
- "content":{
- "currentConnection":{
- "*":"-"
- }
- }
+ "properties": {
+ "content": {
+ "currentConnection": {
+ "*": "-"
+ }
+ }
}
- },
+ },
"Microsoft.Synapse/workspaces/pipelines": { "properties": {
- "activities": [{
- "typeProperties": {
- "waitTimeInSeconds": "-::int",
- "headers": "=::object"
+ "activities": [
+ {
+ "typeProperties": {
+ "waitTimeInSeconds": "-::int",
+ "headers": "=::object"
+ }
}
- }]
+ ]
} }, "Microsoft.Synapse/workspaces/integrationRuntimes": {
Here's an example of what a parameter template definition looks like:
"*": { "properties": { "typeProperties": {
- "*": "="
+ "*": "="
                }
            }
        },
synapse-analytics Get Started Analyze Sql On Demand https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/get-started-analyze-sql-on-demand.md
Previously updated : 04/15/2021
Last updated : 02/02/2022

# Analyze data with a serverless SQL pool
However, as you continue data exploration, you might want to create some utility
1. Use the `master` database to create a separate database for custom database objects. Custom database objects can't be created in the `master` database.
- ```sql
- CREATE DATABASE DataExplorationDB
- COLLATE Latin1_General_100_BIN2_UTF8
- ```
+ ```sql
+ CREATE DATABASE DataExplorationDB
+ COLLATE Latin1_General_100_BIN2_UTF8
+ ```
> [!IMPORTANT]
> Use a collation with the `_UTF8` suffix to ensure that UTF-8 text is properly converted to `VARCHAR` columns. `Latin1_General_100_BIN2_UTF8` provides the best performance in queries that read data from Parquet files and Azure Cosmos DB containers.
-2. Switch to `DataExplorationDB` where you can create utility objects such as credentials and data sources.
+1. Switch from `master` to `DataExplorationDB` using the following command. You can also use the UI control **use database** to switch your current database:
- ```sql
- CREATE EXTERNAL DATA SOURCE ContosoLake
- WITH ( LOCATION = 'https://contosolake.dfs.core.windows.net')
- ```
+ ```sql
+ USE DataExplorationDB
+ ```
+
+1. From `DataExplorationDB`, create utility objects such as credentials and data sources.
+
+ ```sql
+ CREATE EXTERNAL DATA SOURCE ContosoLake
+ WITH ( LOCATION = 'https://contosolake.dfs.core.windows.net')
+ ```
> [!NOTE]
> An external data source can be created without a credential. If a credential does not exist, the caller's identity will be used to access the external data source.
-3. Optionally, use the newly created 'DataExplorationDB' database to create a login for a user in DataExplorationDB that will access external data:
+1. Optionally, use the newly created 'DataExplorationDB' database to create a login for a user in DataExplorationDB that will access external data:
- ```sql
- CREATE LOGIN data_explorer WITH PASSWORD = 'My Very Strong Password 1234!';
- ```
+ ```sql
+ CREATE LOGIN data_explorer WITH PASSWORD = 'My Very Strong Password 1234!';
+ ```
- Then create a database user in `DataExplorationDB` for the login and grant the `ADMINISTER DATABASE BULK OPERATIONS` permission.
- ```sql
- CREATE USER data_explorer FOR LOGIN data_explorer;
- GO
- GRANT ADMINISTER DATABASE BULK OPERATIONS TO data_explorer;
- GO
- ```
+ Next, create a database user in `DataExplorationDB` for the login above, and grant the `ADMINISTER DATABASE BULK OPERATIONS` permission.
-4. Explore the content of the file using the relative path and the data source:
+ ```sql
+ CREATE USER data_explorer FOR LOGIN data_explorer;
+ GO
+ GRANT ADMINISTER DATABASE BULK OPERATIONS TO data_explorer;
+ GO
+ ```
- ```sql
- SELECT
- TOP 100 *
- FROM
- OPENROWSET(
- BULK '/users/NYCTripSmall.parquet',
- DATA_SOURCE = 'ContosoLake',
- FORMAT='PARQUET'
- ) AS [result]
- ```
+1. Explore the content of the file using the relative path and the data source:
+
+ ```sql
+ SELECT
+ TOP 100 *
+ FROM
+ OPENROWSET(
+ BULK '/users/NYCTripSmall.parquet',
+ DATA_SOURCE = 'ContosoLake',
+ FORMAT='PARQUET'
+ ) AS [result]
+ ```
+
+1. **Publish** your changes to the workspace.
The data exploration database is just a simple placeholder where you can store your utility objects. Synapse SQL pool enables you to do much more and create a Logical Data Warehouse - a relational layer built on top of Azure data sources. Learn more about building a Logical Data Warehouse in this [tutorial](sql/tutorial-data-analyst.md).
synapse-analytics Quickstart Create Workspace https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/quickstart-create-workspace.md
After your Azure Synapse workspace is created, you have two ways to open Synapse
| Setting | Value |
| --- | --- |
| Role | Owner and Storage Blob Data Owner |
- | Assign access to | [USER |
+ | Assign access to | USER |
| Members | your user name |

![Add role assignment page in Azure portal.](../../includes/role-based-access-control/media/add-role-assignment-page.png)
synapse-analytics Create Use External Tables https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/create-use-external-tables.md
Previously updated : 07/23/2021
Last updated : 02/02/2022
External tables are useful when you want to control access to external data in S
> In dedicated SQL pools you can only use native external tables with a Parquet file type, and this feature is in **public preview**. If you want to use generally available Parquet reader functionality in dedicated SQL pools, or you need to access CSV or ORC files, use Hadoop external tables. Native external tables are generally available in serverless SQL pools.
> Learn more about the differences between native and Hadoop external tables in [Use external tables with Synapse SQL](develop-tables-external-tables.md).
+The following table lists the data formats supported:
+
+|Data format (Native external tables) |Serverless SQL pool |Dedicated SQL pool |
+||||
+|Parquet | Yes (GA) | Yes (public preview) |
+|CSV | Yes | No (Alternatively, use [Hadoop external tables](develop-tables-external-tables.md?tabs=hadoop)) |
+|Delta | Yes | No |
+|Spark | Yes | No |
+|Dataverse | Yes | No |
+|Azure Cosmos DB data formats (JSON, BSON, etc.) | No (Alternatively, [create views](query-cosmos-db-analytical-store.md?tabs=openrowset-credential#create-view)) | No |
+
## Prerequisites

Your first step is to create a database where the tables will be created. Then create the following objects that are used in this sample:
synapse-analytics Overview Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/overview-features.md
Data that is analyzed can be stored on various storage types. The following tabl
| **Azure Data Lake v2** | Yes | Yes, you can use external tables and the `OPENROWSET` function to read data from ADLS. Learn how to [set up access control](develop-storage-files-storage-access-control.md). |
| **Azure Blob Storage** | Yes | Yes, you can use external tables and the `OPENROWSET` function to read data from Azure Blob Storage. Learn how to [set up access control](develop-storage-files-storage-access-control.md). |
| **Azure SQL/SQL Server (remote)** | No | No, serverless SQL pool cannot reference Azure SQL database. You can reference serverless SQL pools from Azure SQL using [elastic queries](https://devblogs.microsoft.com/azure-sql/read-azure-storage-files-using-synapse-sql-external-tables/) or [linked servers](https://devblogs.microsoft.com/azure-sql/linked-server-to-synapse-sql-to-implement-polybase-like-scenarios-in-managed-instance). |
-| **Dataverse** | No | Yes, you can read Dataverse tables using [Synapse link](https://docs.microsoft.com/powerapps/maker/data-platform/azure-synapse-link-data-lake). |
+| **Dataverse** | No, you can [load Dataverse data into a dedicated pool using Synapse Link in serverless SQL pool (via ADLS)](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/loading-cosmosdb-and-dataverse-data-into-dedicated-sql-pool-dw/ba-p/3104168) or Spark. | Yes, you can read Dataverse tables using [Synapse link](https://docs.microsoft.com/powerapps/maker/data-platform/azure-synapse-link-data-lake). |
| **Azure Cosmos DB transactional storage** | No | No, you cannot access Cosmos DB containers to update data or read data from the Cosmos DB transactional storage. Use [Spark pools to update the Cosmos DB](../synapse-link/how-to-query-analytical-store-spark.md) transactional storage. |
-| **Azure Cosmos DB analytical storage** | No | Yes, you can [query Cosmos DB analytical storage](query-cosmos-db-analytical-store.md) using [Synapse Link](../../cosmos-db/synapse-link.md?bc=%2fazure%2fsynapse-analytics%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fsynapse-analytics%2ftoc.json). |
+| **Azure Cosmos DB analytical storage** | No, you can [load CosmosDB data into a dedicated pool using Synapse Link in serverless SQL pool (via ADLS)](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/loading-cosmosdb-and-dataverse-data-into-dedicated-sql-pool-dw/ba-p/3104168), ADF, Spark or some other load tool. | Yes, you can [query Cosmos DB analytical storage](query-cosmos-db-analytical-store.md) using [Synapse Link](../../cosmos-db/synapse-link.md?bc=%2fazure%2fsynapse-analytics%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fsynapse-analytics%2ftoc.json). |
| **Apache Spark tables (in workspace)** | No | Yes, serverless pool can read PARQUET and CSV tables using [metadata synchronization](develop-storage-files-spark-tables.md). |
-| **Apache Spark tables (remote)** | No | No, serverless pool can access only the PARQUET and CSV tables that are [created in Apache Spark pools in the same Synapse workspace](develop-storage-files-spark-tables.md). |
-| **Databricks tables (remote)** | No | No, serverless pool can access only the PARQUET and CSV tables that are [created in Apache Spark pools in the same Synapse workspace](develop-storage-files-spark-tables.md). |
+| **Apache Spark tables (remote)** | No | No, serverless pool can access only the PARQUET and CSV tables that are [created in Apache Spark pools in the same Synapse workspace](develop-storage-files-spark-tables.md). However, you can manually create an external table that references the external Spark table's location. |
+| **Databricks tables (remote)** | No | No, serverless pool can access only the PARQUET and CSV tables that are [created in Apache Spark pools in the same Synapse workspace](develop-storage-files-spark-tables.md). However, you can manually create an external table that references the Databricks table's location. |
## Data formats
virtual-desktop Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/whats-new.md
The Azure Virtual Desktop agent updates at least once per month.
Here's what's changed in the Azure Virtual Desktop Agent:
+- Version 1.0.4009.1500: This update was released in January 2022 and includes the following changes:
+  - Added logging to better capture agent update telemetry.
+  - Updated the agent's Azure Instance Metadata Service health check to be Azure Stack HCI-friendly.
- Version 1.0.3855.1400: This update was released December 2021 and has the following changes:
  - Fixes an issue that caused an unhandled exception.
  - This version now supports Azure Stack HCI by retrieving VM metadata from the Azure Arc service.
virtual-machines Create Upload Generic https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/linux/create-upload-generic.md
This article focuses on general guidance for running your Linux distribution on
6. Linux kernel versions earlier than 2.6.37 don't support NUMA on Hyper-V with larger VM sizes. This issue primarily impacts older distributions using the upstream Red Hat 2.6.32 kernel, and was fixed in Red Hat Enterprise Linux (RHEL) 6.6 (kernel-2.6.32-504). Systems running custom kernels older than 2.6.37, or RHEL-based kernels older than 2.6.32-504, must set the boot parameter `numa=off` on the kernel command line in grub.conf. For more information, see [Red Hat KB 436883](https://access.redhat.com/solutions/436883).
7. Don't configure a swap partition on the OS disk. The Linux agent can be configured to create a swap file on the temporary resource disk, as described in the following steps.
-8. All VHDs on Azure must have a virtual size aligned to 1 MB. When converting from a raw disk to VHD you must ensure that the raw disk size is a multiple of 1 MB before conversion, as described in the following steps.
+8. All VHDs on Azure must have a virtual size aligned to 1 MB (1024 &times; 1024 bytes). When converting from a raw disk to VHD you must ensure that the raw disk size is a multiple of 1 MB before conversion, as described in the following steps.
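As a minimal illustrative sketch (the size value and names here are invented for the example, not taken from the article), rounding a raw disk size up to the next 1-MB multiple looks like this:

```go
package main

import "fmt"

func main() {
	const mb = int64(1024 * 1024) // 1 MB, the alignment Azure requires for VHDs

	rawSize := int64(10737418245)             // hypothetical raw disk size in bytes
	aligned := ((rawSize + mb - 1) / mb) * mb // round up to the next 1 MB multiple
	fmt.Printf("Resize the raw disk to %d bytes before converting to VHD.\n", aligned)
}
```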
> [!NOTE]
> Make sure the **'udf'** (cloud-init >= 21.2) and **'vfat'** modules are enabled. Blocklisting the udf module will cause a provisioning failure, and blocklisting the vfat module will cause both provisioning and boot failures. **_Cloud-init versions earlier than 21.2 are not affected and do not require this change._**
virtual-machines Managed Disks Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/managed-disks-overview.md
description: Overview of Azure managed disks, which handle the storage accounts
Previously updated : 02/01/2022
Last updated : 02/03/2022
A data disk is a managed disk that's attached to a virtual machine to store appl
Every virtual machine has one attached operating system disk. That OS disk has a pre-installed OS, which was selected when the VM was created. This disk contains the boot volume.
-This disk has a maximum capacity of 4,095 GiB, however, most deployments use [master boot record (MBR)](https://wikipedia.org/wiki/Master_boot_record) by default. MBR limits the usable size to 2 TiB. If you need more than 2 TiB, create and attach [data disks](#data-disk) and use them for data storage. If you need to store data on the OS disk and require the additional space, [convert it to GUID Partition Table](/windows-server/storage/disk-management/change-an-mbr-disk-into-a-gpt-disk) (GPT). To learn about the differences between MBR and GPT on Windows deployments, see [Windows and GPT FAQ](/windows-hardware/manufacture/desktop/windows-and-gpt-faq?view=windows-11).
+This disk has a maximum capacity of 4,095 GiB. However, many operating systems are partitioned with [master boot record (MBR)](https://wikipedia.org/wiki/Master_boot_record) by default. MBR limits the usable size to 2 TiB (2^32 sectors of 512 bytes each). If you need more than 2 TiB, create and attach [data disks](#data-disk) and use them for data storage. If you need to store data on the OS disk and require the additional space, [convert it to GUID Partition Table](/windows-server/storage/disk-management/change-an-mbr-disk-into-a-gpt-disk) (GPT). To learn about the differences between MBR and GPT on Windows deployments, see [Windows and GPT FAQ](/windows-hardware/manufacture/desktop/windows-and-gpt-faq?view=windows-11).
### Temporary disk
virtual-network Service Tags Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-network/service-tags-overview.md
By default, service tags reflect the ranges for the entire cloud. Some service t
| **MicrosoftCloudAppSecurity** | Microsoft Defender for Cloud Apps. | Outbound | No | No |
| **MicrosoftContainerRegistry** | Container registry for Microsoft container images. <br/><br/>**Note**: This tag has a dependency on the **AzureFrontDoor.FirstParty** tag. | Outbound | Yes | Yes |
| **PowerBI** | Power BI. | Both | No | No |
-| **PowerPlatformInfra** | This tag represents the IP addresses used by the infrastructure to host Power Platform services. | Outbound | Yes | No |
+| **PowerPlatformInfra** | This tag represents the IP addresses used by the infrastructure to host Power Platform services. | Outbound | Yes | Yes |
| **PowerQueryOnline** | Power Query Online. | Both | No | No |
| **ServiceBus** | Azure Service Bus traffic that uses the Premium service tier. | Outbound | Yes | Yes |
| **ServiceFabric** | Azure Service Fabric.<br/><br/>**Note**: This tag represents the Service Fabric service endpoint for control plane per region. This enables customers to perform management operations for their Service Fabric clusters from their VNET (endpoint eg. https:// westus.servicefabric.azure.com). | Both | No | No |