Updates from: 03/30/2022 01:19:04
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Partner Eid Me https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-eid-me.md
The following architecture diagram shows the implementation.
[Contact eID-Me](https://bluink.ca/contact) and configure a test or production environment to set up Azure AD B2C tenants as a Relying Party. Tenants must determine what identity claims they'll need from their consumers as they sign up using eID-Me.
-## Integrate eID-Me with Azure AD B2C
-
-### Step 1 - Configure an application in eID-Me
+## Step 1: Configure an application in eID-Me
To configure your tenant application as a Relying Party in eID-Me, the following information should be supplied to eID-Me:
eID-Me will provide a Client ID and a Client Secret once the Relying Party has b
::: zone pivot="b2c-user-flow"
-### Step 2 - Add a new Identity provider in Azure AD B2C
+## Step 2: Add a new Identity provider in Azure AD B2C
1. Sign in to the [Azure portal](https://portal.azure.com/#home) as the global administrator of your Azure AD B2C tenant.
6. Select **Add**.
-### Step 3 - Configure an Identity provider
+## Step 3: Configure an Identity provider
To configure an identity provider, follow these steps:
6. Select **Save** to complete the setup for your new OIDC Identity provider.
-### Step 4 - Configure multi-factor authentication
+## Step 4: Configure multi-factor authentication
eID-Me is a decentralized digital identity with strong two-factor user authentication built in. Since eID-Me is already a multi-factor authenticator, you don't need to configure any multi-factor authentication settings in your user flows when using eID-Me. eID-Me offers a fast and simple user experience, which also eliminates the need for any additional passwords.
-### Step 5 - Create a user flow policy
+## Step 5: Create a user flow policy
You should now see eID-Me as a new OIDC Identity provider listed within your B2C identity providers.
For additional information, review the following articles:
>[!NOTE]
>In Azure AD B2C, [**custom policies**](./user-flow-overview.md) are designed primarily to address complex scenarios. For most scenarios, we recommend that you use built-in [**user flows**](./user-flow-overview.md).
-### Step 2 - Create a policy key
+## Step 2: Create a policy key
Store the client secret that you previously recorded in your Azure AD B2C tenant.
11. Select **Create**.
-### Step 3- Configure eID-Me as an Identity provider
+## Step 3: Configure eID-Me as an Identity provider
To enable users to sign in using eID-Me decentralized identity, you need to define eID-Me as a claims provider that Azure AD B2C can communicate with through an endpoint. The endpoint provides a set of claims that are used by Azure AD B2C to verify a specific user has authenticated using a digital ID available on their device, proving the user's identity.
There are additional identity claims that eID-Me supports and can be added.
```
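Surfacing an additional eID-Me claim typically means declaring it in the policy's **ClaimsSchema** and mapping it in the technical profile's **OutputClaims**. A minimal sketch, assuming a hypothetical `over18` claim name; check eID-Me's documentation for the exact claim names it issues:

```xml
<!-- BuildingBlocks/ClaimsSchema: declare the claim so the policy can carry it. -->
<ClaimType Id="over18">
  <DisplayName>Age over 18</DisplayName>
  <DataType>string</DataType>
</ClaimType>

<!-- Technical profile OutputClaims: map the claim from the eID-Me token. -->
<OutputClaim ClaimTypeReferenceId="over18" PartnerClaimType="over18" />
```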
-### Step 4 - Add a user journey
+## Step 4: Add a user journey
At this point, the identity provider has been set up, but it's not yet available in any of the sign-in pages. If you don't have your own custom user journey, create a duplicate of an existing template user journey, otherwise continue to the next step.
5. Rename the ID of the user journey. For example, ID=`CustomSignUpSignIn`
-### Step 5 - Add the identity provider to a user journey
+## Step 5: Add the identity provider to a user journey
Now that you have a user journey, add the new identity provider to the user journey.
```
-### Step 6 - Configure the relying party policy
+## Step 6: Configure the relying party policy
The relying party policy specifies the user journey which Azure AD B2C will execute. You can also control what claims are passed to your application by adjusting the **OutputClaims** element of the **eID-Me-OIDC-Signup** TechnicalProfile element. In this sample, the application will receive the user's postal code, locality, region, IAL, portrait, middle name, and birth date. It also receives the boolean **signupConditionsSatisfied** claim, which indicates whether an account has been created or not:
```
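A minimal sketch of such a relying party technical profile, assuming a journey ID of `CustomSignUpSignIn` and illustrative claim type names; align both with the ClaimsSchema in your policy:

```xml
<RelyingParty>
  <DefaultUserJourney ReferenceId="CustomSignUpSignIn" />
  <TechnicalProfile Id="eID-Me-OIDC-Signup">
    <DisplayName>PolicyProfile</DisplayName>
    <Protocol Name="OpenIdConnect" />
    <OutputClaims>
      <!-- Claim type names below are illustrative, not the exact IDs from the sample. -->
      <OutputClaim ClaimTypeReferenceId="postalCode" />
      <OutputClaim ClaimTypeReferenceId="locality" />
      <OutputClaim ClaimTypeReferenceId="region" />
      <OutputClaim ClaimTypeReferenceId="IAL" />
      <OutputClaim ClaimTypeReferenceId="portrait" />
      <OutputClaim ClaimTypeReferenceId="middleName" />
      <OutputClaim ClaimTypeReferenceId="birthDate" />
      <OutputClaim ClaimTypeReferenceId="signupConditionsSatisfied" DefaultValue="false" />
    </OutputClaims>
    <SubjectNamingInfo ClaimType="sub" />
  </TechnicalProfile>
</RelyingParty>
```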
-### Step 7 - Upload the custom policy
+## Step 7: Upload the custom policy
1. Sign in to the [Azure portal](https://portal.azure.com/#home).
5. Under Policies, select **Identity Experience Framework**. Select **Upload Custom Policy**, and then upload the two policy files that you changed, in the following order: the extension policy, for example `TrustFrameworkExtensions.xml`, then the relying party policy, such as `SignUp.xml`.
-### Step 8 - Test your custom policy
+## Step 8: Test your custom policy
1. Select your relying party policy, for example `B2C_1A_signup`.
active-directory-b2c Partner Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-gallery.md
Microsoft partners with the following ISVs for MFA and Passwordless authenticati
| ![Screenshot of a twilio logo.](./medi) provides multiple solutions to enable MFA through SMS one-time password (OTP), time-based one-time password (TOTP), and push notifications, and to comply with SCA requirements for PSD2. |
| ![Screenshot of a typingDNA logo](./medi) enables strong customer authentication by analyzing a user's typing pattern. It helps companies enable a silent MFA and comply with SCA requirements for PSD2. |
| ![Screenshot of a whoiam logo](./medi) is a Branded Identity Management System (BRIMS) application that enables organizations to verify their user base by voice, SMS, and email. |
+| ![Screenshot of an xID logo](./medi) is a digital ID solution that provides users with passwordless, secure, multifactor authentication. xID-authenticated users have their identities verified by a My Number Card, the digital ID card issued by the Japanese government. Organizations can get users' verified Personal Identification Information (PII) through the xID API. |
## Role-based access control
active-directory-b2c Partner Xid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-xid.md
+
+ Title: Configure Azure Active Directory B2C with xID
+
+description: Configure Azure Active Directory B2C with xID for passwordless authentication
+ Last updated : 03/18/2022
+# Configure xID with Azure Active Directory B2C for passwordless authentication
+
+In this sample tutorial, learn how to integrate Azure Active Directory B2C (Azure AD B2C) authentication with the xID digital ID solution. The xID app provides users with passwordless, secure, multifactor authentication. xID-authenticated users have their identities verified by a My Number Card, the digital ID card issued by the Japanese government. Organizations can get users' verified Personal Identification Information (customer content) through the xID API. Furthermore, the xID app generates a private key in a secure area within the user's mobile device, which can be used as a digital signing device.
++
+## Prerequisites
+
+To get started, you'll need:
+
+- An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+
+- An [Azure AD B2C tenant](./tutorial-create-tenant.md) that's linked to your Azure subscription.
+
+- Your xID client information, provided by xID inc. [Contact xID](https://xid.inc/contact-us) for the client information, which should include the following parameters:
+ - Client ID
+ - Client Secret
+ - Redirect URL
+ - Scopes
+- Download and install the [xID app](https://x-id.me/) on your mobile device.
+ - To complete registration, you'll need your own My Number Card.
+ - If you use the UAT version of the API, you'll also need the UAT version of the xID app. To install the UAT app, [contact xID inc](https://xid.inc/contact-us).
+
+## Scenario description
+
+The following architecture diagram shows the implementation.
+
+![image shows the architecture diagram](./media/partner-xid/partner-xid-architecture-diagram.png)
+
+| Step | Description |
+|:--|:--|
+| 1. |User opens Azure AD B2C's sign-in page, and then signs in or signs up by entering their username. |
+| 2. |Azure AD B2C redirects the user to the xID authorize API endpoint using an OpenID Connect (OIDC) request. An OIDC metadata endpoint is available that contains information about the endpoints. The xID identity provider (IdP) redirects the user to the xID authorization sign-in page, which allows the user to fill in or select their email address. |
+| 3. |The xID IdP sends a push notification to the user's mobile device. |
+| 4. |The user opens the xID app and checks the request, then enters the PIN or authenticates with their biometrics. If the PIN or biometric is successfully verified, the xID app activates the private key and creates an electronic signature. |
+| 5. |The xID app sends the signature to the xID IdP for verification. |
+| 6. |The xID IdP shows a consent screen to the user, requesting authorization to give their personal information to the service they're signing in to. |
+| 7. |The xID IdP returns the OAuth authorization code to Azure AD B2C. |
+| 8. |Using the authorization code, Azure AD B2C sends a token request. |
+| 9. |The xID IdP checks the token request and, if it's still valid, returns the OAuth access token and the ID token containing the requested user's identifier and email address. |
+| 10. |In addition, if the user's customer content is needed, Azure AD B2C calls the xID userdata API. |
+| 11. |The xID userdata API returns the user's encrypted customer content. The user can decrypt it with their private key, which they created when they requested the xID client information. |
+| 12. | The user is either granted or denied access to the customer application based on the verification results. |
++
+## Onboard with xID
+
+Request API documents by filling out [the form](https://xid.inc/contact-us). In the message field, indicate that you would like to onboard with Azure AD B2C. An xID sales representative will contact you. Follow the instructions provided in the xID API document and request an xID API client. The xID tech team will send the client information to you in 3-4 working days.
+
+## Step 1: Create a xID policy key
+
+Store the client secret that you received from xID in your Azure AD B2C tenant.
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+2. Make sure you're using the directory that contains your Azure AD B2C tenant:
+
+ a. Select the **Directories + subscriptions** icon in the portal toolbar.
+
+ b. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the Directory name list, and then select **Switch**.
+
+3. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **Azure AD B2C**.
+
+4. On the Overview page, select **Identity Experience Framework**.
+
+5. Select **Policy Keys** and then select **Add**.
+
+6. For **Options**, choose `Manual`.
+
+7. Enter a **Name** for the policy key. For example, `X-IDClientSecret`. The prefix `B2C_1A_` is added automatically to the name of your key.
+
+8. In **Secret**, enter your client secret that you previously received from xID.
+
+9. For **Key usage**, select `Signature`.
+
+10. Select **Create**.
+
+>[!NOTE]
+>In Azure AD B2C, [**custom policies**](./user-flow-overview.md) are designed primarily to address complex scenarios.
+
+## Step 2: Configure xID as an Identity provider
+
+To enable users to sign in using xID, you need to define xID as a claims provider that Azure AD B2C can communicate with through an endpoint. The endpoint provides a set of claims that are used by Azure AD B2C to verify that a specific user has authenticated using a digital identity available on their device, proving the user's identity.
+
+Use the following steps to add xID as a claims provider:
+
+1. Get the custom policy starter packs from GitHub, then update the XML files in the **LocalAccounts** starter pack with your Azure AD B2C tenant name:
+
+ i. Download the [.zip file](https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack/archive/master.zip) or [clone the repository](https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack).
+
+ ii. In all of the files in the **LocalAccounts** directory, replace the string `yourtenant` with the name of your Azure AD B2C tenant. For example, if the name of your B2C tenant is `contoso`, all instances of `yourtenant.onmicrosoft.com` become `contoso.onmicrosoft.com`.
+
+2. Open `LocalAccounts/TrustFrameworkExtensions.xml`.
+
+3. Find the **ClaimsProviders** element. If it doesn't exist, add it under the root element.
+
+4. Add a new **ClaimsProvider** similar to the one shown below:
+
+ ```xml
+
+ <ClaimsProvider>
+ <Domain>X-ID</Domain>
+ <DisplayName>X-ID</DisplayName>
+ <TechnicalProfiles>
+ <TechnicalProfile Id="X-ID-Oauth2">
+ <DisplayName>X-ID</DisplayName>
+ <Description>Login with your X-ID account</Description>
+ <Protocol Name="OAuth2" />
+ <Metadata>
+ <Item Key="METADATA">https://oidc-uat.x-id.io/.well-known/openid-configuration</Item>
+ <!-- Update the Client ID below to the X-ID Application ID -->
+ <Item Key="client_id">00000000-0000-0000-0000-000000000000</Item>
+ <Item Key="response_types">code</Item>
+ <Item Key="scope">openid verification</Item>
+ <Item Key="response_mode">query</Item>
+ <Item Key="HttpBinding">POST</Item>
+ <Item Key="UsePolicyInRedirectUri">false</Item>
+ <Item Key="DiscoverMetadataByTokenIssuer">true</Item>
+ <Item Key="token_endpoint_auth_method">client_secret_basic</Item>
+ <Item Key="ClaimsEndpoint">https://oidc-uat.x-id.io/userinfo</Item>
+ </Metadata>
+ <CryptographicKeys>
+ <Key Id="client_secret" StorageReferenceId="B2C_1A_X-IDClientSecret" />
+ </CryptographicKeys>
+ <OutputClaims>
+ <OutputClaim ClaimTypeReferenceId="issuerUserId" PartnerClaimType="sub" />
+ <OutputClaim ClaimTypeReferenceId="tenantId" PartnerClaimType="tid" />
+ <OutputClaim ClaimTypeReferenceId="email" />
+ <OutputClaim ClaimTypeReferenceId="sid" />
+ <OutputClaim ClaimTypeReferenceId="userdataid" />
+ <OutputClaim ClaimTypeReferenceId="X-ID_verified" />
+ <OutputClaim ClaimTypeReferenceId="email_verified" />
+ <OutputClaim ClaimTypeReferenceId="authenticationSource" DefaultValue="socialIdpAuthentication" AlwaysUseDefaultValue="true" />
+ <OutputClaim ClaimTypeReferenceId="identityProvider" PartnerClaimType="iss" DefaultValue="https://oidc-uat.x-id.io/" />
+ <OutputClaim ClaimTypeReferenceId="identityProviderAccessToken" PartnerClaimType="{oauth2:access_token}" />
+ </OutputClaims>
+ <OutputClaimsTransformations>
+ <OutputClaimsTransformation ReferenceId="CreateRandomUPNUserName" />
+ <OutputClaimsTransformation ReferenceId="CreateUserPrincipalName" />
+ <OutputClaimsTransformation ReferenceId="CreateAlternativeSecurityId" />
+ <OutputClaimsTransformation ReferenceId="CreateSubjectClaimFromAlternativeSecurityId" />
+ </OutputClaimsTransformations>
+ <UseTechnicalProfileForSessionManagement ReferenceId="SM-SocialLogin" />
+ </TechnicalProfile>
+
+ <TechnicalProfile Id="X-ID-Userdata">
+ <DisplayName>Userdata (Personal Information)</DisplayName>
+ <Protocol Name="Proprietary" Handler="Web.TPEngine.Providers.RestfulProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
+ <Metadata>
+ <Item Key="ServiceUrl">https://api-uat.x-id.io/v4/verification/userdata</Item>
+ <Item Key="SendClaimsIn">Header</Item>
+ <Item Key="AuthenticationType">Bearer</Item>
+ <Item Key="UseClaimAsBearerToken">identityProviderAccessToken</Item>
+ <!-- <Item Key="AllowInsecureAuthInProduction">true</Item> -->
+ <Item Key="DebugMode">true</Item>
+ <Item Key="DefaultUserMessageIfRequestFailed">Cannot process your request right now, please try again later.</Item>
+ </Metadata>
+ <InputClaims>
+ <!-- Claims sent to your REST API -->
+ <InputClaim ClaimTypeReferenceId="identityProviderAccessToken" />
+ </InputClaims>
+ <OutputClaims>
+ <!-- Claims parsed from your REST API -->
+ <OutputClaim ClaimTypeReferenceId="last_name" PartnerClaimType="givenName" />
+ <OutputClaim ClaimTypeReferenceId="first_name" PartnerClaimType="surname" />
+ <OutputClaim ClaimTypeReferenceId="previous_name" />
+ <OutputClaim ClaimTypeReferenceId="year" />
+ <OutputClaim ClaimTypeReferenceId="month" />
+ <OutputClaim ClaimTypeReferenceId="date" />
+ <OutputClaim ClaimTypeReferenceId="prefecture" />
+ <OutputClaim ClaimTypeReferenceId="city" />
+ <OutputClaim ClaimTypeReferenceId="address" />
+ <OutputClaim ClaimTypeReferenceId="sub_char_common_name" />
+ <OutputClaim ClaimTypeReferenceId="sub_char_previous_name" />
+ <OutputClaim ClaimTypeReferenceId="sub_char_address" />
+ <OutputClaim ClaimTypeReferenceId="gender" />
+ <OutputClaim ClaimTypeReferenceId="verified_at" />
+ </OutputClaims>
+ <UseTechnicalProfileForSessionManagement ReferenceId="SM-Noop" />
+ </TechnicalProfile>
+ </TechnicalProfiles>
+ </ClaimsProvider>
+
+ ```
+
+5. Set **client_id** to the xID Application ID that you received from xID.
+
+6. Save the changes. The xID-specific claim types these profiles reference must also be defined somewhere in your policy files; see the sketch after this step.
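+
+   The technical profiles above reference several claim types that aren't defined in the starter pack (for example, `sid`, `userdataid`, `X-ID_verified`, and the userdata fields). A minimal sketch of declarations you could add under **BuildingBlocks/ClaimsSchema** in `TrustFrameworkExtensions.xml`, assuming string data types; verify the exact names and types against the xID API document:
+
+   ```xml
+   <BuildingBlocks>
+     <ClaimsSchema>
+       <!-- Illustrative declarations; data types are assumed. -->
+       <ClaimType Id="userdataid">
+         <DisplayName>xID userdata ID</DisplayName>
+         <DataType>string</DataType>
+       </ClaimType>
+       <ClaimType Id="X-ID_verified">
+         <DisplayName>xID verified</DisplayName>
+         <DataType>string</DataType>
+       </ClaimType>
+       <ClaimType Id="last_name">
+         <DisplayName>Last name</DisplayName>
+         <DataType>string</DataType>
+       </ClaimType>
+       <!-- Declare sid, first_name, previous_name, year, month, date, prefecture,
+            city, address, sub_char_common_name, sub_char_previous_name,
+            sub_char_address, gender, and verified_at the same way. -->
+     </ClaimsSchema>
+   </BuildingBlocks>
+   ```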
+
+## Step 3: Add a user journey
+
+At this point, you've set up the identity provider, but it's not yet available in any of the sign-in pages. If you have your own custom user journey, continue to [step 4](#step-4-add-the-identity-provider-to-a-user-journey); otherwise, create a duplicate of an existing template user journey as follows:
+
+1. Open the `TrustFrameworkBase.xml` file from the starter pack.
+
+2. Find and copy the entire contents of the **UserJourney** element that includes `Id="SignUpOrSignIn"`.
+
+3. Open the `TrustFrameworkExtensions.xml` and find the UserJourneys element. If the element doesn't exist, add one.
+
+4. Paste the entire content of the UserJourney element that you copied as a child of the UserJourneys element.
+
+5. Rename the ID of the user journey. For example, `ID=CustomSignUpSignIn`
+
+## Step 4: Add the identity provider to a user journey
+
+Now that you have a user journey, add the new identity provider to the user journey.
+
+1. Find the orchestration step element that includes Type=`CombinedSignInAndSignUp`, or Type=`ClaimsProviderSelection` in the user journey. It's usually the first orchestration step. The **ClaimsProviderSelections** element contains a list of identity providers that a user can sign in with. The order of the elements controls the order of the sign-in buttons presented to the user. Add a **ClaimsProviderSelection** XML element. Set the value of **TargetClaimsExchangeId** to a friendly name, such as `X-IDExchange`.
+
+2. In the next orchestration step, add a **ClaimsExchange** element. Set the **Id** to the value of the target claims exchange ID to link the xID button to the `X-ID-SignIn` action. Update the value of **TechnicalProfileReferenceId** to the ID of the technical profile you created earlier.
+
+ The following XML demonstrates orchestration steps of a user journey with the identity provider:
+
+ ```xml
+
+ <UserJourney Id="X-IDSignUpOrSignIn">
+ <OrchestrationSteps>
+
+ <OrchestrationStep Order="1" Type="CombinedSignInAndSignUp" ContentDefinitionReferenceId="api.signuporsignin">
+ <ClaimsProviderSelections>
+ <ClaimsProviderSelection TargetClaimsExchangeId="X-IDExchange" />
+ </ClaimsProviderSelections>
+ </OrchestrationStep>
+
+ <OrchestrationStep Order="2" Type="ClaimsExchange">
+ <ClaimsExchanges>
+ <ClaimsExchange Id="X-IDExchange" TechnicalProfileReferenceId="X-ID-Oauth2" />
+ </ClaimsExchanges>
+ </OrchestrationStep>
+
+ <OrchestrationStep Order="3" Type="ClaimsExchange">
+ <ClaimsExchanges>
+ <ClaimsExchange Id="X-ID-Userdata" TechnicalProfileReferenceId="X-ID-Userdata" />
+ </ClaimsExchanges>
+ </OrchestrationStep>
+
+ <!-- For social IDP authentication, attempt to find the user account in the directory. -->
+ <OrchestrationStep Order="4" Type="ClaimsExchange">
+ <ClaimsExchanges>
+ <ClaimsExchange Id="AADUserReadUsingAlternativeSecurityId" TechnicalProfileReferenceId="AAD-UserReadUsingAlternativeSecurityId-NoError" />
+ </ClaimsExchanges>
+ </OrchestrationStep>
+
+ <!-- Show self-asserted page only if the directory does not have the user account already (i.e. we do not have an objectId). -->
+ <OrchestrationStep Order="5" Type="ClaimsExchange">
+ <Preconditions>
+ <Precondition Type="ClaimsExist" ExecuteActionsIf="true">
+ <Value>objectId</Value>
+ <Action>SkipThisOrchestrationStep</Action>
+ </Precondition>
+ </Preconditions>
+ <ClaimsExchanges>
+ <ClaimsExchange Id="SelfAsserted-Social" TechnicalProfileReferenceId="SelfAsserted-Social" />
+ </ClaimsExchanges>
+ </OrchestrationStep>
+
+    <!-- The previous step (SelfAsserted-Social) could have been skipped if there were no attributes to collect
+         from the user. So, in that case, create the user in the directory if one does not already exist
+         (verified using objectId, which would be set from the last step if the account was created in the directory). -->
+ <OrchestrationStep Order="6" Type="ClaimsExchange">
+ <Preconditions>
+ <Precondition Type="ClaimsExist" ExecuteActionsIf="true">
+ <Value>objectId</Value>
+ <Action>SkipThisOrchestrationStep</Action>
+ </Precondition>
+ </Preconditions>
+ <ClaimsExchanges>
+ <ClaimsExchange Id="AADUserWrite" TechnicalProfileReferenceId="AAD-UserWriteUsingAlternativeSecurityId" />
+ </ClaimsExchanges>
+ </OrchestrationStep>
+
+ <OrchestrationStep Order="7" Type="SendClaims" CpimIssuerTechnicalProfileReferenceId="JwtIssuer" />
+
+ </OrchestrationSteps>
+ <ClientDefinition ReferenceId="DefaultWeb" />
+ </UserJourney>
+
+ ```
+
+## Step 5: Upload the custom policy
+
+1. Sign in to the [Azure portal](https://portal.azure.com/#home).
+
+2. Make sure you're using the directory that contains your Azure AD B2C tenant:
+
+ a. Select the **Directories + subscriptions** icon in the portal toolbar.
+
+ b. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+
+3. In the [Azure portal](https://portal.azure.com/#home), search for and select **Azure AD B2C**.
+
+4. Under Policies, select **Identity Experience Framework**.
+
+5. Select **Upload Custom Policy**, and then upload the files in the **LocalAccounts** starter pack in the following order: the extension policy, for example `TrustFrameworkExtensions.xml`, then the relying party policy, such as `SignUpSignIn.xml`. A sketch of the relying party element in that file follows.
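+
+   The relying party policy file is what points Azure AD B2C at your custom journey. A minimal sketch of the **RelyingParty** element you'd expect in `SignUpSignIn.xml`, assuming the journey ID `CustomSignUpSignIn` from step 3; the starter pack file already contains the surrounding `TrustFrameworkPolicy` and `BasePolicy` elements:
+
+   ```xml
+   <RelyingParty>
+     <!-- Reference the user journey you renamed in step 3. -->
+     <DefaultUserJourney ReferenceId="CustomSignUpSignIn" />
+     <TechnicalProfile Id="PolicyProfile">
+       <DisplayName>PolicyProfile</DisplayName>
+       <Protocol Name="OpenIdConnect" />
+       <OutputClaims>
+         <OutputClaim ClaimTypeReferenceId="displayName" />
+         <OutputClaim ClaimTypeReferenceId="email" />
+         <OutputClaim ClaimTypeReferenceId="objectId" PartnerClaimType="sub" />
+       </OutputClaims>
+       <SubjectNamingInfo ClaimType="sub" />
+     </TechnicalProfile>
+   </RelyingParty>
+   ```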
+
+## Step 6: Test your custom policy
+
+1. In your Azure AD B2C tenant, under **Policies**, select **Identity Experience Framework**.
+
+2. Under **Custom policies**, select **CustomSignUpSignIn**.
+
+3. For **Application**, select the web application that you previously registered as part of this article's prerequisites. The **Reply URL** should show `https://jwt.ms`.
+
+4. Select **Run now**. Your browser should be redirected to the xID sign-in page.
+
+5. If the sign-in process is successful, your browser is redirected to `https://jwt.ms`, which displays the contents of the token returned by Azure AD B2C.
+
+## Next steps
+
+For additional information, review the following articles:
+
+- [Custom policies in Azure AD B2C](custom-policy-overview.md)
+
+- [Get started with custom policies in Azure AD B2C](tutorial-create-user-flows.md?pivots=b2c-custom-policy)
active-directory Concept Continuous Access Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-continuous-access-evaluation.md
CAE only has insight into [IP-based named locations](../conditional-access/locat
### Named location limitations
-When the sum of all IP ranges specified in location policies exceeds 5,000 for policies that will be enforced on the Resource provider, user change location flow isn't enforced. In this case, Azure AD will issue a one-hour CAE token and won't enforce client location change; security is improved compared to traditional one-hour tokens since we're still evaluating the [other events](#critical-event-evaluation) besides client location change events.
+When the sum of all IP ranges specified in location policies exceeds 5,000, the user change location flow won't be enforced by CAE in real time. In this case, Azure AD will issue a one-hour CAE token. CAE will continue enforcing [all other events and policies](#critical-event-evaluation) besides client location change events. With this change, you still maintain a stronger security posture compared to traditional one-hour tokens, since [other events](#critical-event-evaluation) will be evaluated in near real time.
### Office and Web Account Manager settings
active-directory Howto Conditional Access Insights Reporting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-insights-reporting.md
To configure a Conditional Access policy in report-only mode:
In order to access the workbook, you need the proper Azure AD permissions as well as Log Analytics workspace permissions. To test whether you have the proper workspace permissions, run a sample Log Analytics query:
1. Sign in to the **Azure portal**.
-1. Browse to **Azure Active Directory** > **Logs**.
+1. Browse to **Azure Active Directory** > **Log Analytics**.
1. Type `SigninLogs` into the query box and select **Run**.
1. If the query does not return any results, your workspace may not have been configured correctly.
active-directory Howto Conditional Access Policy All Users Mfa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-all-users-mfa.md
Previously updated : 11/05/2021 Last updated : 03/28/2022
Conditional Access policies are powerful tools, we recommend excluding the follo
* **Emergency access** or **break-glass** accounts to prevent tenant-wide account lockout. In the unlikely scenario all administrators are locked out of your tenant, your emergency-access administrative account can be used to log into the tenant and take steps to recover access.
  * More information can be found in the article, [Manage emergency access accounts in Azure AD](../roles/security-emergency-access.md).
-* **Service accounts** and **service principals**, such as the Azure AD Connect Sync Account. Service accounts are non-interactive accounts that aren't tied to any particular user. They're normally used by back-end services allowing programmatic access to applications, but are also used to sign in to systems for administrative purposes. Service accounts like these should be excluded since MFA can't be completed programmatically. Calls made by service principals are not blocked by Conditional Access.
+* **Service accounts** and **service principals**, such as the Azure AD Connect Sync Account. Service accounts are non-interactive accounts that aren't tied to any particular user. They're normally used by back-end services allowing programmatic access to applications, but are also used to sign in to systems for administrative purposes. Service accounts like these should be excluded since MFA can't be completed programmatically. Calls made by service principals aren't blocked by Conditional Access.
* If your organization has these accounts in use in scripts or code, consider replacing them with [managed identities](../managed-identities-azure-resources/overview.md). As a temporary workaround, you can exclude these specific accounts from the baseline policy.

## Application exclusions

Organizations may have many cloud applications in use. Not all of those applications may require equal security. For example, the payroll and attendance applications may require MFA but the cafeteria probably doesn't. Administrators can choose to exclude specific applications from their policy.
+### Subscription activation
+
+Organizations that use the [Subscription Activation](/windows/deployment/windows-10-subscription-activation) feature to enable users to "step-up" from one version of Windows to another may want to exclude the Universal Store Service APIs and Web Application, AppID 45a330b1-b1ec-4cc1-9161-9f03992aa49f, from their all users, all cloud apps MFA policy.
## Template deployment

Organizations can choose to deploy this policy using the steps outlined below or using the [Conditional Access templates (Preview)](concept-conditional-access-policy-common.md#conditional-access-templates-preview).
active-directory Howto Conditional Access Policy Compliant Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-compliant-device.md
Previously updated : 11/05/2021 Last updated : 03/28/2022
After confirming your settings using [report-only mode](howto-conditional-access
On Windows 7, iOS, Android, macOS, and some third-party web browsers, Azure AD identifies the device using a client certificate that is provisioned when the device is registered with Azure AD. When a user first signs in through the browser, the user is prompted to select the certificate. The end user must select this certificate before they can continue to use the browser.
+#### Subscription activation
+
+Organizations that use the [Subscription Activation](/windows/deployment/windows-10-subscription-activation) feature to enable users to "step-up" from one version of Windows to another may want to exclude the Universal Store Service APIs and Web Application, AppID 45a330b1-b1ec-4cc1-9161-9f03992aa49f, from their device compliance policy.
## Next steps

[Conditional Access common policies](concept-conditional-access-policy-common.md)
active-directory Howto Create Self Signed Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-create-self-signed-certificate.md
Use the certificate you create using this method to authenticate from an applica
In an elevated PowerShell prompt, run the following command and leave the PowerShell console session open. Replace `{certificateName}` with the name that you wish to give to your certificate.

```powershell
-$cert = New-SelfSignedCertificate -Subject "CN={certificateName}" -CertStoreLocation "Cert:\CurrentUser\My" -KeyExportPolicy Exportable -KeySpec Signature -KeyLength 2048 -KeyAlgorithm RSA -HashAlgorithm SHA256 ## Replace {certificateName}
+$certname = "{certificateName}" ## Replace {certificateName}
+$cert = New-SelfSignedCertificate -Subject "CN=$certname" -CertStoreLocation "Cert:\CurrentUser\My" -KeyExportPolicy Exportable -KeySpec Signature -KeyLength 2048 -KeyAlgorithm RSA -HashAlgorithm SHA256
```
The **$cert** variable in the previous command stores your certificate in the cu
```powershell
-Export-Certificate -Cert $cert -FilePath "C:\Users\admin\Desktop\{certificateName}.cer" ## Specify your preferred location and replace {certificateName}
+Export-Certificate -Cert $cert -FilePath "C:\Users\admin\Desktop\$certname.cer" ## Specify your preferred location
```
Use this option to create a certificate and its private key if your application
In an elevated PowerShell prompt, run the following command and leave the PowerShell console session open. Replace `{certificateName}` with the name that you wish to give your certificate.

```powershell
-$cert = New-SelfSignedCertificate -Subject "CN={certificateName}" -CertStoreLocation "Cert:\CurrentUser\My" -KeyExportPolicy Exportable -KeySpec Signature -KeyLength 2048 -KeyAlgorithm RSA -HashAlgorithm SHA256 ## Replace {certificateName}
+$certname = "{certificateName}" ## Replace {certificateName}
+$cert = New-SelfSignedCertificate -Subject "CN=$certname" -CertStoreLocation "Cert:\CurrentUser\My" -KeyExportPolicy Exportable -KeySpec Signature -KeyLength 2048 -KeyAlgorithm RSA -HashAlgorithm SHA256
```
The **$cert** variable in the previous command stores your certificate in the cu
```powershell
-Export-Certificate -Cert $cert -FilePath "C:\Users\admin\Desktop\{certificateName}.cer" ## Specify your preferred location and replace {certificateName}
+Export-Certificate -Cert $cert -FilePath "C:\Users\admin\Desktop\$certname.cer" ## Specify your preferred location
```
Now, using the password you stored in the `$mypwd` variable, secure, and export
```powershell
-Export-PfxCertificate -Cert $cert -FilePath "C:\Users\admin\Desktop\{privateKeyName}.pfx" -Password $mypwd ## Specify your preferred location and replace {privateKeyName}
+Export-PfxCertificate -Cert $cert -FilePath "C:\Users\admin\Desktop\$certname.pfx" -Password $mypwd ## Specify your preferred location
```
If you created the certificate using Option 2, you can delete the key pair from
```powershell
-Get-ChildItem -Path "Cert:\CurrentUser\My" | Where-Object {$_.Subject -Match "{certificateName}"} | Select-Object Thumbprint, FriendlyName ## Replace {privateKeyName} with the name you gave your certificate
+Get-ChildItem -Path "Cert:\CurrentUser\My" | Where-Object {$_.Subject -Match "$certname"} | Select-Object Thumbprint, FriendlyName
```
active-directory Tutorial V2 Windows Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-windows-desktop.md
-# Tutorial: Call the Microsoft Graph API from a Windows Desktop app
+# Tutorial: Sign in users and call Microsoft Graph in Windows Presentation Foundation (WPF) desktop app
In this tutorial, you build a native Windows Desktop .NET (XAML) app that signs in users and gets an access token to call the Microsoft Graph API.
active-directory Directory Delete Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/directory-delete-howto.md
If you find that there are still enterprise applications that you can't delete i
1. Open PowerShell as an administrator. 1. Run `Connect-AzAccount -tenant <TENANT_ID>`. 1. Sign in to Azure AD in the Global Administrator role.
-1. Run `Get-AzADServicePrincipal | ForEach-Object { Remove-AzADServicePrincipal -ObjectId $_.Id -Force }`.
+1. Run `Get-AzADServicePrincipal | ForEach-Object { Remove-AzADServicePrincipal -ObjectId $_.Id }`.
## Trial subscription that blocks deletion
active-directory Groups Assign Sensitivity Labels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-assign-sensitivity-labels.md
To apply published labels to groups, you must first enable the feature. These st
```

In the **Sign in to your account** page, enter your admin account and password to connect to your service, and select **Sign in**.
-1. Fetch the current group settings for the Azure AD organization.
+1. Fetch the current group settings for the Azure AD organization and display them.
   ```powershell
   $grpUnifiedSetting = (Get-AzureADDirectorySetting | where -Property DisplayName -Value "Group.Unified" -EQ)
- $template = Get-AzureADDirectorySettingTemplate -Id 62375ab9-6b52-47ed-826b-58e47e0e304b
- $setting = $template.CreateDirectorySetting()
+ $Setting = $grpUnifiedSetting
+ $grpUnifiedSetting.Values
   ```

   > [!NOTE]
- > If no group settings have been created for this Azure AD organization you will get an error that reads "Cannot bind argument to parameter 'Id' because it is null". In this case, you must first create the settings. Follow the steps in [Azure Active Directory cmdlets for configuring group settings](../enterprise-users/groups-settings-cmdlets.md) to create group settings for this Azure AD organization.
-
-1. Next, display the current group settings.
-
- ```powershell
- $Setting.Values
- ```
+ > If no group settings have been created for this Azure AD organization, you will get an empty screen. In this case, you must first create the settings. Follow the steps in [Azure Active Directory cmdlets for configuring group settings](../enterprise-users/groups-settings-cmdlets.md) to create group settings for this Azure AD organization.
+
+ > [!NOTE]
+ > If the sensitivity label has been enabled previously, you will see **EnableMIPLabels** = **True**. In this case, you do not need to do anything.
1. Enable the feature:
active-directory Groups Dynamic Membership https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-dynamic-membership.md
You can also create a rule that selects device objects for membership in a group
> [!NOTE] > systemlabels is a read-only attribute that cannot be set with Intune. >
-> For Windows 10, the correct format of the deviceOSVersion attribute is as follows: (device.deviceOSVersion -eq "10.0.17763"). The formatting can be validated with the Get-MsolDevice PowerShell cmdlet.
+> For Windows 10, the correct format of the deviceOSVersion attribute is as follows: (device.deviceOSVersion -startsWith "10.0.1"). The formatting can be validated with the Get-MsolDevice PowerShell cmdlet.
The following device attributes can be used.
The following device attributes can be used.
 accountEnabled | true false | (device.accountEnabled -eq true)
 displayName | any string value | (device.displayName -eq "Rob iPhone")
 deviceOSType | any string value | (device.deviceOSType -eq "iPad") -or (device.deviceOSType -eq "iPhone")<br>(device.deviceOSType -contains "AndroidEnterprise")<br>(device.deviceOSType -eq "AndroidForWork")<br>(device.deviceOSType -eq "Windows")
- deviceOSVersion | any string value | (device.deviceOSVersion -eq "9.1")<br>(device.deviceOSVersion -eq "10.0.17763.0")
+ deviceOSVersion | any string value | (device.deviceOSVersion -eq "9.1")<br>(device.deviceOSVersion -startsWith "10.0.1")
 deviceCategory | a valid device category name | (device.deviceCategory -eq "BYOD")
 deviceManufacturer | any string value | (device.deviceManufacturer -eq "Samsung")
 deviceModel | any string value | (device.deviceModel -eq "iPad Air")
active-directory Licensing Service Plan Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-service-plan-reference.md
Previously updated : 03/24/2022 Last updated : 03/29/2022
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
- **Service plans included (friendly names)**: A list of service plans (friendly names) in the product that correspond to the string ID and GUID

>[!NOTE]
->This information last updated on March 23rd, 2022.<br/>You can also download a CSV version of this table [here](https://download.microsoft.com/download/e/3/e/e3e9faf2-f28b-490a-9ada-c6089a1fc5b0/Product%20names%20and%20service%20plan%20identifiers%20for%20licensing.csv).
+>This information last updated on March 29th, 2022.<br/>You can also download a CSV version of this table [here](https://download.microsoft.com/download/e/3/e/e3e9faf2-f28b-490a-9ada-c6089a1fc5b0/Product%20names%20and%20service%20plan%20identifiers%20for%20licensing.csv).
><br/>

| Product name | String ID | GUID | Service plans included | Service plans included (friendly names) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| MICROSOFT 365 AUDIO CONFERENCING FOR GCC | MCOMEETADV_GOC | 2d3091c7-0712-488b-b3d8-6b97bde6a1f5 | EXCHANGE_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>MCOMEETADV_GOV (f544b08d-1645-4287-82de-8d91f37c02a1) | EXCHANGE FOUNDATION FOR GOVERNMENT (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>MICROSOFT 365 AUDIO CONFERENCING FOR GOVERNMENT (f544b08d-1645-4287-82de-8d91f37c02a1) | | MICROSOFT 365 BUSINESS BASIC | O365_BUSINESS_ESSENTIALS | 3b555118-da6a-4418-894f-7df1e2096870 | BPOS_S_TODO_1 (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>FLOW_O365_P1 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>OFFICEMOBILE_SUBSCRIPTION (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWERAPPS_O365_P1 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | To-Do (Plan 1) (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>EXCHANGE ONLINE (PLAN 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>FLOW FOR OFFICE 365 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>MICROSOFT FORMS (PLAN E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>OFFICEMOBILE_SUBSCRIPTION (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWERAPPS FOR OFFICE 365 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>MICROSOFT PLANNER(b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | | MICROSOFT 365 BUSINESS BASIC | SMB_BUSINESS_ESSENTIALS | dab7782a-93b1-4074-8bb1-0e61318bea0b | BPOS_S_TODO_1 (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>FLOW_O365_P1 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>OFFICEMOBILE_SUBSCRIPTION (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWERAPPS_O365_P1 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>YAMMER_MIDSIZE (41bf139a-4e60-409f-9346-a1361efc6dfb) | TO-DO (PLAN 1) (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>EXCHANGE ONLINE (PLAN 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>FLOW FOR OFFICE 365 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>MICROSOFT FORMS (PLAN E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>OFFICEMOBILE_SUBSCRIPTION (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWERAPPS FOR OFFICE 365 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>MICROSOFT PLANNER(b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>OFFICE ONLINE 
(e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>YAMMER MIDSIZE (41bf139a-4e60-409f-9346-a1361efc6dfb) |
-| MICROSOFT 365 BUSINESS STANDARD | O365_BUSINESS_PREMIUM | f245ecc8-75af-4f8e-b61f-27d8114de5f3 | BPOS_S_TODO_1 (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>FLOW_O365_P1 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>O365_SB_Relationship_Management (5bfe124c-bbdc-4494-8835-f1297d457d79)<br/>OFFICE_BUSINESS (094e7854-93fc-4d55-b2c0-3ab5369ebdc1)<br/>POWERAPPS_O365_P1 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653)| To-Do (Plan 1) (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>MICROSOFT STAFFHUB (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>EXCHANGE ONLINE (PLAN 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>FLOW FOR OFFICE 365 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>MICROSOFT FORMS (PLAN E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>OUTLOOK CUSTOMER MANAGER (5bfe124c-bbdc-4494-8835-f1297d457d79)<br/>OFFICE 365 BUSINESS (094e7854-93fc-4d55-b2c0-3ab5369ebdc1)<br/>POWERAPPS FOR OFFICE 365 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>MICROSOFT PLANNER(b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) |
+| Microsoft 365 Business Standard | O365_BUSINESS_PREMIUM | f245ecc8-75af-4f8e-b61f-27d8114de5f3 | CDS_O365_P2 (95b76021-6a53-4741-ab8b-1d1f3d66a95a)<br/>EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>MYANALYTICS_P2 (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>OFFICE_BUSINESS (094e7854-93fc-4d55-b2c0-3ab5369ebdc1)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>KAIZALA_O365_P2 (54fc630f-5a40-48ee-8965-af0503c1386e)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Nucleus (db4d623d-b514-490b-b7ef-8885eee514de)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>PROJECT_O365_P2 (31b4e2fc-4cd6-4e7d-9c1b-41407303bd66)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>STREAM_O365_SMB (3c53ea51-d578-46fa-a4c0-fd0a92809a60)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_1 (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>VIVA_LEARNING_SEEDED (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>WHITEBOARD_PLAN1 (b8afc642-032e-4de5-8c0a-507a7bba7e5d)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653)<br/>DYN365_CDS_O365_P2 (4ff01e01-1ba7-4d71-8cf8-ce96c3bbcf14)<br/>RMS_S_BASIC (31cf2cfc-6b0d-4adc-a336-88b724ed8122)<br/>POWERAPPS_O365_P1 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>FLOW_O365_P1 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>POWER_VIRTUAL_AGENTS_O365_P2 (041fe683-03e4-45b6-b1af-c0cdc516daee) | Common Data Service for Teams (95b76021-6a53-4741-ab8b-1d1f3d66a95a)<br/>Exchange Online (Plan 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>Insights by MyAnalytics (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>Microsoft 365 Apps for Business (094e7854-93fc-4d55-b2c0-3ab5369ebdc1)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Forms (Plan E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>Microsoft Kaizala Pro (54fc630f-5a40-48ee-8965-af0503c1386e)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Nucleus (db4d623d-b514-490b-b7ef-8885eee514de)<br/>Office for the Web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Project for Office (Plan E3) (31b4e2fc-4cd6-4e7d-9c1b-41407303bd66)<br/>SharePoint (Plan 1) (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Stream for Office 365 (3c53ea51-d578-46fa-a4c0-fd0a92809a60)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 1) (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>Viva Learning Seeded (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>Whiteboard (Plan 1) (b8afc642-032e-4de5-8c0a-507a7bba7e5d)<br/>Yammer Enterprise (7547a3fe-08ee-4ccb-b430-5077c5041653)<br/>Common Data Service (4ff01e01-1ba7-4d71-8cf8-ce96c3bbcf14)<br/>Microsoft Azure Rights Management Service (31cf2cfc-6b0d-4adc-a336-88b724ed8122)<br/>Power Apps for Office 365 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>Power Automate for Office 365 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>Power Virtual Agents for Office 365 
(041fe683-03e4-45b6-b1af-c0cdc516daee) |
| MICROSOFT 365 BUSINESS STANDARD - PREPAID LEGACY | SMB_BUSINESS_PREMIUM | ac5cef5d-921b-4f97-9ef3-c99076e5470f | BPOS_S_TODO_1 (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>FLOW_O365_P1 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>O365_SB_Relationship_Management (5bfe124c-bbdc-4494-8835-f1297d457d79)<br/>OFFICE_BUSINESS (094e7854-93fc-4d55-b2c0-3ab5369ebdc1)<br/>POWERAPPS_O365_P1 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>YAMMER_MIDSIZE (41bf139a-4e60-409f-9346-a1361efc6dfb) | To-Do (Plan 1) (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>MICROSOFT STAFFHUB (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>EXCHANGE ONLINE (PLAN 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>FLOW FOR OFFICE 365 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>MICROSOFT FORMS (PLAN E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>OUTLOOK CUSTOMER MANAGER (5bfe124c-bbdc-4494-8835-f1297d457d79)<br/>OFFICE 365 BUSINESS (094e7854-93fc-4d55-b2c0-3ab5369ebdc1)<br/>POWERAPPS FOR OFFICE 365 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>MICROSOFT PLANNER(b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>YAMMER_MIDSIZE (41bf139a-4e60-409f-9346-a1361efc6dfb) |
-| MICROSOFT 365 BUSINESS PREMIUM | SPB | cbdc14ab-d96c-4c30-b9f4-6ada7cdc1d46 | AAD_SMB (de377cbc-0019-4ec2-b77c-3f223947e102)<br/>BPOS_S_TODO_1 (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>EXCHANGE_S_ARCHIVE_ADDON (176a09a6-7ec5-4039-ac02-b2791c6ba793)<br/>EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>FLOW_O365_P1 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>INTUNE_SMBIZ (8e9ff0ff-aa7a-4b20-83c1-2f636b600ac2)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>O365_SB_Relationship_Management (5bfe124c-bbdc-4494-8835-f1297d457d79)<br/>OFFICE_BUSINESS (094e7854-93fc-4d55-b2c0-3ab5369ebdc1)<br/>POWERAPPS_O365_P1 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>STREAM_O365_E1 (743dd19e-1ce3-4c62-a3ad-49ba8f63a2f6)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>WINBIZ (8e229017-d77b-43d5-9305-903395523b99)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | AZURE ACTIVE DIRECTORY (de377cbc-0019-4ec2-b77c-3f223947e102)<br/>TO-DO (PLAN 1) (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>MICROSOFT STAFFHUB (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>EXCHANGE ONLINE ARCHIVING FOR EXCHANGE ONLINE (176a09a6-7ec5-4039-ac02-b2791c6ba793)<br/>EXCHANGE ONLINE (PLAN 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>FLOW FOR OFFICE 365 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>MICROSOFT FORMS (PLAN E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>MICROSOFT INTUNE (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>MICROSOFT INTUNE (8e9ff0ff-aa7a-4b20-83c1-2f636b600ac2)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>MICROSOFT BOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>OUTLOOK CUSTOMER MANAGER (5bfe124c-bbdc-4494-8835-f1297d457d79)<br/>OFFICE 365 BUSINESS (094e7854-93fc-4d55-b2c0-3ab5369ebdc1)<br/>POWERAPPS FOR OFFICE 365 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>MICROSOFT PLANNER(b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT AZURE ACTIVE DIRECTORY RIGHTS (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>AZURE INFORMATION PROTECTION PREMIUM P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>MICROSOFT STREAM FOR O365 E1 SKU (743dd19e-1ce3-4c62-a3ad-49ba8f63a2f6)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>WINDOWS 10 BUSINESS (8e229017-d77b-43d5-9305-903395523b99)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) |
+| Microsoft 365 Business Premium | SPB | cbdc14ab-d96c-4c30-b9f4-6ada7cdc1d46 | RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>CDS_O365_P3 (afa73018-811e-46e9-988f-f75d2b1b8430)<br/>BPOS_S_DlpAddOn (9bec7e34-c9fa-40b7-a9d1-bd6d1165c7ed)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>EXCHANGE_S_ARCHIVE_ADDON (176a09a6-7ec5-4039-ac02-b2791c6ba793)<br/>MYANALYTICS_P2 (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>OFFICE_BUSINESS (094e7854-93fc-4d55-b2c0-3ab5369ebdc1)<br/>M365_LIGHTHOUSE_CUSTOMER_PLAN1 (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>M365_LIGHTHOUSE_PARTNER_PLAN1 (d55411c9-cfff-40a9-87c7-240f14df7da5)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>MDE_SMB (bfc1bbd9-981b-4f71-9b82-17c35fd0e2a4)<br/>ATP_ENTERPRISE (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>KAIZALA_O365_P2 (54fc630f-5a40-48ee-8965-af0503c1386e)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Nucleus (db4d623d-b514-490b-b7ef-8885eee514de)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>OFFICE_SHARED_COMPUTER_ACTIVATION (276d6e8a-f056-4f70-b7e8-4fc27f79f809)<br/>PROJECT_O365_P3 (b21a6b06-1988-436e-a07b-51ec6d9f52ad)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_1 (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>VIVA_LEARNING_SEEDED (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>WHITEBOARD_PLAN1 (b8afc642-032e-4de5-8c0a-507a7bba7e5d)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>WINBIZ (8e229017-d77b-43d5-9305-903395523b99)<br/>WINDOWSUPDATEFORBUSINESS_DEPLOYMENTSERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>AAD_SMB (de377cbc-0019-4ec2-b77c-3f223947e102)<br/>AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>DYN365_CDS_O365_P3 (28b0fa46-c39a-4188-89e2-58e979a6b014)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>INTUNE_SMBIZ (8e9ff0ff-aa7a-4b20-83c1-2f636b600ac2)<br/>STREAM_O365_E1 (743dd19e-1ce3-4c62-a3ad-49ba8f63a2f6)<br/>POWERAPPS_O365_P1 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>FLOW_O365_P1 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>POWER_VIRTUAL_AGENTS_O365_P3 (ded3d325-1bdc-453e-8432-5bac26d7a014) | Azure Rights Management (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Common Data Service for Teams (afa73018-811e-46e9-988f-f75d2b1b8430)<br/>Data Loss Prevention (9bec7e34-c9fa-40b7-a9d1-bd6d1165c7ed)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Exchange Online (Plan 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>Exchange Online Archiving (176a09a6-7ec5-4039-ac02-b2791c6ba793)<br/>Insights by MyAnalytics (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>Microsoft 365 Apps for Business (094e7854-93fc-4d55-b2c0-3ab5369ebdc1)<br/>Microsoft 365 Lighthouse (Plan 1) (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>Microsoft 365 Lighthouse (Plan 2) (d55411c9-cfff-40a9-87c7-240f14df7da5)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Defender for Business (bfc1bbd9-981b-4f71-9b82-17c35fd0e2a4)<br/>Microsoft Defender for Office 365 (Plan 1) (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>Microsoft Forms (Plan E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>Microsoft Kaizala Pro (54fc630f-5a40-48ee-8965-af0503c1386e)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Nucleus (db4d623d-b514-490b-b7ef-8885eee514de)<br/>Office for the Web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Office Shared Computer Activation (276d6e8a-f056-4f70-b7e8-4fc27f79f809)<br/>Project for Office (Plan E5) (b21a6b06-1988-436e-a07b-51ec6d9f52ad)<br/>SharePoint (Plan 1) (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 1) (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>Viva Learning Seeded (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>Whiteboard (Plan 1) (b8afc642-032e-4de5-8c0a-507a7bba7e5d)<br/>Yammer Enterprise (7547a3fe-08ee-4ccb-b430-5077c5041653)<br/>Universal Print (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Windows 10/11 Business (8e229017-d77b-43d5-9305-903395523b99)<br/>Windows Update for Business Deployment Service (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>Azure Active Directory (de377cbc-0019-4ec2-b77c-3f223947e102)<br/>Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Information Protection Premium P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>Common Data Service (28b0fa46-c39a-4188-89e2-58e979a6b014)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Defender for Cloud Apps Discovery (932ad362-64a8-4783-9106-97849a1a30b9)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Intune (8e9ff0ff-aa7a-4b20-83c1-2f636b600ac2)<br/>Microsoft Stream for Office 365 E1 (743dd19e-1ce3-4c62-a3ad-49ba8f63a2f6)<br/>Power Apps for Office 365 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>Power Automate for Office 365 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>Power Virtual Agents for Office 365 (ded3d325-1bdc-453e-8432-5bac26d7a014) |
| Microsoft 365 Business Voice | BUSINESS_VOICE_MED2 | a6051f20-9cbc-47d2-930d-419183bf6cf1 | MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>MCOPSTN1 (4ed3ff63-69d7-4fb7-b984-5aec7f605ca8)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) | Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft 365 Domestic Calling Plan (4ed3ff63-69d7-4fb7-b984-5aec7f605ca8)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) |
| Microsoft 365 Business Voice (US) | BUSINESS_VOICE_MED2_TELCO | 08d7bce8-6e16-490e-89db-1d508e5e9609 | MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>MCOPSTN1 (4ed3ff63-69d7-4fb7-b984-5aec7f605ca8)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) | Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft 365 Domestic Calling Plan (4ed3ff63-69d7-4fb7-b984-5aec7f605ca8)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) |
| Microsoft 365 Business Voice (without calling plan) | BUSINESS_VOICE_DIRECTROUTING | d52db95a-5ecb-46b6-beb0-190ab5cda4a8 | MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) | Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) |
| Microsoft 365 Business Voice (without Calling Plan) for US | BUSINESS_VOICE_DIRECTROUTING_MED | 8330dae3-d349-44f7-9cad-1b23c64baabe | MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) | Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) |
| MICROSOFT 365 DOMESTIC CALLING PLAN (120 Minutes) | MCOPSTN_5 | 11dee6af-eca8-419f-8061-6864517c1875 | MCOPSTN5 (54a152dc-90de-4996-93d2-bc47e670fc06) | MICROSOFT 365 DOMESTIC CALLING PLAN (120 min) (54a152dc-90de-4996-93d2-bc47e670fc06) |
| Microsoft 365 Domestic Calling Plan for GCC | MCOPSTN_1_GOV | 923f58ab-fca1-46a1-92f9-89fda21238a8 | MCOPSTN1_GOV (3c8a8792-7866-409b-bb61-1b20ace0368b)<br/>EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8) | Domestic Calling for Government (3c8a8792-7866-409b-bb61-1b20ace0368b)<br/>Exchange Foundation for Government (922ba911-5694-4e99-a794-73aed9bfeec8) |
-| MICROSOFT 365 E3 | SPE_E3 | 05e9a617-0261-4cee-bb44-138d3ef5d965 | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>BPOS_S_TODO_2 (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>FLOW_O365_P2 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>FORMS_PLAN_E3 (2789c901-c14e-48ab-a76a-be334d9d793a)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>POWERAPPS_O365_P2 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>STREAM_O365_E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>WIN10_PRO_ENT_SUB (21b439ba-a0ca-424f-a6cc-52f954a5b111)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | AZURE ACTIVE DIRECTORY PREMIUM P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>CLOUD APP SECURITY DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>TO-DO (PLAN 2) (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>MICROSOFT STAFFHUB (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>EXCHANGE ONLINE (PLAN 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>FLOW FOR OFFICE 365 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>MICROSOFT FORMS (PLAN E3) (2789c901-c14e-48ab-a76a-be334d9d793a)<br/>MICROSOFT INTUNE (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>MICROSOFT AZURE MULTI-FACTOR AUTHENTICATION (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>POWERAPPS FOR OFFICE 365(c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>MICROSOFT PLANNER(b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT AZURE ACTIVE DIRECTORY RIGHTS (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>AZURE INFORMATION PROTECTION PREMIUM P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>SHAREPOINT ONLINE (PLAN 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>MICROSOFT STREAM FOR O365 E3 SKU (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>WINDOWS 10 ENTERPRISE (21b439ba-a0ca-424f-a6cc-52f954a5b111)<br/>YAMMER ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) |
+| Microsoft 365 E3 | SPE_E3 | 05e9a617-0261-4cee-bb44-138d3ef5d965 | RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>CDS_O365_P2 (95b76021-6a53-4741-ab8b-1d1f3d66a95a)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>ContentExplorer_Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>MYANALYTICS_P2 (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>M365_LIGHTHOUSE_CUSTOMER_PLAN1 (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>M365_LIGHTHOUSE_PARTNER_PLAN1 (d55411c9-cfff-40a9-87c7-240f14df7da5)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>MDE_LITE (292cc034-7b7c-4950-aaf5-943befd3f1d4)<br/>FORMS_PLAN_E3 (2789c901-c14e-48ab-a76a-be334d9d793a)<br/>KAIZALA_O365_P3 (aebd3021-9f8f-4bf8-bbe3-0ed2f4f047a1)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Nucleus (db4d623d-b514-490b-b7ef-8885eee514de)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>PROJECT_O365_P2 (31b4e2fc-4cd6-4e7d-9c1b-41407303bd66)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_2 (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>VIVA_LEARNING_SEEDED (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>WHITEBOARD_PLAN2 (94a54592-cd8b-425e-87c6-97868b000b91)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>WIN10_PRO_ENT_SUB (21b439ba-a0ca-424f-a6cc-52f954a5b111)<br/>WINDOWSUPDATEFORBUSINESS_DEPLOYMENTSERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>DYN365_CDS_O365_P2 (4ff01e01-1ba7-4d71-8cf8-ce96c3bbcf14)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>POWERAPPS_O365_P2 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>FLOW_O365_P2 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>POWER_VIRTUAL_AGENTS_O365_P2 (041fe683-03e4-45b6-b1af-c0cdc516daee) | Azure Rights Management (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Common Data Service for Teams (95b76021-6a53-4741-ab8b-1d1f3d66a95a)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Information Protection and Governance Analytics - Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>Information Protection for Office 365 - Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Insights by MyAnalytics (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>Microsoft 365 Apps for Enterprise (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Microsoft 365 Lighthouse (Plan 1) (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>Microsoft 365 Lighthouse (Plan 2) (d55411c9-cfff-40a9-87c7-240f14df7da5)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Defender for Endpoint Plan 1 (292cc034-7b7c-4950-aaf5-943befd3f1d4)<br/>Microsoft Forms (Plan E3) (2789c901-c14e-48ab-a76a-be334d9d793a)<br/>Microsoft Kaizala Pro (aebd3021-9f8f-4bf8-bbe3-0ed2f4f047a1)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for Office 365 E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Nucleus (db4d623d-b514-490b-b7ef-8885eee514de)<br/>Office for the Web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Project for Office (Plan E3) (31b4e2fc-4cd6-4e7d-9c1b-41407303bd66)<br/>SharePoint (Plan 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 2) (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>Viva Learning Seeded (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>Whiteboard (Plan 2) (94a54592-cd8b-425e-87c6-97868b000b91)<br/>Yammer Enterprise (7547a3fe-08ee-4ccb-b430-5077c5041653)<br/>Universal Print (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Windows 10/11 Enterprise (Original) (21b439ba-a0ca-424f-a6cc-52f954a5b111)<br/>Windows Update for Business Deployment Service (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Information Protection Premium P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>Common Data Service (4ff01e01-1ba7-4d71-8cf8-ce96c3bbcf14)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Defender for Cloud Apps Discovery (932ad362-64a8-4783-9106-97849a1a30b9)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Power Apps for Office 365 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>Power Automate for Office 365 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>Power Virtual Agents for Office 365 (041fe683-03e4-45b6-b1af-c0cdc516daee) |
|Microsoft 365 E3 - Unattended License | SPE_E3_RPA1 | c2ac2ee4-9bb1-47e4-8541-d689c7e83371 | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>DYN365_CDS_O365_P2 (4ff01e01-1ba7-4d71-8cf8-ce96c3bbcf14)<br/>CDS_O365_P2 (95b76021-6a53-4741-ab8b-1d1f3d66a95a)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>MYANALYTICS_P2 (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>OFFICESUBSCRIPTION_unattended (8d77e2d9-9e28-4450-8431-0def64078fc5)<br/>M365_LIGHTHOUSE_CUSTOMER_PLAN1 (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>M365_LIGHTHOUSE_PARTNER_PLAN1 (d55411c9-cfff-40a9-87c7-240f14df7da5)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>FORMS_PLAN_E3 (2789c901-c14e-48ab-a76a-be334d9d793a)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>KAIZALA_O365_P3 (aebd3021-9f8f-4bf8-bbe3-0ed2f4f047a1)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>POWERAPPS_O365_P2 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>FLOW_O365_P2 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>POWER_VIRTUAL_AGENTS_O365_P2 (041fe683-03e4-45b6-b1af-c0cdc516daee)<br/>PROJECT_O365_P2 (31b4e2fc-4cd6-4e7d-9c1b-41407303bd66)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_2 (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>WHITEBOARD_PLAN2 (94a54592-cd8b-425e-87c6-97868b000b91)<br/>WIN10_PRO_ENT_SUB (21b439ba-a0ca-424f-a6cc-52f954a5b111)<br/>WINDOWSUPDATEFORBUSINESS_DEPLOYMENTSERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Information Protection Premium P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>Cloud App Security Discovery (932ad362-64a8-4783-9106-97849a1a30b9)<br/>Common Data Service - O365 P2 (4ff01e01-1ba7-4d71-8cf8-ce96c3bbcf14)<br/>Common Data Service for Teams_P2 (95b76021-6a53-4741-ab8b-1d1f3d66a95a)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Information Protection for Office 365 – Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Insights by MyAnalytics (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>Microsoft 365 Apps for Enterprise (Unattended) (8d77e2d9-9e28-4450-8431-0def64078fc5)<br/>Microsoft 365 Lighthouse (Plan 1) (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>Microsoft 365 Lighthouse (Plan 2) (d55411c9-cfff-40a9-87c7-240f14df7da5)<br/>Microsoft Azure Active Directory Rights (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Forms (Plan E3) (2789c901-c14e-48ab-a76a-be334d9d793a)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Kaizala Pro Plan 3 (aebd3021-9f8f-4bf8-bbe3-0ed2f4f047a1)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for O365 E3 SKU (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office for the Web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Power Apps for Office 365 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>Power Automate for Office 365 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>Power Virtual Agents for Office 365 P2 (041fe683-03e4-45b6-b1af-c0cdc516daee)<br/>Project for Office (Plan E3) (31b4e2fc-4cd6-4e7d-9c1b-41407303bd66)<br/>SharePoint (Plan 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 2) (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>Universal Print (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Whiteboard (Plan 2) (94a54592-cd8b-425e-87c6-97868b000b91)<br/>Windows 10 Enterprise (Original) (21b439ba-a0ca-424f-a6cc-52f954a5b111)<br/>Windows Update for Business Deployment Service (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>Yammer Enterprise (7547a3fe-08ee-4ccb-b430-5077c5041653) |
| Microsoft 365 E3_USGOV_DOD | SPE_E3_USGOV_DOD | d61d61cc-f992-433f-a577-5bd016037eeb | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>STREAM_O365_E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>TEAMS_AR_DOD (fd500458-c24c-478e-856c-a6067a8376cd)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c) | Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Information Protection Premium P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Microsoft Azure Active Directory Rights (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Stream for O365 E3 SKU (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>Microsoft Teams for DOD (AR) (fd500458-c24c-478e-856c-a6067a8376cd)<br/>Office 365 ProPlus (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Office Online (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SharePoint Online (Plan 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c) |
| Microsoft 365 E3_USGOV_GCCHIGH | SPE_E3_USGOV_GCCHIGH | ca9d1dd9-dfe9-4fef-b97c-9bc1ea3c3658 | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>STREAM_O365_E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>TEAMS_AR_GCCHIGH (9953b155-8aef-4c56-92f3-72b0487fce41)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c) | Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Information Protection Premium P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>Cloud App Security Discovery (932ad362-64a8-4783-9106-97849a1a30b9)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Microsoft Azure Active Directory Rights (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Stream for O365 E3 SKU (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>Microsoft Teams for GCCHigh (AR) (9953b155-8aef-4c56-92f3-72b0487fce41)<br/>Office 365 ProPlus (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Office Online (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SharePoint Online (Plan 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c) |
active-directory Custom Security Attributes Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/custom-security-attributes-overview.md
Previously updated : 02/04/2022 Last updated : 03/28/2022
If you use the Microsoft Graph API, you can use [Graph Explorer](/graph/graph-ex
Here are some of the known issues with custom security attributes:

-- Users with attribute set-level role assignments can see other attribute sets and custom security attribute definitions.
- Global Administrators can read audit logs for custom security attribute definitions and assignments.
- If you have an Azure AD Premium P2 license, you can't add eligible role assignments at attribute set scope.
- If you have an Azure AD Premium P2 license, the **Assigned roles** page for a user does not list permanent role assignments at attribute set scope. The role assignments exist, but aren't listed.
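If you want to inspect these objects directly, here is a hedged sketch of calling Microsoft Graph from the Azure CLI. This call isn't shown in the article; at the time of this change the custom security attribute APIs lived under the beta endpoint, and that assumption may not hold for later API versions:

```azurecli
# Hypothetical sketch: list attribute sets with a direct Microsoft Graph call.
# The beta endpoint reflects the API's status when this change was published.
az rest --method GET --url "https://graph.microsoft.com/beta/directory/attributeSets"
```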
active-directory How To Connect Fed Saml Idp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-fed-saml-idp.md
na Previously updated : 01/21/2022 Last updated : 03/29/2022
This procedure shows how to add a single user to Azure AD.
-FirstName Elwood `
-LastName Folk `
-AlternateEmailAddresses "Elwood.Folk@contoso.com" `
- -LicenseAssignment "samlp2test:ENTERPRISEPACK" `
-UsageLocation "US"
```
active-directory Concept Identity Protection User Experience https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/concept-identity-protection-user-experience.md
Previously updated : 10/18/2019 Last updated : 01/21/2022
All of the Identity Protection policies have an impact on the sign in experience
## Multi-factor authentication registration
-Enabling the Identity Protection policy requiring multi-factor authentication registration and targeting all of your users, will make sure that they have the ability to use Azure AD MFA to self-remediate in the future. Configuring this policy gives your users a 14-day period where they can choose to register and at the end are forced to register. The experience for users is outlined below. More information can be found in the end-user documentation in the article, [Overview for two-factor verification and your work or school account](https://support.microsoft.com/account-billing/how-to-use-the-microsoft-authenticator-app-9783c865-0308-42fb-a519-8cf666fe0acc).
+Enabling the Identity Protection policy requiring multi-factor authentication registration and targeting all of your users will make sure that they can use Azure AD MFA to self-remediate in the future. Configuring this policy gives your users a 14-day period where they can choose to register; at the end of that period, they're required to register.
### Registration interrupt
Enabling the Identity Protection policy requiring multi-factor authentication re
## Risky sign-in remediation
-When an administrator has configured a policy for sign-in risks, the affected users are notified when they try to sign in and trigger the policies risk level.
+When an administrator has configured a policy for sign-in risks, affected users are interrupted when they hit the configured risk level.
### Risky sign-in self-remediation
-1. The user is informed that something unusual was detected about their sign-in, such as signing in from a new location, device, or app.
+1. The user is informed that something unusual was detected about their sign-in, such as signing in from a new location, device, or app.
![Something unusual prompt](./media/concept-identity-protection-user-experience/120.png)
When an administrator has configured a policy for sign-in risks, the affected us
### Risky sign-in administrator unblock
-Administrators can choose to block users upon sign-in depending on their risk level. To get unblocked, end users must contact their IT staff, or they can try signing in from a familiar location or device. Self-remediation by performing multi-factor authentication is not an option in this case.
+Administrators can choose to block users upon sign-in depending on their risk level. To get unblocked, end users must contact their IT staff, or they can try signing in from a familiar location or device. Self-remediation by performing multi-factor authentication isn't an option in this case.
![Blocked by sign-in risk policy](./media/concept-identity-protection-user-experience/200.png)
When a user risk policy has been configured, users who meet the user risk level
## Risky sign-in administrator unblock
-Administrators can choose to block users upon sign-in depending on their risk level. To get unblocked, end users must contact their IT staff. Self-remediation by performing multi-factor authentication and self-service password reset is not an option in this case.
+Administrators can choose to block users upon sign-in depending on their risk level. To get unblocked, end users must contact their IT staff. Self-remediation by performing multi-factor authentication and self-service password reset isn't an option in this case.
![Blocked by user risk policy](./media/concept-identity-protection-user-experience/104.png)

IT staff can follow the instructions in the section [Unblocking users](howto-identity-protection-remediate-unblock.md#unblocking-based-on-user-risk) to allow users to sign back in.
+## High risk technician
+
+If your organization has users who are delegated access to another tenant and they trigger high risk, they may be blocked from signing in to those other tenants. For example:
+
+1. An organization has a managed service provider (MSP) or cloud solution provider (CSP) who takes care of configuring their cloud environment.
+1. One of the MSP's technicians has their credentials leaked, which triggers high risk. That technician is blocked from signing in to other tenants.
+1. The technician can self-remediate and sign in if the home tenant has enabled the appropriate policies [requiring password change for high risk users](../conditional-access/howto-conditional-access-policy-risk-user.md) or [MFA for risky users](../conditional-access/howto-conditional-access-policy-risk.md).
+ 1. If the home tenant hasn't enabled self-remediation policies, an administrator in the technician's home tenant will have to [remediate the risk](howto-identity-protection-remediate-unblock.md#remediation).
+
## See also

- [Remediate risks and unblock users](howto-identity-protection-remediate-unblock.md)
active-directory F5 Aad Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-aad-integration.md
Refer to the following guided configuration tutorials using Easy Button template
- [BIG-IP Easy Button for SSO to Oracle JD Edwards](f5-big-ip-oracle-jde-easy-button.md)
+- [BIG-IP Easy Button for SSO to SAP ERP](f5-big-ip-sap-erp-easy-button.md)
+
## Azure AD B2B guest access

Azure AD B2B guest access to SHA-protected applications is also possible, but some scenarios may require additional steps not covered in the tutorials. One example is Kerberos SSO, where a BIG-IP will perform Kerberos constrained delegation (KCD) to obtain a service ticket from domain controllers. Without a local representation of the guest user, a domain controller will fail to honor the request on the basis that the user does not exist. To support this scenario, you would need to ensure external identities are flowed down from your Azure AD tenant to the directory used by the application. See [Grant B2B users in Azure AD access to your on-premises applications](../external-identities/hybrid-cloud-to-on-premises.md) for guidance.
active-directory How To Assign App Role Managed Identity Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-to-assign-app-role-managed-identity-cli.md
In this article, you learn how to assign a managed identity to an application ro
```azurecli
roleguid="0566419e-bb95-4d9d-a4f8-ed9a0f147fa6"
- az rest -m POST -u https://graph.microsoft.com/beta/servicePrincipals/$oidForMI/appRoleAssignments -b "{\"principalId\": \"$oidForMI\", \"resourceId\": \"$serverSPOID\",\"appRoleId\": \"$roleguid\"}"
+ az rest -m POST -u https://graph.microsoft.com/v1.0/servicePrincipals/$oidForMI/appRoleAssignments -b "{\"principalId\": \"$oidForMI\", \"resourceId\": \"$serverSPOID\",\"appRoleId\": \"$roleguid\"}"
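  # (Illustrative comment block, not part of the original article.) The
  # variables used above might be populated like this, assuming a user-assigned
  # identity "myManagedIdentity" in resource group "myRG" and a server app
  # registration named "myServerApp"; all three names are placeholders:
  #   oidForMI=$(az identity show --name myManagedIdentity --resource-group myRG --query principalId -o tsv)
  #   serverSPOID=$(az ad sp list --display-name myServerApp --query "[0].objectId" -o tsv)
  # Newer Azure CLI versions backed by Microsoft Graph expose "id" instead of "objectId".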
```

## Next steps
active-directory Managed Identities Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/managed-identities-status.md
The following Azure services support managed identities for Azure resources:
| API Management | [Use managed identities in Azure API Management](../../api-management/api-management-howto-use-managed-service-identity.md) |
| Application Gateway | [TLS termination with Key Vault certificates](../../application-gateway/key-vault-certs.md) |
| Azure App Configuration | [How to use managed identities for Azure App Configuration](../../azure-app-configuration/overview-managed-identity.md) |
-| Azure App Services | [How to use managed identities for App Service and Azure Functions](../../app-service/overview-managed-identity.md) |
+| Azure App Services | [How to use managed identities for App Service and Azure Functions](../../app-service/overview-managed-identity.md) |
| Azure Arc enabled Kubernetes | [Quickstart: Connect an existing Kubernetes cluster to Azure Arc](../../azure-arc/kubernetes/quickstart-connect-cluster.md) |
| Azure Arc enabled servers | [Authenticate against Azure resources with Azure Arc-enabled servers](../../azure-arc/servers/managed-identity-authentication.md) |
| Azure Automanage | [Repair an Automanage Account](../../automanage/repair-automanage-account.md) |
The following Azure services support managed identities for Azure resources:
| Azure Digital Twins | [Enable a managed identity for routing Azure Digital Twins events](../../digital-twins/how-to-enable-managed-identities-portal.md) |
| Azure Event Grid | [Event delivery with a managed identity](../../event-grid/managed-service-identity.md) |
| Azure Image Builder | [Azure Image Builder overview](../../virtual-machines/image-builder-overview.md#permissions) |
-| Azure Import/Export | [Use customer-managed keys in Azure Key Vault for Import/Export service](../../import-export/storage-import-export-encryption-key-portal.md)
+| Azure Import/Export | [Use customer-managed keys in Azure Key Vault for Import/Export service](../../import-export/storage-import-export-encryption-key-portal.md)
| Azure IoT Hub | [IoT Hub support for virtual networks with Private Link and Managed Identity](../../iot-hub/virtual-network-support.md) |
| Azure Kubernetes Service (AKS) | [Use managed identities in Azure Kubernetes Service](../../aks/use-managed-identity.md) |
| Azure Logic Apps | [Authenticate access to Azure resources using managed identities in Azure Logic Apps](../../logic-apps/create-managed-service-identity.md) |
| Azure Log Analytics cluster | [Azure Monitor customer-managed key](../../azure-monitor/logs/customer-managed-keys.md) |
| Azure Machine Learning Services | [Use Managed identities with Azure Machine Learning](../../machine-learning/how-to-use-managed-identities.md?tabs=python) |
| Azure Managed Disk | [Use the Azure portal to enable server-side encryption with customer-managed keys for managed disks](../../virtual-machines/disks-enable-customer-managed-keys-portal.md) |
-| Azure Media services | [Managed identities](../../media-services/latest/concept-managed-identities.md) |
+| Azure Media services | [Managed identities](/media-services/latest/concept-managed-identities) |
| Azure Monitor | [Azure Monitor customer-managed key](../../azure-monitor/logs/customer-managed-keys.md?tabs=portal) |
| Azure Policy | [Remediate non-compliant resources with Azure Policy](../../governance/policy/how-to/remediate-resources.md) |
| Azure Purview | [Credentials for source authentication in Azure Purview](../../purview/manage-credentials.md) |
active-directory Services Azure Active Directory Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/services-azure-active-directory-support.md
The following services support Azure AD authentication. New services are added t
| Azure Kubernetes Service (AKS) | [Control access to cluster resources using Kubernetes role-based access control and Azure Active Directory identities in Azure Kubernetes Service](../../aks/azure-ad-rbac.md) |
| Azure Machine Learning Services | [Set up authentication for Azure Machine Learning resources and workflows](../../machine-learning/how-to-setup-authentication.md) |
| Azure Maps | [Manage authentication in Azure Maps](../../azure-maps/how-to-manage-authentication.md) |
-| Azure Media services | [Access the Azure Media Services API with Azure AD authentication](../../media-services/previous/media-services-use-aad-auth-to-access-ams-api.md) |
+| Azure Media services | [Access the Azure Media Services API with Azure AD authentication](/media-services/previous/media-services-use-aad-auth-to-access-ams-api) |
| Azure Monitor | [Azure AD authentication for Application Insights (Preview)](../../azure-monitor/app/azure-ad-authentication.md?tabs=net) |
| Azure Resource Manager | [Azure security baseline for Azure Resource Manager](/security/benchmark/azure/baselines/resource-manager-security-baseline?toc=/azure/azure-resource-manager/management/toc.json) |
| Azure Service Fabric | [Set up Azure Active Directory for client authentication](../../service-fabric/service-fabric-cluster-creation-setup-aad.md) |
active-directory Confluencemicrosoft Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/confluencemicrosoft-tutorial.md
As of now, following versions of Confluence are supported:
- Confluence: 5.0 to 5.10
- Confluence: 6.0.1 to 6.15.9
-- Confluence: 7.0.1 to 7.16.2
+- Confluence: 7.0.1 to 7.17.0
> [!NOTE]
> Please note that our Confluence Plugin also works on Ubuntu Version 16.04
active-directory Jiramicrosoft Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/jiramicrosoft-tutorial.md
Use your Microsoft Azure Active Directory account with Atlassian JIRA server to
To configure Azure AD integration with JIRA SAML SSO by Microsoft, you need the following items:

- An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-- JIRA Core and Software 6.4 to 8.22.0 or JIRA Service Desk 3.0 to 4.22.0 should installed and configured on Windows 64-bit version
+- JIRA Core and Software 6.4 to 8.22.1 or JIRA Service Desk 3.0 to 4.22.1 should be installed and configured on a Windows 64-bit version
- JIRA server is HTTPS enabled
- Note the supported versions for the JIRA Plugin are mentioned in the below section.
- JIRA server is reachable on the Internet, particularly to the Azure AD login page for authentication, and should be able to receive the token from Azure AD
To get started, you need the following items:
## Supported versions of JIRA
-* JIRA Core and Software: 6.4 to 8.22.0
-* JIRA Service Desk 3.0 to 4.22.0
+* JIRA Core and Software: 6.4 to 8.22.1
+* JIRA Service Desk 3.0 to 4.22.1
* JIRA also supports 5.2. For more details, click [Microsoft Azure Active Directory single sign-on for JIRA 5.2](jira52microsoft-tutorial.md)

> [!NOTE]
active-directory Policystat Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/policystat-tutorial.md
Follow these steps to enable Azure AD SSO in the Azure portal.
1. In the **Sign on URL** text box, type a URL using the following pattern: `https://<companyname>.policystat.com`

   >[!NOTE]
- >These values aren't real. Update these values with the actual Identifier and Sign on URL. Contact [PolicyStat Client support team](https://rldatix.com/services-support/support) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ >These values aren't real. Update these values with the actual Identifier and Sign on URL. Contact [PolicyStat Client support team](https://rldatix.com/en-apac/customer-success/community/) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
1. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Federation Metadata XML** from the given options as per your requirement and save it on your computer.
active-directory Sap Netweaver Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sap-netweaver-tutorial.md
If you are expecting a role to be assigned to the users, you can select it from
![Configure OAuth](./media/sapnetweaver-tutorial/oauth03.png)

> [!NOTE]
- > Message `soft state status is not supported` – can be ignored, as no problem. For more details, refer [here](https://help.sap.com/doc/saphelp_nw74/7.4.16/1e/c60c33be784846aad62716b4a1df39/content.htm?no_cache=true).
+ > The message `soft state status is not supported` can be ignored; it doesn't indicate a problem.
### Create a service user for the OAuth 2.0 Client
advisor Advisor Reference Performance Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-performance-recommendations.md
Learn more about [AVS Private cloud - vSANCapacity (vSAN capacity utilization ha
Cache instances perform best when not running under high network bandwidth, which may cause them to become unresponsive, experience data loss, or become unavailable. Apply best practices to reduce network bandwidth, or scale to a different size or SKU with more capacity.
-Learn more about [Redis Cache Server - RedisCacheNetworkBandwidth (Improve your Cache and application performance when running with high network bandwidth)](https://aka.ms/redis/recommendations/bandwidth).
+Learn more about [Redis Cache Server - RedisCacheNetworkBandwidth (Improve your Cache and application performance when running with high network bandwidth)](/azure/azure-cache-for-redis/cache-troubleshoot-server#server-side-bandwidth-limitation).
### Improve your Cache and application performance when running with many connected clients
Learn more about [Redis Cache Server - RedisCacheConnectedClients (Improve your
Cache instances perform best when not running under high server load, which may cause them to become unresponsive, experience data loss, or become unavailable. Apply best practices to reduce the server load, or scale to a different size or SKU with more capacity.
-Learn more about [Redis Cache Server - RedisCacheServerLoad (Improve your Cache and application performance when running with high server load)](https://aka.ms/redis/recommendations/cpu).
+Learn more about [Redis Cache Server - RedisCacheServerLoad (Improve your Cache and application performance when running with high server load)](/azure/azure-cache-for-redis/cache-troubleshoot-client#high-client-cpu-usage).
### Improve your Cache and application performance when running with high memory pressure

Cache instances perform best when not running under high memory pressure, which may cause them to become unresponsive, experience data loss, or become unavailable. Apply best practices to reduce used memory, or scale to a different size or SKU with more capacity.
-Learn more about [Redis Cache Server - RedisCacheUsedMemory (Improve your Cache and application performance when running with high memory pressure)](https://aka.ms/redis/recommendations/memory).
+Learn more about [Redis Cache Server - RedisCacheUsedMemory (Improve your Cache and application performance when running with high memory pressure)](/azure/azure-cache-for-redis/cache-troubleshoot-client#memory-pressure-on-redis-client).
## Cognitive Service
advisor Advisor Reference Reliability Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-reliability-recommendations.md
Last updated 02/04/2022
# Reliability recommendations
-Azure Advisor helps you ensure and improve the continuity of your business-critical applications. You can get reliability recommendations on the **Reliability** tab on the Advisor dashboard.
+Azure Advisor helps you ensure and improve the continuity of your business-critical applications. You can get reliability recommendations on the **Reliability** tab on the Advisor dashboard.
1. Sign in to the [**Azure portal**](https://portal.azure.com).
Learn more about [Cosmos DB account - CosmosDBMongoSelfServeUpgrade (Upgrade you
### Add a second region to your production workloads on Azure Cosmos DB
-Based on their names and configuration, we have detected the Azure Cosmos DB accounts below as being potentially used for production workloads. These accounts currently run in a single Azure region. You can increase their availability by configuring them to span at least two Azure regions.
+Based on their names and configuration, we have detected the Azure Cosmos DB accounts below as being potentially used for production workloads. These accounts currently run in a single Azure region. You can increase their availability by configuring them to span at least two Azure regions.
> [!NOTE]
> Additional regions will incur extra costs.
Learn more about [Cosmos DB account - CosmosDBSingleRegionProdAccounts (Add a se
We observed your account is throwing a TooManyRequests error with the 16500 error code. Enabling Server Side Retry (SSR) can help mitigate this issue for you.
-Learn more about [Cosmos DB account - CosmosDBMongoServerSideRetries (Enable Server Side Retry (SSR) on your Azure Cosmos DB's API for MongoDB account)](/azure/cosmos-db/prevent-rate-limiting-errors).
+Learn more about [Cosmos DB account - CosmosDBMongoServerSideRetries (Enable Server Side Retry (SSR) on your Azure Cosmos DB's API for MongoDB account)](/azure/cosmos-db/cassandra/prevent-rate-limiting-errors).
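For context, one way to turn on SSR is to add the `DisableRateLimitingResponses` capability to the account. The sketch below is illustrative rather than from the article; the account and resource group names are placeholders, and `--capabilities` replaces the entire capability list, so any capabilities already set must be repeated:

```azurecli
# Hedged example: enable Server Side Retry on an API for MongoDB account.
# "mymongoaccount" and "myResourceGroup" are placeholder names.
az cosmosdb update \
  --name mymongoaccount \
  --resource-group myResourceGroup \
  --capabilities EnableMongo DisableRateLimitingResponses
```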
### Migrate your Azure Cosmos DB API for MongoDB account to v4.0 to save on query/storage costs and utilize new features
Learn more about [Kubernetes - Azure Arc - Arc-enabled K8s agent version upgrade
Please be advised that your media account is about to hit its quota limits. Review the current usage of Assets, Content Key Policies, and Stream Policies for the media account. To avoid any disruption of service, request quota limit increases for the entities that are close to hitting the quota limit. You can request increases by opening a ticket and adding the relevant details to it. Don't create additional Azure Media accounts in an attempt to obtain higher limits.
-Learn more about [Media Service - AccountQuotaLimit (Increase Media Services quotas or limits to ensure continuity of service.)](../media-services/latest/limits-quotas-constraints-reference.md).
+Learn more about [Media Service - AccountQuotaLimit (Increase Media Services quotas or limits to ensure continuity of service.)](/media-services/latest/limits-quotas-constraints-reference).
## Networking
Learn more about [Application gateway - AppGwLog4JCVEPatchNotification (Azure WA
### Additional protection to mitigate Log4j2 vulnerability (CVE-2021-44228)
-To mitigate the impact of Log4j2 vulnerability, we recommend these steps:
+To mitigate the impact of Log4j2 vulnerability, we recommend these steps:
-1) Upgrade Log4j2 to version 2.15.0 on your backend servers. If upgrade isn't possible, follow the system property guidance link below.
+1) Upgrade Log4j2 to version 2.15.0 on your backend servers. If upgrade isn't possible, follow the system property guidance link below.
2) Take advantage of WAF Core rule sets (CRS) by upgrading to the WAF SKU.

Learn more about [Application gateway - AppGwLog4JCVEGenericNotification (Additional protection to mitigate Log4j2 vulnerability (CVE-2021-44228))](https://aka.ms/log4jcve).
aks Azure Disk Volume https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-disk-volume.md
Title: Create a static volume for pods in Azure Kubernetes Service (AKS)
description: Learn how to manually create a volume with Azure disks for use with a pod in Azure Kubernetes Service (AKS) Previously updated : 03/09/2019 Last updated : 03/29/2019 #Customer intent: As a developer, I want to learn how to manually create and attach storage to a specific pod in AKS.
Create a *pvc-azuredisk.yaml* file with a *PersistentVolumeClaim* that uses the
```yaml
apiVersion: v1
+kind: PersistentVolumeClaim
metadata:
  name: pvc-azuredisk
spec:
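# (Hypothetical continuation for illustration only; the digest truncates the
# rest of this hunk. A static claim typically pins itself to the pre-created
# persistent volume; the volume name and size below are placeholders:)
#   accessModes:
#     - ReadWriteOnce
#   storageClassName: ""
#   volumeName: pv-azuredisk
#   resources:
#     requests:
#       storage: 20Gi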
aks Scale Down Mode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/scale-down-mode.md
Title: Use Scale-down Mode for your Azure Kubernetes Service (AKS) cluster (preview)
+ Title: Use Scale-down Mode for your Azure Kubernetes Service (AKS) cluster
description: Learn how to use Scale-down Mode in Azure Kubernetes Service (AKS).
-# Use Scale-down Mode to delete/deallocate nodes in Azure Kubernetes Service (AKS) (preview)
+# Use Scale-down Mode to delete/deallocate nodes in Azure Kubernetes Service (AKS)
By default, scale-up operations performed manually or by the cluster autoscaler require the allocation and provisioning of new nodes, and scale-down operations delete nodes. Scale-down Mode allows you to decide whether you would like to delete or deallocate the nodes in your Azure Kubernetes Service (AKS) cluster upon scaling down.
-When an Azure VM is in the `Stopped` (deallocated) state, you will not be charged for the VM compute resources. However, you will still need to pay for any OS and data storage disks attached to the VM. This also means that the container images will be preserved on those nodes. For more information, see [States and billing of Azure Virtual Machines][state-billing-azure-vm]. This behavior allows for faster operation speeds, as your deployment leverages cached images. Scale-down Mode allows you to no longer have to pre-provision nodes and pre-pull container images, saving you compute cost.
-
+When an Azure VM is in the `Stopped` (deallocated) state, you will not be charged for the VM compute resources. However, you'll still need to pay for any OS and data storage disks attached to the VM. This also means that the container images will be preserved on those nodes. For more information, see [States and billing of Azure Virtual Machines][state-billing-azure-vm]. This behavior allows for faster operation speeds, as your deployment uses cached images. Scale-down Mode removes the need to pre-provision nodes and pre-pull container images, saving you compute cost.
## Before you begin

> [!WARNING]
> In order to preserve any deallocated VMs, you must set Scale-down Mode to Deallocate. That includes VMs that have been deallocated using IaaS APIs (Virtual Machine Scale Set APIs). Setting Scale-down Mode to Delete will remove any deallocated VMs.
-This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli] or [using the Azure portal][aks-quickstart-portal].
+This article assumes that you have an existing AKS cluster and the latest version of the Azure CLI installed. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli] or [using the Azure portal][aks-quickstart-portal].
### Limitations

-- [Ephemeral OS][ephemeral-os] disks are not supported. Be sure to specify managed OS disks via `--node-osdisk-type Managed` when creating a cluster or node pool.
-- [Spot node pools][spot-node-pool] are not supported.
-
-### Install aks-preview CLI extension
-
-You also need the *aks-preview* Azure CLI extension version 0.5.30 or later. Install the *aks-preview* Azure CLI extension by using the [az extension add][az-extension-add] command. Or install any available updates by using the [az extension update][az-extension-update] command.
+- [Ephemeral OS][ephemeral-os] disks aren't supported. Be sure to specify managed OS disks via `--node-osdisk-type Managed` when creating a cluster or node pool.
-```azurecli-interactive
-# Install the aks-preview extension
-az extension add --name aks-preview
-
-# Update the extension to make sure you have the latest version installed
-az extension update --name aks-preview
-```
-
-### Register the `AKS-ScaleDownModePreview` preview feature
-
-To use the feature, you must also enable the `AKS-ScaleDownModePreview` feature flag on your subscription.
-
-Register the `AKS-ScaleDownModePreview` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example:
-
-```azurecli-interactive
-az feature register --namespace "Microsoft.ContainerService" --name "AKS-ScaleDownModePreview"
-```
-
-It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature list][az-feature-list] command:
-
-```azurecli-interactive
-az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/AKS-ScaleDownModePreview')].{Name:name,State:properties.state}"
-```
-
-When ready, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
-
-```azurecli-interactive
-az provider register --namespace Microsoft.ContainerService
-```
+> [!NOTE]
+> Previously, while Scale-down Mode was in preview, [spot node pools][spot-node-pool] were unsupported. Now that Scale-down Mode is Generally Available, this limitation no longer applies.
## Using Scale-down Mode to deallocate nodes on scale-down
In this example, we create a new node pool with 20 nodes and specify that upon s
az aks nodepool add --node-count 20 --scale-down-mode Deallocate --node-osdisk-type Managed --max-pods 10 --name nodepool2 --cluster-name myAKSCluster --resource-group myResourceGroup
```
-By scaling the node pool and changing the node count to 5, we will deallocate 15 nodes.
+By scaling the node pool and changing the node count to 5, we'll deallocate 15 nodes.
```azurecli-interactive
az aks nodepool scale --node-count 5 --name nodepool2 --cluster-name myAKSCluster --resource-group myResourceGroup
az aks nodepool update --scale-down-mode Delete --name nodepool2 --cluster-name
## Using Scale-down Mode to delete nodes on scale-down
-The default behavior of AKS without using Scale-down Mode is to delete your nodes when you scale-down your cluster. Using Scale-down Mode, this can be explicitly achieved by setting `--scale-down-mode Delete`.
+The default behavior of AKS without using Scale-down Mode is to delete your nodes when you scale down your cluster. With Scale-down Mode, this behavior can be explicitly achieved by setting `--scale-down-mode Delete`.
In this example, we create a new node pool and specify that our nodes will be deleted upon scale-down via `--scale-down-mode Delete`. Scaling operations will be handled via the cluster autoscaler.
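As an illustrative sketch of that flow (the pool name `nodepool3` is a placeholder, the cluster and resource group names mirror the earlier examples, and `scaleDownMode` as the queryable node pool property is an assumption about the resource shape):

```azurecli-interactive
# Create a node pool whose nodes are deleted, not deallocated, on scale-down.
az aks nodepool add \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name nodepool3 \
  --node-count 3 \
  --scale-down-mode Delete

# Check which scale-down mode the pool is using.
az aks nodepool show \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name nodepool3 \
  --query scaleDownMode -o tsv
```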
api-management Api Management Advanced Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-advanced-policies.md
The `choose` policy applies enclosed policy statements based on the outcome of e
</when>
<otherwise>
    <!-- one or more policy statements to be applied if none of the above conditions are true -->
-</otherwise>
+ </otherwise>
</choose>
```
This policy can be used in the following policy [sections](./api-management-howt
- **Policy sections:** inbound, outbound, backend
- **Policy scopes:** all scopes
api-management Api Management Howto Configure Notifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-configure-notifications.md
Title: Configure notifications and email templates
-description: Learn how to configure notifications and email templates in Azure API Management.
+description: Learn how to configure notifications and email templates for events in Azure API Management.
Previously updated : 01/10/2020 Last updated : 03/28/2022
-# How to configure notifications and email templates in Azure API Management
+# How to configure notifications and notification templates in Azure API Management
-API Management provides the ability to configure notifications for specific events, and to configure the email templates that are used to communicate with the administrators and developers of an API Management instance. This article shows how to configure notifications for the available events, and provides an overview of configuring the email templates used for these events.
+API Management provides the ability to configure email notifications for specific events, and to configure the email templates that are used to communicate with the administrators and developers of an API Management instance. This article shows how to configure notifications for the available events, and provides an overview of configuring the email templates used for these events.
## Prerequisites
-If you do not have an API Management service instance, complete the following quickstart: [Create an Azure API Management instance](get-started-create-service-instance.md).
+If you don't have an API Management service instance, complete the following quickstart: [Create an Azure API Management instance](get-started-create-service-instance.md).
[!INCLUDE [premium-dev-standard-basic.md](../../includes/api-management-availability-premium-dev-standard-basic.md)]
-## <a name="publisher-notifications"> </a>Configure notifications
-1. Select your **API MANAGEMENT** instance.
-2. Click **Notifications** to view the available notifications.
+## <a name="publisher-notifications"> </a>Configure notifications in the portal
- ![Publisher notifications][api-management-publisher-notifications]
+1. In the left navigation of your API Management instance, select **Notifications** to view the available notifications.
The following list of events can be configured for notifications.
- - **Subscription requests (requiring approval)** - The specified email recipients and users will receive email notifications about subscription requests for API products requiring approval.
- - **New subscriptions** - The specified email recipients and users will receive email notifications about new API product subscriptions.
- - **Application gallery requests** - The specified email recipients and users will receive email notifications when new applications are submitted to the application gallery.
+ - **Subscription requests (requiring approval)** - The specified email recipients and users will receive email notifications about subscription requests for products requiring approval.
+ - **New subscriptions** - The specified email recipients and users will receive email notifications about new product subscriptions.
+ - **Application gallery requests** (deprecated) - The specified email recipients and users will receive email notifications when new applications are submitted to the application gallery on the legacy developer portal.
- **BCC** - The specified email recipients and users will receive email blind carbon copies of all emails sent to developers.
- - **New issue or comment** - The specified email recipients and users will receive email notifications when a new issue or comment is submitted on the developer portal.
+ - **New issue or comment** (deprecated) - The specified email recipients and users will receive email notifications when a new issue or comment is submitted on the legacy developer portal.
- **Close account message** - The specified email recipients and users will receive email notifications when an account is closed.
- - **Approaching subscription quota limit** - The following email recipients and users will receive email notifications when subscription usage gets close to usage quota.
+ - **Approaching subscription quota limit** - The specified email recipients and users will receive email notifications when subscription usage gets close to usage quota.
> [!NOTE]
- > Notifications are triggered by the [quota by subscription](api-management-access-restriction-policies.md#SetUsageQuota) policy only. [Quota by key](api-management-access-restriction-policies.md#SetUsageQuotaByKey) policy doesn't generate notifications.
+ > Notifications are triggered by the [quota by subscription](api-management-access-restriction-policies.md#SetUsageQuota) policy only. The [quota by key](api-management-access-restriction-policies.md#SetUsageQuotaByKey) policy doesn't generate notifications.
- For each event, you can specify email recipients using the email address text box or you can select users from a list.
+1. Select a notification, and specify one or more email addresses to be notified:
+ * To add the administrator email address, select **+ Add admin**.
+ * To add another email address, select **+ Add email**, enter an email address, and select **Add**.
+ * Continue adding email addresses as needed.
-3. To specify the email addresses to be notified, enter them in the email address text box. If you have multiple email addresses, separate them using commas.
-
- ![Notification recipients][api-management-email-addresses]
-
-4. Press **Add**.
+ :::image type="content" source="media/api-management-howto-configure-notifications/api-management-email-addresses.png" alt-text="Screenshot showing how to add notification recipients in the portal":::
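Recipients can also be managed outside the portal. As a rough sketch (not covered in this article), the following `az rest` call adds a recipient through the API Management REST API; the subscription ID, resource group, service name, and the assumed notification name `RequestPublisherNotificationMessage` are placeholders to adapt to your instance:

```azurecli
SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000"
RESOURCE_GROUP="my-resource-group"
APIM_NAME="my-apim-instance"

# Add recipient@contoso.com to the subscription request notification.
az rest --method put \
  --uri "https://management.azure.com/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.ApiManagement/service/$APIM_NAME/notifications/RequestPublisherNotificationMessage/recipientEmails/recipient@contoso.com?api-version=2021-08-01"
```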
## <a name="email-templates"> </a>Configure notification templates
-API Management provides notification templates for the email messages that are sent in the course of administering and using the service. The following email templates are provided.
+API Management provides notification templates for the administrative email messages that are sent automatically to developers when they access and use the service. The following notification templates are provided:
-- Application gallery submission approved
+- Application gallery submission approved (deprecated)
- Developer farewell letter
- Developer quota limit approaching notification
+- Developer welcome letter
+- Email change notification
- Invite user
-- New comment added to an issue
-- New issue received
+- New comment added to an issue (deprecated)
+- New developer account confirmation
+- New issue received (deprecated)
- New subscription activated
-- Subscription renewed confirmation
-- Subscription request declines
+- Password change confirmation
+- Subscription request declined
- Subscription request received
-These templates can be modified as desired.
+Each email template has a subject in plain text, and a body definition in HTML format. Each item can be customized as desired.
-To view and configure the email templates for your API Management instance, click **Notifications templates**.
+To view and configure a notification template in the portal:
-![Email templates][api-management-email-templates]
+1. In the left menu, select **Notification templates**.
+ :::image type="content" source="media/api-management-howto-configure-notifications/api-management-email-templates.png" alt-text="Screenshot of notification templates in the portal":::
-Each email template has a subject in plain text, and a body definition in HTML format. Each item can be customized as desired.
+1. Select a notification template, and configure the template using the editor.
+
+ :::image type="content" source="media/api-management-howto-configure-notifications/api-management-email-template.png" alt-text="Screenshot of notification template editor in the portal":::
+
+ * The **Parameters** list contains parameters that, when inserted into the subject or body, are replaced by the designated value when the email is sent.
+ * To insert a parameter, place the cursor where you wish the parameter to go, and select the parameter name.
+
+1. To save the changes to the email template, select **Save**, or to cancel the changes select **Discard**.
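For illustration, a template body is ordinary HTML containing parameter placeholders. The following minimal sketch assumes parameters such as `$DevFirstName`, `$DevLastName`, and `$OrganizationName` are available for the template you're editing; check the **Parameters** list for the actual set:

```html
<p style="font-size:12pt;font-family:'Segoe UI'">Dear $DevFirstName $DevLastName,</p>
<p style="font-size:12pt;font-family:'Segoe UI'">
  Thank you for joining the $OrganizationName API program!
</p>
```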
+
+## Configure email settings
+
+You can modify the general email settings for notifications that are sent from your API Management instance. You can change the administrator email address, the name of the organization sending the notifications, and the originating email address.
+
+To modify email settings:
-![Email template editor][api-management-email-template]
+1. In the left menu, select **Notification templates**.
+1. Select **E-mail settings**.
+1. On the **General email settings** page, enter values for:
+ * **Administrator email** - the email address to receive all system notifications and other configured notifications
+ * **Organization name** - the name of your organization for use in the developer portal and notifications
+ * **Originating email address** - the value of the `From` header for notifications from the API Management instance. API Management sends notifications on behalf of this originating address.
-The **Parameters** list contains a list of parameters, which when inserted into the subject or body, will be replaced the designated value when the email is sent. To insert a parameter, place the cursor where you wish the parameter to go, and click the arrow to the left of the parameter name.
+ :::image type="content" source="media/api-management-howto-configure-notifications/configure-email-settings.png" alt-text="Screenshot of API Management email settings in the portal":::
+1. Select **Save**.
-To save the changes to the email template, click **Save**, or to cancel the changes click **Discard**.
+## Next steps
-[api-management-management-console]: ./media/api-management-howto-configure-notifications/api-management-management-console.png
-[api-management-publisher-notifications]: ./media/api-management-howto-configure-notifications/api-management-publisher-notifications.png
-[api-management-email-addresses]: ./media/api-management-howto-configure-notifications/api-management-email-addresses.png
-[api-management-email-templates]: ./media/api-management-howto-configure-notifications/api-management-email-templates.png
-[api-management-email-templates-list]: ./media/api-management-howto-configure-notifications/api-management-email-templates-list.png
-[api-management-email-template]: ./media/api-management-howto-configure-notifications/api-management-email-template.png
-[configure publisher notifications]: #publisher-notifications
-[configure email templates]: #email-templates
-[how to create and use groups]: api-management-howto-create-groups.md
-[how to associate groups with developers]: api-management-howto-create-groups.md#associate-group-developer
-[get started with azure api management]: get-started-create-service-instance.md
-[create an api management service instance]: get-started-create-service-instance.md
+* [Overview of the developer portal](api-management-howto-developer-portal.md).
+* [How to create and use groups to manage developer accounts](api-management-howto-create-groups.md)
api-management Devops Api Development Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/devops-api-development-templates.md
API developers face challenges when working with Resource Manager templates:
A tool called [Creator](https://github.com/Azure/azure-api-management-devops-resource-kit/blob/main/src/README.md#creator) in the resource kit can help automate the creation of API templates based on an Open API Specification file. Additionally, developers can supply API Management policies for an API in XML format.
-* For customers who are already using API Management, another challenge is to extract existing configurations into Resource Manager templates. For those customers, a tool called [Extractor](https://github.com/Azure/azure-api-management-devops-resource-kit/blob/main/src/APIM_ARMTemplate/README.md#creator) in the resource kit can help generate templates by extracting configurations from their API Management instances.
+* For customers who are already using API Management, another challenge is to extract existing configurations into Resource Manager templates. For those customers, a tool called [Extractor](https://github.com/Azure/azure-api-management-devops-resource-kit/blob/main/src/README.md#Extractor) in the resource kit can help generate templates by extracting configurations from their API Management instances.
## Workflow
app-service Deploy Zip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-zip.md
Publish-AzWebapp -ResourceGroupName <group-name> -Name <app-name> -ArchivePath <
The following example uses the cURL tool to deploy a .war, .jar, or .ear file. Replace the placeholders `<username>`, `<file-path>`, `<app-name>`, and `<package-type>` (`war`, `jar`, or `ear`, accordingly). When prompted by cURL, type in the [deployment password](deploy-configure-credentials.md).

```bash
-curl -X POST -u <username> --data-binary @"<file-path>" https://<app-name>.scm.azurewebsites.net/api/publish&type=<package-type>
+curl -X POST -u <username> --data-binary @"<file-path>" https://<app-name>.scm.azurewebsites.net/api/publish?type=<package-type>
```

[!INCLUDE [deploying to network secured sites](../../includes/app-service-deploy-network-secured-sites.md)]
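For example, a concrete invocation might look like the following; the user name, file path, and app name are placeholders, not values from this article:

```bash
# Deploy a local WAR file to a hypothetical app named "my-java-app".
curl -X POST -u deployuser --data-binary @"./target/app.war" \
  "https://my-java-app.scm.azurewebsites.net/api/publish?type=war"
```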
app-service App Service App Service Environment Control Inbound Traffic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/app-service-app-service-environment-control-inbound-traffic.md
ms.assetid: 4cc82439-8791-48a4-9485-de6d8e1d1a08 Previously updated : 03/15/2022 Last updated : 03/29/2022
# How To Control Inbound Traffic to an App Service Environment > [!IMPORTANT]
-> This article is about App Service Environment v1. App Service Environment v1 will be retired on 31 August 2024. There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](migration-alternatives.md) to migrate to the new version.
+> This article is about App Service Environment v1. [App Service Environment v1 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-v1-and-v2-retirement-announcement/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](migration-alternatives.md) to migrate to the new version.
> ## Overview
app-service App Service App Service Environment Create Ilb Ase Resourcemanager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/app-service-app-service-environment-create-ilb-ase-resourcemanager.md
ms.assetid: 091decb6-b0de-42a1-9f2f-c18d9b2e67df Previously updated : 03/15/2022 Last updated : 03/29/2022
# How To Create an ILB ASEv1 Using Azure Resource Manager Templates > [!IMPORTANT]
-> This article is about App Service Environment v1. App Service Environment v1 will be retired on 31 August 2024. There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](migration-alternatives.md) to migrate to the new version.
+> This article is about App Service Environment v1. [App Service Environment v1 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-v1-and-v2-retirement-announcement/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](migration-alternatives.md) to migrate to the new version.
> ## Overview
app-service App Service App Service Environment Intro https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/app-service-app-service-environment-intro.md
ms.assetid: 78e6d4f5-da46-4eb5-a632-b5fdc17d2394 Previously updated : 03/15/2022 Last updated : 03/29/2022
# Introduction to App Service Environment v1 > [!IMPORTANT]
-> This article is about App Service Environment v1. App Service Environment v1 will be retired on 31 August 2024. There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](migration-alternatives.md) to migrate to the new version.
+> This article is about App Service Environment v1. [App Service Environment v1 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-v1-and-v2-retirement-announcement/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](migration-alternatives.md) to migrate to the new version.
> ## Overview
app-service App Service App Service Environment Layered Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/app-service-app-service-environment-layered-security.md
ms.assetid: 73ce0213-bd3e-4876-b1ed-5ecad4ad5601 Previously updated : 03/15/2022 Last updated : 03/29/2022
# Implementing a Layered Security Architecture with App Service Environments > [!IMPORTANT]
-> This article is about App Service Environment v1. App Service Environment v1 will be retired on 31 August 2024. There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](migration-alternatives.md) to migrate to the new version.
+> This article is about App Service Environment v1. [App Service Environment v1 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-v1-and-v2-retirement-announcement/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](migration-alternatives.md) to migrate to the new version.
> Since App Service Environments provide an isolated runtime environment deployed into a virtual network, developers can create a layered security architecture providing differing levels of network access for each physical application tier.
app-service App Service App Service Environment Network Architecture Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/app-service-app-service-environment-network-architecture-overview.md
ms.assetid: 13d03a37-1fe2-4e3e-9d57-46dfb330ba52 Previously updated : 03/15/2022 Last updated : 03/29/2022
# Network Architecture Overview of App Service Environments > [!IMPORTANT]
-> This article is about App Service Environment v1. App Service Environment v1 will be retired on 31 August 2024. There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](migration-alternatives.md) to migrate to the new version.
+> This article is about App Service Environment v1. [App Service Environment v1 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-v1-and-v2-retirement-announcement/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](migration-alternatives.md) to migrate to the new version.
> App Service Environments are always created within a subnet of a [virtual network][virtualnetwork] - apps running in an App Service Environment can communicate with private endpoints located within the same virtual network topology. Since customers may lock down parts of their virtual network infrastructure, it is important to understand the types of network communication flows that occur with an App Service Environment.
app-service App Service App Service Environment Network Configuration Expressroute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/app-service-app-service-environment-network-configuration-expressroute.md
ms.assetid: 34b49178-2595-4d32-9b41-110c96dde6bf Previously updated : 03/15/2022 Last updated : 03/29/2022
# Network configuration details for App Service Environment for Power Apps with Azure ExpressRoute > [!IMPORTANT]
-> This article is about App Service Environment v1. App Service Environment v1 will be retired on 31 August 2024. There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](migration-alternatives.md) to migrate to the new version.
+> This article is about App Service Environment v1. [App Service Environment v1 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-v1-and-v2-retirement-announcement/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](migration-alternatives.md) to migrate to the new version.
> Customers can connect an [Azure ExpressRoute][ExpressRoute] circuit to their virtual network infrastructure to extend their on-premises network to Azure. App Service Environment is created in a subnet of the [virtual network][virtualnetwork] infrastructure. Apps that run on App Service Environment establish secure connections to back-end resources that are accessible only over the ExpressRoute connection.
app-service App Service App Service Environment Securely Connecting To Backend Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/app-service-app-service-environment-securely-connecting-to-backend-resources.md
ms.assetid: f82eb283-a6e7-4923-a00b-4b4ccf7c4b5b Previously updated : 03/15/2022 Last updated : 03/29/2022
# Connect securely to back end resources from an App Service environment > [!IMPORTANT]
-> This article is about App Service Environment v1. App Service Environment v1 will be retired on 31 August 2024. There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](migration-alternatives.md) to migrate to the new version.
+> This article is about App Service Environment v1. [App Service Environment v1 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-v1-and-v2-retirement-announcement/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](migration-alternatives.md) to migrate to the new version.
> Since an App Service Environment is always created in **either** an Azure Resource Manager virtual network, **or** a classic deployment model [virtual network][virtualnetwork], outbound connections from an App Service Environment to other backend resources can flow exclusively over the virtual network. As of June 2016, ASEs can also be deployed into virtual networks that use either public address ranges or RFC1918 address spaces (private addresses).
app-service App Service Environment Auto Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/app-service-environment-auto-scale.md
ms.assetid: c23af2d8-d370-4b1f-9b3e-8782321ddccb Previously updated : 03/15/2022 Last updated : 03/29/2022
# Autoscaling and App Service Environment v1 > [!IMPORTANT]
-> This article is about App Service Environment v1. App Service Environment v1 will be retired on 31 August 2024. There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](migration-alternatives.md) to migrate to the new version.
+> This article is about App Service Environment v1. [App Service Environment v1 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-v1-and-v2-retirement-announcement/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](migration-alternatives.md) to migrate to the new version.
> Azure App Service environments support *autoscaling*. You can autoscale individual worker pools based on metrics or schedule.
app-service App Service Web Configure An App Service Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/app-service-web-configure-an-app-service-environment.md
ms.assetid: b5a1da49-4cab-460d-b5d2-edd086ec32f4 Previously updated : 03/15/2022 Last updated : 03/29/2022
# Configuring an App Service Environment v1 > [!IMPORTANT]
-> This article is about App Service Environment v1. App Service Environment v1 will be retired on 31 August 2024. There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](migration-alternatives.md) to migrate to the new version.
+> This article is about App Service Environment v1. [App Service Environment v1 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-v1-and-v2-retirement-announcement/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](migration-alternatives.md) to migrate to the new version.
> ## Overview
app-service App Service Web Scale A Web App In An App Service Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/app-service-web-scale-a-web-app-in-an-app-service-environment.md
ms.assetid: 78eb1e49-4fcd-49e7-b3c7-f1906f0f22e3 Previously updated : 03/15/2022 Last updated : 03/29/2022
# Scaling apps in an App Service Environment v1 > [!IMPORTANT]
-> This article is about App Service Environment v1. App Service Environment v1 will be retired on 31 August 2024. There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](migration-alternatives.md) to migrate to the new version.
+> This article is about App Service Environment v1. [App Service Environment v1 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-v1-and-v2-retirement-announcement/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](migration-alternatives.md) to migrate to the new version.
> In the Azure App Service there are normally three things you can scale:
app-service Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/certificates.md
Title: Certificates bindings
description: Explain numerous topics related to certificates on an App Service Environment v2. Learn how certificate bindings work on the single-tenanted apps in an ASE. Previously updated : 03/15/2022 Last updated : 03/29/2022 # Certificates and the App Service Environment v2 > [!IMPORTANT]
-> This article is about App Service Environment v2 which is used with Isolated App Service plans. App Service Environment v2 will be retired on 31 August 2024. There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](migration-alternatives.md) to migrate to the new version.
+> This article is about App Service Environment v2 which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-v1-and-v2-retirement-announcement/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](migration-alternatives.md) to migrate to the new version.
> The App Service Environment(ASE) is a deployment of the Azure App Service that runs within your Azure Virtual Network(VNet). It can be deployed with an internet accessible application endpoint or an application endpoint that is in your VNet. If you deploy the ASE with an internet accessible endpoint, that deployment is called an External ASE. If you deploy the ASE with an endpoint in your VNet, that deployment is called an ILB ASE. You can learn more about the ILB ASE from the [Create and use an ILB ASE](./create-ilb-ase.md) document.
app-service Configure Network Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/configure-network-settings.md
+
+ Title: Configure App Service Environment v3 network settings
+description: Configure network settings that apply to the entire Azure App Service environment. Learn how to do it with Azure Resource Manager templates.
+
+keywords: ASE, ASEv3, ftp, remote debug
++ Last updated : 03/29/2022+++
+# Network configuration settings
+
+Because App Service Environments are isolated to the individual customer, there are certain configuration settings that can be applied exclusively to App Service Environments. This article documents the various specific network customizations that are available for App Service Environment v3.
+
+> [!NOTE]
+> This article is about App Service Environment v3, which is used with isolated v2 App Service plans.
+
+If you don't have an App Service Environment, see [How to Create an App Service Environment v3](./creation.md).
+
+App Service Environment network customizations are stored in a subresource of the *hostingEnvironments* Azure Resource Manager entity called *networking*.
+
+The following abbreviated Resource Manager template snippet shows the **networking** resource:
+
+```json
+"resources": [
+{
+ "apiVersion": "2021-03-01",
+ "type": "Microsoft.Web/hostingEnvironments",
+ "name": "[parameter('aseName')]",
+ "location": ...,
+ "properties": {
+ "internalLoadBalancingMode": ...,
+ etc...
+ },
+ "resources": [
+ {
+ "type": "configurations",
+ "apiVersion": "2021-03-01",
+ "name": "networking",
+ "dependsOn": [
+ "[resourceId('Microsoft.Web/hostingEnvironments', parameters('aseName'))]"
+ ],
+ "properties": {
+ "remoteDebugEnabled": true,
+ "ftpEnabled": true,
+ "allowNewPrivateEndpointConnections": true
+ }
+ }
+ ]
+}
+]
+```
+
+The **networking** resource can be included in a Resource Manager template to update the App Service Environment.
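As a sketch of applying such a template, you could use an Azure CLI deployment command like the following; the resource group and template file name are hypothetical:

```azurecli
# Deploy a template containing the networking configurations resource.
az deployment group create \
  --resource-group my-resource-group \
  --template-file ase-networking.json \
  --parameters aseName=my-ase
```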
+
+## Configure using Azure Resource Explorer
+Alternatively, you can update the App Service Environment by using [Azure Resource Explorer](https://resources.azure.com).
+
+1. In Resource Explorer, go to the node for the App Service Environment (**subscriptions** > **{your Subscription}** > **resourceGroups** > **{your Resource Group}** > **providers** > **Microsoft.Web** > **hostingEnvironments** > **App Service Environment name** > **configurations** > **networking**).
+2. Select **Read/Write** in the upper toolbar to allow interactive editing in Resource Explorer.
+3. Select the blue **Edit** button to make the Resource Manager template editable.
+4. Modify one or more of the settings `ftpEnabled`, `remoteDebugEnabled`, or `allowNewPrivateEndpointConnections` that you want to change.
+5. Select the green **PUT** button that's located at the top of the right pane to commit the change to the App Service Environment.
+6. You may need to select the green **GET** button again to see the changed values.
+
+The change takes effect within a minute.
+
+## Allow new private endpoint connections
+
+For apps hosted on both ILB and External App Service Environment, you can allow creation of private endpoints. The setting is disabled by default. If private endpoints were created while the setting was enabled, they won't be deleted and will continue to work. The setting only prevents new private endpoints from being created.
+
+The following Azure CLI commands enable `allowNewPrivateEndpointConnections` and then read back the setting:
+
+```azurecli
+ASE_NAME="[myAseName]"
+RESOURCE_GROUP_NAME="[myResourceGroup]"
+az appservice ase update --name $ASE_NAME -g $RESOURCE_GROUP_NAME --allow-new-private-endpoint-connection true
+
+az appservice ase list-addresses --name $ASE_NAME -g $RESOURCE_GROUP_NAME --query properties.allowNewPrivateEndpointConnections
+```
+
+The setting is also available for configuration through the Azure portal in the App Service Environment configuration:
++
+## FTP access
+
+The `ftpEnabled` setting allows you to allow or deny FTP connections at the App Service Environment level. Individual apps will still need to configure FTP access. If you enable FTP at the App Service Environment level, you may want to [enforce FTPS](../deploy-ftp.md?tabs=cli#enforce-ftps) at the individual app level. The setting is disabled by default.
+
+If you want to enable FTP access, you can run the following Azure CLI command:
+
+```azurecli
+ASE_NAME="[myAseName]"
+RESOURCE_GROUP_NAME="[myResourceGroup]"
+az resource update --name $ASE_NAME/configurations/networking --set properties.ftpEnabled=true -g $RESOURCE_GROUP_NAME --resource-type "Microsoft.Web/hostingEnvironments/networkingConfiguration"
+
+az resource show --name $ASE_NAME/configurations/networking -g $RESOURCE_GROUP_NAME --resource-type "Microsoft.Web/hostingEnvironments/networkingConfiguration" --query properties.ftpEnabled
+```
+
+In addition to enabling access, you need to ensure that you have [configured DNS if you are using ILB App Service Environment](./networking.md#dns-configuration-for-ftp-access).
+
+## Remote debugging access
+
+Remote debugging is disabled by default at the App Service Environment level. You can enable network-level access for all apps using this configuration. You'll still have to [configure remote debugging](../configure-common.md?tabs=cli#configure-general-settings) at the individual app level.
+
+Run the following Azure CLI command to enable remote debugging access:
+
+```azurecli
+ASE_NAME="[myAseName]"
+RESOURCE_GROUP_NAME="[myResourceGroup]"
+az resource update --name $ASE_NAME/configurations/networking --set properties.remoteDebugEnabled=true -g $RESOURCE_GROUP_NAME --resource-type "Microsoft.Web/hostingEnvironments/networkingConfiguration"
+
+az resource show --name $ASE_NAME/configurations/networking -g $RESOURCE_GROUP_NAME --resource-type "Microsoft.Web/hostingEnvironments/networkingConfiguration" --query properties.remoteDebugEnabled
+```
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Create an App Service Environment from a template](create-from-template.md)
+
+> [!div class="nextstepaction"]
+> [Deploy your app to Azure App Service using FTP](../deploy-ftp.md)
app-service Create External Ase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/create-external-ase.md
Title: Create an external ASE
description: Learn how to create an App Service environment with an app in it, or create a standalone (empty) ASE. Previously updated : 03/15/2022 Last updated : 03/29/2022 # Create an External App Service Environment > [!IMPORTANT]
-> This article is about App Service Environment v2 which is used with Isolated App Service plans. App Service Environment v2 will be retired on 31 August 2024. There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](migration-alternatives.md) to migrate to the new version.
+> This article is about App Service Environment v2 which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-v1-and-v2-retirement-announcement/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](migration-alternatives.md) to migrate to the new version.
> Azure App Service Environment is a deployment of Azure App Service into a subnet in an Azure virtual network (VNet). There are two ways to deploy an App Service Environment (ASE):
app-service Create Ilb Ase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/create-ilb-ase.md
description: Learn how to create an App Service environment with an internal loa
ms.assetid: 0f4c1fa4-e344-46e7-8d24-a25e247ae138 Previously updated : 03/15/2022 Last updated : 03/29/2022
# Create and use an Internal Load Balancer App Service Environment > [!IMPORTANT]
-> This article is about App Service Environment v2 which is used with Isolated App Service plans. App Service Environment v2 will be retired on 31 August 2024. There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](migration-alternatives.md) to migrate to the new version.
+> This article is about App Service Environment v2 which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-v1-and-v2-retirement-announcement/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](migration-alternatives.md) to migrate to the new version.
> The Azure App Service Environment is a deployment of Azure App Service into a subnet in an Azure virtual network (VNet). There are two ways to deploy an App Service Environment (ASE):
app-service Firewall Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/firewall-integration.md
description: Learn how to integrate with Azure Firewall to secure outbound traff
ms.assetid: 955a4d84-94ca-418d-aa79-b57a5eb8cb85 Previously updated : 03/15/2022 Last updated : 03/29/2022
# Locking down an App Service Environment > [!IMPORTANT]
-> This article is about App Service Environment v2 which is used with Isolated App Service plans. App Service Environment v2 will be retired on 31 August 2024. There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](migration-alternatives.md) to migrate to the new version.
+> This article is about App Service Environment v2 which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-v1-and-v2-retirement-announcement/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](migration-alternatives.md) to migrate to the new version.
> The App Service Environment (ASE) has many external dependencies that it requires access to in order to function properly. The ASE lives in the customer Azure Virtual Network. Customers must allow the ASE dependency traffic, which is a problem for customers that want to lock down all egress from their virtual network.
app-service Forced Tunnel Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/forced-tunnel-support.md
description: Learn how to enable your App Service Environment to work when outbo
ms.assetid: 384cf393-5c63-4ffb-9eb2-bfd990bc7af1 Previously updated : 03/15/2022 Last updated : 03/29/2022
# Configure your App Service Environment with forced tunneling > [!IMPORTANT]
-> This article is about App Service Environment v2 which is used with Isolated App Service plans. App Service Environment v2 will be retired on 31 August 2024. There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](migration-alternatives.md) to migrate to the new version.
+> This article is about App Service Environment v2 which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-v1-and-v2-retirement-announcement/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](migration-alternatives.md) to migrate to the new version.
> The App Service Environment (ASE) is a deployment of Azure App Service in a customer's Azure Virtual Network. Many customers configure their Azure virtual networks to be extensions of their on-premises networks with VPNs or Azure ExpressRoute connections. Forced tunneling is when you redirect internet bound traffic to your VPN or a virtual appliance instead. Virtual appliances are often used to inspect and audit outbound network traffic.
app-service Intro https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/intro.md
Title: Introduction to ASEv2
description: Learn how Azure App Service Environments v2 help you scale, secure, and optimize your apps in a fully isolated and dedicated environment. Previously updated : 03/15/2022 Last updated : 03/29/2022 # Introduction to App Service Environment v2 > [!IMPORTANT]
-> This article is about App Service Environment v2 which is used with Isolated App Service plans. App Service Environment v2 will be retired on 31 August 2024. There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](migration-alternatives.md) to migrate to the new version.
+> This article is about App Service Environment v2 which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-v1-and-v2-retirement-announcement/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](migration-alternatives.md) to migrate to the new version.
> ## Overview
app-service Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/migrate.md
Title: Migrate to App Service Environment v3 by using the migration feature
description: Overview of the migration feature for migration to App Service Environment v3 Previously updated : 3/14/2022 Last updated : 3/29/2022
With the current version of the migration feature, your new App Service Environm
Note that App Service Environment v3 doesn't currently support the following features that you may be using with your current App Service Environment. If you require any of these features, don't migrate until they're supported.

- Sending SMTP traffic. You can still have email triggered alerts but your app can't send outbound traffic on port 25.
-- Deploying your apps with FTP.
-- Using remote debug with your apps.
- Monitoring your traffic with Network Watcher or NSG Flow.
- Configuring an IP-based TLS/SSL binding with your apps.
There's no cost to migrate your App Service Environment. You'll stop being charg
- **What happens to my old App Service Environment?** If you decide to migrate an App Service Environment, the old environment gets shut down and deleted and all of your apps are migrated to a new environment. Your old environment will no longer be accessible. - **What will happen to my App Service Environment v1/v2 resources after 31 August 2024?**
- After 31 August 2024, if you haven't migrated to App Service Environment v3, your App Service Environment v1/v2s and the apps deployed in them will no longer be available. App Service Environment v1/v2 is hosted on App Service scale units running on [Cloud Services (classic)](../../cloud-services/cloud-services-choose-me.md) architecture that will be [retired on 31 August 2024](https://azure.microsoft.com/updates/cloud-services-retirement-announcement/). Because of this, App Service Environment v1/v2 will no longer be available after that date. Migrate to App Service Environment v3 to keep your apps running or save or back up any resources or data that you need to maintain.
+ After 31 August 2024, if you haven't migrated to App Service Environment v3, your App Service Environment v1/v2s and the apps deployed in them will no longer be available. App Service Environment v1/v2 is hosted on App Service scale units running on [Cloud Services (classic)](../../cloud-services/cloud-services-choose-me.md) architecture that will be [retired on 31 August 2024](https://azure.microsoft.com/updates/cloud-services-retirement-announcement/). Because of this, [App Service Environment v1/v2 will no longer be available after that date](https://azure.microsoft.com/updates/app-service-environment-v1-and-v2-retirement-announcement/). Migrate to App Service Environment v3 to keep your apps running or save or back up any resources or data that you need to maintain.
## Next steps
app-service Migration Alternatives https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/migration-alternatives.md
Once your migration and any testing with your new environment is complete, delet
- **What properties of my App Service Environment will change?** You'll now be on App Service Environment v3 so be sure to review the [features and feature differences](overview.md#feature-differences) compared to previous versions. For ILB App Service Environment, you'll keep the same ILB IP address. For internet facing App Service Environment, the public IP address and the outbound IP address will change. Note for internet facing App Service Environment, previously there was a single IP for both inbound and outbound. For App Service Environment v3, they're separate. For more information, see [App Service Environment v3 networking](networking.md#addresses). - **What will happen to my App Service Environment v1/v2 resources after 31 August 2024?**
- After 31 August 2024, if you haven't migrated to App Service Environment v3, your App Service Environment v1/v2s and the apps deployed in them will no longer be available. App Service Environment v1/v2 is hosted on App Service scale units running on [Cloud Services (classic)](../../cloud-services/cloud-services-choose-me.md) architecture that will be [retired on 31 August 2024](https://azure.microsoft.com/updates/cloud-services-retirement-announcement/). Because of this, App Service Environment v1/v2 will no longer be available after that date. Migrate to App Service Environment v3 to keep your apps running or save or back up any resources or data that you need to maintain.
+ After 31 August 2024, if you haven't migrated to App Service Environment v3, your App Service Environment v1/v2s and the apps deployed in them will no longer be available. App Service Environment v1/v2 is hosted on App Service scale units running on [Cloud Services (classic)](../../cloud-services/cloud-services-choose-me.md) architecture that will be [retired on 31 August 2024](https://azure.microsoft.com/updates/cloud-services-retirement-announcement/). Because of this, [App Service Environment v1/v2 will no longer be available after that date](https://azure.microsoft.com/updates/app-service-environment-v1-and-v2-retirement-announcement/). Migrate to App Service Environment v3 to keep your apps running or save or back up any resources or data that you need to maintain.
## Next steps
app-service Network Info https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/network-info.md
Title: Networking considerations
description: Learn about App Service Environment network traffic, and how to set network security groups and user-defined routes. Previously updated : 03/15/2022 Last updated : 03/29/2022 # Networking considerations for App Service Environment > [!IMPORTANT]
-> This article is about App Service Environment v2 which is used with Isolated App Service plans. App Service Environment v2 will be retired on 31 August 2024. There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](migration-alternatives.md) to migrate to the new version.
+> This article is about App Service Environment v2 which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-v1-and-v2-retirement-announcement/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](migration-alternatives.md) to migrate to the new version.
> [App Service Environment][Intro] is a deployment of Azure App Service into a subnet in your Azure virtual network. There are two deployment types for an App Service Environment:
app-service Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/networking.md
For your app to receive traffic, ensure that inbound network security group (NSG
It's a good idea to configure the following inbound NSG rule:
-|Port|Source|Destination|
-|-|-|-|
-|80,443|Virtual network|App Service Environment subnet range|
+|Source / Destination Port(s)|Direction|Source|Destination|Purpose|
+|-|-|-|-|-|
+|* / 80,443|Inbound|VirtualNetwork|App Service Environment subnet range|Allow app traffic and internal health ping traffic|
The minimal requirement for App Service Environment to be operational is:
-|Port|Source|Destination|
-|-|-|-|
-|80|Azure Load Balancer|App Service Environment subnet range|
+|Source / Destination Port(s)|Direction|Source|Destination|Purpose|
+|-|-|-|-|-|
+|* / 80|Inbound|AzureLoadBalancer|App Service Environment subnet range|Allow internal health ping traffic|
If you use the minimum required rule, you might need one or more rules for your application traffic. If you're using any of the deployment or debugging options, you must also allow this traffic to the App Service Environment subnet. The source of these rules can be the virtual network, or one or more specific client IPs or IP ranges. The destination is always the App Service Environment subnet range.
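For illustration, an application-traffic rule like the first table above could be created with the Azure CLI as in the following sketch; the resource group, NSG name, priority, and subnet range are placeholders, not values from this article:

```azurecli
# Allow HTTP/HTTPS app traffic from the virtual network to a hypothetical
# App Service Environment subnet range of 10.0.1.0/24.
az network nsg rule create \
  --resource-group my-resource-group \
  --nsg-name my-ase-nsg \
  --name AllowAppTraffic \
  --priority 100 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes VirtualNetwork \
  --destination-address-prefixes 10.0.1.0/24 \
  --destination-port-ranges 80 443
```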
+The internal health ping traffic on port 80 is isolated between the load balancer and the internal servers. No outside traffic can reach the health ping endpoint.
-The normal app access ports are as follows:
+The normal app access ports inbound are as follows:
|Use|Ports| |-|-|
To configure DNS in Azure DNS private zones:
In addition to the default domain provided when an app is created, you can also add a custom domain to your app. You can set a custom domain name without any validation on your apps. If you're using custom domains, you need to ensure they have DNS records configured. You can follow the preceding guidance to configure DNS zones and records for a custom domain name (simply replace the default domain name with the custom domain name). The custom domain name works for app requests, but doesn't work for the `scm` site. The `scm` site is only available at *&lt;appname&gt;.scm.&lt;asename&gt;.appserviceenvironment.net*.
+### DNS configuration for FTP access
+
+For FTP access to Internal Load balancer (ILB) App Service Environment v3 specifically, you need to ensure DNS is configured. Configure an Azure DNS private zone or equivalent custom DNS with the following settings:
+
+1. Create an Azure DNS private zone named `ftp.appserviceenvironment.net`.
+1. Create an A record in that zone that points `<App Service Environment-name>` to the inbound IP address.
+
+In addition to setting up DNS, you also need to enable it in the [App Service Environment configuration](./configure-network-settings.md#ftp-access) as well as at the [app level](../deploy-ftp.md?tabs=cli#enforce-ftps).
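A minimal Azure CLI sketch of that DNS setup follows; the resource group, virtual network, record name, and inbound IP address are placeholders to replace with your own values:

```azurecli
# Create the private zone and link it to the virtual network that hosts the ASE.
az network private-dns zone create \
  --resource-group my-resource-group \
  --name ftp.appserviceenvironment.net

az network private-dns link vnet create \
  --resource-group my-resource-group \
  --zone-name ftp.appserviceenvironment.net \
  --name my-ase-dns-link \
  --virtual-network my-vnet \
  --registration-enabled false

# Point the App Service Environment name at the inbound IP address.
az network private-dns record-set a add-record \
  --resource-group my-resource-group \
  --zone-name ftp.appserviceenvironment.net \
  --record-set-name my-ase \
  --ipv4-address 10.0.1.11
```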
+ ### DNS configuration from your App Service Environment The apps in your App Service Environment will use the DNS that your virtual network is configured with. If you want some apps to use a different DNS server, you can manually set it on a per app basis, with the app settings `WEBSITE_DNS_SERVER` and `WEBSITE_DNS_ALT_SERVER`. `WEBSITE_DNS_ALT_SERVER` configures the secondary DNS server. The secondary DNS server is only used when there is no response from the primary DNS server.
app-service Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/overview.md
Title: App Service Environment overview
description: This article discusses the Azure App Service Environment feature of Azure App Service. Previously updated : 01/26/2022 Last updated : 03/29/2022
App Service Environment v3 differs from earlier versions in the following ways:
A few features that were available in earlier versions of App Service Environment aren't available in App Service Environment v3. For example, you can no longer do the following:

- Send SMTP traffic. You can still have email triggered alerts but your app can't send outbound traffic on port 25.
-- Deploy your apps by using FTP.
-- Use remote debugging with your apps.
- Monitor your traffic with Network Watcher or network security group (NSG) flow logs.
- Configure an IP-based Transport Layer Security (TLS) or Secure Sockets Layer (SSL) binding with your apps.
- Configure a custom domain suffix.
App Service Environment v3 is available in the following regions:
## App Service Environment v2

App Service Environment has three versions: App Service Environment v1, App Service Environment v2, and App Service Environment v3. The information in this article is based on App Service Environment v3. To learn more about App Service Environment v2, see [App Service Environment v2 introduction](./intro.md).
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Whitepaper on Using App Service Environment v3 in Compliance-Oriented Industries](https://azure.microsoft.com/resources/using-app-service-environment-v3-in-compliance-oriented-industries/)
app-service Using An Ase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/using-an-ase.md
description: Learn how to create, publish, and scale apps in an App Service Envi
ms.assetid: a22450c4-9b8b-41d4-9568-c4646f4cf66b Previously updated : 03/15/2022 Last updated : 03/29/2022 # Manage an App Service Environment > [!IMPORTANT]
-> This article is about App Service Environment v2 which is used with Isolated App Service plans. App Service Environment v2 will be retired on 31 August 2024. There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](migration-alternatives.md) to migrate to the new version.
+> This article is about App Service Environment v2 which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-v1-and-v2-retirement-announcement/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](migration-alternatives.md) to migrate to the new version.
> An App Service Environment (ASE) is a deployment of Azure App Service into a subnet in a customer's Azure Virtual Network instance. An ASE consists of:
app-service Zone Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/zone-redundancy.md
Title: Availability Zone support for App Service Environment v2
description: Learn how to deploy your App Service Environments so that your apps are zone redundant. Previously updated : 03/15/2022 Last updated : 03/29/2022 # Availability Zone support for App Service Environment v2 > [!IMPORTANT]
-> This article is about App Service Environment v2 which is used with Isolated App Service plans. App Service Environment v2 will be retired on 31 August 2024. There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](migration-alternatives.md) to migrate to the new version.
+> This article is about App Service Environment v2 which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-v1-and-v2-retirement-announcement/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](migration-alternatives.md) to migrate to the new version.
> App Service Environment v2 (ASE) can be deployed into Availability Zones (AZ). Customers can deploy an internal load balancer (ILB) ASE into a specific AZ within an Azure region. If you pin your ILB ASE to a specific AZ, the resources used by the ILB ASE will either be pinned to the specified AZ or deployed in a zone-redundant manner.
automation Automation Security Guidelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-security-guidelines.md
Review the Azure Policy recommendations for Azure Automation and act as appropri
## Next steps * To learn how to use Azure role-based access control (Azure RBAC), see [Manage role permissions and security in Azure Automation](/azure/automation/automation-role-based-access-control).
-* For information on how Azure protects your privacy and secures your data, see [Azure Automation data security](./automation-managing-data.md).
+* For information on how Azure protects your privacy and secures your data, see [Azure Automation data security](/azure/automation/automation-managing-data).
* To learn about configuring the Automation account to use encryption, see [Encryption of secure assets in Azure Automation](/azure/automation/automation-secure-asset-encryption).
availability-zones Az Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/availability-zones/az-region.md
In the Product Catalog, always-available services are listed as "non-regional" s
| **Products** | **Resiliency** | | | |
+| [Azure HPC Cache](../hpc-cache/hpc-cache-overview.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
| [Azure IoT Hub Device Provisioning Service](../iot-dps/about-iot-dps.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) | | Azure Red Hat OpenShift | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) ![An icon that signifies this service is zonal](media/icon-zonal.svg) | | [Azure Managed Instance for Apache Cassandra](../managed-instance-apache-cassandr) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
In the Product Catalog, always-available services are listed as "non-regional" s
### ![An icon that signifies this service is non-regional.](media/icon-always-available.svg) Non-regional services (always-available services)
-| **Products** | **Resiliency** |
+| **Products** | **Resiliency** |
| | | | Azure Active Directory | ![An icon that signifies this service is always available.](media/icon-always-available.svg) | | Azure Advanced Threat Protection | ![An icon that signifies this service is always available.](media/icon-always-available.svg) |
azure-arc Concept Log Analytics Extension Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/concept-log-analytics-extension-deployment.md
Title: Deploy Log Analytics agent on Arc-enabled servers description: This article reviews the different methods to deploy the Log Analytics agent on Windows and Linux-based machines registered with Azure Arc-enabled servers in your local datacenter or other cloud environment. Previously updated : 10/22/2021 Last updated : 3/18/2022
Azure Monitor supports multiple methods to install the Log Analytics agent and c
The Log Analytics agent is required if you want to:
-* Monitor the operating system, any workloads running on the machine or server using [VM insights](../../azure-monitor/vm/vminsights-overview.md). Further analyze and alert using other features of [Azure Monitor](../../azure-monitor/overview.md).
+* Monitor the operating system and any workloads running on the machine or server using [VM insights](../../azure-monitor/vm/vminsights-overview.md).
+* Analyze and alert using [Azure Monitor](../../azure-monitor/overview.md).
* Perform security monitoring in Azure by using [Microsoft Defender for Cloud](../../defender-for-cloud/defender-for-cloud-introduction.md) or [Microsoft Sentinel](../../sentinel/overview.md). * Manage operating system updates by using [Azure Automation Update Management](../../automation/update-management/overview.md). * Collect inventory and track changes by using [Azure Automation Change Tracking and Inventory](../../automation/change-tracking/overview.md). * Run Automation runbooks directly on the machine and against resources in the environment by using an [Azure Automation Hybrid Runbook Worker](../../automation/automation-hybrid-runbook-worker.md).
-This article reviews the deployment methods for the Log Analytics agent VM extension, across multiple production physical servers or virtual machines in your environment, to help you determine which works best for your organization. If you are interested in the new Azure Monitor agent and want to see a detailed comparison, then review the [Azure Monitor agents overview](../../azure-monitor//agents/agents-overview.md) article.
+This article reviews the deployment methods for the Log Analytics agent VM extension, across multiple production physical servers or virtual machines in your environment, to help you determine which works best for your organization. If you are interested in the new Azure Monitor agent and want to see a detailed comparison, see [Azure Monitor agents overview](../../azure-monitor/agents/agents-overview.md).
## Installation options
-You can use different methods to install the VM extension using one method or a combination. This section describes each one for you to consider.
+Review the following methods to install the VM extension; you can use a single method or a combination of them, depending on what works best for your scenario.
-### Using Arc-enabled servers
+### Use Azure Arc-enabled servers
This method supports managing the installation, management, and removal of VM extensions from the [Azure portal](manage-vm-extensions-portal.md), using [PowerShell](manage-vm-extensions-powershell.md), the [Azure CLI](manage-vm-extensions-cli.md), or with an [Azure Resource Manager (ARM) template](manage-vm-extensions-template.md).
This method supports managing the installation, management, and removal of VM ex
#### Disadvantages
-* Limited automation when using an Azure Resource Manager template, otherwise it is time consuming.
+* Limited automation when using an Azure Resource Manager template.
* Can only target a single Arc-enabled server, not multiple instances. * Only supports specifying a single workspace to report to. Requires using PowerShell or the Azure CLI to configure the Log Analytics Windows agent VM extension to report to up to four workspaces. * Doesn't support deploying the Dependency agent from the portal. You can only use PowerShell, the Azure CLI, or an ARM template.
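To make the CLI route concrete, the sketch below wraps the documented `az connectedmachine extension create` command in Python. The machine, resource group, location, and workspace values are placeholders; on Windows machines you'd use the `MicrosoftMonitoringAgent` extension type instead:

```python
import json
import subprocess

# Placeholder workspace values; replace with your own.
settings = {"workspaceId": "<workspace-id>"}
protected = {"workspaceKey": "<workspace-key>"}

# Deploy the Log Analytics (OMS) agent extension to one Arc-enabled Linux server.
subprocess.run(
    [
        "az", "connectedmachine", "extension", "create",
        "--machine-name", "my-arc-server",
        "--resource-group", "my-rg",
        "--location", "eastus",
        "--name", "OmsAgentForLinux",
        "--publisher", "Microsoft.EnterpriseCloud.Monitoring",
        "--type", "OmsAgentForLinux",
        "--settings", json.dumps(settings),
        "--protected-settings", json.dumps(protected),
    ],
    check=True,
)
```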
-### Using Azure Policy
+### Use Azure Policy
You can use Azure Policy to deploy the Log Analytics agent VM extension at-scale to machines in your environment, and maintain configuration compliance. This is accomplished by using either the **Configure Log Analytics extension on Azure Arc enabled Linux servers** / **Configure Log Analytics extension on Azure Arc enabled Windows servers** policy definition, or the **Enable Azure Monitor for VMs** policy initiative.
Azure Policy includes several prebuilt definitions related to Azure Monitor. For
#### Disadvantages
-* The **Configure Log Analytics extension on Azure Arc enabled** *operating system* **servers** policy only installs the Log Analytics VM extension and configures the agent to report to a specified Log Analytics workspace. If you are interested in VM insights to monitor the operating system performance, and map running processes and dependencies on other resources, then you should apply the policy initiative **Enable Azure Monitor for VMs**. It installs and configures both the Log Analytics VM extension and the Dependency agent VM extension, which are required.
+* The **Configure Log Analytics extension on Azure Arc enabled** *operating system* **servers** policy only installs the Log Analytics VM extension and configures the agent to report to a specified Log Analytics workspace. If you want VM insights to monitor the operating system performance, and map running processes and dependencies on other resources, apply the policy initiative **Enable Azure Monitor for VMs**. It installs and configures both the Log Analytics VM extension and the Dependency agent VM extension, which are required.
* Standard compliance evaluation cycle is once every 24 hours. An evaluation scan for a subscription or a resource group can be started with Azure CLI, Azure PowerShell, a call to the REST API, or by using the Azure Policy Compliance Scan GitHub Action. For more information, see [Evaluation triggers](../../governance/policy/how-to/get-compliance-data.md#evaluation-triggers).
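For example, a small Python wrapper around the CLI call that starts an on-demand evaluation scan for one resource group (the resource group name is a placeholder):

```python
import subprocess

# Kick off an on-demand Azure Policy compliance scan for one resource group,
# instead of waiting for the standard 24-hour evaluation cycle.
subprocess.run(
    ["az", "policy", "state", "trigger-scan", "--resource-group", "my-rg"],
    check=True,
)
```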
-### Using Azure Automation
+### Use Azure Automation
-The process automation operating environment in Azure Automation and its support for PowerShell and Python runbooks can enable you to automate the deployment of the Log Analytics agent VM extension at-scale to machines in your environment.
+The process automation operating environment in Azure Automation and its support for PowerShell and Python runbooks can help you automate the deployment of the Log Analytics agent VM extension at scale to machines in your environment.
#### Advantages
The process automation operating environment in Azure Automation and its support
* Requires an Azure Automation account. * Experience authoring and managing runbooks in Azure Automation.
-* Creating a runbook based on PowerShell or Python depending on the target operating system.
+* Must create a runbook based on PowerShell or Python, depending on the target operating system.
## Next steps
-* To manage operating system updates using Azure Automation Update Management, review [Enable from an Automation account](../../automation/update-management/enable-from-automation-account.md) and then follow the steps to enable machines reporting to the workspace.
+* To manage operating system updates using Azure Automation Update Management, see [Enable from an Automation account](../../automation/update-management/enable-from-automation-account.md) and then follow the steps to enable machines reporting to the workspace.
-* To track changes using Azure Automation Change Tracking and Inventory, review [Enable from an Automation account](../../automation/change-tracking/enable-from-automation-account.md) and then follow the steps to enable machines reporting to the workspace.
+* To track changes using Azure Automation Change Tracking and Inventory, see [Enable from an Automation account](../../automation/change-tracking/enable-from-automation-account.md) and then follow the steps to enable machines reporting to the workspace.
-* You can use the user Hybrid Runbook Worker feature of Azure Automation to run runbooks directly on servers or machines registered with Arc-enabled servers. See the [Deploy Hybrid Runbook Worker VM extension](../../automation/extension-based-hybrid-runbook-worker-install.md) article.
+* Use the Hybrid Runbook Worker feature of Azure Automation to run runbooks directly on servers or machines registered with Arc-enabled servers. See the [Deploy Hybrid Runbook Worker VM extension](../../automation/extension-based-hybrid-runbook-worker-install.md) article.
* To start collecting security-related events with Microsoft Sentinel, see [onboard to Microsoft Sentinel](scenario-onboard-azure-sentinel.md), or to collect with Microsoft Defender for Cloud, see [onboard to Microsoft Defender for Cloud](../../security-center/quickstart-onboard-machines.md).
-* See the VM insights [Monitor performance](../../azure-monitor/vm/vminsights-performance.md) and [Map dependencies](../../azure-monitor/vm/vminsights-maps.md) articles to see how well your machine is performing and view discovered application components.
+* Read the VM insights [Monitor performance](../../azure-monitor/vm/vminsights-performance.md) and [Map dependencies](../../azure-monitor/vm/vminsights-maps.md) articles to see how well your machine is performing and view discovered application components.
azure-arc Manage Howto Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-howto-migrate.md
Title: How to migrate Azure Arc-enabled servers across regions description: Learn how to migrate an Azure Arc-enabled server from one region to another. Previously updated : 07/16/2021 Last updated : 3/29/2022 # How to migrate Azure Arc-enabled servers across regions
-There are scenarios in which you'd want to move your existing Azure Arc-enabled server from one region to another. For example, you realized the machine was registered in the wrong region, to improve manageability, or to move for governance reasons.
+There are scenarios in which you'll want to move your existing Azure Arc-enabled server from one region to another. For example, you might want to move regions to improve manageability, for governance reasons, or because you realized the machine was originally registered in the wrong region.
To migrate an Azure Arc-enabled server from one Azure region to another, you have to uninstall the VM extensions, delete the resource in Azure, and re-create it in the other region. Before you perform these steps, you should audit the machine to verify which VM extensions are installed.
To migrate an Azure Arc-enabled server from one Azure region to another, you hav
## Move machine to other region > [!NOTE]
-> During this operation, it results in downtime during the migration.
+> Performing this operation will result in downtime during the migration.
-1. Remove VM extensions installed from the [Azure portal](manage-vm-extensions-portal.md#remove-extensions), using the [Azure CLI](manage-vm-extensions-cli.md#remove-extensions), or using [Azure PowerShell](manage-vm-extensions-powershell.md#remove-extensions).
+1. Remove any VM extensions that are installed on the machine. You can do this by using the [Azure portal](manage-vm-extensions-portal.md#remove-extensions), [Azure CLI](manage-vm-extensions-cli.md#remove-extensions), or [Azure PowerShell](manage-vm-extensions-powershell.md#remove-extensions).
-2. Use the **azcmagent** tool with the [Disconnect](manage-agent.md#disconnect) parameter to disconnect the machine from Azure Arc and delete the machine resource from Azure. Disconnecting the machine from Azure Arc-enabled servers does not remove the Connected Machine agent, and you do not need to remove the agent as part of this process. You can run this manually while logged on interactively, or automate using the same service principal you used to onboard multiple agents, or with a Microsoft identity platform [access token](../../active-directory/develop/access-tokens.md). If you did not use a service principal to register the machine with Azure Arc-enabled servers, see the following [article](onboard-service-principal.md#create-a-service-principal-for-onboarding-at-scale) to create a service principal.
+2. Use the **azcmagent** tool with the [Disconnect](manage-agent.md#disconnect) parameter to disconnect the machine from Azure Arc and delete the machine resource from Azure. You can run this manually while logged on interactively, with a Microsoft identity platform [access token](../../active-directory/develop/access-tokens.md), or with the service principal you used for onboarding (or with a [new service principal that you create](onboard-service-principal.md#create-a-service-principal-for-onboarding-at-scale)).
-3. Re-register the Connected Machine agent with Azure Arc-enabled servers in the other region. Run the `azcmagent` tool with the [Connect](manage-agent.md#connect) parameter complete this step.
+ Disconnecting the machine from Azure Arc-enabled servers does not remove the Connected Machine agent, and you don't need to remove the agent as part of this process.
-4. Redeploy the VM extensions that were originally deployed to the machine from Azure Arc-enabled servers. If you deployed the Azure Monitor for VMs (insights) agent or the Log Analytics agent using an Azure Policy definition, the agents are redeployed after the next [evaluation cycle](../../governance/policy/how-to/get-compliance-data.md#evaluation-triggers).
+3. Run the `azcmagent` tool with the [Connect](manage-agent.md#connect) parameter to re-register the Connected Machine agent with Azure Arc-enabled servers in the other region.
+
+4. Redeploy the VM extensions that were originally deployed to the machine from Azure Arc-enabled servers.
+
+ If you deployed the Azure Monitor for VMs (insights) agent or the Log Analytics agent using an Azure Policy definition, the agents are redeployed after the next [evaluation cycle](../../governance/policy/how-to/get-compliance-data.md#evaluation-triggers).
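Steps 2 and 3 come down to two `azcmagent` commands run on the machine itself. A minimal sketch, assuming service principal credentials and placeholder values throughout:

```python
import subprocess

sp_id, sp_secret = "<service-principal-id>", "<service-principal-secret>"

# Step 2: disconnect the machine from Azure Arc and delete the Azure resource.
subprocess.run(
    ["azcmagent", "disconnect",
     "--service-principal-id", sp_id,
     "--service-principal-secret", sp_secret],
    check=True,
)

# Step 3: re-register the machine with Azure Arc in the target region.
subprocess.run(
    ["azcmagent", "connect",
     "--service-principal-id", sp_id,
     "--service-principal-secret", sp_secret,
     "--tenant-id", "<tenant-id>",
     "--subscription-id", "<subscription-id>",
     "--resource-group", "my-rg",
     "--location", "westus2"],
    check=True,
)
```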
## Next steps
azure-functions Functions Bindings Warmup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-warmup.md
def main(warmupContext: func.Context) -> None:
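For reference, the Python signature above pairs with a *function.json* binding along these lines (a sketch; the binding `name` must match the function's parameter name):

```json
{
  "bindings": [
    {
      "type": "warmupTrigger",
      "direction": "in",
      "name": "warmupContext"
    }
  ]
}
```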
::: zone pivot="programming-language-csharp" ## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use the `WarmupTriggerAttribute` to define the function. C# script instead uses a *function.json* configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use the `WarmupTrigger` attribute to define the function. C# script instead uses a *function.json* configuration file.
# [In-process](#tab/in-process)
-Use the `WarmupTriggerAttribute` to define the function. This attribute has no parameters.
+Use the `WarmupTrigger` attribute to define the function. This attribute has no parameters.
# [Isolated process](#tab/isolated-process)
-Use the `WarmupTriggerAttribute` to define the function. This attribute has no parameters.
+Use the `WarmupTrigger` attribute to define the function. This attribute has no parameters.
# [C# script](#tab/csharp-script)
The following considerations apply to using a warmup function in C#:
# [In-process](#tab/in-process) -- Your function must be named `warmup` (case-insensitive) using the `FunctionNameAttribute`.
+- Your function must be named `warmup` (case-insensitive) using the `FunctionName` attribute.
- A return value attribute isn't required. - You must be using version `3.0.5` of the `Microsoft.Azure.WebJobs.Extensions` package, or a later version. - You can pass a `WarmupContext` instance to the function. # [Isolated process](#tab/isolated-process) -- Your function must be named `warmup` (case-insensitive) using the `FunctionNameAttribute`.
+- Your function must be named `warmup` (case-insensitive) using the `FunctionName` attribute.
- A return value attribute isn't required. - You can pass an object instance to the function.
azure-functions Functions Dotnet Dependency Injection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-dotnet-dependency-injection.md
Before you can use dependency injection, you must install the following NuGet pa
- [Microsoft.NET.Sdk.Functions](https://www.nuget.org/packages/Microsoft.NET.Sdk.Functions/) package version 1.0.28 or later -- [Microsoft.Extensions.DependencyInjection](https://www.nuget.org/packages/Microsoft.Extensions.DependencyInjection/) (currently, only version 3.x and earlier supported)
+- [Microsoft.Extensions.DependencyInjection](https://www.nuget.org/packages/Microsoft.Extensions.DependencyInjection/) (currently, only version 2.x or later is supported)
## Register services
azure-functions Functions Reference Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-python.md
First, the function.json file must be updated to include a `route` in the HTTP t
"get", "post" ],
- "route": "/{*route}"
+ "route": "{*route}"
}, { "type": "http",
Update the Python code file `init.py`, depending on the interface used by your f
```python app=fastapi.FastAPI()
-@app.get("/hello/{name}")
+@app.get("hello/{name}")
async def get_name( name: str,): return {
def main(req: func.HttpRequest, context: func.Context) -> func.HttpResponse:
```python app=Flask("Test")
-@app.route("/hello/<name>", methods=['GET'])
+@app.route("hello/<name>", methods=['GET'])
def hello(name: str): return f"hello {name}"
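Assembled into a single file, a runnable sketch of the Flask case might look like the following; it assumes the `azure-functions` package's `WsgiMiddleware`. Note that Flask's own router requires rules to start with a leading slash, so the decorator keeps one even though the *function.json* `route` value omits it:

```python
import azure.functions as func
from flask import Flask

app = Flask("Test")

# Flask route rules must begin with "/", unlike the function.json route value.
@app.route("/hello/<name>", methods=["GET"])
def hello(name: str):
    return f"hello {name}"

def main(req: func.HttpRequest, context: func.Context) -> func.HttpResponse:
    # Forward the wildcard-routed request to the Flask app over WSGI.
    return func.WsgiMiddleware(app.wsgi_app).handle(req, context)
```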
azure-government Compare Azure Government Global Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compare-azure-government-global-azure.md
In general, service availability in Azure Government implies that all correspond
## AI + machine learning
-This section outlines variations and considerations when using **Azure Bot Service**, **Azure Machine Learning**, and **Cognitive Services** in the Azure Government environment. For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=machine-learning-service,bot-service,cognitive-services&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true).
+This section outlines variations and considerations when using **Azure Bot Service**, **Azure Machine Learning**, and **Cognitive Services** in the Azure Government environment. For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=machine-learning-service,bot-service,cognitive-services&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true).
### [Azure Bot Service](/azure/bot-service/)
The following Azure Cost Management + Billing **features are not currently avail
This section outlines variations and considerations when using Media services in the Azure Government environment. For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=cdn,media-services&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true).
-### [Media Services](../media-services/index.yml)
+### [Media Services](/media-services/)
-For Azure Media Services v3 feature variations in Azure Government, see [Azure Media Services v3 clouds and regions availability](../media-services/latest/azure-clouds-regions.md#us-government-cloud).
+For Azure Media Services v3 feature variations in Azure Government, see [Azure Media Services v3 clouds and regions availability](/media-services/latest/azure-clouds-regions#us-government-cloud).
## Migration
azure-government Azure Services In Fedramp Auditscope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compliance/azure-services-in-fedramp-auditscope.md
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Machine Learning](../../machine-learning/index.yml) | &#x2705; | &#x2705; | | [Managed Applications](../../azure-resource-manager/managed-applications/index.yml) | &#x2705; | &#x2705; | | **Service** | **FedRAMP High** | **DoD IL2** |
-| [Media Services](../../media-services/index.yml) | &#x2705; | &#x2705; |
+| [Media Services](/media-services/) | &#x2705; | &#x2705; |
| [Microsoft 365 Defender](/microsoft-365/security/defender/) (formerly Microsoft Threat Protection) | &#x2705; | &#x2705; | | [Microsoft Azure Attestation](../../attestation/index.yml)| &#x2705; | &#x2705; | | [Microsoft Azure Marketplace portal](https://azuremarketplace.microsoft.com/marketplace/)| &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure Active Directory (Free and Basic)](../../active-directory/fundamentals/active-directory-whatis.md#what-are-the-azure-ad-licenses) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Azure Active Directory (Premium P1 + P2)](../../active-directory/fundamentals/active-directory-whatis.md#what-are-the-azure-ad-licenses) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Azure Active Directory Domain Services](../../active-directory-domain-services/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| [Azure AD access reviews](../../active-directory/governance/access-reviews-overview.md) | | | | | &#x2705; |
| [Azure AD Multi-Factor Authentication](../../active-directory/authentication/concept-mfa-howitworks.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| [Azure AD Privileged Identity Management](../../active-directory/privileged-identity-management/index.yml) | | | | | &#x2705; |
| [Azure API for FHIR](../../healthcare-apis/azure-api-for-fhir/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Azure Arc-enabled Kubernetes](../../azure-arc/kubernetes/index.yml) | &#x2705; | &#x2705; | | | | | **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Logic Apps](../../logic-apps/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Machine Learning](../../machine-learning/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Managed Applications](../../azure-resource-manager/managed-applications/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| [Media Services](../../media-services/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Media Services](/media-services/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Microsoft 365 Defender](/microsoft-365/security/defender/) (formerly Microsoft Threat Protection) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Microsoft Azure portal](../../azure-portal/index.yml) | &#x2705; | &#x2705; | &#x2705;| &#x2705; | &#x2705; | | **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
azure-government Documentation Government Csp List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-csp-list.md
Below you can find a list of all the authorized Cloud Solution Providers (CSPs),
|[Doublehorn, LLC](https://doublehorn.com/)| |[DXC Technology Services LLC](https://www.dxc.technology/services)| |[DXL Enterprises, Inc.](https://mahwahnjcoc.wliinc31.com/Supply-Chain-Management/DXL-Enterprises,-Inc-1349)|
-|[Dynamics Intelligence Inc.](https://www.dynamicsintelligence.us)|
|[DynTek](https://www.dyntek.com)| |[ECS Federal, LLC](https://ecstech.com/)| |[Edafio Technology Partners](https://edafio.com)|
azure-maps Weather Coverage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/weather-coverage.md
Title: Microsoft Azure Maps Weather services coverage
description: Learn about Microsoft Azure Maps Weather services coverage Previously updated : 01/26/2022 Last updated : 03/28/2022 -
-# Azure Maps Weather services coverage
+# Azure Maps weather services coverage
+
+This article provides coverage information for Azure Maps [Weather services][weather-services].
+
+## Weather information supported
+
+### Infrared satellite tiles
+<!-- Replace with Minimal Description
+Infrared (IR) radiation is electromagnetic radiation that measures an object's infrared emission, returning information about its temperature. Infrared images can indicate cloud heights (Colder cloud-tops mean higher clouds) and types, calculate land and surface water temperatures, and locate ocean surface features.
+ -->
+
+Infrared satellite imagery, showing clouds by their temperature, is returned when `tilesetID` is set to `microsoft.weather.infrared.main` in calls to [Get Map Tile][get-map-tile], and can then be overlaid on the map image.
+
+### Minute forecast
+
+The [Get Minute forecast][get-minute-forecast] service returns minute-by-minute forecasts for the specified location for the next 120 minutes.
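As a quick illustration, here's a hedged sketch that calls the endpoint with Python's `requests`; the coordinates, interval, and key are placeholder values, and the parsed field follows the service's documented `summary` object:

```python
import requests

response = requests.get(
    "https://atlas.microsoft.com/weather/forecast/minute/json",
    params={
        "api-version": "1.0",
        "query": "47.6062,-122.3321",  # latitude,longitude
        "interval": 15,                # forecast bucket size in minutes
        "subscription-key": "<your-azure-maps-key>",
    },
    timeout=30,
)
response.raise_for_status()
print(response.json()["summary"]["briefPhrase"])
```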
+
+### Radar tiles
+<!-- Replace with Minimal Description
+Radar imagery is a depiction of the response returned when microwave radiation is sent into the atmosphere. The pulses of radiation reflect back showing its interactions with any precipitation it encounters. The radar technology visually represents those pulses showing where it's clear, raining, snowing or stormy.
+-->
+
+Radar tiles, showing areas of rain, snow, ice, and mixed conditions, are returned when `tilesetID` is set to `microsoft.weather.radar.main` in calls to [Get Map Tile][get-map-tile], and can then be overlaid on the map image.
+
+### Severe weather alerts
-This article provides coverage information for Azure Maps [Weather services](/rest/api/maps/weather). Azure Maps Weather data services returns details such as radar tiles, current weather conditions, weather forecasts, the weather along a route, air quality, historical weather and tropical storms info.
+The Azure Maps [Severe weather alerts][severe-weather-alerts] service returns severe weather alerts from both official government meteorological agencies and other leading severe weather alert providers. The service can return details such as alert type, category, level, and a detailed description. Severe weather includes conditions like hurricanes, tornadoes, tsunamis, severe thunderstorms, and fires.
-Azure Maps doesn't have the same level of information and accuracy for all countries and regions.
+### Other
-The following table refers to the *Other* column and provides a list containing the weather information you can request from that country/region.
+- **Air quality**. The Air Quality service returns [current][aq-current], [hourly][aq-hourly] or [daily][aq-daily] forecasts that include pollution levels, air quality index values, the dominant pollutant, and a brief statement summarizing risk level and suggested precautions.
+- **Current conditions**. The [Get Current Conditions](/rest/api/maps/weather/get-current-conditions) service returns detailed current weather conditions such as precipitation, temperature and wind for a given coordinate location.
+- **Daily forecast**. The [Get Daily Forecast](/rest/api/maps/weather/get-daily-forecast) service returns detailed weather forecasts such as temperature and wind by day for the next 1, 5, 10, 15, 25, or 45 days for a given coordinate location.
+- **Daily indices**. The [Get Daily Indices](/rest/api/maps/weather/get-daily-indices) service returns index values that provide information that can help in planning activities. For example, a health mobile application can notify users that today is good weather for running or playing golf.
+- **Historical weather**. The Historical Weather service includes Daily Historical [Records][dh-records], [Actuals][dh-actuals] and [Normals][dh-normals] that return climatology data such as past daily record temperatures, precipitation and snowfall at a given coordinate location.
+- **Hourly forecast**. The [Get Hourly Forecast](/rest/api/maps/weather/get-hourly-forecast) service returns detailed weather forecast information by the hour for up to 10 days.
+- **Quarter-day forecast**. The [Get Quarter Day Forecast](/rest/api/maps/weather/get-quarter-day-forecast) service returns a detailed weather forecast by quarter-day for up to 15 days.
+- **Tropical storms**. The Tropical Storm service provides information about [active storms][tropical-storm-active], tropical storm [forecasts][tropical-storm-forecasts] and [locations][tropical-storm-locations], and the ability to [search][tropical-storm-search] for tropical storms by year, basin ID, or government ID.
+- **Weather along route**. The [Get Weather Along Route](/rest/api/maps/weather/get-weather-along-route) service returns hyperlocal (1 kilometer or less), up-to-the-minute weather nowcasts, weather hazard assessments, and notifications along a route described as a sequence of waypoints.
-| Symbol | Meaning |
-|:-:|--|
-| * |Refers to coverage of the following features: Air Quality, Current Conditions, Daily Forecast, Daily Indices, Historical Weather, Hourly Forecast, Quarter-day Forecast, Tropical Storms and Weather Along Route. |
+## Azure Maps Weather coverage tables
+
+> [!NOTE]
+> Azure Maps doesn't have the same level of detail and accuracy for all countries and regions.
## Americas
-| Country/Region | Infrared Satellite Tiles | Minute Forecast, Radar Tiles | Severe Weather Alerts | Other* |
+| Country/Region | Infrared satellite tiles | Minute forecast, Radar tiles | Severe weather alerts | Other* |
||::|:-:|::|::| | Anguilla | ✓ | | | ✓ | | Antarctica | ✓ | | | ✓ |
The following table refers to the *Other* column and provides a list containing
## Asia Pacific
-| Country/Region | Infrared Satellite Tiles | Minute Forecast, Radar Tiles | Severe Weather Alerts | Other* |
-||--|::|:-:|::|::|
+| Country/Region | Infrared satellite tiles | Minute forecast, Radar tiles | Severe weather alerts | Other* |
+|--|::|:-:|::|::|
| Afghanistan | ✓ | | | ✓ | | American Samoa | ✓ | | ✓ | ✓ | | Australia | ✓ | ✓ | ✓ | ✓ |
The following table refers to the *Other* column and provides a list containing
## Europe
-| Country/Region | Infrared Satellite Tiles | Minute Forecast, Radar Tiles | Severe Weather Alerts | Other* |
+| Country/Region | Infrared satellite tiles | Minute forecast, Radar tiles | Severe weather alerts | Other* |
|-|::|:-:|::|::| | Albania | ✓ | | | ✓ | | Andorra | ✓ | | ✓ | ✓ |
The following table refers to the *Other* column and provides a list containing
## Middle East & Africa
-| Country/Region | Infrared Satellite Tiles | Minute Forecast, Radar Tiles | Severe Weather Alerts | Other* |
+| Country/Region | Infrared satellite tiles | Minute forecast, Radar tiles | Severe weather alerts | Other* |
|-|::|:-:|::|::| | Algeria | ✓ | | | ✓ | | Angola | ✓ | | | ✓ |
The following table refers to the *Other* column and provides a list containing
| Yemen | ✓ | | | ✓ | | Zambia | ✓ | | | ✓ | | Zimbabwe | ✓ | | | ✓ |+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Weather services in Azure Maps](weather-services-concepts.md)
+
+> [!div class="nextstepaction"]
+> [Azure Maps weather services frequently asked questions (FAQ)](weather-services-faq.yml)
+
+[weather-services]: /rest/api/maps/weather
+[get-map-tile]: /rest/api/maps/render-v2/get-map-tile
+[get-minute-forecast]: /rest/api/maps/weather/get-minute-forecast
+[severe-weather-alerts]: /rest/api/maps/weather/get-severe-weather-alerts
+
+[aq-current]: /rest/api/maps/weather/get-current-air-quality
+[aq-hourly]: /rest/api/maps/weather/get-air-quality-hourly-forecasts
+[aq-daily]: /rest/api/maps/weather/get-air-quality-daily-forecasts
+
+[current-conditions]: /rest/api/maps/weather/get-current-conditions
+
+[dh-records]: /rest/api/maps/weather/get-dh-records
+[dh-actuals]: /rest/api/maps/weather/get-dh-actuals
+[dh-normals]: /rest/api/maps/weather/get-dh-normals
+
+[tropical-storm-active]: /rest/api/maps/weather/get-tropical-storm-active
+[tropical-storm-forecasts]: /rest/api/maps/weather/get-tropical-storm-forecast
+[tropical-storm-locations]: /rest/api/maps/weather/get-tropical-storm-locations
+[tropical-storm-search]: /rest/api/maps/weather/get-tropical-storm-search
azure-monitor Azure Monitor Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-overview.md
The Azure Monitor agent (AMA) collects monitoring data from the guest operating
Here's an **introductory video** explaining all about this new agent, including a quick demo of how to set things up using the Azure Portal: [ITOps Talk: Azure Monitor Agent](https://www.youtube.com/watch?v=f8bIrFU8tCs) ## Relationship to other agents
-The Azure Monitor agent is meant to replace the following legacy monitoring agents that are currently used by Azure Monitor to collect guest data from virtual machines ([view known gaps](../faq.yml)):
+Eventually, the Azure Monitor agent will replace the following legacy monitoring agents that are currently used by Azure Monitor to collect guest data from virtual machines ([view known gaps](../faq.yml)):
- [Log Analytics agent](./log-analytics-agent.md): Sends data to a Log Analytics workspace and supports VM insights and monitoring solutions.
+- [Telegraf agent](../essentials/collect-custom-metrics-linux-telegraf.md): Sends data to Azure Monitor Metrics (Linux only).
- [Diagnostics extension](./diagnostics-extension-overview.md): Sends data to Azure Monitor Metrics (Windows only), Azure Event Hubs, and Azure Storage.-- [Telegraf agent](../essentials/collect-custom-metrics-linux-telegraf.md): Sends data to Azure Monitor Metrics (Linux only).+
+**Currently**, the Azure Monitor agent consolidates features from the Telegraf agent and Log Analytics agent, with [a few limitations](#current-limitations).
+In the future, it will also consolidate features from the Diagnostics extension.
In addition to consolidating this functionality into a single agent, the Azure Monitor agent provides the following benefits over the existing agents:
In addition to consolidating this functionality into a single agent, the Azure M
- **Improved extension management:** The Azure Monitor agent uses a new method of handling extensibility that's more transparent and controllable than management packs and Linux plug-ins in the current Log Analytics agents. ### Current limitations
-When compared with the existing agents, this new agent doesn't yet have full parity.
+When compared with the legacy agents, this new agent doesn't yet have full parity.
- **Comparison with Log Analytics agents (MMA/OMS):** - Not all Log Analytics solutions are supported yet. [View supported features and services](#supported-services-and-features). - The support for collecting file-based logs or IIS logs is in [private preview](https://aka.ms/amadcr-privatepreviews). -- **Comparison with Azure Diagnostics extensions (WAD/LAD):**
- - No support yet for Event Hubs and Storage accounts as destinations.
- - No support yet for collecting file based logs, IIS logs, ETW events, .NET events and crash dumps.
- ### Changes in data collection The methods for defining data collection for the existing agents are distinctly different from each other. Each method has challenges that are addressed with the Azure Monitor agent.
azure-monitor Alerts Action Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-action-rules.md
You can also define filters to narrow down which specific subset of alerts are a
* **Alert Context (payload)** - the rule will apply only to alerts that contain any of the filter's strings within the [alert context](./alerts-common-schema-definitions.md#alert-context) section of the alert. This section includes fields specific to each alert type. * **Alert rule id** - the rule will apply only to alerts from a specific alert rule. The value should be the full resource ID, for example `/subscriptions/SUB1/resourceGroups/RG1/providers/microsoft.insights/metricalerts/MY-API-LATENCY`.
-You can locate the alert rule ID by opening a specific alert rule in the portal, clicking "Properties", and copying the "Resource ID" value.
-You can also locate it by listing your alert rules from PowerShell or CLI.
+You can locate the alert rule ID by opening a specific alert rule in the portal, clicking "Properties", and copying the "Resource ID" value. You can also locate it by listing your alert rules from PowerShell or CLI.
* **Alert rule name** - the rule will apply only to alerts with this alert rule name. Can also be useful with a "Contains" operator. * **Description** - the rule will apply only to alerts that contain the specified string within the alert rule description field. * **Monitor condition** - the rule will apply only to alerts with the specified monitor condition, either "Fired" or "Resolved".
For example, you can use this filter with "Does not equal" to exclude one or mor
* **Resource group** - the rule will apply only to alerts from the specified resource groups. For example, you can use this filter with "Does not equal" to exclude one or more resource groups when the rule's scope is a subscription. * **Resource type** - the rule will apply only to alerts on resource from the specified resource types, such as virtual machines. You can use "Equals" to match one or more specific resources, or you can use contains to match a resource type and all its child resources.
-For example, use "contains MICROSOFT.SQL/SERVERS" to match both SQL servers and all their child resources, like databases.
+For example, use `resource type contains "MICROSOFT.SQL/SERVERS"` to match both SQL servers and all their child resources, like databases.
* **Severity** - the rule will apply only to alerts with the selected severities. **FILTERS BEHAVIOR** * If you define multiple filters in a rule, all of them apply - there is a logical AND between all filters.
- For example, if you set both `resource type = "Virtual Machines` and `severity = "Sev0`, then the rule will apply only for Sev0 alerts on virtual machines in the scope.
+ For example, if you set both `resource type = "Virtual Machines"` and `severity = "Sev0"`, then the rule will apply only for Sev0 alerts on virtual machines in the scope.
* Each filter may include up to five values, and there is a logical OR between the values. For example, if you set `description contains ["this", "that"]`, then the rule will apply only to alerts whose description contains either "this" or "that".
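The combined matching behavior can be summarized with a short illustrative sketch (this is not product code; the field and operator names are simplified for illustration):

```python
def filter_matches(alert, field, operator, value):
    # Minimal matcher covering two of the documented operators.
    actual = alert.get(field, "")
    if operator == "Equals":
        return actual == value
    if operator == "Contains":
        return value in actual
    raise ValueError(f"unhandled operator: {operator}")

def rule_applies(alert, filters):
    # Logical AND across filters; logical OR across a filter's (up to five) values.
    return all(
        any(filter_matches(alert, f["field"], f["operator"], v) for v in f["values"])
        for f in filters
    )

# Example: apply only to Sev0 alerts on virtual machines.
filters = [
    {"field": "resourceType", "operator": "Equals", "values": ["Virtual Machines"]},
    {"field": "severity", "operator": "Equals", "values": ["Sev0"]},
]
```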
azure-monitor Java 2X Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-2x-troubleshoot.md
Questions or problems with [Azure Application Insights in Java][java]? Here are
### Java Agent cannot capture dependency data * Have you configured Java agent by following [Configure Java Agent](java-2x-agent.md) ?
-* Make sure both the java agent jar and the AI-Agent.xml file are placed in the same folder.
+* Make sure both the Java agent jar and the AI-Agent.xml file are placed in the same folder.
* Make sure that the dependency you are trying to auto-collect is supported for auto-collection. Currently, we only support dependency collection for MySQL, MsSQL, Oracle DB, and Azure Cache for Redis. ## No usage data
azure-monitor Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/pricing.md
For SDKs that don't support adaptive sampling, you can employ [ingestion samplin
## Viewing Application Insights usage on your Azure bill
-The easiest way to see the billed usage for a single Application Insights resource, which isn't a workspace-baed resource is to go to the resource's Overview page and click **View Cost** in the upper right corner. You might need elevated access to Cost Management data ([learn more](../../cost-management-billing/costs/assign-access-acm-data.md)).
+The easiest way to see the billed usage for a single Application Insights resource, which isn't a workspace-based resource, is to go to the resource's Overview page and click **View Cost** in the upper right corner. You might need elevated access to Cost Management data ([learn more](../../cost-management-billing/costs/assign-access-acm-data.md)).
To learn more, explore the [Azure Cost Management + Billing](../../cost-management-billing/costs/quick-acm-cost-analysis.md?toc=/azure/billing/TOC.json) hub, which provides a great deal of useful functionality. For instance, the "Cost analysis" feature enables you to view your spending for Azure resources. Adding a filter by resource type (microsoft.insights/components for Application Insights) allows you to track your spending. Then, for "Group by", select "Meter category" or "Meter". Application Insights billed usage for data ingestion and data retention shows up under the **Log Analytics** meter category, since Log Analytics serves as the backend for all Azure Monitor logs.
Lower your bill with updated versions of the ASP.NET Core SDK and Worker Service
### Microsoft Q&A question page
-If you have questions about how pricing works for Application Insights, you can post a question in our [Microsoft Q&A question page](/answers/topics/azure-monitor.html).
+If you have questions about how pricing works for Application Insights, you can post a question in our [Microsoft Q&A question page](/answers/topics/azure-monitor.html).
azure-monitor Activity Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/activity-log.md
The following columns have been added to *AzureActivity* in the updated schema:
- Claims_d - Properties_d
+## Activity Logs Insights
+Activity log insights let you view information about changes to resources and resource groups in a subscription. The dashboards also present data about which users or services performed activities in the subscription and the activities' status. [Activity logs insights](activity-logs-insights.md) explains how to view this data in the Azure portal.
+ ## Activity Log Analytics monitoring solution > [!Note] > The Azure Log Analytics monitoring solution will be deprecated soon and replaced by a workbook using the updated schema in the Log Analytics workspace. You can still use the solution if you already have it enabled, but it can only be used if you're collecting the Activity log using legacy settings.
azure-monitor Activity Logs Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/activity-logs-insights.md
+
+ Title: Activity logs insights
+description: Learn how to view information about changes to resources and resource groups in your subscription
+++ Last updated : 03/14/2021++
+#Customer intent: As an IT administrator, I want to track changes to resource groups or specific resources in a subscription and to see which administrators or services make these changes.
+
+
+# Activity logs insights (Preview)
+
+Activity logs insights let you view information about changes to resources and resource groups in a subscription. The dashboards also present data about which users or services performed activities in the subscription and the activities' status. This article explains how to view Activity log insights in the Azure portal.
+
+Before using Activity log insights, you'll have to [enable sending logs to your Log Analytics workspace](./diagnostic-settings.md).
+
+## How do Activity logs insights work?
+
+Activity logs you send to a [Log Analytics workspace](../logs/log-analytics-workspace-overview.md) are stored in a table called AzureActivity.
+
+Activity logs insights are a curated [Log Analytics workbook](../visualize/workbooks-overview.md) with dashboards that visualize the data in the AzureActivity table, such as which administrators deleted, updated, or created resources, and whether the activities failed or succeeded.
++
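Because the workbook is a view over that table, you can query the same data directly. A hedged sketch with the `azure-monitor-query` Python package (the workspace ID is a placeholder, and the column names follow the updated AzureActivity schema):

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# Count activity log entries per caller and status over the last day,
# roughly what the workbook's status dashboards visualize.
query = """
AzureActivity
| summarize Count = count() by Caller, ActivityStatusValue
| order by Count desc
"""

response = client.query_workspace("<workspace-id>", query, timespan=timedelta(days=1))
for row in response.tables[0].rows:
    print(row)
```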
+## View Activity logs insights at the resource group or subscription level
+
+To view Activity logs insights at the resource group or subscription level:
+
+1. In the Azure portal, select **Monitor** > **Workbooks**.
+1. Select **Activity Logs Insights** in the **Insights** section.
+
+ :::image type="content" source="media/activity-log/open-activity-log-insights-workbook.png" lightbox="media/activity-log/open-activity-log-insights-workbook.png" alt-text="A screenshot showing how to locate and open the Activity logs insights workbook on a scale level":::
+
+1. At the top of the **Activity Logs Insights** page, select:
+ 1. One or more subscriptions from the **Subscriptions** dropdown.
+ 1. Resources and resource groups from the **CurrentResource** dropdown.
+ 1. A time range for which to view data from the **TimeRange** dropdown.
+## View Activity logs insights on any Azure resource
+
+>[!Note]
+> * Currently, Application Insights resources are not supported for this workbook.
+
+To view Activity logs insights at the resource level:
+
+1. In the Azure portal, go to your resource and select **Workbooks**.
+1. Select **Activity Logs Insights** in the **Activity Logs Insights** section.
+
+ :::image type="content" source="media/activity-log/activity-log-resource-level.png" lightbox= "media/activity-log/activity-log-resource-level.png" alt-text="A screenshot showing how to locate and open the Activity logs insights workbook on a resource level":::
+
+1. At the top of the **Activity Logs Insights** page, select:
+
+ 1. A time range for which to view data from the **TimeRange** dropdown.
+    * **Azure Activity Logs Entries** shows the count of Activity log records in each [activity log category](activity-log-schema.md#categories).
+
+ :::image type="content" source="media/activity-log/activity-logs-insights-category-value.png" lightbox= "media/activity-log/activity-logs-insights-category-value.png" alt-text="Azure Activity Logs by Category Value":::
+
+ * **Activity Logs by Status** shows the count of Activity log records in each status.
+
+ :::image type="content" source="media/activity-log/activity-logs-insights-status.png" lightbox= "media/activity-log/activity-logs-insights-status.png" alt-text="Azure Activity Logs by Status":::
+
+ * At the subscription and resource group level, **Activity Logs by Resource** and **Activity Logs by Resource Provider** show the count of Activity log records for each resource and resource provider.
+
+ :::image type="content" source="media/activity-log/activity-logs-insights-resource.png" lightbox= "media/activity-log/activity-logs-insights-resource.png" alt-text="Azure Activity Logs by Resource":::
+
+## Next steps
+Learn more about:
+* [Platform logs](./platform-logs-overview.md)
+* [Activity log event schema](activity-log-schema.md)
+* [Creating a diagnostic setting to send Activity logs to other destinations](./diagnostic-settings.md)
azure-monitor Resource Logs Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/resource-logs-schema.md
Last updated 05/10/2021
# Common and service-specific schemas for Azure resource logs > [!NOTE]
-> Resource logs were previously known as diagnostic logs. The name was changed in October 2019 as the types of logs gathered by Azure Monitor shifted to include more than just the Azure resource.
+> Resource logs were previously known as diagnostic logs. The name was changed in October 2019 as the types of logs gathered by Azure Monitor shifted to include more than just the Azure resource.
>
-> This article used to list resource log categories that you can collect. That list is now at [Resource log categories](resource-logs-categories.md).
+> This article used to list resource log categories that you can collect. That list is now at [Resource log categories](resource-logs-categories.md).
[Azure Monitor resource logs](../essentials/platform-logs-overview.md) are logs emitted by Azure services that describe the operation of those services or resources. All resource logs available through Azure Monitor share a common top-level schema. Each service has the flexibility to emit unique properties for its own events.
The schema for resource logs varies depending on the resource and log category.
| Azure Load Balancer |[Log Analytics for Azure Load Balancer](../../load-balancer/monitor-load-balancer.md) | | Azure Logic Apps |[Logic Apps B2B custom tracking schema](../../logic-apps/logic-apps-track-integration-account-custom-tracking-schema.md) | | Azure Machine Learning | [Diagnostic logging in Azure Machine Learning](../../machine-learning/monitor-resource-reference.md) |
-| Azure Media Services | [Media Services monitoring schemas](../../media-services/latest/monitoring/monitor-media-services-data-reference.md#schemas) |
+| Azure Media Services | [Media Services monitoring schemas](/media-services/latest/monitoring/monitor-media-services-data-reference#schemas) |
| Network security groups |[Log Analytics for network security groups (NSGs)](../../virtual-network/virtual-network-nsg-manage-log.md) | | Azure Power BI Embedded | [Logging for Power BI Embedded in Azure](/power-bi/developer/azure-pbie-diag-logs) | | Recovery Services | [Data model for Azure Backup](../../backup/backup-azure-reports-data-model.md)|
azure-monitor Monitor Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/monitor-reference.md
This article is a reference of the different applications and services that are
## Insights and curated visualizations
-Some services have a curated monitoring experience. That is, Microsoft provides customized functionality meant to act as a starting point for monitoring those services. These experiences are collectively known as **curated visualizations** with the larger more complex of them being called **Insights**.
+Some services have a curated monitoring experience. That is, Microsoft provides customized functionality meant to act as a starting point for monitoring those services. These experiences are collectively known as **curated visualizations** with the larger more complex of them being called **Insights**.
-The experiences collect and analyze a subset of logs and metrics and depending on the service and might also provide out-of-the-box alerting. They present this telemetry in a visual layout. The visualizations vary in size and scale. Some are considered part of Azure Monitor and follow the support and service level agreements for Azure. They are supported in all Azure regions where Azure Monitor is available. Other curated visualizations provide less functionality, might not scale, and might have different agreements. Some might be based solely on Azure Monitor Workbooks, while others might have an extensive custom experience.
+The experiences collect and analyze a subset of logs and metrics and depending on the service and might also provide out-of-the-box alerting. They present this telemetry in a visual layout. The visualizations vary in size and scale. Some are considered part of Azure Monitor and follow the support and service level agreements for Azure. They are supported in all Azure regions where Azure Monitor is available. Other curated visualizations provide less functionality, might not scale, and might have different agreements. Some might be based solely on Azure Monitor Workbooks, while others might have an extensive custom experience.
-The table below lists the available curated visualizations and more detailed information about them.
+The table below lists the available curated visualizations and more detailed information about them.
>[!NOTE]
> Another type of older visualization called **monitoring solutions** is no longer in active development. The replacement technology is the Azure Monitor Insights mentioned above. We suggest you use the insights and not deploy new instances of solutions. For more information on the solutions, see [Monitoring solutions in Azure Monitor](./insights/solutions.md).
+|Name with docs link| State | [Azure portal Link](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/more)| Description |
|:--|:--|:--|:--|
+| [Azure Monitor Workbooks for Azure Active Directory](../active-directory/reports-monitoring/howto-use-azure-monitor-workbooks.md) | GA (General availability) | [Yes](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Workbooks) | Azure Active Directory provides workbooks to understand the effect of your Conditional Access policies, to troubleshoot sign-in failures, and to identify legacy authentications. |
+| [Azure Backup](../backup/backup-azure-monitoring-use-azuremonitor.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_DataProtection/BackupCenterMenuBlade/backupReportsConfigure/menuId/backupReportsConfigure) | Provides built-in monitoring and alerting capabilities in a Recovery Services vault. |
+| [Azure Monitor for Azure Cache for Redis (preview)](./insights/redis-cache-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/redisCacheInsights) | Provides a unified, interactive view of overall performance, failures, capacity, and operational health. |
+| [Azure Cosmos DB Insights](./insights/cosmosdb-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/cosmosDBInsights) | Provides a view of the overall performance, failures, capacity, and operational health of all your Azure Cosmos DB resources in a unified interactive experience. |
+| [Azure Container Insights](/azure/azure-monitor/insights/container-insights-overview) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/containerInsights) | Monitors the performance of container workloads that are deployed to managed Kubernetes clusters hosted on Azure Kubernetes Service (AKS). It gives you performance visibility by collecting metrics from controllers, nodes, and containers that are available in Kubernetes through the Metrics API. Container logs are also collected. After you enable monitoring from Kubernetes clusters, these metrics and logs are automatically collected for you through a containerized version of the Log Analytics agent for Linux. |
+| [Azure Data Explorer insights](./insights/data-explorer.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/adxClusterInsights) | Azure Data Explorer Insights provides comprehensive monitoring of your clusters by delivering a unified view of your cluster performance, operations, usage, and failures. |
| [Azure HDInsight (preview)](../hdinsight/log-analytics-migration.md#insights) | Preview | No | An Azure Monitor workbook that collects important performance metrics from your HDInsight cluster and provides the visualizations and dashboards for most common scenarios. Gives a complete view of a single HDInsight cluster, including resource utilization and application status. |
+ | [Azure IoT Edge](../iot-edge/how-to-explore-curated-visualizations.md) | GA | No | Visualize and explore metrics collected from the IoT Edge device right in the Azure portal using Azure Monitor Workbooks based public templates. The curated workbooks use built-in metrics from the IoT Edge runtime. These views don't need any metrics instrumentation from the workload modules. |
+ | [Azure Key Vault Insights (preview)](./insights/key-vault-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/keyvaultsInsights) | Provides comprehensive monitoring of your key vaults by delivering a unified view of your Key Vault requests, performance, failures, and latency. |
+ | [Azure Monitor Application Insights](./app/app-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/applicationsInsights) | Extensible Application Performance Management (APM) service which monitors the availability, performance, and usage of your web applications whether they're hosted in the cloud or on-premises. It leverages the powerful data analysis platform in Azure Monitor to provide you with deep insights into your application's operations. It enables you to diagnose errors without waiting for a user to report them. Application Insights includes connection points to a variety of development tools and integrates with Visual Studio to support your DevOps processes. |
+ | [Azure Monitor Log Analytics Workspace](./logs/log-analytics-workspace-insights-overview.md) | Preview | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/lawsInsights) | Log Analytics Workspace Insights (preview) provides comprehensive monitoring of your workspaces through a unified view of your workspace usage, performance, health, agent, queries, and change log. This article will help you understand how to onboard and use Log Analytics Workspace Insights (preview). |
+ | [Azure Service Bus Insights](../service-bus-messaging/service-bus-insights.md) | Preview | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/serviceBusInsights) | Azure Service Bus insights provide a view of the overall performance, failures, capacity, and operational health of all your Service Bus resources in a unified interactive experience. |
| [Azure SQL insights](./insights/sql-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/sqlWorkloadInsights) | A comprehensive interface for monitoring any product in the Azure SQL family. SQL insights uses dynamic management views to expose the data you need to monitor health, diagnose problems, and tune performance. Note: If you are just setting up SQL monitoring, use this instead of the SQL Analytics solution. |
+ | [Azure Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/storageInsights) | Provides comprehensive monitoring of your Azure Storage accounts by delivering a unified view of your Azure Storage services performance, capacity, and availability. |
 | [Azure Network Insights](./insights/network-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/networkInsights) | Provides a comprehensive view of health and metrics for all your network resources. The advanced search capability helps you identify resource dependencies, enabling scenarios like identifying the resources that host your website, by simply searching for your website name. |
+ | [Azure Monitor for Resource Groups](./insights/resource-group-insights.md) | GA | No | Triage and diagnose any problems your individual resources encounter, while offering context as to the health and performance of the resource group as a whole. |
| [Azure Monitor SAP](../virtual-machines/workloads/sap/monitor-sap-on-azure.md) | GA | No | An Azure-native monitoring product for anyone running their SAP landscapes on Azure. It works with both SAP on Azure Virtual Machines and SAP on Azure Large Instances. Collects telemetry data from Azure infrastructure and databases in one central location and visually correlates the data for faster troubleshooting. You can monitor different components of an SAP landscape, such as Azure virtual machines (VMs), high-availability cluster, SAP HANA database, SAP NetWeaver, and so on, by adding the corresponding provider for that component. |
+ | [Azure Stack HCI insights](/azure-stack/hci/manage/azure-stack-hci-insights) | Preview | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/azureStackHCIInsights) | Azure Monitor Workbook based. Provides health, performance, and usage insights about registered Azure Stack HCI, version 21H2 clusters that are connected to Azure and are enrolled in monitoring. It stores its data in a Log Analytics workspace, which allows it to deliver powerful aggregation and filtering and analyze data trends over time. |
+ | [Azure VM Insights](/azure/azure-monitor/insights/vminsights-overview) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/virtualMachines) | Monitors your Azure virtual machines (VM) and virtual machine scale sets at scale. It analyzes the performance and health of your Windows and Linux VMs, and monitors their processes and dependencies on other resources and external processes. |
+ | [Azure Virtual Desktop Insights](../virtual-desktop/azure-monitor.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_WVD/WvdManagerMenuBlade/insights/menuId/insights) | Azure Virtual Desktop Insights is a dashboard built on Azure Monitor Workbooks that helps IT professionals understand their Azure Virtual Desktop environments. |
## Product integrations
The other services and older monitoring solutions in the following table store t
| [Microsoft Teams Rooms](/microsoftteams/room-systems/azure-monitor-deploy) | Integrated, end-to-end management of Microsoft Teams Rooms devices. |
| [Visual Studio App Center](/appcenter/) | Build, test, and distribute applications and then monitor their status and usage. See [Start analyzing your mobile app with App Center and Application Insights](app/mobile-center-quickstart.md). |
| Windows | [Windows Update Compliance](/windows/deployment/update/update-compliance-get-started) - Assess your Windows desktop upgrades.<br>[Desktop Analytics](/configmgr/desktop-analytics/overview) - Integrates with Configuration Manager to provide insight and intelligence to make more informed decisions about the update readiness of your Windows clients. |
+| **The following solutions also integrate with parts of Azure Monitor. Note that solutions are no longer under active development. Use [insights](#insights-and-curated-visualizations) instead.** | |
+| Network - [Network Performance Monitor solution](insights/network-performance-monitor.md) |
| Network - [Azure Application Gateway Solution](insights/azure-networking-analytics.md#azure-application-gateway-analytics) | . |
| [Office 365 solution](insights/solution-office-365.md) | Monitor your Office 365 environment. Updated version with improved onboarding available through Microsoft Sentinel. |
| [SQL Analytics solution](insights/azure-sql.md) | Use SQL Insights instead |
The other services and older monitoring solutions in the following table store t
| Integration | Description |
|:--|:--|
| [ITSM](alerts/itsmc-overview.md) | The IT Service Management Connector (ITSMC) allows you to connect Azure and a supported IT Service Management (ITSM) product/service. |
+| [Azure Monitor Partners](./partners.md) | A list of partners that integrate with Azure Monitor in some form. |
| [Azure Monitor Partner integrations](../partner-solutions/overview.md)| Specialized integrations between Azure Monitor and other non-Microsoft monitoring platforms if you've already built on them. Examples include Datadog and Elastic. |
Azure Monitor can collect data from resources outside of Azure using the methods
## Azure supported services
+The following table lists Azure services and the data they collect into Azure Monitor.
+
+- Metrics - The service automatically collects metrics into Azure Monitor Metrics.
- Logs - The service supports diagnostic settings which can send metrics and platform logs into Azure Monitor Logs for analysis in Log Analytics.
- Insight - There is an insight available which provides a customized monitoring experience for the service.
The following table lists Azure services and the data they collect into Azure Mo
| [Azure Logic Apps](../logic-apps/index.yml) | Microsoft.Logic/workflows | [**Yes**](./essentials/metrics-supported.md#microsoftlogicworkflows) | [**Yes**](./essentials/resource-logs-categories.md#microsoftlogicworkflows) | | |
| [Azure Machine Learning](../machine-learning/index.yml) | Microsoft.MachineLearningServices/workspaces | [**Yes**](./essentials/metrics-supported.md#microsoftmachinelearningservicesworkspaces) | [**Yes**](./essentials/resource-logs-categories.md#microsoftmachinelearningservicesworkspaces) | | |
| [Azure Maps](../azure-maps/index.yml) | Microsoft.Maps/accounts | [**Yes**](./essentials/metrics-supported.md#microsoftmapsaccounts) | No | | |
- | [Azure Media Services](../media-services/index.yml) | Microsoft.Media/mediaservices | [**Yes**](./essentials/metrics-supported.md#microsoftmediamediaservices) | [**Yes**](./essentials/resource-logs-categories.md#microsoftmediamediaservices) | | |
- | [Azure Media Services](../media-services/index.yml) | Microsoft.Media/mediaservices/liveEvents | [**Yes**](./essentials/metrics-supported.md#microsoftmediamediaservicesliveevents) | No | | |
- | [Azure Media Services](../media-services/index.yml) | Microsoft.Media/mediaservices/streamingEndpoints | [**Yes**](./essentials/metrics-supported.md#microsoftmediamediaservicesstreamingendpoints) | No | | |
- | [Azure Media Services](../media-services/index.yml) | Microsoft.Media/videoAnalyzers | [**Yes**](./essentials/metrics-supported.md#microsoftmediavideoanalyzers) | [**Yes**](./essentials/resource-logs-categories.md#microsoftmediavideoanalyzers) | | |
+ | [Azure Media Services](/media-services/) | Microsoft.Media/mediaservices | [**Yes**](./essentials/metrics-supported.md#microsoftmediamediaservices) | [**Yes**](./essentials/resource-logs-categories.md#microsoftmediamediaservices) | | |
+ | [Azure Media Services](/media-services/) | Microsoft.Media/mediaservices/liveEvents | [**Yes**](./essentials/metrics-supported.md#microsoftmediamediaservicesliveevents) | No | | |
+ | [Azure Media Services](/media-services/) | Microsoft.Media/mediaservices/streamingEndpoints | [**Yes**](./essentials/metrics-supported.md#microsoftmediamediaservicesstreamingendpoints) | No | | |
+ | [Azure Media Services](/media-services/) | Microsoft.Media/videoAnalyzers | [**Yes**](./essentials/metrics-supported.md#microsoftmediavideoanalyzers) | [**Yes**](./essentials/resource-logs-categories.md#microsoftmediavideoanalyzers) | | |
| [Azure Spatial Anchors](../spatial-anchors/index.yml) | Microsoft.MixedReality/remoteRenderingAccounts | [**Yes**](./essentials/metrics-supported.md#microsoftmixedrealityremoterenderingaccounts) | No | | |
| [Azure Spatial Anchors](../spatial-anchors/index.yml) | Microsoft.MixedReality/spatialAnchorsAccounts | [**Yes**](./essentials/metrics-supported.md#microsoftmixedrealityspatialanchorsaccounts) | No | | |
| [Azure NetApp Files](../azure-netapp-files/index.yml) | Microsoft.NetApp/netAppAccounts/capacityPools | [**Yes**](./essentials/metrics-supported.md#microsoftnetappnetappaccountscapacitypools) | No | | |
azure-monitor Vminsights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-overview.md
The steps to configure VM insights are as follows. Follow each link for detailed
- [Add VMInsights solution to workspace.](./vminsights-configure-workspace.md#add-vminsights-solution-to-workspace)
- [Install agents on virtual machine and virtual machine scale set to be monitored.](./vminsights-enable-overview.md)
-
+Currently, VM insights does not support multi-homing.
## Next steps
azure-portal View Quotas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/supportability/view-quotas.md
In the list of quotas, you can toggle the arrow shown next to **Quota** to expan
You can request quota increases directly from **My quotas**. The process for requesting an increase will depend on the type of quota.
+> [!NOTE]
+> There is no cost associated with requesting a quota increase. Costs are incurred based on resource usage, not the quotas themselves.
+
### Request a quota increase

Some quotas display a pencil icon. Select this icon to quickly request an increase for that quota.
azure-resource-manager Azure Services Resource Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/azure-services-resource-providers.md
The resource providers that are marked with **- registered** are registered by
| Microsoft.Marketplace | core |
| Microsoft.MarketplaceApps | core |
| Microsoft.MarketplaceOrdering - [registered](#registration) | core |
-| Microsoft.Media | [Media Services](../../media-services/index.yml) |
+| Microsoft.Media | [Media Services](/media-services/) |
| Microsoft.Microservices4Spring | [Azure Spring Cloud](../../spring-cloud/overview.md) |
| Microsoft.Migrate | [Azure Migrate](../../migrate/migrate-services-overview.md) |
| Microsoft.MixedReality | [Azure Spatial Anchors](../../spatial-anchors/index.yml) |
azure-resource-manager Azure Subscription Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/azure-subscription-service-limits.md
The following table details the features and limits of the Basic, Standard, and
### Media Services v2 (legacy)
-For limits specific to Media Services v2 (legacy), see [Media Services v2 (legacy)](../../media-services/previous/media-services-quotas-and-limitations.md)
+For limits specific to Media Services v2 (legacy), see [Media Services v2 (legacy)](/media-services/previous/media-services-quotas-and-limitations)
## Mobile Services limits
azure-resource-manager Resource Name Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resource-name-rules.md
Title: Resource naming restrictions
description: Shows the rules and restrictions for naming Azure resources.
Previously updated : 03/08/2022
Last updated : 03/29/2022

# Naming rules and restrictions for Azure resources
In the following tables, the term alphanumeric refers to:
> [!div class="mx-tableFixed"]
> | Entity | Scope | Length | Valid Characters |
> | --- | --- | --- | --- |
-> | managedInstances | global | 1-63 | Lowercase letters, numbers, and hyphens.<br><br>Can't start or end with hyphen. <br><br> Can't have any special characters, such as `@`. |
-> | servers | global | 1-63 | Lowercase letters, numbers, and hyphens.<br><br>Can't start or end with hyphen. |
+> | managedInstances | global | 1-63 | Lowercase letters, numbers, and hyphens.<br><br> Can't start or end with hyphen. Can't have a hyphen in both the third and fourth places.<br><br> Can't have any special characters, such as `@`. |
+> | servers | global | 1-63 | Lowercase letters, numbers, and hyphens.<br><br>Can't start or end with hyphen. Can't have a hyphen in both the third and fourth places. |
> | servers / administrators | server | | Must be `ActiveDirectory`. |
> | servers / databases | server | 1-128 | Can't use:<br>`<>*%&:\/?` or control characters<br><br>Can't end with period or space. |
> | servers / databases / syncGroups | database | 1-150 | Alphanumerics, hyphens, and underscores. |
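
To make the hyphen rule concrete, here is a minimal, hypothetical C# validation sketch. The reading that the third and fourth characters must not both be hyphens (as in `ab--cd`) is an assumption based on the wording above; check the official rules before relying on it.

```cs
using System.Text.RegularExpressions;

static class SqlServerNameValidator
{
    // Sketch only: 1-63 chars; lowercase letters, numbers, and hyphens;
    // no leading/trailing hyphen; no hyphen in both third and fourth place.
    public static bool IsValid(string name) =>
        name.Length is >= 1 and <= 63
        && Regex.IsMatch(name, "^[a-z0-9](?:[a-z0-9-]*[a-z0-9])?$")
        && !(name.Length >= 4 && name[2] == '-' && name[3] == '-');
}
```

Under these assumptions, `IsValid("ab--cd")` returns false, while `IsValid("a-b-cd")` returns true.
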
azure-signalr Concept Connection String https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/concept-connection-string.md
+
+ Title: Connection string in Azure SignalR Service
+description: An overview of connection strings in Azure SignalR Service, how to generate one, and how to configure it in your app server.
+Last updated : 03/25/2022
+# Connection string in Azure SignalR Service
+
+A connection string contains information about how to connect to Azure SignalR Service. In this article, you'll learn the basics of connection strings and how to configure one in your application.
+
+## What is a connection string
+
+When an application needs to connect to Azure SignalR Service, it will need the following information:
+
+* The HTTP endpoint of the SignalR service instance
+* How to authenticate with the service endpoint
+
+The connection string contains this information. To see what a connection string looks like, open a SignalR Service resource in the Azure portal and go to the **Keys** tab. You'll see two connection strings (primary and secondary) in the following format:
+
+```
+Endpoint=https://<resource_name>.service.signalr.net;AccessKey=<access_key>;Version=1.0;
+```
+
+> [!NOTE]
+> Besides portal, you can also use Azure CLI to get the connection string:
+>
+> ```bash
+> az signalr key list -g <resource_group> -n <resource_name>
+> ```
+
+The connection string contains two main pieces of information:
+
+* `Endpoint=https://<resource_name>.service.signalr.net` is the endpoint URL of the resource.
+* `AccessKey=<access_key>` is the key used to authenticate with the service. When an access key is specified in the connection string, the SignalR Service SDK uses it to generate a token that can be validated by the service.
+
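+The format is simple enough to parse with ordinary string handling. As a minimal, hypothetical C# sketch (not the SDK's actual parser), you could split a connection string into its semicolon-delimited key/value pairs like this:
+
+```cs
+using System;
+using System.Collections.Generic;
+using System.Linq;
+
+// Split "Key1=value1;Key2=value2;" into a case-insensitive dictionary.
+static Dictionary<string, string> ParseConnectionString(string connectionString) =>
+    connectionString
+        .Split(';', StringSplitOptions.RemoveEmptyEntries)
+        .Select(part => part.Split('=', 2))
+        .ToDictionary(kv => kv[0].Trim(), kv => kv.Length > 1 ? kv[1] : "",
+                      StringComparer.OrdinalIgnoreCase);
+```
+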
+>[!NOTE]
+> For more information about how access tokens are generated and validated, see this [article](https://github.com/Azure/azure-signalr/blob/dev/docs/rest-api.md#authenticate-via-azure-signalr-service-accesskey).
+
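+As an illustration of that token scheme, the sketch below signs a short-lived JSON Web Token (JWT) with the access key using the common `System.IdentityModel.Tokens.Jwt` package. It's a simplified sketch, not the SDK's internal implementation; in practice the SDK does this for you, and the exact claims (such as the audience URL) follow the linked specification.
+
+```cs
+using System;
+using System.IdentityModel.Tokens.Jwt;
+using System.Text;
+using Microsoft.IdentityModel.Tokens;
+
+// Sign a short-lived JWT with the access key; the audience is the URL
+// the token will be used against (for example, a hub or REST API URL).
+static string GenerateAccessToken(string accessKey, string audience, TimeSpan lifetime)
+{
+    var credentials = new SigningCredentials(
+        new SymmetricSecurityKey(Encoding.UTF8.GetBytes(accessKey)),
+        SecurityAlgorithms.HmacSha256);
+
+    var token = new JwtSecurityToken(
+        audience: audience,
+        expires: DateTime.UtcNow.Add(lifetime),
+        signingCredentials: credentials);
+
+    return new JwtSecurityTokenHandler().WriteToken(token);
+}
+```
+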
+## Other authentication types
+
+Besides access keys, SignalR Service also supports other authentication methods in the connection string.
+
+### Azure Active Directory Application
+
+You can use an [Azure AD application](/azure/active-directory/develop/app-objects-and-service-principals) to connect to SignalR Service. As long as the application has the right permissions to access SignalR Service, no access key is needed.
+
+To use Azure AD authentication, remove `AccessKey` from the connection string and add `AuthType=aad`. You also need to specify the credentials of your Azure AD application, including the client ID, client secret, and tenant ID. The connection string will look as follows:
+
+```
+Endpoint=https://<resource_name>.service.signalr.net;AuthType=aad;ClientId=<client_id>;ClientSecret=<client_secret>;TenantId=<tenant_id>;Version=1.0;
+```
+
+For more information about how to authenticate using Azure AD application, see this [article](signalr-howto-authorize-application.md).
+
+### Managed identity
+
+You can also use a [managed identity](/azure/active-directory/managed-identities-azure-resources/overview) to authenticate with SignalR Service.
+
+There are two types of managed identity. To use a system-assigned identity, you just need to add `AuthType=aad` to the connection string:
+
+```
+Endpoint=https://<resource_name>.service.signalr.net;AuthType=aad;Version=1.0;
+```
+
+The SignalR Service SDK will automatically use the identity of your app server.
+
+To use a user-assigned identity, you also need to specify the client ID of the managed identity:
+
+```
+Endpoint=https://<resource_name>.service.signalr.net;AuthType=aad;ClientId=<client_id>;Version=1.0;
+```
+
+For more information about how to configure managed identity, see this [article](signalr-howto-authorize-managed-identity.md).
+
+> [!NOTE]
+> It's highly recommended to use Azure AD to authenticate with SignalR Service, as it's more secure than using an access key. If you don't use access key authentication at all, consider disabling it completely (in the Azure portal, go to **Keys** > **Access Key** > **Disable**). If you still use access keys, it's highly recommended to [rotate them regularly](signalr-howto-key-rotation.md).
+
+## Client and server endpoints
+
+The connection string contains the HTTP endpoint for the app server to connect to SignalR Service. This is also the endpoint the server returns to clients in the negotiate response, so clients can connect to the service as well.
+
+In some applications, however, there may be an additional component in front of SignalR Service, and all client connections need to go through that component first. [Azure Application Gateway](/azure/application-gateway/overview) is a common service that provides such functionality, along with benefits like network security.
+
+In such cases, the client needs to connect to an endpoint different from SignalR Service. Instead of manually replacing the endpoint on the client side, you can add `ClientEndpoint` to the connection string:
+
+```
+Endpoint=https://<resource_name>.service.signalr.net;AccessKey=<access_key>;ClientEndpoint=https://<url_to_app_gateway>;Version=1.0;
+```
+
+The app server will then return the right endpoint URL in the negotiate response for the client to connect to.
+
+> [!NOTE]
+> For more information about how clients get service url through negotiate, see this [article](signalr-concept-internals.md#client-connections).
+
+Similarly, when the server makes [server connections](signalr-concept-internals.md#server-connections) or calls [REST APIs](https://github.com/Azure/azure-signalr/blob/dev/docs/rest-api.md) to the service, SignalR Service may also be behind another service like Application Gateway. In that case, you can use `ServerEndpoint` to specify the actual endpoint for server connections and REST APIs:
+
+```
+Endpoint=https://<resource_name>.service.signalr.net;AccessKey=<access_key>;ServerEndpoint=https://<url_to_app_gateway>;Version=1.0;
+```
+
+## Use connection string generator
+
+Composing a connection string manually can be cumbersome and error-prone. The Azure portal provides a tool that helps you generate a connection string with additional information like the client endpoint and auth type.
+
+To use the connection string generator, open the SignalR resource in the Azure portal and go to the **Connection strings** tab:
+
+On this page, you can choose different authentication types (access key, managed identity, or Azure AD application) and enter information like the client endpoint, client ID, and client secret. The connection string is then generated automatically. You can copy it and use it in your application.
+
+> [!NOTE]
+> Everything you enter on this page is discarded when you leave the page (because it's client-side information only), so copy the generated connection string and save it in a secure place for your application to use.
+
+## Configure connection string in your application
+
+There are two ways to configure a connection string in your application.
+
+You can set the connection string when calling the `AddAzureSignalR()` API:
+
+```cs
+services.AddSignalR().AddAzureSignalR("<connection_string>");
+```
+
+Or you can call `AddAzureSignalR()` without any arguments; the service SDK will then read the connection string from a config entry named `Azure:SignalR:ConnectionString` in your [config providers](/dotnet/core/extensions/configuration-providers).
+
+In a local development environment, the config is usually stored in a file (appsettings.json or secrets.json) or in environment variables, so you can use one of the following ways to configure the connection string:
+
+* Use the .NET secret manager (`dotnet user-secrets set Azure:SignalR:ConnectionString "<connection_string>"`)
+* Set the connection string in an environment variable named `Azure__SignalR__ConnectionString` (colons need to be replaced with double underscores in the [environment variable config provider](/dotnet/core/extensions/configuration-providers#environment-variable-configuration-provider)).
+
+In production environments, you can use other Azure services to manage config and secrets, such as Azure [Key Vault](/azure/key-vault/general/overview) and [App Configuration](/azure/azure-app-configuration/overview). See their documentation to learn how to set up a config provider for those services.
+
+> [!NOTE]
+> Even when you set the connection string directly in code, it's not recommended to hardcode it in source code. You should still read the connection string from a secret store like Key Vault and pass it to `AddAzureSignalR()`, as sketched below.
+
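+A minimal sketch of that pattern, assuming a conventional `Startup` class whose `Configuration` property is backed by a secret store such as Key Vault:
+
+```cs
+public void ConfigureServices(IServiceCollection services)
+{
+    // Read the connection string from configuration, which can be backed by
+    // Key Vault or another secret store, instead of hardcoding it in source.
+    var connectionString = Configuration["Azure:SignalR:ConnectionString"];
+    services.AddSignalR().AddAzureSignalR(connectionString);
+}
+```
+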
+### Configure multiple connection strings
+
+Azure SignalR Service also allows the server to connect to multiple service endpoints at the same time, so it can handle more connections than one service instance's limit allows. If one service instance is down, the other service instances can be used as a backup. For more information about how to use multiple instances, see this [article](signalr-howto-scale-multi-instances.md).
+
+There are also two ways to configure multiple instances:
+
+* Through code
+
+ ```cs
+ services.AddSignalR().AddAzureSignalR(options =>
+ {
+ options.Endpoints = new ServiceEndpoint[]
+ {
+ new ServiceEndpoint("<connection_string_1>", name: "name_a"),
+ new ServiceEndpoint("<connection_string_2>", name: "name_b", type: EndpointType.Primary),
+ new ServiceEndpoint("<connection_string_3>", name: "name_c", type: EndpointType.Secondary),
+ };
+ });
+ ```
+
+ You can assign a name and type to each service endpoint so you can distinguish them later.
+
+* Through config
+
+ You can use any supported config provider (secret manager, environment variables, key vault, etc.) to store connection strings. Take secret manager as an example:
+
+ ```bash
+ dotnet user-secrets set Azure:SignalR:ConnectionString:name_a <connection_string_1>
+ dotnet user-secrets set Azure:SignalR:ConnectionString:name_b:primary <connection_string_2>
+ dotnet user-secrets set Azure:SignalR:ConnectionString:name_c:secondary <connection_string_3>
+ ```
+
+ You can also assign a name and type to each endpoint by using a different config name in the following format:
+
+ ```
+ Azure:SignalR:ConnectionString:<name>:<type>
+ ```
azure-sql Dtu Benchmark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/dtu-benchmark.md
+
+ Title: DTU benchmark
+description: Learn about the benchmark for the DTU-based purchasing model for Azure SQL Database.
+ms.devlang:
+Last updated : 03/29/2022
+# DTU benchmark
+
+A database transaction unit (DTU) is a unit of measure representing a blended measure of CPU, memory, reads, and writes. Physical characteristics (CPU, memory, IO) associated with each DTU measure are calibrated using a benchmark that simulates real-world database workload. This article summarizes the DTU benchmark and shares information about the schema, transaction types used, workload mix, users and pacing, scaling rules, and metrics associated with the benchmark.
+
+For general information about the DTU-based purchasing model, see the [DTU-based purchasing model overview](service-tiers-dtu.md).
+
+## Benchmark summary
+
+The DTU benchmark measures the performance of a mix of basic database operations that occur most frequently in online transaction processing (OLTP) workloads. Although the benchmark is designed with cloud computing in mind, the database schema, data population, and transactions have been designed to be broadly representative of the basic elements most commonly used in OLTP workloads.
+
+## Correlating benchmark results to real world database performance
+
+It's important to understand that all benchmarks are representative and indicative only. The transaction rates achieved with the benchmark application will not be the same as those that might be achieved with other applications. The benchmark comprises a collection of different transaction types run against a schema containing a range of tables and data types. While the benchmark exercises the same basic operations that are common to all OLTP workloads, it doesn't represent any specific class of database or application. The goal of the benchmark is to provide a reasonable guide to the relative performance of a database that might be expected when scaling up or down between compute sizes.
+
+In reality, databases are of different sizes and complexity, encounter different mixes of workloads, and will respond in different ways. For example, an IO-intensive application may hit IO thresholds sooner, or a CPU-intensive application may hit CPU limits sooner. There is no guarantee that any particular database will scale in the same way as the benchmark under increasing load.
+
+The benchmark and its methodology are described in more detail in this article.
+
+## Schema
+
+The schema is designed to have enough variety and complexity to support a broad range of operations. The benchmark runs against a database comprised of six tables. The tables fall into three categories: fixed-size, scaling, and growing. There are two fixed-size tables; three scaling tables; and one growing table. Fixed-size tables have a constant number of rows. Scaling tables have a cardinality that is proportional to database performance, but doesn't change during the benchmark. The growing table is sized like a scaling table on initial load, but then the cardinality changes in the course of running the benchmark as rows are inserted and deleted.
+
+The schema includes a mix of data types, including integer, numeric, character, and date/time. The schema includes primary and secondary keys, but not any foreign keys - that is, there are no referential integrity constraints between tables.
+
+A data generation program generates the data for the initial database. Integer and numeric data is generated with various strategies. In some cases, values are distributed randomly over a range. In other cases, a set of values is randomly permuted to ensure that a specific distribution is maintained. Text fields are generated from a weighted list of words to produce realistic looking data.
+
+The database is sized based on a "scale factor." The scale factor (abbreviated as SF) determines the cardinality of the scaling and growing tables. As described below in the section Users and Pacing, the database size, number of users, and maximum performance all scale in proportion to each other.
+
+## Transactions
+
+The workload consists of nine transaction types, as shown in the table below. Each transaction is designed to highlight a particular set of system characteristics in the database engine and system hardware, with high contrast from the other transactions. This approach makes it easier to assess the impact of different components to overall performance. For example, the transaction "Read Heavy" produces a significant number of read operations from disk.
+
+| Transaction Type | Description |
+| | |
+| Read Lite |SELECT; in-memory; read-only |
+| Read Medium |SELECT; mostly in-memory; read-only |
+| Read Heavy |SELECT; mostly not in-memory; read-only |
+| Update Lite |UPDATE; in-memory; read-write |
+| Update Heavy |UPDATE; mostly not in-memory; read-write |
+| Insert Lite |INSERT; in-memory; read-write |
+| Insert Heavy |INSERT; mostly not in-memory; read-write |
+| Delete |DELETE; mix of in-memory and not in-memory; read-write |
+| CPU Heavy |SELECT; in-memory; relatively heavy CPU load; read-only |
+
+## Workload mix
+
+Transactions are selected at random from a weighted distribution with the following overall mix. The overall mix has a read/write ratio of approximately 2:1.
+
+| Transaction Type | % of Mix |
+| | |
+| Read Lite |35 |
+| Read Medium |20 |
+| Read Heavy |5 |
+| Update Lite |20 |
+| Update Heavy |3 |
+| Insert Lite |3 |
+| Insert Heavy |2 |
+| Delete |2 |
+| CPU Heavy |10 |
+
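+As an illustration of how such a weighted draw might be implemented, here is a hypothetical C# sketch (not the benchmark's actual driver code) that selects a transaction type according to the mix above:
+
+```cs
+using System;
+using System.Linq;
+
+var mix = new (string Transaction, int Weight)[]
+{
+    ("Read Lite", 35), ("Read Medium", 20), ("Read Heavy", 5),
+    ("Update Lite", 20), ("Update Heavy", 3), ("Insert Lite", 3),
+    ("Insert Heavy", 2), ("Delete", 2), ("CPU Heavy", 10),
+};
+
+var rng = new Random();
+
+string PickTransaction()
+{
+    // Roll a number in [0, total weight) and walk the cumulative weights.
+    int roll = rng.Next(mix.Sum(m => m.Weight)); // weights sum to 100
+    foreach (var (transaction, weight) in mix)
+    {
+        if (roll < weight) return transaction;
+        roll -= weight;
+    }
+    throw new InvalidOperationException("Unreachable: weights sum to 100.");
+}
+```
+
+Because the weights sum to 100, each transaction type is picked with exactly its listed probability.
+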
+## Users and pacing
+
+The benchmark workload is driven from a tool that submits transactions across a set of connections to simulate the behavior of a number of concurrent users. Although all of the connections and transactions are machine generated, for simplicity we refer to these connections as "users." Although each user operates independently of all other users, all users perform the same cycle of steps shown below:
+
+1. Establish a database connection.
+2. Repeat until signaled to exit:
+ - Select a transaction at random (from a weighted distribution).
+ - Perform the selected transaction and measure the response time.
+ - Wait for a pacing delay.
+3. Close the database connection.
+4. Exit.
+
+The pacing delay (in step 2c) is selected at random, but with a distribution that has an average of 1.0 second. Thus each user can, on average, generate at most one transaction per second.
+
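+As a rough C# sketch of one simulated user (hypothetical, not the actual tool; the delay distribution isn't specified beyond its 1.0-second average, so a uniform 0 to 2 second delay is assumed here):
+
+```cs
+using System;
+using System.Diagnostics;
+using System.Threading;
+using System.Threading.Tasks;
+
+// One simulated "user": pick a transaction, run it, measure it, then pace.
+static async Task RunUserAsync(Func<string> pickTransaction,
+                               Func<string, Task> executeTransaction,
+                               CancellationToken stop)
+{
+    var rng = new Random();
+    // Step 1 (establishing the database connection) is elided.
+    try
+    {
+        while (!stop.IsCancellationRequested)
+        {
+            var transaction = pickTransaction();      // weighted random selection
+            var stopwatch = Stopwatch.StartNew();
+            await executeTransaction(transaction);    // perform the transaction
+            stopwatch.Stop();                         // response time measured here
+            // Pacing delay averaging 1.0 second, so each user generates
+            // at most ~1 transaction per second on average.
+            await Task.Delay(TimeSpan.FromMilliseconds(rng.Next(0, 2000)), stop);
+        }
+    }
+    catch (OperationCanceledException)
+    {
+        // Signaled to exit; close the connection and return.
+    }
+}
+```
+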
+## Scaling rules
+
+The number of users is determined by the database size (in scale-factor units). There is one user for every five scale-factor units. Because of the pacing delay, one user can generate at most one transaction per second, on average.
+
+For example, a scale-factor of 500 (SF=500) database will have 100 users and can achieve a maximum rate of 100 TPS. To drive a higher TPS rate requires more users and a larger database.
+
+## Measurement duration
+
+A valid benchmark run requires a steady-state measurement duration of at least one hour.
+
+## Metrics
+
+The key metrics in the benchmark are throughput and response time.
+
+- Throughput is the essential performance measure in the benchmark. Throughput is reported in transactions per unit-of-time, counting all transaction types.
+- Response time is a measure of performance predictability. The response time constraint varies with class of service, with higher classes of service having a more stringent response time requirement, as shown below.
+
+| Class of Service | Throughput Measure | Response Time Requirement |
+| | | |
+| [Premium](service-tiers-dtu.md#compare-service-tiers) |Transactions per second |95th percentile at 0.5 seconds |
+| [Standard](service-tiers-dtu.md#compare-service-tiers) |Transactions per minute |90th percentile at 1.0 seconds |
+| [Basic](service-tiers-dtu.md#compare-service-tiers) |Transactions per hour |80th percentile at 2.0 seconds |
+
+> [!NOTE]
+> Response time metrics are specific to the [DTU Benchmark](#dtu-benchmark). Response times for other workloads are workload-dependent and will differ.
+
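+To make the percentile requirement concrete, here is a hypothetical C# sketch that checks measured response times against a constraint such as the Premium tier's 95th percentile at 0.5 seconds (the nearest-rank percentile method is an assumption; the benchmark's exact method isn't stated here):
+
+```cs
+using System;
+using System.Collections.Generic;
+using System.Linq;
+
+// True if at least `percentile` percent of response times fall within the limit.
+static bool MeetsResponseTimeConstraint(IReadOnlyCollection<double> responseTimesSeconds,
+                                        double percentile, double limitSeconds)
+{
+    if (responseTimesSeconds.Count == 0) return false;
+    var sorted = responseTimesSeconds.OrderBy(t => t).ToArray();
+    int rank = (int)Math.Ceiling(percentile / 100.0 * sorted.Length); // nearest rank
+    return sorted[Math.Max(rank - 1, 0)] <= limitSeconds;
+}
+
+// Example: MeetsResponseTimeConstraint(measuredTimes, 95, 0.5) for Premium.
+```
+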
+## Next steps
+
+Learn more about purchasing models and related concepts in the following articles:
+
+- [DTU-based purchasing model overview](service-tiers-dtu.md)
+- [vCore purchasing model - Azure SQL Database](service-tiers-sql-database-vcore.md)
+- [Compare vCore and DTU-based purchasing models of Azure SQL Database](purchasing-models.md)
+- [Migrate Azure SQL Database from the DTU-based model to the vCore-based model](migrate-dtu-to-vcore.md)
+- [Resource limits for single databases using the DTU purchasing model - Azure SQL Database](resource-limits-dtu-single-databases.md)
+- [Resources limits for elastic pools using the DTU purchasing model](resource-limits-dtu-elastic-pools.md)
azure-sql Service Tiers Dtu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/service-tiers-dtu.md
Last updated 02/02/2022
# DTU-based purchasing model overview
[!INCLUDE[appliesto-sqldb](../includes/appliesto-sqldb.md)]
+In this article, learn about the DTU-based purchasing model for Azure SQL Database.
To learn more, review [vCore-based purchasing model](service-tiers-vcore.md) and [compare purchasing models](purchasing-models.md).

## Database transaction units (DTUs)

A database transaction unit (DTU) represents a blended measure of CPU, memory, reads, and writes. Service tiers in the DTU-based purchasing model are differentiated by a range of compute sizes with a fixed amount of included storage, fixed retention period for backups, and fixed price. All service tiers in the DTU-based purchasing model provide flexibility of changing compute sizes with minimal [downtime](https://azure.microsoft.com/support/legal/sla/azure-sql-database); however, there is a switch over period where connectivity is lost to the database for a short amount of time, which can be mitigated using retry logic. Single databases and elastic pools are billed hourly based on service tier and compute size.

For a single database at a specific compute size within a [service tier](single-database-scale.md), Azure SQL Database guarantees a certain level of resources for that database (independent of any other database). This guarantee provides a predictable level of performance. The amount of resources allocated for a database is calculated as a number of DTUs and is a bundled measure of compute, storage, and I/O resources.
-The ratio among these resources is originally determined by an [online transaction processing (OLTP) benchmark workload](service-tiers-dtu.md) designed to be typical of real-world OLTP workloads. When your workload exceeds the amount of any of these resources, your throughput is throttled, resulting in slower performance and time-outs.
+The ratio among these resources is originally determined by an [online transaction processing (OLTP) benchmark workload](dtu-benchmark.md) designed to be typical of real-world OLTP workloads. When your workload exceeds the amount of any of these resources, your throughput is throttled, resulting in slower performance and time-outs.
For single databases, the resources used by your workload don't impact the resources available to other databases in the Azure cloud. Likewise, the resources used by other workloads don't impact the resources available to your database.
In the DTU-based purchasing model, customers cannot choose the hardware generati
For example, a database can be moved to a different hardware generation if it's scaled up or down to a different service objective, or if the current infrastructure in a datacenter is approaching its capacity limits, or if the currently used hardware is being decommissioned due to its end of life.
-If a database is moved to different hardware, workload performance can change. The DTU model guarantees that the throughput and response time of the [DTU benchmark](./service-tiers-dtu.md#dtu-benchmark) workload will remain substantially identical as the database moves to a different hardware generation, as long as its service objective (the number of DTUs) stays the same.
+If a database is moved to different hardware, workload performance can change. The DTU model guarantees that the throughput and response time of the [DTU benchmark](dtu-benchmark.md) workload will remain substantially identical as the database moves to a different hardware generation, as long as its service objective (the number of DTUs) stays the same.
-However, across the wide spectrum of customer workloads running in Azure SQL Database, the impact of using different hardware for the same service objective can be more pronounced. Different workloads will benefit from different hardware configuration and features. Therefore, for workloads other than the DTU benchmark, it's possible to see performance differences if the database moves from one hardware generation to another.
+However, across the wide spectrum of customer workloads running in Azure SQL Database, the impact of using different hardware for the same service objective can be more pronounced. Different workloads will benefit from different hardware configuration and features. Therefore, for workloads other than the [DTU benchmark](dtu-benchmark.md), it's possible to see performance differences if the database moves from one hardware generation to another.
For example, an application that is sensitive to network latency can see better performance on Gen5 hardware vs. Gen4 due to the use of Accelerated Networking in Gen5, but an application using intensive read IO can see better performance on Gen4 hardware versus Gen5 due to a higher memory per core ratio on Gen4.
Choosing a service tier depends primarily on business continuity, storage, and p
> [!NOTE]
> You can get a free database in Azure SQL Database at the Basic service tier in conjunction with an Azure free account to explore Azure. For information, see [Create a managed cloud database with your Azure free account](https://azure.microsoft.com/free/services/sql-database/).

## Resource limits

Resource limits differ for single and pooled databases.
To learn more, review [Resource limits for pooled databases](resource-limits-dtu
## DTU Benchmark
-Physical characteristics (CPU, memory, IO) associated to each DTU measure are calibrated using a benchmark that simulates real-world database workload.
-
-### Correlating benchmark results to real world database performance
-
-It is important to understand that all benchmarks are representative and indicative only. The transaction rates achieved with the benchmark application will not be the same as those that might be achieved with other applications. The benchmark comprises a collection of different transaction types run against a schema containing a range of tables and data types. While the benchmark exercises the same basic operations that are common to all OLTP workloads, it does not represent any specific class of database or application. The goal of the benchmark is to provide a reasonable guide to the relative performance of a database that might be expected when scaling up or down between compute sizes. In reality, databases are of different sizes and complexity, encounter different mixes of workloads, and will respond in different ways. For example, an IO-intensive application may hit IO thresholds sooner, or a CPU-intensive application may hit CPU limits sooner. There is no guarantee that any particular database will scale in the same way as the benchmark under increasing load.
-
-The benchmark and its methodology are described in more detail below.
-
-### Benchmark summary
-
-The benchmark measures the performance of a mix of basic database operations that occur most frequently in online transaction processing (OLTP) workloads. Although the benchmark is designed with cloud computing in mind, the database schema, data population, and transactions have been designed to be broadly representative of the basic elements most commonly used in OLTP workloads.
-
-### Schema
-
-The schema is designed to have enough variety and complexity to support a broad range of operations. The benchmark runs against a database comprised of six tables. The tables fall into three categories: fixed-size, scaling, and growing. There are two fixed-size tables; three scaling tables; and one growing table. Fixed-size tables have a constant number of rows. Scaling tables have a cardinality that is proportional to database performance, but doesn't change during the benchmark. The growing table is sized like a scaling table on initial load, but then the cardinality changes in the course of running the benchmark as rows are inserted and deleted.
-
-The schema includes a mix of data types, including integer, numeric, character, and date/time. The schema includes primary and secondary keys, but not any foreign keys - that is, there are no referential integrity constraints between tables.
-
-A data generation program generates the data for the initial database. Integer and numeric data is generated with various strategies. In some cases, values are distributed randomly over a range. In other cases, a set of values is randomly permuted to ensure that a specific distribution is maintained. Text fields are generated from a weighted list of words to produce realistic looking data.
-
-The database is sized based on a "scale factor." The scale factor (abbreviated as SF) determines the cardinality of the scaling and growing tables. As described below in the section Users and Pacing, the database size, number of users, and maximum performance all scale in proportion to each other.
-
-### Transactions
-
-The workload consists of nine transaction types, as shown in the table below. Each transaction is designed to highlight a particular set of system characteristics in the database engine and system hardware, with high contrast from the other transactions. This approach makes it easier to assess the impact of different components to overall performance. For example, the transaction "Read Heavy" produces a significant number of read operations from disk.
-
-| Transaction Type | Description |
-| | |
-| Read Lite |SELECT; in-memory; read-only |
-| Read Medium |SELECT; mostly in-memory; read-only |
-| Read Heavy |SELECT; mostly not in-memory; read-only |
-| Update Lite |UPDATE; in-memory; read-write |
-| Update Heavy |UPDATE; mostly not in-memory; read-write |
-| Insert Lite |INSERT; in-memory; read-write |
-| Insert Heavy |INSERT; mostly not in-memory; read-write |
-| Delete |DELETE; mix of in-memory and not in-memory; read-write |
-| CPU Heavy |SELECT; in-memory; relatively heavy CPU load; read-only |
+Physical characteristics (CPU, memory, IO) associated with each DTU measure are calibrated using a benchmark that simulates real-world database workload.
-### Workload mix
+Learn about the schema, transaction types used, workload mix, users and pacing, scaling rules, and metrics associated with the [DTU benchmark](dtu-benchmark.md).
-Transactions are selected at random from a weighted distribution with the following overall mix. The overall mix has a read/write ratio of approximately 2:1.
+## Compare DTU-based and vCore purchasing models
-| Transaction Type | % of Mix |
-| | |
-| Read Lite |35 |
-| Read Medium |20 |
-| Read Heavy |5 |
-| Update Lite |20 |
-| Update Heavy |3 |
-| Insert Lite |3 |
-| Insert Heavy |2 |
-| Delete |2 |
-| CPU Heavy |10 |
+While the DTU-based purchasing model is based on a bundled measure of compute, storage, and I/O resources, by comparison the [vCore purchasing model for Azure SQL Database](service-tiers-sql-database-vcore.md) allows you to independently choose and scale compute and storage resources.
-### Users and pacing
+The vCore-based purchasing model also allows you to use [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-benefit/) for SQL Server to save costs, and offers [Serverless](serverless-tier-overview.md) and [Hyperscale](service-tier-hyperscale.md) options for Azure SQL Database that are not available in the DTU-based purchasing model.
-The benchmark workload is driven from a tool that submits transactions across a set of connections to simulate the behavior of a number of concurrent users. Although all of the connections and transactions are machine generated, for simplicity we refer to these connections as “users.” Although each user operates independently of all other users, all users perform the same cycle of steps shown below:
-
-1. Establish a database connection.
-2. Repeat until signaled to exit:
- - Select a transaction at random (from a weighted distribution).
- - Perform the selected transaction and measure the response time.
- - Wait for a pacing delay.
-3. Close the database connection.
-4. Exit.
-
-The pacing delay (in step 2c) is selected at random, but with a distribution that has an average of 1.0 second. Thus each user can, on average, generate at most one transaction per second.
-
-### Scaling rules
-
-The number of users is determined by the database size (in scale-factor units). There is one user for every five scale-factor units. Because of the pacing delay, one user can generate at most one transaction per second, on average.
-
-For example, a scale-factor of 500 (SF=500) database will have 100 users and can achieve a maximum rate of 100 TPS. To drive a higher TPS rate requires more users and a larger database.
-
-### Measurement duration
-
-A valid benchmark run requires a steady-state measurement duration of at least one hour.
-
-### Metrics
-
-The key metrics in the benchmark are throughput and response time.
-
-- Throughput is the essential performance measure in the benchmark. Throughput is reported in transactions per unit-of-time, counting all transaction types.
-- Response time is a measure of performance predictability. The response time constraint varies with class of service, with higher classes of service having a more stringent response time requirement, as shown below.
-
-| Class of Service | Throughput Measure | Response Time Requirement |
-| | | |
-| Premium |Transactions per second |95th percentile at 0.5 seconds |
-| Standard |Transactions per minute |90th percentile at 1.0 seconds |
-| Basic |Transactions per hour |80th percentile at 2.0 seconds |
-
-> [!NOTE]
-> Response time metrics are specific to the [DTU Benchmark](#dtu-benchmark). Response times for other workloads are workload-dependent and will differ.
+Learn more in [Compare vCore and DTU-based purchasing models of Azure SQL Database](purchasing-models.md).
## Next steps
+Learn more about purchasing models and related concepts in the following articles:
+
- For details on specific compute sizes and storage size choices available for single databases, see [SQL Database DTU-based resource limits for single databases](resource-limits-dtu-single-databases.md#single-database-storage-sizes-and-compute-sizes).
- For details on specific compute sizes and storage size choices available for elastic pools, see [SQL Database DTU-based resource limits](resource-limits-dtu-elastic-pools.md#elastic-pool-storage-sizes-and-compute-sizes).
+- For information on the benchmark associated with the DTU-based purchasing model, see [DTU benchmark](dtu-benchmark.md).
+- [Compare vCore and DTU-based purchasing models of Azure SQL Database](purchasing-models.md).
azure-sql Vnet Service Endpoint Rule Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/vnet-service-endpoint-rule-overview.md
description: "Mark a subnet as a virtual network service endpoint. Then add the
-+ ms.devlang:
PolyBase and the COPY statement are commonly used to load data into Azure Synaps
- If you have a general-purpose v1 or Blob Storage account, you must *first upgrade to v2* by following the steps in [Upgrade to a general-purpose v2 storage account](../../storage/common/storage-account-upgrade.md). - For known issues with Azure Data Lake Storage Gen2, see [Known issues with Azure Data Lake Storage Gen2](../../storage/blobs/data-lake-storage-known-issues.md).
-1. Under your storage account, go to **Access Control (IAM)**, and select **Add role assignment**. Assign the **Storage Blob Data Contributor** Azure role to the server or workspace hosting your dedicated SQL pool, which you've registered with Azure AD.
+1. On your storage account page, select **Access control (IAM)**.
+
+1. Select **Add** > **Add role assignment** to open the **Add role assignment** page.
+
+1. Assign the following role. For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
+
+ | Setting | Value |
+ | | |
+ | Role | Storage Blob Data Contributor |
+ | Assign access to | User, group, or service principal |
+ | Members | Server or workspace hosting your dedicated SQL pool that you've registered with Azure AD |
+
+ ![Screenshot that shows Add role assignment page in Azure portal.](../../../includes/role-based-access-control/media/add-role-assignment-page.png)
> [!NOTE] > Only members with Owner privilege on the storage account can perform this step. For various Azure built-in roles, see [Azure built-in roles](../../role-based-access-control/built-in-roles.md).
azure-sql Doc Changes Updates Release Notes Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/doc-changes-updates-release-notes-whats-new.md
ms.devlang: Previously updated : 03/10/2022 Last updated : 03/28/2022 # What's new in Azure SQL Managed Instance? [!INCLUDE[appliesto-sqldb-sqlmi](../includes/appliesto-sqlmi.md)]
The following table lists the features of Azure SQL Managed Instance that are cu
| [Data virtualization](data-virtualization-overview.md) | Join locally stored relational data with data queried from external data sources, such as Azure Data Lake Storage Gen2 or Azure Blob Storage. | |[Endpoint policies](../../azure-sql/managed-instance/service-endpoint-policies-configure.md) | Configure which Azure Storage accounts can be accessed from a SQL Managed Instance subnet. Grants an extra layer of protection against inadvertent or malicious data exfiltration.| | [Instance pools](instance-pools-overview.md) | A convenient and cost-efficient way to migrate smaller SQL Server instances to the cloud. |
-| [Link feature](link-feature.md)| Online replication of SQL Server databases hosted anywhere to Azure SQL Managed Instance. |
+| [Managed Instance link](managed-instance-link-feature-overview.md)| Online replication of SQL Server databases hosted anywhere to Azure SQL Managed Instance. |
| [Maintenance window advance notifications](../database/advance-notifications.md)| Advance notifications (preview) for databases configured to use a non-default [maintenance window](../database/maintenance-window.md). Advance notifications are in preview for Azure SQL Managed Instance. | | [Memory optimized premium-series hardware generation](resource-limits.md#service-tier-characteristics) | Deploy your SQL Managed Instance to the new memory optimized premium-series hardware generation to take advantage of the latest Intel Ice Lake CPUs. The memory optimized hardware generation offers higher memory to vCore ratios. |
-| [Migration with Log Replay Service](log-replay-service-migrate.md) | Migrate databases from SQL Server to SQL Managed Instance by using Log Replay Service. |
+| [Migrate with Log Replay Service](log-replay-service-migrate.md) | Migrate databases from SQL Server to SQL Managed Instance by using Log Replay Service. |
| [Premium-series hardware generation](resource-limits.md#service-tier-characteristics) | Deploy your SQL Managed Instance to the new premium-series hardware generation to take advantage of the latest Intel Ice Lake CPUs. | | [Query Store hints](/sql/relational-databases/performance/query-store-hints?view=azuresqldb-mi-current&preserve-view=true) | Use query hints to optimize your query execution via the OPTION clause. | | [Service Broker cross-instance message exchange](/sql/database-engine/configure-windows/sql-server-service-broker) | Support for cross-instance message exchange using Service Broker on Azure SQL Managed Instance. |
The following table lists the features of Azure SQL Managed Instance that have t
|[Maintenance window](../database/maintenance-window.md)| March 2022 | The maintenance window feature allows you to configure maintenance schedule for your Azure SQL Managed Instance. [Maintenance window advance notifications](../database/advance-notifications.md), however, are in preview for Azure SQL Managed Instance.| |[16 TB support in General Purpose](resource-limits.md)| November 2021 | Support for allocation up to 16 TB of space on SQL Managed Instance in the General Purpose service tier. | [Azure Active Directory-only authentication](../database/authentication-azure-ad-only-authentication.md) | November 2021 | It's now possible to restrict authentication to your Azure SQL Managed Instance only to Azure Active Directory users. |
-| [Distributed transactions](../database/elastic-transactions-overview.md) | November 2021 | Distributed database transactions for Azure SQL Managed Instance allow you to run distributed transactions that span several databases across instances. |
+|[Distributed transactions](../database/elastic-transactions-overview.md) | November 2021 | Distributed database transactions for Azure SQL Managed Instance allow you to run distributed transactions that span several databases across instances. |
|[Linked server - managed identity Azure AD authentication](/sql/relational-databases/system-stored-procedures/sp-addlinkedserver-transact-sql#h-create-sql-managed-instance-linked-server-with-managed-identity-azure-ad-authentication) |November 2021 | Create a linked server with managed identity authentication for your Azure SQL Managed Instance.| |[Linked server - pass-through Azure AD authentication](/sql/relational-databases/system-stored-procedures/sp-addlinkedserver-transact-sql#i-create-sql-managed-instance-linked-server-with-pass-through-azure-ad-authentication) |November 2021 | Create a linked server with pass-through Azure AD authentication for your Azure SQL Managed Instance. | |[Long-term backup retention](long-term-backup-retention-configure.md) |November 2021 | Store full backups for a specific database with configured redundancy for up to 10 years in Azure Blob storage, restoring the database as a new database. |
Learn about significant changes to the Azure SQL Managed Instance documentation.
| Changes | Details | | | | | **Data virtualization preview** | It's now possible to query data in external sources such as Azure Data Lake Storage Gen2 or Azure Blob Storage, joining it with locally stored relational data. This feature is currently in preview. To learn more, see [Data virtualization](data-virtualization-overview.md). |
-| **Link feature guidance** | We've published a number of guides for using the [link feature](link-feature.md) with SQL Managed Instance, including how to [prepare your environment](managed-instance-link-preparation.md), [configure replication](managed-instance-link-use-ssms-to-replicate-database.md), [failover your database](managed-instance-link-use-ssms-to-failover-database.md), and some [best practices](link-feature-best-practices.md) when using the link feature. |
+| **Log Replay Service migration** | Use the Log Replay Service to migrate from SQL Server to Azure SQL Managed Instance. This feature is currently in preview. To learn more, see [Migrate with Log Replay Service](log-replay-service-migrate.md). |
+| **Managed Instance link guidance** | We've published a number of guides for using the [Managed Instance link feature](managed-instance-link-feature-overview.md), including how to [prepare your environment](managed-instance-link-preparation.md), [configure replication by using SSMS](managed-instance-link-use-ssms-to-replicate-database.md), [configure replication via scripts](managed-instance-link-use-scripts-to-replicate-database.md), [fail over your database by using SSMS](managed-instance-link-use-ssms-to-failover-database.md), [fail over your database via scripts](managed-instance-link-use-scripts-to-failover-database.md) and some [best practices](managed-instance-link-best-practices.md) when using the link feature (currently in preview). |
| **Maintenance window GA, advance notifications preview** | The [maintenance window](../database/maintenance-window.md) feature is now generally available, allowing you to configure a maintenance schedule for your Azure SQL Managed Instance. It's also possible to receive advance notifications for planned maintenance events, which is currently in preview. Review [Maintenance window advance notifications (preview)](../database/advance-notifications.md) to learn more. | | **Windows Auth for Azure Active Directory principals preview** | Windows Authentication for managed instances empowers customers to move existing services to the cloud while maintaining a seamless user experience, and provides the basis for infrastructure modernization. Learn more in [Windows Authentication for Azure Active Directory principals on Azure SQL Managed Instance](winauth-azuread-overview.md). |
Learn about significant changes to the Azure SQL Managed Instance documentation.
| **Azure AD-only authentication GA** | Restricting authentication to your Azure SQL Managed Instance only to Azure Active Directory users is now generally available. To learn more, see [Azure AD-only authentication](../database/authentication-azure-ad-only-authentication.md). | | **Distributed transactions GA** | The ability to execute distributed transactions across managed instances is now generally available. See [Distributed transactions](../database/elastic-transactions-overview.md) to learn more. | |**Endpoint policies preview** | It's now possible to configure an endpoint policy to restrict access from a SQL Managed Instance subnet to an Azure Storage account. This grants an extra layer of protection against inadvertent or malicious data exfiltration. See [Endpoint policies](../../azure-sql/managed-instance/service-endpoint-policies-configure.md) to learn more. |
-|**Link feature preview** | Use the link feature for SQL Managed Instance to replicate data from your SQL Server hosted anywhere to Azure SQL Managed Instance, leveraging the benefits of Azure without moving your data to Azure, to offload your workloads, for disaster recovery, or to migrate to the cloud. See the [Link feature for SQL Managed Instance](link-feature.md) to learn more. The link feature is currently in limited public preview. |
+|**Link feature preview** | Use the link feature for SQL Managed Instance to replicate data from your SQL Server hosted anywhere to Azure SQL Managed Instance, leveraging the benefits of Azure without moving your data to Azure, to offload your workloads, for disaster recovery, or to migrate to the cloud. See the [Link feature for SQL Managed Instance](managed-instance-link-feature-overview.md) to learn more. The link feature is currently in limited public preview. |
|**Long-term backup retention GA** | Storing full backups for a specific database with configured redundancy for up to 10 years in Azure Blob storage is now generally available. To learn more, see [Long-term backup retention](long-term-backup-retention-configure.md). | | **Move instance to different subnet GA** | It's now possible to move your SQL Managed Instance to a different subnet. See [Move instance to different subnet](vnet-subnet-move-instance.md) to learn more. | |**New hardware generation preview** | There are now two new hardware generations for SQL Managed Instance: premium-series, and a memory optimized premium-series. Both offerings take advantage of a new generation of hardware powered by the latest Intel Ice Lake CPUs, and offer a higher memory to vCore ratio to support your most resource demanding database applications. As part of this announcement, the Gen5 hardware generation has been renamed to standard-series. The two new premium hardware generations are currently in preview. See [resource limits](resource-limits.md#service-tier-characteristics) to learn more. |
azure-sql How To Content Reference Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/how-to-content-reference-guide.md
In this article you can find a content reference to various guides, scripts, and
- [Replicate database with Azure SQL Managed Instance link feature with T-SQL and PowerShell scripts](managed-instance-link-use-scripts-to-replicate-database.md) - [Failover database with link feature in SSMS - Azure SQL Managed Instance](managed-instance-link-use-ssms-to-failover-database.md) - [Failover (migrate) database with Azure SQL Managed Instance link feature with T-SQL and PowerShell scripts](managed-instance-link-use-scripts-to-failover-database.md)-- [Best practices with link feature for Azure SQL Managed Instance](link-feature-best-practices.md)
+- [Best practices with link feature for Azure SQL Managed Instance](managed-instance-link-best-practices.md)
## Monitoring and tuning
azure-sql Log Replay Service Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/log-replay-service-migrate.md
description: Learn how to migrate databases from SQL Server to SQL Managed Insta
-+
The SAS authentication is generated with the time validity that you specified. Y
:::image type="content" source="./media/log-replay-service-migrate/lrs-generated-uri-token.png" alt-text="Screenshot that shows an example of the U R I version of an S A S token."::: > [!NOTE]
- > Using SAS tokens created with permissions set through defining a [stored access policy](https://docs.microsoft.com/rest/api/storageservices/define-stored-access-policy.md) is not supported at this time. You will need to follow the instructions in this guide on manually specifying Read and List permissions for the SAS token.
+ > Using SAS tokens created with permissions set through defining a [stored access policy](/rest/api/storageservices/define-stored-access-policy.md) is not supported at this time. You will need to follow the instructions in this guide on manually specifying Read and List permissions for the SAS token.
### Copy parameters from the SAS token
Functional limitations of LRS are:
- System-managed software patches are blocked for 36 hours once the LRS has been started. After this time window expires, the next software maintenance update will stop LRS. You will need to restart LRS from scratch. - LRS requires databases on SQL Server to be backed up with the `CHECKSUM` option enabled. - The SAS token that LRS will use must be generated for the entire Azure Blob Storage container, and it must have Read and List permissions only. For example, if you grant Read, List and Write permissions, LRS will not be able to start because of the extra Write permission.-- Using SAS tokens created with permissions set through defining a [stored access policy](https://docs.microsoft.com/rest/api/storageservices/define-stored-access-policy.md) is not supported at this time. You will need to follow the instructions in this guide on manually specifying Read and List permissions for the SAS token.
+- Using SAS tokens created with permissions set through defining a [stored access policy](/rest/api/storageservices/define-stored-access-policy.md) is not supported at this time. You will need to follow the instructions in this guide on manually specifying Read and List permissions for the SAS token.
- Backup files containing % and $ characters in the file name cannot be consumed by LRS. Consider renaming such file names. - Backup files for different databases must be placed in separate folders on Blob Storage in a flat-file structure. Nested folders inside individual database folders are not supported. - LRS must be started separately for each database pointing to the full URI path containing an individual database folder. - LRS can support up to 100 simultaneous restore processes per single managed instance. > [!NOTE]
-> If you require database to be R/O accessible during the migration, and if you require migration window larger than 36 hours, please consider an alternative online migrations solution [link feature for Managed Instance](link-feature.md) providing such capability.
+> If you require the database to be R/O accessible during the migration, and a migration window longer than 36 hours, please consider the [link feature for Managed Instance](managed-instance-link-feature-overview.md) as an alternative online migration solution that provides these capabilities.
## Troubleshooting
After you start LRS, use the monitoring cmdlet (`get-azsqlinstancedatabaselogrep
- If you started LRS in autocomplete mode, was a valid filename for the last backup file specified? ## Next steps-- Learn more about [migrating to Managed Instance using the link feature](link-feature.md).
+- Learn more about [migrating to Managed Instance using the link feature](managed-instance-link-feature-overview.md).
- Learn more about [migrating from SQL Server to SQL Managed instance](../migration-guides/managed-instance/sql-server-to-managed-instance-guide.md). - Learn more about [differences between SQL Server and SQL Managed Instance](transact-sql-tsql-differences-sql-server.md). - Learn more about [best practices to cost and size workloads migrated to Azure](/azure/cloud-adoption-framework/migrate/azure-best-practices/migrate-best-practices-costs).
azure-sql Managed Instance Link Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/managed-instance-link-best-practices.md
+
+ Title: The link feature best practices
+
+description: Learn about best practices when using the link feature for Azure SQL Managed Instance.
++++
+ms.devlang:
++++ Last updated : 03/28/2022+
+# Best practices with link feature for Azure SQL Managed Instance (preview)
+
+This article outlines best practices when using the link feature for Azure SQL Managed Instance. The link feature for Azure SQL Managed Instance connects your SQL Servers hosted anywhere to SQL Managed Instance, providing near real-time data replication to the cloud.
+
+> [!NOTE]
+> The link feature for Azure SQL Managed Instance is currently in preview.
+
+## Take log backups regularly
+
+The link feature replicates data using the [Distributed availability groups](/sql/database-engine/availability-groups/windows/distributed-availability-groups) concept based on the Always On availability groups technology stack. Data replication with distributed availability groups is based on replicating transaction log records. No transaction log records can be truncated from the database on the primary instance until they're replicated to the database on the secondary instance. If transaction log record replication is slow or blocked due to network connection issues, the log file keeps growing on the primary instance. Growth speed depends on the intensity of the workload and the network speed. If there's a prolonged network connection outage and heavy workload on the primary instance, the log file may consume all available storage space.
+
+To minimize the risk of running out of space on your primary instance due to log file growth, make sure to **take database log backups regularly**. By taking log backups regularly, you make your database more resilient to unplanned log growth events. Consider scheduling daily log backup tasks using a SQL Server Agent job (a sketch of such a job follows the sample backup script below).
+
+You can use a Transact-SQL (T-SQL) script to back up the log file, such as the sample provided in this section. Replace the placeholders in the sample script with the name of your database, the name and path of the backup file, and a description.
+
+To back up your transaction log, use the following sample Transact-SQL (T-SQL) script on SQL Server:
+
+```sql
+-- Execute on SQL Server
+-- Set the current database inside the job step or script
+USE [<DatabaseName>]
+
+-- Check that you're executing the script on the primary instance
+IF (SELECT role
+    FROM sys.dm_hadr_availability_replica_states AS a
+    JOIN sys.availability_replicas AS b
+        ON b.replica_id = a.replica_id
+    WHERE b.replica_server_name = @@SERVERNAME) = 1
+BEGIN
+    -- Take the log backup
+    BACKUP LOG [<DatabaseName>]
+    TO DISK = N'<DiskPathandFileName>'
+    WITH NOFORMAT, NOINIT,
+    NAME = N'<Description>', SKIP, NOREWIND, NOUNLOAD, COMPRESSION, STATS = 1
+END
+```
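+
+Beyond running the script manually, the daily backup can be scheduled through SQL Server Agent, as suggested above. The following is a minimal illustrative sketch (not part of the official guidance) that creates such a job using the standard Agent stored procedures in `msdb`; the job name, schedule, and backup command are hypothetical placeholders you should adapt:
+
+```sql
+-- Execute on SQL Server: create a hypothetical daily log backup job
+USE msdb;
+EXEC dbo.sp_add_job @job_name = N'Daily log backup - <DatabaseName>';
+EXEC dbo.sp_add_jobstep
+    @job_name = N'Daily log backup - <DatabaseName>',
+    @step_name = N'Back up transaction log',
+    @subsystem = N'TSQL',
+    @command = N'BACKUP LOG [<DatabaseName>] TO DISK = N''<DiskPathandFileName>'' WITH COMPRESSION;',
+    @database_name = N'master';
+-- Run every day at 02:00
+EXEC dbo.sp_add_schedule
+    @schedule_name = N'Daily at 2 AM',
+    @freq_type = 4,           -- daily
+    @freq_interval = 1,       -- every day
+    @active_start_time = 020000;
+EXEC dbo.sp_attach_schedule
+    @job_name = N'Daily log backup - <DatabaseName>',
+    @schedule_name = N'Daily at 2 AM';
+-- Target the local server
+EXEC dbo.sp_add_jobserver @job_name = N'Daily log backup - <DatabaseName>';
+```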
+
+Use the following Transact-SQL (T-SQL) command to check the log space used by your database on SQL Server:
+
+```sql
+-- Execute on SQL Server
+DBCC SQLPERF(LOGSPACE);
+```
+
+The query output looks like the following example for the sample database **tpcc**:
++
+In this example, the database has used 76% of the available log, with an absolute log file size of approximately 27 GB (27,971 MB). Thresholds for action may vary based on your workload, but log use this high is typically an indication that you should take a log backup to truncate the log file and free up some space.
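+
+If you'd rather check log space programmatically than read `DBCC SQLPERF` output, one option is the `sys.dm_db_log_space_usage` view (available in SQL Server 2012 and later). A minimal sketch, assuming you run it in the context of the replicated database:
+
+```sql
+-- Execute on SQL Server, in the context of the replicated database
+USE [<DatabaseName>];
+SELECT used_log_space_in_percent,
+       total_log_size_in_bytes / 1048576 AS total_log_size_mb
+FROM sys.dm_db_log_space_usage;
+-- A high used_log_space_in_percent (for example, above 70-80 percent)
+-- suggests taking a log backup to truncate the log and free up space.
+```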
+
+## Add startup trace flags
+
+There are two trace flags (`-T1800` and `-T9567`) that, when added as startup parameters, can optimize the performance of data replication through the link. See [Enable startup trace flags](managed-instance-link-preparation.md#enable-startup-trace-flags) to learn more.
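+
+After adding the startup parameters and restarting SQL Server, you can confirm the trace flags are active with the standard `DBCC TRACESTATUS` command, for example:
+
+```sql
+-- Execute on SQL Server; lists all globally enabled trace flags
+DBCC TRACESTATUS(-1);
+
+-- Or check the two link-related trace flags explicitly
+DBCC TRACESTATUS(1800, 9567);
+```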
+
+## Next steps
+
+To get started with the link feature, [prepare your environment for replication](managed-instance-link-preparation.md).
+
+For more information on the link feature, see the following articles:
+
+- [Managed Instance link – overview](managed-instance-link-feature-overview.md)
+- [Managed Instance link – connecting SQL Server to Azure reimagined](https://aka.ms/mi-link-techblog)
azure-sql Managed Instance Link Feature Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/managed-instance-link-feature-overview.md
+
+ Title: The link feature
+
+description: Learn about the link feature for Azure SQL Managed Instance to continuously replicate data from SQL Server to the cloud, or migrate your SQL Server databases with the best possible minimum downtime.
++++
+ms.devlang:
++++ Last updated : 03/28/2022+
+# Link feature for Azure SQL Managed Instance (preview)
+
+The new link feature in Azure SQL Managed Instance connects your SQL Servers hosted anywhere to SQL Managed Instance, providing hybrid flexibility and database mobility. With an approach that uses near real-time data replication to the cloud, you can offload workloads to a read-only secondary in Azure to take advantage of Azure-only features, performance, and scale.
+
+After a disastrous event, you can continue running your read-only workloads on SQL Managed Instance in Azure. You can also choose to migrate one or more applications from SQL Server to SQL Managed Instance at the same time, at your own pace, and with the best possible minimum downtime compared to other solutions in Azure today.
+
+To use the link feature, you'll need:
+
+- SQL Server 2019 Enterprise Edition or Developer Edition with [CU15 (or above)](https://support.microsoft.com/en-us/topic/kb5008996-cumulative-update-15-for-sql-server-2019-4b6a8ee9-1c61-482d-914f-36e429901fb6) installed on-premises, or on an Azure VM.
+- Network connectivity between your SQL Server and managed instance is required. If your SQL Server is running on-premises, use a VPN link or ExpressRoute. If your SQL Server is running on an Azure VM, either deploy your VM to the same subnet as your managed instance, or use global VNet peering to connect two separate subnets.
+- Azure SQL Managed Instance provisioned on any service tier.
+
+> [!NOTE]
+> SQL Managed Instance link feature is available in all public Azure regions.
+> National clouds are currently not supported.
+
+## Overview
+
+The underlying technology of near real-time data replication between SQL Server and SQL Managed Instance is based on distributed availability groups, part of the well-known and proven Always On availability group technology stack. Extend your SQL Server on-premises availability group to SQL Managed Instance in Azure in a safe and secure manner.
+
+There's no need to have an existing availability group or multiple nodes. The link supports single-node SQL Server instances without existing availability groups, and also multiple-node SQL Server instances with existing availability groups. Through the link, you can use the modern benefits of Azure without migrating your entire SQL Server data estate to the cloud.
+
+You can keep running the link for as long as you need it, for months and even years at a time. And for your modernization journey, if or when you're ready to migrate to Azure, the link enables a considerably improved migration experience with the minimum possible downtime compared to all other options available today, providing a true online migration to SQL Managed Instance.
+
+## Supported scenarios
+
+Data replicated through the link feature from SQL Server to Azure SQL Managed Instance can be used with several scenarios, such as:
+
+- **Use Azure services without migrating to the cloud**
+- **Offload read-only workloads to Azure**
+- **Migrate to Azure**
+
+![Managed Instance link main scenario](./media/managed-instance-link-feature-overview/mi-link-main-scenario.png)
+
+### Use Azure services
+
+Use the link feature to leverage Azure services using SQL Server data without migrating to the cloud. Examples include reporting, analytics, backups, machine learning, and other jobs that send data to Azure.
+
+### Offload workloads to Azure
+
+You can also use the link feature to offload workloads to Azure. For example, an application could use SQL Server for read-write workloads, while offloading read-only workloads to SQL Managed Instance in any of Azure's 60+ regions worldwide. Once the link is established, the primary database on SQL Server is read/write accessible, while the data replicated to SQL Managed Instance in Azure is read-only accessible. This allows for various scenarios where replicated databases on SQL Managed Instance can be used for read scale-out and offloading read-only workloads to Azure. SQL Managed Instance, in parallel, can also host independent read/write databases. This allows for copying the replicated database to another read/write database on the same managed instance for further data processing.
+
+The link is database scoped (one link per one database), allowing for consolidation and deconsolidation of workloads in Azure. For example, you can replicate databases from multiple SQL Servers to a single SQL Managed Instance in Azure (consolidation), or replicate databases from a single SQL Server to multiple managed instances via a one-to-one relationship between a database and a managed instance, to any of Azure's regions worldwide (deconsolidation). The latter provides you with an efficient way to quickly bring your workloads closer to your customers in any region worldwide, which you can use as read-only replicas.
+
+### Migrate to Azure
+
+The link feature also facilitates migrating from SQL Server to SQL Managed Instance, enabling:
+
+- The most performant minimum downtime migration compared to all other solutions available today
+- True online migration to SQL Managed Instance in any service tier
+
+Since the link feature enables minimum downtime migration, you can migrate to your managed instance while maintaining your primary workload online. While online migration was previously possible with other solutions when migrating to the general purpose service tier, the link feature now also allows for true online migrations to the business critical service tier.
+
+## How it works
+
+The underlying technology behind the link feature for SQL Managed Instance is distributed availability groups. The solution supports single-node systems without existing availability groups, or multiple node systems with existing availability groups.
+
+![How does the link feature for SQL Managed Instance work](./media/managed-instance-link-feature-overview/mi-link-ag-dag.png)
+
+Secure connectivity, such as VPN or ExpressRoute, is used between an on-premises network and Azure. If SQL Server is hosted on an Azure VM, the internal Azure backbone can be used between the VM and the managed instance – for example, global VNet peering. The trust between the two systems is established using certificate-based authentication, in which SQL Server and SQL Managed Instance exchange their public keys.
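+
+To make the certificate exchange more concrete, the following is an illustrative sketch of creating a certificate on SQL Server and reading its public portion so it can be shared with the other side; the certificate name and password are hypothetical, and the full procedure is covered in the preparation guide:
+
+```sql
+-- Execute on SQL Server: a hypothetical certificate for link authentication
+USE master;
+CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<StrongPassword>';
+CREATE CERTIFICATE MILinkCertificate
+    WITH SUBJECT = 'Certificate used for Managed Instance link';
+-- Read the public portion of the certificate to exchange with the other side
+SELECT CERTENCODED(CERT_ID('MILinkCertificate')) AS PublicKeyEncoded;
+```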
+
+There can be up to 100 links from the same or various SQL Server sources to a single SQL Managed Instance. This limit is governed by the number of databases that can be hosted on a managed instance at this time. Likewise, a single SQL Server can establish multiple parallel database replication links with several managed instances in different Azure regions in a one-to-one relationship between a database and a managed instance. The feature requires CU15 (or higher) to be installed on SQL Server 2019.
+
+## Use the link feature
+
+To help with the initial environment setup, we have prepared the following online guide on how to set up your SQL Server environment to use the link feature for Managed Instance:
+
+* [Prepare environment for the link](managed-instance-link-preparation.md)
+
+Once you've ensured the prerequisites have been met, you can create the link using the automated wizard in SSMS, or you can choose to set up the link manually using scripts. Create the link using one of the following instructions:
+
+* [Replicate database with link feature in SSMS](managed-instance-link-use-ssms-to-replicate-database.md), or alternatively
+* [Replicate database with Azure SQL Managed Instance link feature with T-SQL and PowerShell scripts](managed-instance-link-use-scripts-to-replicate-database.md)
+
+Once the link has been created, follow the best practices for maintaining the link, as described in the following article:
+
+* [Best practices with link feature for Azure SQL Managed Instance](managed-instance-link-best-practices.md)
+
+When you're ready to migrate a database to Azure with minimum downtime, you can do so by using the automated wizard in SSMS, or manually with scripts. Migrate your database to Azure by using one of the following instructions:
+
+* [Failover database with link feature in SSMS](managed-instance-link-use-ssms-to-failover-database.md), or alternatively
+* [Failover (migrate) database with Azure SQL Managed Instance link feature with T-SQL and PowerShell scripts](managed-instance-link-use-scripts-to-failover-database.md)
+
+## Limitations
+
+This section describes the product’s functional limitations.
+
+### General functional limitations
+
+Managed Instance link has a set of general limitations, which are listed in this section. The listed limitations are of a technical nature and are unlikely to be addressed in the foreseeable future.
+
+- Only user databases can be replicated. Replication of system databases isn't supported.
+- The solution doesn't replicate server-level objects, agent jobs, or user logins from SQL Server to Managed Instance.
+- Only one database can be placed into a single Availability Group per Distributed Availability Group link.
+- The link can't be established between SQL Server and Managed Instance if functionality used on SQL Server isn't supported on Managed Instance.
+ - File tables and file streams aren't supported for replication, as Managed Instance doesn't support them. A quick pre-check for these features is sketched after this list.
+ - Replicating databases that use Hekaton (In-Memory OLTP) isn't supported on the Managed Instance General Purpose service tier. Hekaton is only supported on the Managed Instance Business Critical service tier.
+ - For the full list of differences between SQL Server and Managed Instance, see [this article](./transact-sql-tsql-differences-sql-server.md).
+- If change data capture (CDC), log shipping, or Service Broker is used with the database replicated on SQL Server, when the database is migrated to Managed Instance, on failover to Azure, clients will need to connect using the instance name of the current global primary replica. You'll need to manually reconfigure these settings.
+- If transactional replication is used with the database replicated on SQL Server, in a migration scenario, on failover to Azure, transactional replication on Azure SQL Managed Instance won't continue. You'll need to manually reconfigure transactional replication.
+- If distributed transactions are used with the database replicated from SQL Server, in a migration scenario, on the cutover to the cloud, the DTC capabilities won't be transferred. The migrated database won't be able to participate in distributed transactions with SQL Server, as Managed Instance doesn't support distributed transactions with SQL Server at this time. For reference, Managed Instance today supports distributed transactions only between other Managed Instances; see [this article](../database/elastic-transactions-overview.md#transactions-for-sql-managed-instance).
+- Managed Instance link can replicate a database of any size if it fits into the chosen storage size of the target Managed Instance.
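+
+As referenced in the list above, one quick, illustrative way to pre-check a database for FILESTREAM/FileTable and In-Memory OLTP (Hekaton) filegroups is to inspect `sys.filegroups`; this sketch is a starting point only, as thorough feature discovery may require additional checks:
+
+```sql
+-- Execute on SQL Server, in the context of the database you plan to replicate
+USE [<DatabaseName>];
+SELECT name, type, type_desc
+FROM sys.filegroups
+WHERE type IN ('FD', 'FX');  -- FD = FILESTREAM data, FX = memory-optimized data
+-- No rows returned means neither filegroup type is present in the database.
+```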
+
+### Preview limitations
+
+Some Managed Instance link features and capabilities are limited **at this time**. Details can be found in the following list.
+- SQL Server 2019, Enterprise Edition or Developer Edition, CU15 (or higher) on Windows or Linux host OS is supported.
+- A private endpoint (VPN/VNet) is supported to connect Distributed Availability Groups to Managed Instance. A public endpoint can't be used to connect to Managed Instance.
+- Managed Instance link authentication between the SQL Server instance and Managed Instance is certificate-based, available only through an exchange of certificates. Windows authentication between instances isn't supported.
+- Replication of user databases from SQL Server to Managed Instance is one-way. User databases from Managed Instance can't be replicated back to SQL Server.
+- [Auto failover groups](auto-failover-group-sql-mi.md) replication to a secondary Managed Instance can't be used in parallel while operating the Managed Instance link with SQL Server.
+- Replicated R/O databases aren't part of the auto-backup process on SQL Managed Instance.
+
+## Next steps
+
+If you're interested in using the link feature for Azure SQL Managed Instance with versions and editions that are currently not supported, sign up [here](https://aka.ms/mi-link-signup).
+
+For more information on the link feature, see the following:
+
+- [Managed Instance link – connecting SQL Server to Azure reimagined](https://aka.ms/mi-link-techblog).
+- [Prepare for SQL Managed Instance link](./managed-instance-link-preparation.md).
+- [Use SQL Managed Instance link via SSMS to replicate database](./managed-instance-link-use-ssms-to-replicate-database.md).
+- [Use SQL Managed Instance link via SSMS to migrate database](./managed-instance-link-use-ssms-to-failover-database.md).
+
+For other replication scenarios, consider:
+
+- [Transactional replication with Azure SQL Managed Instance (Preview)](replication-transactional-overview.md)
azure-sql Managed Instance Link Preparation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/managed-instance-link-preparation.md
Last updated 03/22/2022
# Prepare your environment for a link - Azure SQL Managed Instance [!INCLUDE[appliesto-sqlmi](../includes/appliesto-sqlmi.md)]
-This article teaches you how to prepare your environment for a [Managed Instance link](link-feature.md) so that you can replicate databases from SQL Server to Azure SQL Managed Instance.
+This article teaches you how to prepare your environment for a [Managed Instance link](managed-instance-link-feature-overview.md) so that you can replicate databases from SQL Server to Azure SQL Managed Instance.
> [!NOTE] > The link is a feature of Azure SQL Managed Instance and is currently in preview.
azure-sql Managed Instance Link Use Scripts To Failover Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/managed-instance-link-use-scripts-to-failover-database.md
Last updated 03/15/2022
[!INCLUDE[appliesto-sqlmi](../includes/appliesto-sqlmi.md)]
-This article teaches you how to use Transact-SQL (T-SQL) and PowerShell scripts and a [Managed Instance link](link-feature.md) to fail over (migrate) your database from SQL Server to SQL Managed Instance.
+This article teaches you how to use Transact-SQL (T-SQL) and PowerShell scripts and a [Managed Instance link](managed-instance-link-feature-overview.md) to fail over (migrate) your database from SQL Server to SQL Managed Instance.
> [!NOTE] > - The link is a feature of Azure SQL Managed Instance and is currently in preview. You can also use a [SQL Server Management Studio (SSMS) wizard](managed-instance-link-use-ssms-to-failover-database.md) to fail over a database with the link.
azure-sql Managed Instance Link Use Scripts To Replicate Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/managed-instance-link-use-scripts-to-replicate-database.md
Last updated 03/22/2022
[!INCLUDE[appliesto-sqlmi](../includes/appliesto-sqlmi.md)]
-This article teaches you how to use Transact-SQL (T-SQL) and PowerShell scripts to replicate your database from SQL Server to Azure SQL Managed Instance by using a [Managed Instance link](link-feature.md).
+This article teaches you how to use Transact-SQL (T-SQL) and PowerShell scripts to replicate your database from SQL Server to Azure SQL Managed Instance by using a [Managed Instance link](managed-instance-link-feature-overview.md).
> [!NOTE] > - The link is a feature of Azure SQL Managed Instance and is currently in preview. You can also use a [SQL Server Management Studio (SSMS) wizard](managed-instance-link-use-ssms-to-replicate-database.md) to set up the link to replicate your database.
After the connection is established, the **Managed Instance Databases** view in
> [!IMPORTANT] > - The link won't work unless network connectivity exists between SQL Server and SQL Managed Instance. To troubleshoot network connectivity, follow the steps in [Test bidirectional network connectivity](managed-instance-link-preparation.md#test-bidirectional-network-connectivity).
-> - Take regular backups of the log file on SQL Server. If the used log space reaches 100 percent, replication to SQL Managed Instance stops until space use is reduced. We highly recommend that you automate log backups by setting up a daily job. For details, see [Back up log files on SQL Server](link-feature-best-practices.md#take-log-backups-regularly).
+> - Take regular backups of the log file on SQL Server. If the used log space reaches 100 percent, replication to SQL Managed Instance stops until space use is reduced. We highly recommend that you automate log backups by setting up a daily job. For details, see [Back up log files on SQL Server](managed-instance-link-best-practices.md#take-log-backups-regularly).
## Next steps
azure-sql Managed Instance Link Use Ssms To Failover Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/managed-instance-link-use-ssms-to-failover-database.md
Last updated 03/10/2022
[!INCLUDE[appliesto-sqlmi](../includes/appliesto-sqlmi.md)]
-This article teaches you how to fail over a database from SQL Server to Azure SQL Managed Instance by using [the link feature](link-feature.md) in SQL Server Management Studio (SSMS).
+This article teaches you how to fail over a database from SQL Server to Azure SQL Managed Instance by using [the link feature](managed-instance-link-feature-overview.md) in SQL Server Management Studio (SSMS).
Failing over your database from SQL Server to SQL Managed Instance breaks the link between the two databases. It stops replication and leaves both databases in an independent state, ready for individual read/write workloads.
In the following steps, you use the **Failover database to Managed Instance** wi
During the failover process, the link is dropped and no longer exists. The source SQL Server database and the target SQL Managed Instance database can both execute a read/write workload. They're completely independent.
-You can validate that the link bas been dropped by reviewing the database on SQL Server.
+You can validate that the link has been dropped by reviewing the database on SQL Server.
:::image type="content" source="./media/managed-instance-link-use-ssms-to-failover-database/link-failover-ssms-sql-server-database.png" alt-text="Screenshot that shows a database on SQL Server in S S M S.":::
Then, review the database on SQL Managed Instance.
## Next steps
-To learn more, see [Link feature for Azure SQL Managed Instance](link-feature.md).
+To learn more, see [Link feature for Azure SQL Managed Instance](managed-instance-link-feature-overview.md).
azure-sql Managed Instance Link Use Ssms To Replicate Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/managed-instance-link-use-ssms-to-replicate-database.md
Last updated 03/22/2022
[!INCLUDE[appliesto-sqlmi](../includes/appliesto-sqlmi.md)]
-This article teaches you how to replicate your database from SQL Server to Azure SQL Managed Instance by using [the link feature](link-feature.md) in SQL Server Management Studio (SSMS).
+This article teaches you how to replicate your database from SQL Server to Azure SQL Managed Instance by using [the link feature](managed-instance-link-feature-overview.md) in SQL Server Management Studio (SSMS).
> [!NOTE] > The link is a feature of Azure SQL Managed Instance and is currently in preview.
Connect to your managed instance and use Object Explorer to view your replicated
## Next steps
-To break the link and fail over your database to SQL Managed Instance, see [Fail over a database](managed-instance-link-use-ssms-to-failover-database.md). To learn more, see [Link feature for Azure SQL Managed Instance](link-feature.md).
+To break the link and fail over your database to SQL Managed Instance, see [Fail over a database](managed-instance-link-use-ssms-to-failover-database.md). To learn more, see [Link feature for Azure SQL Managed Instance](managed-instance-link-feature-overview.md).
azure-sql Sql Server To Managed Instance Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/migration-guides/managed-instance/sql-server-to-managed-instance-guide.md
After you've verified that your source environment is supported, start with the
In the Discover phase, scan the network to identify all SQL Server instances and features used by your organization.
-Use [Azure Migrate](../../../migrate/migrate-services-overview.md) to assesses migration suitability of on-premises servers, perform performance-based sizing, and provide cost estimations for running them in Azure.
+Use [Azure Migrate](../../../migrate/migrate-services-overview.md) to assess migration suitability of on-premises servers, perform performance-based sizing, and provide cost estimations for running them in Azure.
Alternatively, use the [Microsoft Assessment and Planning Toolkit (the "MAP Toolkit")](https://www.microsoft.com/download/details.aspx?id=7826) to assess your current IT infrastructure. The toolkit provides a powerful inventory, assessment, and reporting tool to simplify the migration planning process.
activities to the platform as they are built in. Therefore, some instance-level
need to be migrated, such as maintenance jobs for regular backups or Always On configuration, as [high availability](../../database/high-availability-sla.md) is built in.
-SQL Managed Instance supports the following database migration options (currently these are the
-only supported migration methods):
+This article covers two of the recommended migration options:
- Azure Database Migration Service - migration with near-zero downtime. - Native `RESTORE DATABASE FROM URL` - uses native backups from SQL Server and requires some downtime.
-This guide describe the two most popular options - Azure Database Migration Service (DMS) and native backup and restore.
+This guide describes the two most popular options - Azure Database Migration Service (DMS) and native backup and restore.
+
+For other migration tools, see [Compare migration options](sql-server-to-managed-instance-overview.md#compare-migration-options).
### Database Migration Service
To learn more about this migration option, see [Restore a database to Azure SQL
> [!NOTE] > A database restore operation is asynchronous and retryable. You might get an error in SQL Server Management Studio if the connection breaks or a time-out expires. Azure SQL Database will keep trying to restore database in the background, and you can track the progress of the restore using the [sys.dm_exec_requests](/sql/relational-databases/system-dynamic-management-views/sys-dm-exec-requests-transact-sql) and [sys.dm_operation_status](/sql/relational-databases/system-dynamic-management-views/sys-dm-operation-status-azure-sql-database) views.
-## Migation tools
-
-While using [Azure Database Migration Service](../../../dms/tutorial-sql-server-to-managed-instance.md), or [native backup and restore](../../managed-instance/restore-sample-database-quickstart.md) to migrate a database to Managed Instance, consider as well the following migration tools:
-
-|Migration option |When to use |Considerations |
-||||
-|[Azure SQL Migration extension for Azure Data Studio](../../../dms/migration-using-azure-data-studio.md) | - Migrate single databases or multiple databases at scale. </br> - Can run in both online (minimal downtime) and offline (acceptable downtime) modes. </br> </br> Supported sources: </br> - SQL Server (2005 to 2019) on-premises or Azure VM </br> - AWS EC2 </br> - AWS RDS </br> - GCP Compute SQL Server VM | - Easy to setup and get started. </br> - Requires setup of self-hosted integration runtime to access on-premises SQL Server and backups. </br> - Includes both assessment and migration capabilities. |
-|[Log Replay Service](../../managed-instance/log-replay-service-migrate.md) | - Migrate individual line-of-business application databases. </br> - More control is needed for database migrations. </br> </br> Supported sources: </br> - SQL Server (2008 to 2019) on-premises or Azure VM </br> - AWS EC2 </br> - AWS RDS </br> - GCP Compute SQL Server VM | - The migration entails making full database backups on SQL Server and copying backup files to Azure Blob Storage. Log Replay Service is used to restore backup files from Azure Blob Storage to SQL Managed Instance. </br> - Databases being restored during the migration process will be in a restoring mode and can't be used to read or write until the process has finished.|
-|[Link feature for Managed Instance](../../managed-instance/link-feature.md) | - Migrate individual line-of-business application databases. </br> - More control is needed for database migrations. </br> - Minimum downtime migration is needed. </br> </br> Supported sources: </br> - SQL Server (2016 to 2019) on-premises or Azure VM </br> - AWS EC2 </br> - GCP Compute SQL Server VM | - The migration entails establishing a VPN connectivity between SQL Server and Managed Instance, and opening inbound communication ports. </br> - Always On technology is used to replicate database near real-time, making an exact replica of SQL Server database on Managed Instance. </br> - Database can be used for R/O access on Managed Instance while migration is in progress. </br> - Provides the best performance minimum downtime migration. |
## Data sync and cutover
azure-sql Sql Server To Managed Instance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/migration-guides/managed-instance/sql-server-to-managed-instance-overview.md
We recommend the following migration tools:
|[Azure Database Migration Service](../../../dms/tutorial-sql-server-to-managed-instance.md) | This Azure service supports migration in the offline mode for applications that can afford downtime during the migration process. Unlike the continuous migration in online mode, offline mode migration runs a one-time restore of a full database backup from the source to the target. | |[Native backup and restore](../../managed-instance/restore-sample-database-quickstart.md) | SQL Managed Instance supports restore of native SQL Server database backups (.bak files). It's the easiest migration option for customers who can provide full database backups to Azure Storage.| |[Log Replay Service](../../managed-instance/log-replay-service-migrate.md) | This cloud service is enabled for SQL Managed Instance based on SQL Server log-shipping technology. It's a migration option for customers who can provide full, differential, and log database backups to Azure Storage. Log Replay Service is used to restore backup files from Azure Blob Storage to SQL Managed Instance.|
-|[Link feature for Managed Instance](../../managed-instance/link-feature.md) | This feature enables online migration to Managed Instance using Always On technology. It’s a migration option for customers who require database on Managed Instance to be accessible in R/O mode while migration is in progress, who need to keep the migration running for prolonged periods of time (weeks or months at the time), who require true online replication to Business Critical service tier, and for customers who require the most performant minimum downtime migration. |
+|[Managed Instance link](../../managed-instance/managed-instance-link-feature-overview.md) | This feature enables online migration to Managed Instance using Always On technology. It’s a migration option for customers who require database on Managed Instance to be accessible in R/O mode while migration is in progress, who need to keep the migration running for prolonged periods of time (weeks or months at the time), who require true online replication to Business Critical service tier, and for customers who require the most performant minimum downtime migration. |
The following table lists alternative migration tools:
Compare migration options to choose the path that's appropriate to your business needs.
-The following table compares the migration options that we recommend:
+The following table compares the recommended migration options:
|Migration option |When to use |Considerations | |||| |[Azure SQL Migration extension for Azure Data Studio](../../../dms/migration-using-azure-data-studio.md) | - Migrate single databases or multiple databases at scale. </br> - Can run in both online (minimal downtime) and offline (acceptable downtime) modes. </br> </br> Supported sources: </br> - SQL Server (2005 to 2019) on-premises or Azure VM </br> - AWS EC2 </br> - AWS RDS </br> - GCP Compute SQL Server VM | - Easy to setup and get started. </br> - Requires setup of self-hosted integration runtime to access on-premises SQL Server and backups. </br> - Includes both assessment and migration capabilities. | |[Azure Database Migration Service](../../../dms/tutorial-sql-server-to-managed-instance.md) | - Migrate single databases or multiple databases at scale. </br> - Can accommodate downtime during the migration process. </br> </br> Supported sources: </br> - SQL Server (2005 to 2019) on-premises or Azure VM </br> - AWS EC2 </br> - AWS RDS </br> - GCP Compute SQL Server VM | - Migrations at scale can be automated via [PowerShell](../../../dms/howto-sql-server-to-azure-sql-managed-instance-powershell-offline.md). </br> - Time to complete migration depends on database size and is affected by backup and restore time. </br> - Sufficient downtime might be required. | |[Native backup and restore](../../managed-instance/restore-sample-database-quickstart.md) | - Migrate individual line-of-business application databases. </br> - Quick and easy migration without a separate migration service or tool. </br> </br> Supported sources: </br> - SQL Server (2005 to 2019) on-premises or Azure VM </br> - AWS EC2 </br> - AWS RDS </br> - GCP Compute SQL Server VM | - Database backup uses multiple threads to optimize data transfer to Azure Blob Storage, but partner bandwidth and database size can affect transfer rate. </br> - Downtime should accommodate the time required to perform a full backup and restore (which is a size of data operation).|
-|[Log Replay Service](../../managed-instance/log-replay-service-migrate.md) | - Migrate individual line-of-business application databases. </br> - More control is needed for database migrations. </br> </br> Supported sources: </br> - SQL Server (2008 to 2019) on-premises or Azure VM </br> - AWS EC2 </br> - AWS RDS </br> - GCP Compute SQL Server VM | - The migration entails making full database backups on SQL Server and copying backup files to Azure Blob Storage. Log Replay Service is used to restore backup files from Azure Blob Storage to SQL Managed Instance. </br> - Databases being restored during the migration process will be in a restoring mode and can't be used to read or write until the process has finished.|
-|[Link feature for Managed Instance](../../managed-instance/link-feature.md) | - Migrate individual line-of-business application databases. </br> - More control is needed for database migrations. </br> - Minimum downtime migration is needed. </br> </br> Supported sources: </br> - SQL Server (2016 to 2019) on-premises or Azure VM </br> - AWS EC2 </br> - GCP Compute SQL Server VM | - The migration entails establishing a VPN connectivity between SQL Server and Managed Instance, and opening inbound communication ports. </br> - Always On technology is used to replicate database near real-time, making an exact replica of SQL Server database on Managed Instance. </br> - Database can be used for R/O access on Managed Instance while migration is in progress. </br> - Provides the best performance minimum downtime migration. |
+|[Log Replay Service](../../managed-instance/log-replay-service-migrate.md) | - Migrate individual line-of-business application databases. </br> - More control is needed for database migrations. </br> </br> Supported sources: </br> - SQL Server (2008 to 2019) on-premises or Azure VM </br> - AWS EC2 </br> - AWS RDS </br> - GCP Compute SQL Server VM | - The migration entails making full database backups on SQL Server and copying backup files to Azure Blob Storage. Log Replay Service is used to restore backup files from Azure Blob Storage to SQL Managed Instance. </br> - Databases being restored during the migration process will be in a restoring mode and can't be used for read or write workloads until the process is complete.|
+|[Managed Instance link](../../managed-instance/managed-instance-link-feature-overview.md) | - Migrate individual line-of-business application databases. </br> - More control is needed for database migrations. </br> - Minimum downtime migration is needed. </br> </br> Supported sources: </br> - SQL Server (2016 to 2019) on-premises or Azure VM </br> - AWS EC2 </br> - GCP Compute SQL Server VM | - The migration entails establishing a network connection between SQL Server and SQL Managed Instance, and opening communication ports. </br> - Uses [Always On availability group](/sql/database-engine/availability-groups/windows/overview-of-always-on-availability-groups-sql-server) technology to replicate the database in near real-time, making an exact replica of the SQL Server database on SQL Managed Instance. </br> - The database can be used for read-only access on SQL Managed Instance while migration is in progress. </br> - Provides the best performance during migration with minimum downtime. |
The following table compares the alternative migration options:
azure-video-analyzer Compare Video Indexer With Media Services Presets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-for-media-docs/compare-video-indexer-with-media-services-presets.md
-# Compare Azure Media Services v3 presets and Video Analyzer for Media
+# Compare Azure Media Services v3 presets and Video Analyzer for Media
-This article compares the capabilities of **Video Analyzer for Media (formerly Video Indexer) APIs** and **Media Services v3 APIs**.
+This article compares the capabilities of **Video Analyzer for Media (formerly Video Indexer) APIs** and **Media Services v3 APIs**.
-Currently, there is an overlap between features offered by the [Video Analyzer for Media APIs](https://api-portal.videoindexer.ai/) and the [Media Services v3 APIs](https://github.com/Azure/azure-rest-api-specs/blob/master/specification/mediaservices/resource-manager/Microsoft.Media/stable/2018-07-01/Encoding.json). The following table offers the current guideline for understanding the differences and similarities.
+Currently, there is an overlap between features offered by the [Video Analyzer for Media APIs](https://api-portal.videoindexer.ai/) and the [Media Services v3 APIs](https://github.com/Azure/azure-rest-api-specs/blob/master/specification/mediaservices/resource-manager/Microsoft.Media/stable/2018-07-01/Encoding.json). The following table offers the current guideline for understanding the differences and similarities.
## Compare

|Feature|Video Analyzer for Media APIs |Video Analyzer and Audio Analyzer Presets<br/>in Media Services v3 APIs|
||||
-|Media Insights|[Enhanced](video-indexer-output-json-v2.md) |[Fundamentals](../../media-services/latest/analyze-video-audio-files-concept.md)|
+|Media Insights|[Enhanced](video-indexer-output-json-v2.md) |[Fundamentals](/media-services/latest/analyze-video-audio-files-concept)|
|Experiences|See the full list of supported features: <br/> [Overview](video-indexer-overview.md)|Returns video insights only|
|Billing|[Media Services pricing](https://azure.microsoft.com/pricing/details/media-services/#analytics)|[Media Services pricing](https://azure.microsoft.com/pricing/details/media-services/#analytics)|
|Compliance|For the most current compliance updates, visit [Azure Compliance Offerings.pdf](https://gallery.technet.microsoft.com/Overview-of-Azure-c1be3942/file/178110/23/Microsoft%20Azure%20Compliance%20Offerings.pdf) and search for "Video Analyzer for Media" to see if it complies with a certificate of interest.|For the most current compliance updates, visit [Azure Compliance Offerings.pdf](https://gallery.technet.microsoft.com/Overview-of-Azure-c1be3942/file/178110/23/Microsoft%20Azure%20Compliance%20Offerings.pdf) and search for "Media Services" to see if it complies with a certificate of interest.|
Currently, there is an overlap between features offered by the [Video Analyzer f
[Video Analyzer for Media overview](video-indexer-overview.md)
-[Media Services v3 overview](../../media-services/latest/media-services-overview.md)
+[Media Services v3 overview](/media-services/latest/media-services-overview)
azure-video-analyzer Connect To Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-for-media-docs/connect-to-azure.md
Last updated 10/19/2021
-
+
# Create a Video Analyzer for Media account

When creating an Azure Video Analyzer for Media (formerly Video Indexer) account, you can choose a free trial account (where you get a certain number of free indexing minutes) or a paid option (where you're not limited by the quota). With a free trial, Video Analyzer for Media provides up to 600 minutes of free indexing to users and up to 2400 minutes of free indexing to users that subscribe to the Video Analyzer API on the [developer portal](https://aka.ms/avam-dev-portal). With the paid options, Azure Video Analyzer for Media offers two types of accounts: classic accounts (general availability) and ARM-based accounts (public preview). The main difference between the two is the account management platform. While classic accounts are built on the API Management platform, ARM-based account management is built on Azure, which enables you to apply access control to all services with Azure role-based access control (Azure RBAC) natively.
If the connection to Azure failed, you can attempt to troubleshoot the problem b
### Create and configure a Media Services account
-1. Use the [Azure](https://portal.azure.com/) portal to create an Azure Media Services account, as described in [Create an account](../../media-services/previous/media-services-portal-create-account.md).
+1. Use the [Azure](https://portal.azure.com/) portal to create an Azure Media Services account, as described in [Create an account](/media-services/previous/media-services-portal-create-account).
+
+ Make sure the Media Services account was created with the classic APIs.
- Make sure the Media Services account was created with the classic APIs.
-
![Media Services classic API](./media/create-account/enable-classic-api.png)
If the connection to Azure failed, you can attempt to troubleshoot the problem b
In the new Media Services account, select **Streaming endpoints**. Then select the streaming endpoint and press start.

   ![Streaming endpoints](./media/create-account/create-ams-account-se.png)
-4. For Video Analyzer for Media to authenticate with Media Services API, an AD app needs to be created. The following steps guide you through the Azure AD authentication process described in [Get started with Azure AD authentication by using the Azure portal](../../media-services/previous/media-services-portal-get-started-with-aad.md):
+4. For Video Analyzer for Media to authenticate with Media Services API, an AD app needs to be created. The following steps guide you through the Azure AD authentication process described in [Get started with Azure AD authentication by using the Azure portal](/media-services/previous/media-services-portal-get-started-with-aad):
1. In the new Media Services account, select **API access**.
- 2. Select [Service principal authentication method](../../media-services/previous/media-services-portal-get-started-with-aad.md).
+ 2. Select [Service principal authentication method](/media-services/previous/media-services-portal-get-started-with-aad).
   3. Get the client ID and client secret. After you select **Settings** -> **Keys**, add a **Description**, press **Save**, and the key value gets populated.
In the dialog, provide the following information:
When creating a new **ARM-Based** account, you have an option to import your content from the *trial* account into the new **ARM-Based** account free of charge.

> [!NOTE]
> * Import from trial can be performed only once per trial account.
-> * The target ARM-Based account needs to be created and available before import is assigned.
+> * The target ARM-Based account needs to be created and available before import is assigned.
> * Target ARM-Based account has to be an empty account (never indexed any media files).
To import your data, follow the steps:
3. Click the *Import content to an ARM-based account*.
4. From the dropdown menu, choose the ARM-based account you wish to import the data to.
   * If the account ID isn't showing, you can copy and paste the account ID from the Azure portal or from the account list, on the side blade in the Azure Video Analyzer for Media portal.
- 5. Click **Import content**
+ 5. Click **Import content**
![import](./media/create-account/import-steps.png)
All media and content model customizations will be copied from the *trial* accou
The following Azure Media Services related considerations apply:
-* If you plan to connect to an existing Media Services account, make sure the Media Services account was created with the classic APIs.
-
+* If you plan to connect to an existing Media Services account, make sure the Media Services account was created with the classic APIs.
+
  ![Media Services classic API](./media/create-account/enable-classic-api.png)

* If you connect to an existing Media Services account, Video Analyzer for Media doesn't change the existing media **Reserved Units** configuration.
The following Azure Media Services related considerations apply:
* If you connect automatically, Video Analyzer for Media sets the media **Reserved Units** to 10 S3 units:

  ![Media Services reserved units](./media/create-account/ams-reserved-units.png)
-
+
## Automate creation of the Video Analyzer for Media account

Automating the creation of the account is a two-step process:
-
+ 1. Use Azure Resource Manager to create an Azure Media Services account + Azure AD application. See an example of the [Media Services account creation template](https://github.com/Azure-Samples/media-services-v3-arm-templates).
To automate the creation of the account is a two steps process:
To create a paid account via the Video Analyzer for Media portal:
-1. Go to https://videoindexer.ai.azure.us
+1. Go to https://videoindexer.ai.azure.us
1. Log in with your Azure Government Azure AD account.
-1. If you do not have any Video Analyzer for Media accounts in Azure Government that you are an owner or a contributor to, you will get an empty experience from which you can start creating your account.
+1. If you do not have any Video Analyzer for Media accounts in Azure Government that you are an owner or a contributor to, you will get an empty experience from which you can start creating your account.
- The rest of the flow is as described in above , only the regions to select from will be Government regions in which Video Analyzer for Media is available
+   The rest of the flow is as described above; only the regions to select from will be the Government regions in which Video Analyzer for Media is available.
If you are already a contributor or an admin of one or more existing Video Analyzer for Media accounts in Azure Government, you will be taken to that account, and from there you can follow the steps for creating an additional account if needed, as described above.
-
+
### Create new account via the API on Azure Government

To create a paid account in Azure Government, follow the instructions in [Create-Paid-Account](). This API endpoint only includes Government cloud regions.

### Limitations of Video Analyzer for Media on Azure Government
-* No manual content moderation available in Government cloud.
+* No manual content moderation available in Government cloud.
- In the public cloud when content is deemed offensive based on a content moderation, the customer can ask for a human to look at that content and potentially revert that decision.
-* No trial accounts.
-* Bing description - in Gov cloud we will not present a description of celebrities and named entities identified. This is a UI capability only.
+  In the public cloud, when content is deemed offensive based on content moderation, the customer can ask for a human to look at that content and potentially revert that decision.
+* No trial accounts.
+* Bing description - in the Gov cloud, we will not present a description of identified celebrities and named entities. This is a UI capability only.
## Clean up resources
After you are done with this tutorial, delete resources that you are not plannin
If you want to delete a Video Analyzer for Media account, you can delete the account from the Video Analyzer for Media website. To delete the account, you must be the owner.
-Select the account -> **Settings** -> **Delete this account**.
+Select the account -> **Settings** -> **Delete this account**.
The account will be permanently deleted in 90 days.
azure-video-analyzer Considerations When Use At Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-for-media-docs/considerations-when-use-at-scale.md
Title: Things to consider when using Azure Video Analyzer for Media (formerly Vi
description: This topic explains what things to consider when using Azure Video Analyzer for Media (formerly Video Indexer) at scale.
Last updated 11/13/2020

# Things to consider when using Video Analyzer for Media at scale
-When using Azure Video Analyzer for Media (formerly Video Indexer) to index videos and your archive of videos is growing, consider scaling.
+When using Azure Video Analyzer for Media (formerly Video Indexer) to index videos and your archive of videos is growing, consider scaling.
This article answers questions like:
First, it has file size limitations. The size of the byte array file is limited
Second, consider just some of the issues that can affect your performance and hence your ability to scale:
-* Sending files using multi-part means high dependency on your network,
-* service reliability,
-* connectivity,
-* upload speed,
+* Sending files using multi-part means high dependency on your network,
+* service reliability,
+* connectivity,
+* upload speed,
* lost packets somewhere in the world wide web.

:::image type="content" source="./media/considerations-when-use-at-scale/first-consideration.png" alt-text="First consideration for using Video Analyzer for Media at scale":::
When you upload videos using URL, you just need to provide a path to the locatio
To see an example of how to upload videos using URL, check out [this example](upload-index-videos.md#code-sample). Or, you can use [AzCopy](../../storage/common/storage-use-azcopy-v10.md) for a fast and reliable way to get your content to a storage account from which you can submit it to Video Analyzer for Media using [SAS URL](../../storage/common/storage-sas-overview.md). Video Analyzer for Media recommends using *readonly* SAS URLs.
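As a rough illustration of the URL-based flow, here's a minimal hedged sketch of an Upload Video call in C#; the account ID, access token, and SAS URL are placeholders, not values from this article.

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

public class UploadByUrlSample
{
    public static async Task Main()
    {
        // Placeholders: substitute your own account details and SAS URL.
        var location = "trial";                     // or your account's Azure region
        var accountId = "<ACCOUNT_ID>";
        var accessToken = "<ACCOUNT_ACCESS_TOKEN>";
        var videoUrl = Uri.EscapeDataString(
            "https://mystorage.blob.core.windows.net/videos/demo.mp4?sv=<SAS_TOKEN>");

        using var client = new HttpClient();
        // Passing a URL instead of a byte array avoids the multi-part upload issues above.
        var uploadUri = $"https://api.videoindexer.ai/{location}/Accounts/{accountId}/Videos" +
                        $"?name=demo&videoUrl={videoUrl}&accessToken={accessToken}";
        var response = await client.PostAsync(uploadUri, content: null);
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}
```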
-## Automatic Scaling of Media Reserved Units
+## Automatic Scaling of Media Reserved Units
-Starting August 1st 2021, Azure Video Analyzer for Media (formerly Video Indexer) enabled [Reserved Units](../../media-services/latest/concept-media-reserved-units.md)(MRUs) auto scaling by [Azure Media Services](../../media-services/latest/media-services-overview.md) (AMS), as a result you do not need to manage them through Azure Video Analyzer for Media. That will allow price optimization, e.g. price reduction in many cases, based on your business needs as it is being auto scaled.
+Starting August 1st 2021, Azure Video Analyzer for Media (formerly Video Indexer) enabled [Reserved Units](/media-services/latest/concept-media-reserved-units) (MRUs) auto scaling by [Azure Media Services](/media-services/latest/media-services-overview) (AMS). As a result, you do not need to manage them through Azure Video Analyzer for Media. This allows price optimization (for example, price reduction in many cases) based on your business needs, as MRUs are scaled automatically.
## Respect throttling
Video Analyzer for Media is built to deal with indexing at scale, and when you w
We recommend that instead of polling the status of your request constantly from the second you sent the upload request, you can add a [callback URL](upload-index-videos.md#callbackurl), and wait for Video Analyzer for Media to update you. As soon as there is any status change in your upload request, you get a POST notification to the URL you specified.
-You can add a callback URL as one of the parameters of the [upload video API](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video). Check out the code samples in [GitHub repo](https://github.com/Azure-Samples/media-services-video-indexer/tree/master/).
+You can add a callback URL as one of the parameters of the [upload video API](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video). Check out the code samples in [GitHub repo](https://github.com/Azure-Samples/media-services-video-indexer/tree/master/).
For the callback URL, you can also use Azure Functions, a serverless event-driven platform that can be triggered by HTTP, and implement the following flow.
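To make that flow concrete, here's a minimal sketch of such a function, assuming the in-process Azure Functions C# model; the function name and authorization level are our own illustrative choices.

```csharp
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class IndexingCallback
{
    // Video Analyzer for Media calls this endpoint with id and state
    // appended as query parameters whenever the upload's status changes.
    [FunctionName("IndexingCallback")]
    public static IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req,
        ILogger log)
    {
        string videoId = req.Query["id"];
        string state = req.Query["state"];
        log.LogInformation("Video {VideoId} changed state to {State}", videoId, state);

        // Trigger downstream work here, for example fetching the index
        // once state == "Processed".
        return new OkResult();
    }
}
```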
For example, don't set the preset to streaming if you don't plan to watch the
## Index in optimal resolution, not highest resolution
-You might be asking, what video quality do you need for indexing your videos?
+You might be asking, what video quality do you need for indexing your videos?
In many cases, indexing performance has almost no difference between HD (720P) videos and 4K videos. Eventually, you'll get almost the same insights with the same confidence. A higher-quality movie means a larger file size, which leads to more computing power and time needed to upload the video.
azure-video-analyzer Create Video Analyzer For Media Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-for-media-docs/create-video-analyzer-for-media-account.md
To start using Azure Video Analyzer for Media, you will need to create a Video A
![Image of create account](media/create-video-analyzer-for-media-account/create-account-blade.png)
-
+
| Name | Description |
|||
|**Subscription**|Choose the subscription that you are using to create the Video Analyzer for Media account.|
To start using Azure Video Analyzer for Media, you will need to create a Video A
|**Video Analyzer for Media account**|Select the *Create a new account* option.|
|**Resource name**|Enter the name of the new Video Analyzer for Media account. The name can contain letters, numbers, and dashes with no spaces.|
|**Location**|Select the geographic region that will be used to deploy the Video Analyzer for Media account. The location matches the **resource group location** you chose. If you'd like to change the selected location, change the selected resource group or create a new one in the preferred location. [Azure regions in which Video Analyzer for Media is available](https://azure.microsoft.com/global-infrastructure/services/?products=cognitive-services&regions=all)|
-|**Media Services account name**|Select a Media Services that the new Video Analyzer for Media account will use to process the videos. You can select an existing Media Services or you can create a new one. The Media Services must be in the same location you selected.|
+|**Media Services account name**|Select a Media Services that the new Video Analyzer for Media account will use to process the videos. You can select an existing Media Services or you can create a new one. The Media Services must be in the same location you selected.|
|**User-assigned managed identity**|Select a user-assigned managed identity that the new Video Analyzer for Media account will use to access the Media Services. You can select an existing user-assigned managed identity or you can create a new one. The user-assigned managed identity will be assigned the Contributor role on the Media Services.|

1. Click **Review + create** at the bottom of the form.
Learn how to [Upload a video using C#](https://github.com/Azure-Samples/media-se
<!-- links -->
[docs-uami]: ../../active-directory/managed-identities-azure-resources/overview.md
-[docs-ms]: ../../media-services/latest/media-services-overview.md
+[docs-ms]: /media-services/latest/media-services-overview
[docs-role-contributor]: ../../role-based-access-control/built-in-roles.md#contributor
[docs-contributor-on-ms]: ./add-contributor-role-on-the-media-service.md
azure-video-analyzer Deploy With Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-for-media-docs/deploy-with-arm-template.md
Title: Deploy Azure Video Analyzer for Media with ARM template
+ Title: Deploy Azure Video Analyzer for Media with ARM template
description: In this tutorial you will create an Azure Video Analyzer for Media account by using Azure Resource Manager (ARM) template.
Last updated 12/01/2021
-# Tutorial: deploy Azure Video Analyzer for Media with ARM template
+# Tutorial: deploy Azure Video Analyzer for Media with ARM template
## Overview
-In this tutorial you will create an Azure Video Analyzer for Media (formerly Video Indexer) account by using Azure Resource Manager (ARM) template (preview).
+In this tutorial you will create an Azure Video Analyzer for Media (formerly Video Indexer) account by using an Azure Resource Manager (ARM) template (preview).
The resource will be deployed to your subscription and will create the Azure Video Analyzer for Media resource based on parameters defined in the avam.template file.

> [!NOTE]
The resource will be deployed to your subscription and will create the Azure Vid
## Prerequisites
-* An Azure Media Services (AMS) account. You can create one for free through the [Create AMS Account](../../media-services/latest/account-create-how-to.md).
+* An Azure Media Services (AMS) account. You can create one for free through the [Create AMS Account](/media-services/latest/account-create-how-to).
## Deploy the sample
The resource will be deployed to your subscription and will create the Azure Vid
### Option 1: Click the "Deploy To Azure Button", and fill in the missing parameters
-[![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2Fmedia-services-video-indexer%2Fmaster%2FARM-Samples%2FCreate-Account%2Favam.template.json)
+[![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2Fmedia-services-video-indexer%2Fmaster%2FARM-Samples%2FCreate-Account%2Favam.template.json)
-
azure-video-analyzer Odrv Download https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-for-media-docs/odrv-download.md
Last updated 12/17/2021
-# Index your videos stored on OneDrive
+# Index your videos stored on OneDrive
This article shows how to index videos stored on OneDrive by using the Azure Video Analyzer for Media (formerly Video Indexer) website.

## Supported file formats
-For a list of file formats that you can use with Video Analyzer for Media, see [Standard Encoder formats and codecs](../../media-services/latest/encode-media-encoder-standard-formats-reference.md).
-
+For a list of file formats that you can use with Video Analyzer for Media, see [Standard Encoder formats and codecs](/media-services/latest/encode-media-encoder-standard-formats-reference).
+
## Index a video by using the website

1. Sign into the [Video Analyzer for Media](https://www.videoindexer.ai/) website, and then select **Upload**.
For a list of file formats that you can use with Video Analyzer for Media, see [
   > [!div class="mx-imgBorder"]
   > :::image type="content" source="./media/video-indexer-get-started/video-indexer-upload.png" alt-text="Screenshot that shows the Upload button.":::
-1. Click on **enter a file URL** button
+1. Click the **enter a file URL** button.
   > [!div class="mx-imgBorder"]
   > :::image type="content" source="./media/video-indexer-get-started/avam-enter-file-url.png" alt-text="Screenshot that shows the enter file URL button.":::
For a list of file formats that you can use with Video Analyzer for Media, see [
1. Copy the embed code and extract only the URL part including the key. For example: `https://onedrive.live.com/embed?cid=5BC591B7C713B04F&resid=5DC518B6B713C40F%2110126&authkey=HnsodidN_50oA3lLfk`
-
+   Replace **embed** with **download**. You will now have a URL that looks like this:
-
+
   `https://onedrive.live.com/download?cid=5BC591B7C713B04F&resid=5DC518B6B713C40F%2110126&authkey=HnsodidN_50oA3lLfk`

1. Now enter this URL in the Azure Video Analyzer for Media portal in the URL field.
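If you script this step, the rewrite is a single string replacement. Here's a tiny hedged helper; the class and method names are our own, and the URL is the documented example above.

```csharp
using System;

public static class OneDriveLinks
{
    // The share link differs only in its path segment: embed -> download.
    public static string ToDownloadUrl(string embedUrl) =>
        embedUrl.Replace("/embed?", "/download?");

    public static void Main()
    {
        var embed = "https://onedrive.live.com/embed?cid=5BC591B7C713B04F&resid=5DC518B6B713C40F%2110126&authkey=HnsodidN_50oA3lLfk";
        // Prints the same URL with /download? in place of /embed?
        Console.WriteLine(ToDownloadUrl(embed));
    }
}
```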
Once Video Analyzer for Media is done analyzing, you will receive an email with
## Upload and index a video by using the API
-You can use the [Upload Video](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video) API to upload and index your videos based on a URL. The code sample that follows includes the commented-out code that shows how to upload the byte array.
+You can use the [Upload Video](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video) API to upload and index your videos based on a URL. The code sample that follows includes the commented-out code that shows how to upload the byte array.
### Configurations and parameters

This section describes some of the optional parameters and when to set them. For the most up-to-date info about parameters, see the [Video Analyzer for Media portal](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video).
-#### externalID
+#### externalID
Use this parameter to specify an ID that will be associated with the video. The ID can be applied to integration into an external video content management (VCM) system. The videos that are in the Video Analyzer for Media portal can be searched via the specified external ID.

#### callbackUrl
-Use this parameter to specify a callback URL.
+Use this parameter to specify a callback URL.
[!INCLUDE [callback url](./includes/callback-url.md)]
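As a hedged sketch of how these optional parameters ride along on the upload request, the helper below builds a query string; the external ID and callback address are illustrative placeholders, not values from this article.

```csharp
using System;

public static class UploadQuery
{
    // Builds an Upload Video query string carrying the optional parameters
    // described above; externalId and callbackUrl values are placeholders.
    public static string Build(string videoUrl, string accessToken) =>
        "name=demo" +
        "&videoUrl=" + Uri.EscapeDataString(videoUrl) +
        "&externalId=" + Uri.EscapeDataString("vcm-12345") +
        "&callbackUrl=" + Uri.EscapeDataString("https://contoso.example/vi-callback") +
        "&accessToken=" + accessToken;
}
```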
Use this parameter to define an AI bundle that you want to apply on your audio o
- `BasicAudio`: Index and extract insights by using audio only (ignoring video). Include only basic audio features (transcription, translation, formatting of output captions and subtitles).
- `AdvancedAudio`: Index and extract insights by using audio only (ignoring video). Include advanced audio features (such as audio event detection) in addition to the standard audio analysis.
- `AdvancedVideo`: Index and extract insights by using video only (ignoring audio). Include advanced video features (such as observed people tracing) in addition to the standard video analysis.
-- `AdvancedVideoAndAudio`: Index and extract insights by using both advanced audio and advanced video analysis.
+- `AdvancedVideoAndAudio`: Index and extract insights by using both advanced audio and advanced video analysis.
> [!NOTE]
-> The preceding advanced presets include models that are in public preview. When these models reach general availability, there might be implications for the price.
+> The preceding advanced presets include models that are in public preview. When these models reach general availability, there might be implications for the price.
Video Analyzer for Media covers up to two tracks of audio. If the file has more audio tracks, they're treated as one track. If you want to index the tracks separately, you need to extract the relevant audio file and index it as `AudioOnly`.
This parameter is supported only for paid accounts.
#### streamingPreset
-After your video is uploaded, Video Analyzer for Media optionally encodes the video. It then proceeds to indexing and analyzing the video. When Video Analyzer for Media is done analyzing, you get a notification with the video ID.
+After your video is uploaded, Video Analyzer for Media optionally encodes the video. It then proceeds to indexing and analyzing the video. When Video Analyzer for Media is done analyzing, you get a notification with the video ID.
-When you're using the [Upload Video](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video) or [Re-Index Video](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Re-Index-Video) API, one of the optional parameters is `streamingPreset`. If you set `streamingPreset` to `Default`, `SingleBitrate`, or `AdaptiveBitrate`, the encoding process is triggered.
+When you're using the [Upload Video](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video) or [Re-Index Video](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Re-Index-Video) API, one of the optional parameters is `streamingPreset`. If you set `streamingPreset` to `Default`, `SingleBitrate`, or `AdaptiveBitrate`, the encoding process is triggered.
After the indexing and encoding jobs are done, the video is published so you can also stream your video. The streaming endpoint from which you want to stream the video must be in the **Running** state. For `SingleBitrate`, the standard encoder cost will apply for the output. If the video height is greater than or equal to 720, Video Analyzer for Media encodes it as 1280 x 720. Otherwise, it's encoded as 640 x 468.
-The default setting is [content-aware encoding](../../media-services/latest/encode-content-aware-concept.md).
+The default setting is [content-aware encoding](/media-services/latest/encode-content-aware-concept).
If you only want to index your video and not encode it, set `streamingPreset` to `NoStreaming`.
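For instance, an index-only upload might pass the preset like this; a hedged fragment with placeholder values (see the full samples below for the complete call):

```csharp
using System;

// Placeholders: substitute your own account details.
var location = "trial";
var accountId = "<ACCOUNT_ID>";
var accessToken = "<ACCOUNT_ACCESS_TOKEN>";
var encodedVideoUrl = Uri.EscapeDataString("https://example.com/demo.mp4");

// streamingPreset=NoStreaming indexes the video without triggering encoding.
var uploadUri = $"https://api.videoindexer.ai/{location}/Accounts/{accountId}/Videos" +
                $"?name=demo&videoUrl={encodedVideoUrl}&streamingPreset=NoStreaming&accessToken={accessToken}";
Console.WriteLine(uploadUri);
```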
The following C# code snippets demonstrate the usage of all the Video Analyzer f
After you copy the following code into your development platform, you'll need to provide two parameters:
-* API key (`apiKey`): Your personal API management subscription key. It allows you to get an access token in order to perform operations on your Video Analyzer for Media account.
+* API key (`apiKey`): Your personal API management subscription key. It allows you to get an access token in order to perform operations on your Video Analyzer for Media account.
To get your API key:
After you copy the following code into your development platform, you'll need to
* Video URL (`videoUrl`): A URL of the video or audio file to be indexed. Here are the requirements:
- - The URL must point at a media file. (HTML pages are not supported.)
+ - The URL must point at a media file. (HTML pages are not supported.)
- The file can be protected by an access token that's provided as part of the URI. The endpoint that serves the file must be secured with TLS 1.2 or later.
- - The URL must be encoded.
+ - The URL must be encoded.
-The result of successfully running the code sample includes an insight widget URL and a player widget URL. They allow you to examine the insights and the uploaded video, respectively.
+The result of successfully running the code sample includes an insight widget URL and a player widget URL. They allow you to examine the insights and the uploaded video, respectively.
```csharp
public async Task Sample()
        HttpResponseMessage result = await client.GetAsync($"{apiUrl}/auth/trial/Accounts?{queryParams}");
        var json = await result.Content.ReadAsStringAsync();
        var accounts = JsonConvert.DeserializeObject<AccountContractSlim[]>(json);
-
- // Take the relevant account. Here we simply take the first.
+
+ // Take the relevant account. Here we simply take the first.
// You can also get the account via accounts.First(account => account.Id == <GUID>); var accountInfo = accounts.First();
public class AccountContractSlim
### [Azure Resource Manager account](#tab/with-arm-account-account/)
-After you copy this C# project into your development platform, you need to take the following steps:
+After you copy this C# project into your development platform, you need to take the following steps:
1. Go to Program.cs and populate:
namespace VideoIndexerArm
            Console.WriteLine($"account id: {accountId}");
            Console.WriteLine($"account location: {accountLocation}");
- // Get account-level access token for Azure Video Analyzer for Media
+ // Get account-level access token for Azure Video Analyzer for Media
            var accessTokenRequest = new AccessTokenRequest
            {
                PermissionType = AccessTokenPermission.Contributor,
namespace VideoIndexerArm
        [JsonPropertyName("projectId")]
        public string ProjectId { get; set; }
-
+
        [JsonPropertyName("videoId")]
        public string VideoId { get; set; }
    }
The upload operation might return the following status codes:
|429||Trial accounts are allowed 5 uploads per minute. Paid accounts are allowed 50 uploads per minute.|

## Uploading considerations and limitations
-
+
- The name of a video must be no more than 80 characters.
- When you're uploading a video based on the URL (preferred), the endpoint must be secured with TLS 1.2 or later.
- The upload size with the URL option is limited to 30 GB.
The upload operation might return the following status codes:
- The URL provided in the `videoURL` parameter must be encoded.
- Indexing Media Services assets has the same limitation as indexing from a URL.
- Video Analyzer for Media has a duration limit of 4 hours for a single file.
-
+- The URL must be accessible (for example, a public URL).
  If it's a private URL, the access token must be provided in the request.
- The URL must point to a valid media file and not to a webpage, such as a link to the `www.youtube.com` page.
The upload operation might return the following status codes:
> [!Tip]
> We recommend that you use .NET Framework version 4.6.2 or later, because older .NET Framework versions don't default to TLS 1.2.
>
-> If you must use an older .NET Framework version, add one line to your code before making the REST API call:
+> If you must use an older .NET Framework version, add one line to your code before making the REST API call:
> > `System.Net.ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls | SecurityProtocolType.Tls11 | SecurityProtocolType.Tls12;`
azure-video-analyzer Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-for-media-docs/release-notes.md
Added new code samples including HTTP calls to use Video Analyzer for Media crea
### Improved audio effects detection
-The audio effects detection capability was improved to have a better detection rate over the following classes:
+The audio effects detection capability was improved to have a better detection rate over the following classes:
-* Crowd reactions (cheering, clapping, and booing),
-* Gunshot or explosion,
+* Crowd reactions (cheering, clapping, and booing),
+* Gunshot or explosion,
* Laughter

For more information, see [Audio effects detection](audio-effects-detection.md).

### New source languages support for STT, translation, and search on the website
-
-Video Analyzer for Media introduces source languages support for STT (speech-to-text), translation, and search in Hebrew (he-IL), Portuguese (pt-PT), and Persian (fa-IR) on the [Video Analyzer for Media](https://www.videoindexer.ai/) website.
+
+Video Analyzer for Media introduces source languages support for STT (speech-to-text), translation, and search in Hebrew (he-IL), Portuguese (pt-PT), and Persian (fa-IR) on the [Video Analyzer for Media](https://www.videoindexer.ai/) website.
It means transcription, translation, and search features are also supported for these languages in Video Analyzer for Media web applications and widgets.
-## December 2021
-
+## December 2021
+
### The projects feature is now GA

The projects feature is now GA and ready for productive use. There is no pricing impact related to the "Preview to GA" transition. See [Add video clips to your projects](use-editor-create-project.md).
-
-### New source languages support for STT, translation, and search on API level
+
+### New source languages support for STT, translation, and search on API level
+Video Analyzer for Media introduces source languages support for STT (speech-to-text), translation, and search in Hebrew (he-IL), Portuguese (pt-PT), and Persian (fa-IR) on the API level.

### Matched person detection capability
-When indexing a video through our advanced video settings, you can view the new matched person detection capability. If there are people observed in your media file, you can now view the specific person who matched each of them through the media player.
+When indexing a video through our advanced video settings, you can view the new matched person detection capability. If there are people observed in your media file, you can now view the specific person who matched each of them through the media player.
## November 2021
-
+
### Public preview of Video Analyzer for Media account management based on ARM

Azure Video Analyzer for Media introduces a public preview of Azure Resource Manager (ARM) based account management. You can leverage ARM-based Video Analyzer for Media APIs to create, edit, and delete an account from the [Azure portal](https://portal.azure.com/#home).

> [!NOTE]
-> The Government cloud includes support for CRUD ARM based accounts from Video Analyzer for Media API and from the Azure portal.
->
+> The Government cloud includes support for CRUD ARM based accounts from Video Analyzer for Media API and from the Azure portal.
+>
> There is currently no support from the Video Analyzer for Media [website](https://www.videoindexer.ai).

For more information, go to [create a Video Analyzer for Media account](https://techcommunity.microsoft.com/t5/azure-ai/azure-video-analyzer-for-media-is-now-available-as-an-azure/ba-p/2912422).

### People's clothing detection
-When indexing a video through the advanced video settings, you can view the new **PeopleΓÇÖs clothing detection** capability. If there are people detected in your media file, you can now view the clothing type they are wearing through the media player.
+When indexing a video through the advanced video settings, you can view the new **People's clothing detection** capability. If there are people detected in your media file, you can now view the clothing type they are wearing through the media player.
### Face bounding box (preview)
There is now an option to re-index video or audio files that have failed during
Fixed bugs related to CSS, theming, and accessibility:

* high contrast
-* account settings and insights views in the [portal](https://www.videoindexer.ai).
+* account settings and insights views in the [portal](https://www.videoindexer.ai).
## July 2021

### Automatic Scaling of Media Reserved Units
-
-Starting August 1st 2021, Azure Video Analyzer for Media (formerly Video Indexer) enabled [Media Reserved Units (MRUs)](../../media-services/latest/concept-media-reserved-units.md) auto scaling by [Azure Media Services](../../media-services/latest/media-services-overview.md), as a result you do not need to manage them through Azure Video Analyzer for Media. That will allow price optimization, for example price reduction in many cases, based on your business needs as it is being auto scaled.
+
+Starting August 1st 2021, Azure Video Analyzer for Media (formerly Video Indexer) enabled [Media Reserved Units (MRUs)](/media-services/latest/concept-media-reserved-units) auto scaling by [Azure Media Services](/media-services/latest/media-services-overview). As a result, you do not need to manage them through Azure Video Analyzer for Media. This allows price optimization (for example, price reduction in many cases) based on your business needs, as MRUs are scaled automatically.
## June 2021
-
+
### Video Analyzer for Media deployed in six new regions
-
+
You can now create a Video Analyzer for Media paid account in France Central, Central US, Brazil South, West Central US, Korea Central, and Japan West regions.
-
+
## May 2021

### New source languages support for speech-to-text (STT), translation, and search
-Video Analyzer for Media now supports STT, translation, and search in Chinese (Cantonese) ('zh-HK'), Dutch (Netherlands) ('Nl-NL'), Czech ('Cs-CZ'), Polish ('Pl-PL'), Swedish (Sweden) ('Sv-SE'), Norwegian('nb-NO'), Finnish('fi-FI'), Canadian French ('fr-CA'), Thai('th-TH'),
-Arabic: (United Arab Emirates) ('ar-AE', 'ar-EG'), (Iraq) ('ar-IQ'), (Jordan) ('ar-JO'), (Kuwait) ('ar-KW'), (Lebanon) ('ar-LB'), (Oman) ('ar-OM'), (Qatar) ('ar-QA'), (Palestinian Authority) ('ar-PS'), (Syria) ('ar-SY'), and Turkish('tr-TR').
+Video Analyzer for Media now supports STT, translation, and search in Chinese (Cantonese) ('zh-HK'), Dutch (Netherlands) ('Nl-NL'), Czech ('Cs-CZ'), Polish ('Pl-PL'), Swedish (Sweden) ('Sv-SE'), Norwegian('nb-NO'), Finnish('fi-FI'), Canadian French ('fr-CA'), Thai('th-TH'),
+Arabic: (United Arab Emirates) ('ar-AE', 'ar-EG'), (Iraq) ('ar-IQ'), (Jordan) ('ar-JO'), (Kuwait) ('ar-KW'), (Lebanon) ('ar-LB'), (Oman) ('ar-OM'), (Qatar) ('ar-QA'), (Palestinian Authority) ('ar-PS'), (Syria) ('ar-SY'), and Turkish('tr-TR').
These languages are available in both API and Video Analyzer for Media website. Select the language from the combobox under **Video source language**. ### New theme for Azure Video Analyzer for Media New theme is available: 'Azure' along with the 'light' and 'dark themes. To select a theme, click on the gear icon in the top-right corner of the website, find themes under **User settings**.
-
-### New open-source code you can leverage
+
+### New open-source code you can leverage
Three new Git-Hub projects are available at our [GitHub repository](https://github.com/Azure-Samples/media-services-video-indexer): * Code to help you leverage the newly added [widget customization](https://github.com/Azure-Samples/media-services-video-indexer/tree/master/Embedding%20widgets). * Solution to help you add [custom search](https://github.com/Azure-Samples/media-services-video-indexer/tree/master/VideoSearchWithAutoMLVision) to your video libraries. * Solution to help you add [de-duplication](https://github.com/Azure-Samples/media-services-video-indexer/commit/6b828f598f5bf61ce1b6dbcbea9e8b87ba11c7b1) to your video libraries.
-
-### New option to toggle bounding boxes (for observed people) on the player
+
+### New option to toggle bounding boxes (for observed people) on the player
When indexing a video through our advanced video settings, you can view our new observed people capabilities. If there are people detected in your media file, you can enable a bounding box on the detected person through the media player.
When indexing a video through our advanced video settings, you can view our new
The Video Indexer service was renamed to Azure Video Analyzer for Media.

### Improved upload experience in the portal
-
Video Analyzer for Media has a new upload experience in the [portal](https://www.videoindexer.ai). To upload your media file, press the **Upload** button from the **Media files** tab.

### New developer portal is available in gov-cloud
-
+ [Video Analyzer for Media Developer Portal](https://api-portal.videoindexer.ai) is now also available in Azure for US Government.
-### Observed people tracing (preview)
+### Observed people tracing (preview)
-Azure Video Analyzer for Media now detects observed people in videos and provides information such as the location of the person in the video frame and the exact timestamp (start, end) when a person appears. The API returns the bounding box coordinates (in pixels) for each person instance detected, including its confidence.
+Azure Video Analyzer for Media now detects observed people in videos and provides information such as the location of the person in the video frame and the exact timestamp (start, end) when a person appears. The API returns the bounding box coordinates (in pixels) for each person instance detected, including its confidence.
-For example, if a video contains a person, the detect operation will list the person appearances together with their coordinates in the video frames. You can use this functionality to determine the person path in a video. It also lets you determine whether there are multiple instances of the same person in a video.
+For example, if a video contains a person, the detect operation will list the person appearances together with their coordinates in the video frames. You can use this functionality to determine the person path in a video. It also lets you determine whether there are multiple instances of the same person in a video.
-The newly added observed people tracing feature is available when indexing your file by choosing the **Advanced option** -> **Advanced video** or **Advanced video + audio** preset (under Video + audio indexing). Standard and basic indexing presets will not include this new advanced model.
+The newly added observed people tracing feature is available when indexing your file by choosing the **Advanced option** -> **Advanced video** or **Advanced video + audio** preset (under Video + audio indexing). Standard and basic indexing presets will not include this new advanced model.
-When you choose to see Insights of your video on the Video Analyzer for Media website, the Observed People Tracing will show up on the page with all detected people thumbnails. You can choose a thumbnail of a person and see where the person appears in the video player.
+When you choose to see Insights of your video on the Video Analyzer for Media website, the Observed People Tracing will show up on the page with all detected people thumbnails. You can choose a thumbnail of a person and see where the person appears in the video player.
The feature is also available in the JSON file generated by Video Analyzer for Media. For more information, see [Trace observed people in a video](observed-people-tracing.md).
You can now see the detected acoustic events in the closed captions file. The fi
## March 2021
-### Audio analysis
+### Audio analysis
Audio analysis is now available in an additional new bundle of audio features, at a different price point. The new **Basic Audio** analysis preset provides a low-cost option to only extract speech transcription, translation, and format output captions and subtitles. The **Basic Audio** preset will produce two separate meters on your bill, including a line for transcription and a separate line for caption and subtitle formatting. For more information on pricing, see the [Media Services pricing](https://azure.microsoft.com/pricing/details/media-services/) page. The newly added bundle is available when indexing or re-indexing your file by choosing the **Advanced option** -> **Basic Audio** preset (under the **Video + audio indexing** drop-down box).
-### New developer portal
+### New developer portal
+
+Video Analyzer for Media has a new [Developer Portal](https://api-portal.videoindexer.ai/), try out the new Video Analyzer for Media APIs and find all the relevant resources in one place: [GitHub repository](https://github.com/Azure-Samples/media-services-video-indexer), [Stack overflow](https://stackoverflow.com/questions/tagged/video-indexer), [Video Analyzer for Media tech community](https://techcommunity.microsoft.com/t5/azure-media-services/bg-p/AzureMediaServices/label-name/Video%20Indexer) with relevant blog posts, [Video Analyzer for Media FAQs](faq.yml), [User Voice](https://feedback.azure.com/d365community/forum/09041fae-0b25-ec11-b6e6-000d3a4f0858) to provide your feedback and suggest features, and ['CodePen' link](https://codepen.io/videoindexer) with widgets code samples.
-Video Analyzer for Media has a new [Developer Portal](https://api-portal.videoindexer.ai/), try out the new Video Analyzer for Media APIs and find all the relevant resources in one place: [GitHub repository](https://github.com/Azure-Samples/media-services-video-indexer), [Stack overflow](https://stackoverflow.com/questions/tagged/video-indexer), [Video Analyzer for Media tech community](https://techcommunity.microsoft.com/t5/azure-media-services/bg-p/AzureMediaServices/label-name/Video%20Indexer) with relevant blog posts, [Video Analyzer for Media FAQs](faq.yml), [User Voice](https://feedback.azure.com/d365community/forum/09041fae-0b25-ec11-b6e6-000d3a4f0858) to provide your feedback and suggest features, and ['CodePen' link](https://codepen.io/videoindexer) with widgets code samples.
-
-### Advanced customization capabilities for insight widget
+### Advanced customization capabilities for insight widget
-SDK is now available to embed Video Analyzer for Media's insights widget in your own service and customize its style and data. The SDK supports the standard Video Analyzer for Media insights widget and a fully customizable insights widget. Code sample is available in [Video Analyzer for Media GitHub repository](https://github.com/Azure-Samples/media-services-video-indexer/tree/master/Embedding%20widgets/widget-customization). With this advanced customization capabilities, solution developer can apply custom styling and bring customerΓÇÖs own AI data and present that in the insight widget (with or without Video Analyzer for Media insights).
+SDK is now available to embed Video Analyzer for Media's insights widget in your own service and customize its style and data. The SDK supports the standard Video Analyzer for Media insights widget and a fully customizable insights widget. A code sample is available in the [Video Analyzer for Media GitHub repository](https://github.com/Azure-Samples/media-services-video-indexer/tree/master/Embedding%20widgets/widget-customization). With these advanced customization capabilities, solution developers can apply custom styling and bring the customer's own AI data and present it in the insights widget (with or without Video Analyzer for Media insights).
-### Video Analyzer for Media deployed in the US North Central , US West and Canada Central
+### Video Analyzer for Media deployed in the US North Central, US West, and Canada Central regions
You can now create a Video Analyzer for Media paid account in the US North Central, US West, and Canada Central regions.
-
-### New source languages support for speech-to-text (STT), translation and search
-Video Analyzer for Media now support STT, translation and search in Danish ('da-DK'), Norwegian('nb-NO'), Swedish('sv-SE'), Finnish('fi-FI'), Canadian French ('fr-CA'), Thai('th-TH'), Arabic ('ar-BH', 'ar-EG', 'ar-IQ', 'ar-JO', 'ar-KW', 'ar-LB', 'ar-OM', 'ar-QA', 'ar-S', and 'ar-SY'), and Turkish('tr-TR'). Those languages are available in both API and Video Analyzer for Media website.
-
-### Search by Topic in Video Analyzer for Media Website
+### New source languages support for speech-to-text (STT), translation and search
-You can now use the search feature, at the top of the [Video Analyzer for Media website](https://www.videoindexer.ai/account/login) page, to search for videos with specific topics.
+Video Analyzer for Media now supports STT, translation, and search in Danish ('da-DK'), Norwegian ('nb-NO'), Swedish ('sv-SE'), Finnish ('fi-FI'), Canadian French ('fr-CA'), Thai ('th-TH'), Arabic ('ar-BH', 'ar-EG', 'ar-IQ', 'ar-JO', 'ar-KW', 'ar-LB', 'ar-OM', 'ar-QA', 'ar-S', and 'ar-SY'), and Turkish ('tr-TR'). These languages are available in both the API and the Video Analyzer for Media website.
+
+### Search by Topic in Video Analyzer for Media Website
+
+You can now use the search feature, at the top of the [Video Analyzer for Media website](https://www.videoindexer.ai/account/login) page, to search for videos with specific topics.
## February 2021
-### Multiple account owners
+### Multiple account owners
Account owner role was added to Video Analyzer for Media. You can add, change, and remove users, and change their roles. For details on how to share an account, see [Invite users](invite-users.md).

### Audio event detection (public preview)

> [!NOTE]
-> This feature is only available in trial accounts.
+> This feature is only available in trial accounts.
-Video Analyzer for Media now detects the following audio effects in the non-speech segments of the content: gunshot, glass shatter, alarm, siren, explosion, dog bark, screaming, laughter, crowd reactions (cheering, clapping, and booing) and Silence.
+Video Analyzer for Media now detects the following audio effects in the non-speech segments of the content: gunshot, glass shatter, alarm, siren, explosion, dog bark, screaming, laughter, crowd reactions (cheering, clapping, and booing) and Silence.
-The newly added audio affects feature is available when indexing your file by choosing the **Advanced option** -> **Advanced audio** preset (under Video + audio indexing). Standard indexing will only include **silence** and **crowd reaction**.
+The newly added audio effects feature is available when indexing your file by choosing the **Advanced option** -> **Advanced audio** preset (under Video + audio indexing). Standard indexing will only include **silence** and **crowd reaction**.
The **clapping** event type, which was included in the previous audio effects model, is now extracted as part of the **crowd reaction** event type.
When you choose to see **Insights** of your video on the [Video Analyzer for Med
:::image type="content" source="./media/release-notes/audio-detection.png" alt-text="Audio event detection":::
-### Named entities enhancement
+### Named entities enhancement
-The extracted list of people and location was extended and updated in general.
+The extracted list of people and location was extended and updated in general.
-In addition, the model now includes people and locations in-context which are not famous, like a ΓÇÿSamΓÇÖ or ΓÇÿHomeΓÇÖ in the video.
+In addition, the model now includes people and locations in context which are not famous, like a 'Sam' or 'Home' in the video.
## January 2021
-### Video Analyzer for Media is deployed on US Government cloud
+### Video Analyzer for Media is deployed on US Government cloud
-You can now create a Video Analyzer for Media paid account on US government cloud in Virginia and Arizona regions.
-Video Analyzer for Media free trial offering isn't available in the mentioned region. For more information go to Video Analyzer for Media Documentation.
+You can now create a Video Analyzer for Media paid account on US government cloud in Virginia and Arizona regions.
+Video Analyzer for Media free trial offering isn't available in the mentioned regions. For more information, go to the Video Analyzer for Media documentation.
-### Video Analyzer for Media deployed in the India Central region
+### Video Analyzer for Media deployed in the India Central region
-You can now create a Video Analyzer for Media paid account in the India Central region.
+You can now create a Video Analyzer for Media paid account in the India Central region.
### New Dark Mode for the Video Analyzer for Media website experience
-The Video Analyzer for Media website experiences is now available in dark mode.
-To enable the dark mode open the settings panel and toggle on the **Dark Mode** option.
+The Video Analyzer for Media website experience is now available in dark mode.
+To enable dark mode, open the settings panel and toggle on the **Dark Mode** option.
:::image type="content" source="./media/release-notes/dark-mode.png" alt-text="Dark mode setting":::
You can now create a Video Analyzer for Media paid account in the Switzerland We
## October 2020
-### Animated character identification improvements
+### Animated character identification improvements
Video Analyzer for Media supports detection, grouping, and recognition of characters in animated content via integration with Cognitive Services Custom Vision. We added a major improvement to this AI algorithm for detection and character recognition; as a result, insight accuracy and identified characters are significantly improved.
Starting March 1st 2021, you no longer will be able to sign up and sign in to th
You will be able to sign up and sign in using one of these providers: Azure AD, Microsoft, and Google.

> [!NOTE]
-> The Video Analyzer for Media accounts connected to LinkedIn and Facebook will not be accessible after March 1st 2021.
->
+> The Video Analyzer for Media accounts connected to LinkedIn and Facebook will not be accessible after March 1st 2021.
+>
> You should [invite](invite-users.md) an Azure AD, Microsoft, or Google email you own to the Video Analyzer for Media account so you will still have access. You can add an additional owner of supported providers, as described in [invite](invite-users.md).
> Alternatively, you can create a paid account and migrate the data.
You will be able to sign up and sign in using one of these providers: Azure AD,
### Mobile design for the Video Analyzer for Media website
-The Video Analyzer for Media website experience now supports mobile devices. The user experience is responsive and adapts to your mobile screen size (excluding customization UIs).
+The Video Analyzer for Media website experience now supports mobile devices. The user experience is responsive and adapts to your mobile screen size (excluding customization UIs).
-### Accessibility improvements and bug fixes
+### Accessibility improvements and bug fixes
-As part of WCAG (Web Content Accessibility Guidelines), the Video Analyzer for Media website experience is aligned with grade C of the Microsoft Accessibility standards. Several bugs and improvements related to keyboard navigation, programmatic access, and screen reader support were resolved.
+As part of WCAG (Web Content Accessibility Guidelines), the Video Analyzer for Media website experience is aligned with grade C of the Microsoft Accessibility standards. Several bugs and improvements related to keyboard navigation, programmatic access, and screen reader support were resolved.
## July 2020
Side panel is also used for user preferences and help.
You can now use the search API to search for videos with specific topics (API only).
-Topics has been added to the `textScope` (optional) parameter. See [API](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Search-Videos) for details.
+Topics has been added to the `textScope` (optional) parameter. See [API](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Search-Videos) for details.
### Labels enhancement
The label tagger was upgraded and now includes more visual labels that can be id
### Video Analyzer for Media deployed in the East US

You can now create a Video Analyzer for Media paid account in the East US region.
-
+
### Video Analyzer for Media URL

Video Analyzer for Media regional endpoints were all unified to start only with www. No action item is required.
The **Insights** widget includes new parameters: `language` and `control`.
The **Player** widget has a new `locale` parameter. Both `locale` and `language` parameters control the player's language.
-For more information, see the [widget types](video-indexer-embed-widgets.md#widget-types) section.
+For more information, see the [widget types](video-indexer-embed-widgets.md#widget-types) section.
### New player skin
A new player skin launched with updated design.
* [Get-Account](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Account)
* [Get-Accounts-Authorization](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Accounts-Authorization)
* [Get-Accounts-With-Token](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Accounts-With-Token)
-
+ The Account object has a `Url` field pointing to the location of the [Video Analyzer for Media website](https://www.videoindexer.ai/). For paid accounts the `Url` field is currently pointing to an internal URL instead of the public website. In the coming weeks we will change it and return the [Video Analyzer for Media website](https://www.videoindexer.ai/) URL for all accounts (trial and paid).
In the coming weeks we will change it and return the [Video Analyzer for Media w
* Replacing the URL with a URL pointing to the Video Analyzer for Media widget APIs (for example, the [insights widget](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Video-Insights-Widget))
* Using the Video Analyzer for Media website to generate a new embedded URL:
-
+
Press **Play** to get to your video's page -> click the **&lt;/&gt; Embed** button -> copy the URL into your application:
-
+
The regional URLs are not supported and will be blocked in the coming weeks.

## January 2020
-
+
### Custom language support for additional languages

Video Analyzer for Media now supports custom language models for `ar-SY`, `en-UK`, and `en-AU` (API only).
-
+
### Delete account timeframe action update

Delete account action now deletes the account within 90 days instead of 48 hours.
-
+
### New Video Analyzer for Media GitHub repository

A new Video Analyzer for Media GitHub repository, with different projects, getting-started guides, and code samples, is now available: https://github.com/Azure-Samples/media-services-video-indexer
-
+
### Swagger update

Video Analyzer for Media unified **authentications** and **operations** into a single [Video Analyzer for Media OpenAPI Specification (swagger)](https://api-portal.videoindexer.ai/api-details#api=Operations&operation). Developers can find the APIs in [Video Analyzer for Media Developer Portal](https://api-portal.videoindexer.ai/).
Update a specific section in the transcript using the [Update-Video-Index](https
### Fix account configuration from the Video Analyzer for Media portal
-You can now update the Media Services connection configuration yourself to resolve issues like:
+You can now update the Media Services connection configuration yourself to resolve issues like:
* incorrect Azure Media Services resource
* password changes
-* Media Services resources were moved between subscriptions
+* Media Services resources were moved between subscriptions
To fix the account configuration, in the Video Analyzer for Media portal, navigate to Settings > Account tab (as owner).

### Configure the custom vision account
-Configure the custom vision account on paid accounts using the Video Analyzer for Media portal (previously, this was only supported by API). To do that, sign in to the Video Analyzer for Media portal, choose Model Customization > Animated characters > Configure.
+Configure the custom vision account on paid accounts using the Video Analyzer for Media portal (previously, this was only supported by API). To do that, sign in to the Video Analyzer for Media portal, choose Model Customization > Animated characters > Configure.
### Scenes, shots, and keyframes: now in one insight pane
-Scenes, shots, and keyframes are now merged into one insight for easier consumption and navigation. When you select the desired scene you can see what shots and keyframes it consists of.
+Scenes, shots, and keyframes are now merged into one insight for easier consumption and navigation. When you select the desired scene you can see what shots and keyframes it consists of.
### Notification about a long video name
When streaming endpoint is disabled, Video Analyzer for Media will show a descri
Status code 409 will now be returned from the [Re-Index Video](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Re-Index-Video) and [Update Video Index](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Update-Video-Index) APIs if a video is actively being indexed, to prevent accidentally overriding the current re-index changes.

## November 2019
-
+ * Korean custom language models support
- Video Analyzer for Media now supports custom language models in Korean (`ko-KR`) in both the API and portal.
+ Video Analyzer for Media now supports custom language models in Korean (`ko-KR`) in both the API and portal.
* New languages supported for speech-to-text (STT)

    Video Analyzer for Media APIs now support STT in Arabic Levantine (ar-SY), English UK dialect (en-GB), and English Australian dialect (en-AU).
-
+
    For video upload, we replaced zh-HANS with zh-CN. Both are supported, but zh-CN is recommended and more accurate.
-
+ ## October 2019
-
* Search for animated characters in the gallery

    When indexing animated characters, you can now search for them in the account's video gallery. For more information, see [Animated characters recognition](animated-characters-recognition.md).

## September 2019
-
+ Multiple advancements announced at IBC 2019:
-
* Animated character recognition (public preview)

    Ability to detect, group, and recognize characters in animated content, via integration with Custom Vision. For more information, see [Animated character detection](animated-characters-recognition.md).
Multiple advancements announced at IBC 2019:
Tagging of shots with editorial types such as close up, medium shot, two shot, indoor, outdoor, and so on. For more information, see [Editorial shot type detection](scenes-shots-keyframes.md#editorial-shot-type-detection).

* Topic inferencing enhancement - now covering level 2
-
+
    The topic inferencing model now supports deeper granularity of the IPTC taxonomy. Read full details at [Azure Media Services new AI-powered innovation](https://azure.microsoft.com/blog/azure-media-services-new-ai-powered-innovation/).

## August 2019
-
+
### Video Analyzer for Media deployed in UK South

You can now create a Video Analyzer for Media paid account in the UK South region.
Video Analyzer for Media identifies named locations and people via natural langu
### Keyframes extraction in native resolution

Keyframes extracted by Video Analyzer for Media are available in the original resolution of the video.
-
+
### GA for training custom face models from images

Training faces from images moved from Preview mode to GA (available via API and in the portal).
Training faces from images moved from Preview mode to GA (available via API and
### Hide gallery toggle option

Users can choose to hide the gallery tab from the portal (similar to hiding the samples tab).
-
+
### Maximum URL size increased

Support for URL query strings of up to 4,096 characters (instead of 2,048) when indexing a video.
-
+
### Support for multi-lingual projects

Projects can now be created based on videos indexed in different languages (API only).
You can now create a Video Analyzer for Media paid account in the Japan East reg
Added a new API that enables you to [update the Azure Media Service connection endpoint or key](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Update-Paid-Account-Azure-Media-Services).
-### Improve error handling on upload
+### Improve error handling on upload
A descriptive message is returned in case of misconfiguration of the underlying Azure Media Services account.
-### Player timeline Keyframes preview
+### Player timeline Keyframes preview
You can now see an image preview for each point in time on the player's timeline.
azure-video-analyzer Upload Index Videos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-for-media-docs/upload-index-videos.md
Last updated 11/15/2021
-# Upload and index your videos
+# Upload and index your videos
-This article shows how to upload and index videos by using the Azure Video Analyzer for Media (formerly Video Indexer) website and the Upload Video API.
+This article shows how to upload and index videos by using the Azure Video Analyzer for Media (formerly Video Indexer) website and the Upload Video API.
When you're creating a Video Analyzer for Media account, you choose between:

-- A free trial account. Video Analyzer for Media provides up to 600 minutes of free indexing to website users and up to 2,400 minutes of free indexing to API users.
-- A paid option where you're not limited by a quota. You create a Video Analyzer for Media account that's [connected to your Azure subscription and an Azure Media Services account](connect-to-azure.md). You pay for indexed minutes.
+- A free trial account. Video Analyzer for Media provides up to 600 minutes of free indexing to website users and up to 2,400 minutes of free indexing to API users.
+- A paid option where you're not limited by a quota. You create a Video Analyzer for Media account that's [connected to your Azure subscription and an Azure Media Services account](connect-to-azure.md). You pay for indexed minutes.
For more information about account types, see [Media Services pricing](https://azure.microsoft.com/pricing/details/media-services/).
-When you're uploading videos by using the API, you have the following options:
+When you're uploading videos by using the API, you have the following options:
* Upload your video from a URL (preferred).
* Send the video file as a byte array in the request body.
-* Use an existing Azure Media Services asset by providing the [asset ID](../../media-services/latest/assets-concept.md). This option is supported in paid accounts only.
+* Use an existing Azure Media Services asset by providing the [asset ID](/media-services/latest/assets-concept). This option is supported in paid accounts only.
## Supported file formats
-For a list of file formats that you can use with Video Analyzer for Media, see [Standard Encoder formats and codecs](../../media-services/latest/encode-media-encoder-standard-formats-reference.md).
+For a list of file formats that you can use with Video Analyzer for Media, see [Standard Encoder formats and codecs](/media-services/latest/encode-media-encoder-standard-formats-reference).
## Storage of video files
When you use Video Analyzer for Media, video files are stored in Azure Storage t
You can always delete your video and audio files, along with any metadata and insights that Video Analyzer for Media has extracted from them. After you delete a file from Video Analyzer for Media, the file and its metadata and insights are permanently removed from Video Analyzer for Media. However, if you've implemented your own backup solution in Azure Storage, the file remains in Azure Storage. The persistence of a video is identical whether you upload by using the Video Analyzer for Media website or by using the Upload Video API.
-
+
## Upload and index a video by using the website

Sign in to the [Video Analyzer for Media](https://www.videoindexer.ai/) website, and then select **Upload**.
After Video Analyzer for Media is done analyzing, you get an email with a link t
## Upload and index a video by using the API
-You can use the [Upload Video](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video) API to upload and index your videos based on a URL. The code sample that follows includes the commented-out code that shows how to upload the byte array.
+You can use the [Upload Video](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video) API to upload and index your videos based on a URL. The code sample that follows includes the commented-out code that shows how to upload the byte array.
### Configurations and parameters

This section describes some of the optional parameters and when to set them. For the most up-to-date info about parameters, see the [Video Analyzer for Media portal](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video).
-#### externalID
+#### externalID
Use this parameter to specify an ID that will be associated with the video. The ID can be used for integration with an external video content management (VCM) system. The videos that are in the Video Analyzer for Media portal can be searched via the specified external ID.

#### callbackUrl
-Use this parameter to specify a callback URL.
+Use this parameter to specify a callback URL.
[!INCLUDE [callback url](./includes/callback-url.md)]
Use this parameter to define an AI bundle that you want to apply on your audio o
- `BasicAudio`: Index and extract insights by using audio only (ignoring video). Include only basic audio features (transcription, translation, formatting of output captions and subtitles).
- `AdvancedAudio`: Index and extract insights by using audio only (ignoring video). Include advanced audio features (such as audio event detection) in addition to the standard audio analysis.
- `AdvancedVideo`: Index and extract insights by using video only (ignoring audio). Include advanced video features (such as observed people tracing) in addition to the standard video analysis.
-- `AdvancedVideoAndAudio`: Index and extract insights by using both advanced audio and advanced video analysis.
+- `AdvancedVideoAndAudio`: Index and extract insights by using both advanced audio and advanced video analysis.
> [!NOTE]
-> The preceding advanced presets include models that are in public preview. When these models reach general availability, there might be implications for the price.
+> The preceding advanced presets include models that are in public preview. When these models reach general availability, there might be implications for the price.
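For illustration, a minimal sketch of requesting the advanced bundle at upload time; it assumes the preset is passed as the `indexingPreset` query-string parameter (see the Upload Video API reference for the authoritative name), and every value below is a placeholder:

```csharp
// Hypothetical sketch: upload from a URL and request advanced audio + video analysis.
using System;
using System.Net.Http;

string location = "trial";                                   // placeholder
string accountId = "YOUR_ACCOUNT_ID";                        // placeholder
string accessToken = "YOUR_ACCESS_TOKEN";                    // placeholder
string videoUrl = "https://example.com/videos/my-video.mp4"; // placeholder

using var client = new HttpClient();
string uploadUrl = $"https://api.videoindexer.ai/{location}/Accounts/{accountId}/Videos" +
                   $"?name=my-video&videoUrl={Uri.EscapeDataString(videoUrl)}" +
                   $"&indexingPreset=AdvancedVideoAndAudio&accessToken={accessToken}";
HttpResponseMessage response = await client.PostAsync(uploadUrl, null);
```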
Video Analyzer for Media covers up to two tracks of audio. If the file has more audio tracks, they're treated as one track. If you want to index the tracks separately, you need to extract the relevant audio file and index it as `AudioOnly`.
This parameter is supported only for paid accounts.
#### streamingPreset
-After your video is uploaded, Video Analyzer for Media optionally encodes the video. It then proceeds to indexing and analyzing the video. When Video Analyzer for Media is done analyzing, you get a notification with the video ID.
+After your video is uploaded, Video Analyzer for Media optionally encodes the video. It then proceeds to indexing and analyzing the video. When Video Analyzer for Media is done analyzing, you get a notification with the video ID.
-When you're using the [Upload Video](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video) or [Re-Index Video](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Re-Index-Video) API, one of the optional parameters is `streamingPreset`. If you set `streamingPreset` to `Default`, `SingleBitrate`, or `AdaptiveBitrate`, the encoding process is triggered.
+When you're using the [Upload Video](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video) or [Re-Index Video](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Re-Index-Video) API, one of the optional parameters is `streamingPreset`. If you set `streamingPreset` to `Default`, `SingleBitrate`, or `AdaptiveBitrate`, the encoding process is triggered.
After the indexing and encoding jobs are done, the video is published so you can also stream your video. The streaming endpoint from which you want to stream the video must be in the **Running** state. For `SingleBitrate`, the standard encoder cost will apply for the output. If the video height is greater than or equal to 720, Video Analyzer for Media encodes it as 1280 x 720. Otherwise, it's encoded as 640 x 468.
-The default setting is [content-aware encoding](../../media-services/latest/encode-content-aware-concept.md).
+The default setting is [content-aware encoding](/media-services/latest/encode-content-aware-concept).
If you only want to index your video and not encode it, set `streamingPreset` to `NoStreaming`.
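For example, reusing the placeholder values from the earlier upload sketch, an index-only upload would add `streamingPreset=NoStreaming` to the same query string:

```csharp
// Hypothetical sketch: index only, skipping encoding and publishing.
string indexOnlyUrl = $"https://api.videoindexer.ai/{location}/Accounts/{accountId}/Videos" +
                      $"?name=my-video&videoUrl={Uri.EscapeDataString(videoUrl)}" +
                      $"&streamingPreset=NoStreaming&accessToken={accessToken}";
HttpResponseMessage indexOnlyResponse = await client.PostAsync(indexOnlyUrl, null);
```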
The following C# code snippets demonstrate the usage of all the Video Analyzer f
After you copy the following code into your development platform, you'll need to provide two parameters:
-* API key (`apiKey`): Your personal API management subscription key. It allows you to get an access token in order to perform operations on your Video Analyzer for Media account.
+* API key (`apiKey`): Your personal API management subscription key. It allows you to get an access token in order to perform operations on your Video Analyzer for Media account.
To get your API key:
After you copy the following code into your development platform, you'll need to
* Video URL (`videoUrl`): A URL of the video or audio file to be indexed. Here are the requirements:
- - The URL must point at a media file. (HTML pages are not supported.)
+ - The URL must point at a media file. (HTML pages are not supported.)
- The file can be protected by an access token that's provided as part of the URI. The endpoint that serves the file must be secured with TLS 1.2 or later.
- - The URL must be encoded.
+ - The URL must be encoded.
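As a minimal illustration of the encoding requirement, .NET's built-in encoder can be used before the URL goes into the query string:

```csharp
// Illustrative only: percent-encode the video URL before using it as a query-string value.
string videoUrl = "https://example.com/videos/my video.mp4";
string encodedVideoUrl = Uri.EscapeDataString(videoUrl);
// encodedVideoUrl == "https%3A%2F%2Fexample.com%2Fvideos%2Fmy%20video.mp4"
```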
-The result of successfully running the code sample includes an insight widget URL and a player widget URL. They allow you to examine the insights and the uploaded video, respectively.
+The result of successfully running the code sample includes an insight widget URL and a player widget URL. They allow you to examine the insights and the uploaded video, respectively.
```csharp
public async Task Sample()
HttpResponseMessage result = await client.GetAsync($"{apiUrl}/auth/trial/Accounts?{queryParams}");
var json = await result.Content.ReadAsStringAsync();
var accounts = JsonConvert.DeserializeObject<AccountContractSlim[]>(json);
-
- // Take the relevant account. Here we simply take the first.
+
+ // Take the relevant account. Here we simply take the first.
// You can also get the account via accounts.First(account => account.Id == <GUID>);
var accountInfo = accounts.First();
public class AccountContractSlim
### [Azure Resource Manager account](#tab/with-arm-account-account/)
-After you copy this C# project into your development platform, you need to take the following steps:
+After you copy this C# project into your development platform, you need to take the following steps:
1. Go to Program.cs and populate:
namespace VideoIndexerArm
Console.WriteLine($"account id: {accountId}"); Console.WriteLine($"account location: {accountLocation}");
- // Get account-level access token for Azure Video Analyzer for Media
+ // Get account-level access token for Azure Video Analyzer for Media
var accessTokenRequest = new AccessTokenRequest { PermissionType = AccessTokenPermission.Contributor,
namespace VideoIndexerArm
[JsonPropertyName("projectId")] public string ProjectId { get; set; }
-
+ [JsonPropertyName("videoId")] public string VideoId { get; set; } }
The upload operation might return the following status codes:
|429||Trial accounts are allowed 5 uploads per minute. Paid accounts are allowed 50 uploads per minute.|

## Uploading considerations and limitations
-
+
- The name of a video must be no more than 80 characters.
- When you're uploading a video based on the URL (preferred), the endpoint must be secured with TLS 1.2 or later.
- The upload size with the URL option is limited to 30 GB.
The upload operation might return the following status codes:
- The URL provided in the `videoURL` parameter must be encoded.
- Indexing Media Services assets has the same limitation as indexing from a URL.
- Video Analyzer for Media has a duration limit of 4 hours for a single file.
-- The URL must be accessible (for example, a public URL).
+- The URL must be accessible (for example, a public URL).

    If it's a private URL, the access token must be provided in the request.

- The URL must point to a valid media file and not to a webpage, such as a link to the `www.youtube.com` page.
The upload operation might return the following status codes:
> [!Tip]
> We recommend that you use .NET Framework version 4.6.2 or later, because older .NET Framework versions don't default to TLS 1.2.
>
-> If you must use an older .NET Framework version, add one line to your code before making the REST API call:
+> If you must use an older .NET Framework version, add one line to your code before making the REST API call:
> > `System.Net.ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls | SecurityProtocolType.Tls11 | SecurityProtocolType.Tls12;`
azure-video-analyzer Video Indexer Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-for-media-docs/video-indexer-get-started.md
This getting started quickstart shows how to sign in to the Azure Video Analyzer for Media (formerly Video Indexer) website and how to upload your first video.
-When creating a Video Analyzer for Media account, you can choose a free trial account (where you get a certain number of free indexing minutes) or a paid option (where you aren't limited by the quota). With the free trial, Video Analyzer for Media provides up to 600 minutes of free indexing to website users and up to 2,400 minutes of free indexing to API users. With the paid option, you create a Video Analyzer for Media account that is [connected to your Azure subscription and an Azure Media Services account](connect-to-azure.md). You pay for indexed minutes. For more information, see [Media Services pricing](https://azure.microsoft.com/pricing/details/media-services/).
+When creating a Video Analyzer for Media account, you can choose a free trial account (where you get a certain number of free indexing minutes) or a paid option (where you aren't limited by the quota). With the free trial, Video Analyzer for Media provides up to 600 minutes of free indexing to website users and up to 2,400 minutes of free indexing to API users. With the paid option, you create a Video Analyzer for Media account that is [connected to your Azure subscription and an Azure Media Services account](connect-to-azure.md). You pay for indexed minutes. For more information, see [Media Services pricing](https://azure.microsoft.com/pricing/details/media-services/).
## Sign up for Video Analyzer for Media
Once you start using Video Analyzer for Media, all your stored data and uploaded
### Supported file formats for Video Analyzer for Media
-See the [input container/file formats](../../media-services/latest/encode-media-encoder-standard-formats-reference.md) article for a list of file formats that you can use with Video Analyzer for Media.
+See the [input container/file formats](/media-services/latest/encode-media-encoder-standard-formats-reference) article for a list of file formats that you can use with Video Analyzer for Media.
### Upload a video
See the [input container/file formats](../../media-services/latest/encode-media-
> [!div class="mx-imgBorder"] > :::image type="content" source="./media/video-indexer-get-started/video-indexer-upload.png" alt-text="Upload":::
-1. Once your video has been uploaded, Video Analyzer for Media starts indexing and analyzing the video. You see the progress.
+1. Once your video has been uploaded, Video Analyzer for Media starts indexing and analyzing the video. You see the progress.
> [!div class="mx-imgBorder"] > :::image type="content" source="./media/video-indexer-get-started/progress.png" alt-text="Progress of the upload":::
For more information, see [supported browsers](video-indexer-overview.md#support
See [Upload and index videos](upload-index-videos.md) for more details.
-After you upload and index a video, you can start using the [Video Analyzer for Media website](video-indexer-view-edit.md) or the [Video Analyzer for Media Developer Portal](video-indexer-use-apis.md) to see the insights of the video.
+After you upload and index a video, you can start using the [Video Analyzer for Media website](video-indexer-view-edit.md) or the [Video Analyzer for Media Developer Portal](video-indexer-use-apis.md) to see the insights of the video.
[Start using APIs](video-indexer-use-apis.md)

## Next steps
-For a detailed introduction, please visit our [introduction lab](https://github.com/Azure-Samples/media-services-video-indexer/blob/master/IntroToVideoIndexer.md).
+For a detailed introduction, please visit our [introduction lab](https://github.com/Azure-Samples/media-services-video-indexer/blob/master/IntroToVideoIndexer.md).
At the end of the workshop, you'll have a good understanding of the kind of information that can be extracted from video and audio content, and you'll be better prepared to identify opportunities related to content intelligence, pitch video AI on Azure, and demo several scenarios on Video Analyzer for Media.
azure-vmware Install Vmware Hcx https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/install-vmware-hcx.md
Title: Install VMware HCX in Azure VMware Solution description: Install VMware HCX in your Azure VMware Solution private cloud. Previously updated : 09/16/2021 Last updated : 03/29/2022

# Install and activate VMware HCX in Azure VMware Solution

VMware HCX Advanced and its associated Cloud Manager are no longer pre-deployed in Azure VMware Solution. Instead, you'll install it through the Azure portal as an add-on. You'll still download the HCX Connector OVA and deploy the virtual appliance on your on-premises vCenter.
-Any edition of VMware HCX supports 25 site pairings (on-premises to cloud or cloud to cloud). The default is HCX Advanced, but you can open a [support request](https://rc.portal.azure.com/#create/Microsoft.Support) to have HCX Enterprise Edition enabled, which is currently in public preview. Once the service is generally available, you'll have 30 days to decide on your next steps. You can also turn off or opt out of the HCX Enterprise Edition service but keep HCX Advanced as it's part of the node cost.
+Any edition of VMware HCX supports 25 site pairings (on-premises to cloud or cloud to cloud). The default is HCX Advanced, but you can open a [support request](https://rc.portal.azure.com/#create/Microsoft.Support) to have HCX Enterprise Edition enabled. Once the service is generally available, you'll have 30 days to decide on your next steps. You can turn off or opt out of the HCX Enterprise Edition service but keep HCX Advanced as it's part of the node cost.
Downgrading from HCX Enterprise Edition to HCX Advanced is possible without redeploying. First, ensure you've reverted to an HCX Advanced configuration state and aren't using the Enterprise features. If you plan to downgrade, ensure that no migrations are scheduled, that features like RAV and [HCX Mobility Optimized Networking (MON)](https://docs.vmware.com/en/VMware-HCX/4.1/hcx-user-guide/GUID-0E254D74-60A9-479C-825D-F373C41F40BC.html) aren't in use, and that site pairings are three or fewer.
cdn Cdn Caching Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-caching-policy.md
Azure Media Services provides [integrated CDN](https://azure.microsoft.com/updat
> [!IMPORTANT]
> Azure Media Services has complete integration with Azure CDN. With a single click, you can integrate all the available Azure CDN providers to your streaming endpoint including standard and premium products. For more information, see this [announcement](https://azure.microsoft.com/blog/standardstreamingendpoint/).
->
+>
> Data charges from the streaming endpoint to the CDN only get disabled if the CDN is enabled over streaming endpoint APIs or by using the Azure portal's streaming endpoint section. Manual integration or directly creating a CDN endpoint using CDN APIs or the portal section doesn't disable the data charges.

## Configuring cache headers with Azure Media Services

You can use Azure portal or Azure Media Services APIs to configure cache header values.
-1. To configure cache headers using Azure portal, refer to [How to Manage Streaming Endpoints](../media-services/previous/media-services-portal-manage-streaming-endpoints.md) section Configuring the Streaming Endpoint.
+1. To configure cache headers using Azure portal, refer to [How to Manage Streaming Endpoints](/media-services/previous/media-services-portal-manage-streaming-endpoints) section Configuring the Streaming Endpoint.
2. Azure Media Services REST API, [StreamingEndpoint](/rest/api/media/operations/streamingendpoint#StreamingEndpointCacheControl).
3. Azure Media Services .NET SDK, [StreamingEndpointCacheControl Properties](/dotnet/api/microsoft.windowsazure.mediaservices.client.streamingendpointcachecontrol).
cdn Cdn Custom Ssl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-custom-ssl.md
To enable HTTPS on a custom domain, follow these steps:
> This option is available only with **Azure CDN from Microsoft** and **Azure CDN from Verizon** profiles.
>
-You can use your own certificate to enable the HTTPS feature. This process is done through an integration with Azure Key Vault, which allows you to store your certificates securely. Azure Front Door uses this secure mechanism to get your certificate and it requires a few extra steps. When you create your TLS/SSL certificate, you must create a complete certificate chain with an allowed certificate authority (CA) that is part of the [Microsoft Trusted CA List](https://ccadb-public.secure.force.com/microsoft/IncludedCACertificateReportForMSFT). If you use a non-allowed CA, your request will be rejected. If a certificate without complete chain is presented, the requests which involve that certificate are not guaranteed to work as expected. For Azure CDN from Verizon, any valid CA will be accepted.
+You can use your own certificate to enable the HTTPS feature. This process is done through an integration with Azure Key Vault, which allows you to store your certificates securely. Azure CDN uses this secure mechanism to get your certificate and it requires a few extra steps. When you create your TLS/SSL certificate, you must create a complete certificate chain with an allowed certificate authority (CA) that is part of the [Microsoft Trusted CA List](https://ccadb-public.secure.force.com/microsoft/IncludedCACertificateReportForMSFT). If you use a non-allowed CA, your request will be rejected. If a certificate without a complete chain is presented, the requests that involve that certificate aren't guaranteed to work as expected. For Azure CDN from Verizon, any valid CA will be accepted.
### Prepare your Azure Key vault account and certificate
cdn Cdn Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-features.md
# What are the comparisons between Azure CDN product features?
-Azure Content Delivery Network (CDN) includes four products:
+Azure Content Delivery Network (CDN) includes four products:
* **Azure CDN Standard from Microsoft** * **Azure CDN Standard from Akamai** * **Azure CDN Standard from Verizon**
-* **Azure CDN Premium from Verizon**
+* **Azure CDN Premium from Verizon**
The following table compares the features available with each product.
The following table compares the features available with each product.
| IPv4/IPv6 dual-stack | **&#x2713;** |**&#x2713;** |**&#x2713;** |**&#x2713;** |
| [HTTP/2 support](cdn-http2.md) | **&#x2713;** |**&#x2713;** |**&#x2713;** |**&#x2713;** |
||||
- **Security** | **Standard Microsoft** | **Standard Akamai** | **Standard Verizon** | **Premium Verizon** |
+ **Security** | **Standard Microsoft** | **Standard Akamai** | **Standard Verizon** | **Premium Verizon** |
| HTTPS support with CDN endpoint | **&#x2713;** |**&#x2713;** |**&#x2713;** |**&#x2713;** |
| [Custom domain HTTPS](cdn-custom-ssl.md) | **&#x2713;** | **&#x2713;**, Requires direct CNAME to enable |**&#x2713;** |**&#x2713;** |
| [Custom domain name support](cdn-map-content-to-custom-domain.md) | **&#x2713;** |**&#x2713;** |**&#x2713;** |**&#x2713;** |
| [Geo-filtering](cdn-restrict-access-by-country-region.md) | **&#x2713;** |**&#x2713;** |**&#x2713;** |**&#x2713;** |
-| [Token authentication](cdn-token-auth.md) | | | |**&#x2713;**|
+| [Token authentication](cdn-token-auth.md) | | | |**&#x2713;**|
| [DDOS protection](https://www.us-cert.gov/ncas/tips/ST04-015) | **&#x2713;** |**&#x2713;** |**&#x2713;** |**&#x2713;** |
| [Bring your own certificate](cdn-custom-ssl.md?tabs=option-2-enable-https-with-your-own-certificate#tlsssl-certificates) |**&#x2713;** | | **&#x2713;** | **&#x2713;** |
| Supported TLS Versions | TLS 1.2, TLS 1.0/1.1 - [Configurable](/rest/api/cdn/custom-domains/enable-custom-https#usermanagedhttpsparameters) | TLS 1.2 | TLS 1.2 | TLS 1.2 |
||||
-| **Analytics and reporting** | **Standard Microsoft** | **Standard Akamai** | **Standard Verizon** | **Premium Verizon** |
+| **Analytics and reporting** | **Standard Microsoft** | **Standard Akamai** | **Standard Verizon** | **Premium Verizon** |
| [Azure diagnostic logs](cdn-azure-diagnostic-logs.md) | **&#x2713;** | **&#x2713;** |**&#x2713;** |**&#x2713;** |
| [Core reports from Verizon](cdn-analyze-usage-patterns.md) | | |**&#x2713;** |**&#x2713;** |
| [Custom reports from Verizon](cdn-verizon-custom-reports.md) | | |**&#x2713;** |**&#x2713;** |
The following table compares the features available with each product.
| [Edge node performance](cdn-edge-performance.md) | | | |**&#x2713;** |
| [Real-time alerts](cdn-real-time-alerts.md) | | | |**&#x2713;** |
||||
-| **Ease of use** | **Standard Microsoft** | **Standard Akamai** | **Standard Verizon** | **Premium Verizon** |
-| Easy integration with Azure services, such as [Storage](cdn-create-a-storage-account-with-cdn.md), [Web Apps](cdn-add-to-web-app.md), and [Media Services](../media-services/previous/media-services-portal-manage-streaming-endpoints.md) | **&#x2713;** |**&#x2713;** |**&#x2713;** |**&#x2713;** |
+| **Ease of use** | **Standard Microsoft** | **Standard Akamai** | **Standard Verizon** | **Premium Verizon** |
+| Easy integration with Azure services, such as [Storage](cdn-create-a-storage-account-with-cdn.md), [Web Apps](cdn-add-to-web-app.md), and [Media Services](/media-services/previous/media-services-portal-manage-streaming-endpoints) | **&#x2713;** |**&#x2713;** |**&#x2713;** |**&#x2713;** |
| Management via [REST API](/rest/api/cdn/), [.NET](cdn-app-dev-net.md), [Node.js](cdn-app-dev-node.md), or [PowerShell](cdn-manage-powershell.md) | **&#x2713;** |**&#x2713;** |**&#x2713;** |**&#x2713;** |
| [Compression MIME types](./cdn-improve-performance.md) |Configurable |Configurable |Configurable |Configurable |
| Compression encodings |gzip, brotli |gzip |gzip, deflate, bzip2 |gzip, deflate, bzip2 |

## Migration
-For information about migrating an **Azure CDN Standard from Verizon** profile to **Azure CDN Premium from Verizon**, see [Migrate an Azure CDN profile from Standard Verizon to Premium Verizon](cdn-migrate.md).
+For information about migrating an **Azure CDN Standard from Verizon** profile to **Azure CDN Premium from Verizon**, see [Migrate an Azure CDN profile from Standard Verizon to Premium Verizon](cdn-migrate.md).
> [!NOTE]
> There is an upgrade path from Standard Verizon to Premium Verizon; there is no conversion mechanism between other products at this time.
cognitive-services Video Moderation Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Content-Moderator/video-moderation-api.md
This article provides information and code samples to help you get started using the [Content Moderator SDK for .NET](https://www.nuget.org/packages/Microsoft.Azure.CognitiveServices.ContentModerator/) to scan video content for adult or racy content.
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
## Prerequisites

- Any edition of [Visual Studio 2015 or 2017](https://www.visualstudio.com/downloads/)

## Set up Azure resources
-The Content Moderator's video moderation capability is available as a free public preview **media processor** in Azure Media Services (AMS). Azure Media Services is a specialized Azure service for storing and streaming video content.
+The Content Moderator's video moderation capability is available as a free public preview **media processor** in Azure Media Services (AMS). Azure Media Services is a specialized Azure service for storing and streaming video content.
### Create an Azure Media Services account
-Follow the instructions in [Create an Azure Media Services account](../../media-services/previous/media-services-portal-create-account.md) to subscribe to AMS and create an associated Azure storage account. In that storage account, create a new Blob storage container.
+Follow the instructions in [Create an Azure Media Services account](/media-services/previous/media-services-portal-create-account) to subscribe to AMS and create an associated Azure storage account. In that storage account, create a new Blob storage container.
### Create an Azure Active Directory application
In the **Azure AD app** section, select **Create New** and name your new Azure A
Select your app registration and click the **Manage application** button below it. Note the value in the **Application ID** field; you will need this later. Select **Settings** > **Keys**, and enter a description for a new key (such as "VideoModKey"). Click **Save**, and then notice the new key value. Copy this string and save it somewhere secure.
-For a more thorough walkthrough of the above process, see [Get started with Azure AD authentication](../../media-services/previous/media-services-portal-get-started-with-aad.md).
+For a more thorough walkthrough of the above process, see [Get started with Azure AD authentication](/media-services/previous/media-services-portal-get-started-with-aad).
Once you've done this, you can use the video moderation media processor in two different ways.
The Azure Media Services Explorer is a user-friendly frontend for AMS. Use it to
## Create the Visual Studio project
-1. In Visual Studio, create a new **Console app (.NET Framework)** project and name it **VideoModeration**.
+1. In Visual Studio, create a new **Console app (.NET Framework)** project and name it **VideoModeration**.
1. If there are other projects in your solution, select this one as the single startup project.
1. Get the required NuGet packages. Right-click on your project in the Solution Explorer and select **Manage NuGet Packages**; then find and install the following packages:
    - windowsazure.mediaservices
Add the following static fields to the **Program** class in _Program.cs_. These
private static CloudMediaContext _context = null;
private static CloudStorageAccount _StorageAccount = null;
-// Azure Media Services (AMS) associated Storage Account, Key, and the Container that has
+// Azure Media Services (AMS) associated Storage Account, Key, and the Container that has
// a list of Blobs to be processed.
static string STORAGE_NAME = "YOUR AMS ASSOCIATED BLOB STORAGE NAME";
static string STORAGE_KEY = "YOUR AMS ASSOCIATED BLOB STORAGE KEY";
static string STORAGE_CONTAINER_NAME = "YOUR BLOB CONTAINER FOR VIDEO FILES";
private static StorageCredentials _StorageCredentials = null;
-// Azure Media Services authentication.
+// Azure Media Services authentication.
private const string AZURE_AD_TENANT_NAME = "microsoft.onmicrosoft.com";
private const string CLIENT_ID = "YOUR CLIENT ID";
private const string CLIENT_SECRET = "YOUR CLIENT SECRET";
-// REST API endpoint, for example "https://accountname.restv2.westcentralus.media.azure.net/API".
+// REST API endpoint, for example "https://accountname.restv2.westcentralus.media.azure.net/API".
private const string REST_API_ENDPOINT = "YOUR API ENDPOINT";

// Content Moderator Media Processor Nam
Add the following method to the **Program** class. You use the Storage Context,
// Creates a storage context from the AMS associated storage name and key
static void CreateStorageContext()
{
- // Get a reference to the storage account associated with a Media Services account.
+ // Get a reference to the storage account associated with a Media Services account.
    if (_StorageCredentials == null)
    {
        _StorageCredentials = new StorageCredentials(STORAGE_NAME, STORAGE_KEY);
static IEnumerable<IListBlobItem> GetBlobsList()
CloudBlobClient CloudBlobClient = _StorageAccount.CreateCloudBlobClient();
CloudBlobContainer MediaBlobContainer = CloudBlobClient.GetContainerReference(STORAGE_CONTAINER_NAME);
- // Get the reference to the list of Blobs
+ // Get the reference to the list of Blobs
    var blobList = MediaBlobContainer.ListBlobs();
    return blobList;
}
static void RunContentModeratorJob(IAsset asset)
        CancellationToken.None);
    progressJobTask.Wait();
- // If job state is Error, the event handling
- // method for job progress should log errors. Here we check
+ // If job state is Error, the event handling
+ // method for job progress should log errors. Here we check
    // for error state and exit if needed.
    if (job.State == JobState.Error)
    {
After the Content Moderation job is completed, analyze the JSON response. It con
- **Shots** as "**fragments**"
- **Key frames** as "**events**" with a "**reviewRecommended**" (= true or false) flag based on **Adult** and **Racy** scores
- **start**, **duration**, **totalDuration**, and **timestamp** are in "ticks". Divide by **timescale** to get the number in seconds.
-
+
> [!NOTE]
> - `adultScore` represents the potential presence and prediction score of content that may be considered sexually explicit or adult in certain situations.
> - `racyScore` represents the potential presence and prediction score of content that may be considered sexually suggestive or mature in certain situations.
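A one-line sketch of the tick-to-seconds conversion described above (the field names mirror the JSON response; the values are made up):

```csharp
// Convert a "ticks" value to seconds by dividing by the timescale.
static double TicksToSeconds(long ticks, long timescale) => (double)ticks / timescale;

// Example: with a timescale of 10,000,000 ticks per second,
// a start value of 30,000,000 corresponds to 3.0 seconds.
double startSeconds = TicksToSeconds(30_000_000, 10_000_000);
```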
cognitive-services Get Started Text To Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/get-started-text-to-speech.md
keywords: text to speech
[!INCLUDE [CLI include](includes/quickstarts/text-to-speech-basics/cli.md)]
::: zone-end
-## Get position information
-
-Your project might need to know when a word is spoken by text-to-speech so that it can take specific action based on that timing. For example, if you want to highlight words as they're spoken, you need to know what to highlight, when to highlight it, and for how long to highlight it.
-
-You can accomplish this by using the `WordBoundary` event within `SpeechSynthesizer`. This event is raised at the beginning of each new spoken word. It provides a time offset within the spoken stream and a text offset within the input prompt:
-
-* `AudioOffset` reports the output audio's elapsed time between the beginning of synthesis and the start of the next word. This is measured in hundred-nanosecond units (HNS), with 10,000 HNS equivalent to 1 millisecond.
-* `WordOffset` reports the character position in the input string (original text or [SSML](speech-synthesis-markup.md)) immediately before the word that's about to be spoken.
-
-> [!NOTE]
-> `WordBoundary` events are raised as the output audio data becomes available, which will be faster than playback to an output device. The caller must appropriately synchronize stream timing to "real time."
-
-You can find examples of using `WordBoundary` in the [text-to-speech samples](https://aka.ms/csspeech/samples) on GitHub.
- ## Next steps
-* [Get started with Custom Neural Voice](how-to-custom-voice.md)
-* [Improve synthesis with SSML](speech-synthesis-markup.md)
-* Learn how to use the [Long Audio API](long-audio-api.md) for large text samples like books and news articles
+> [!div class="nextstepaction"]
+> [Learn more about speech synthesis](how-to-speech-synthesis.md)
+
cognitive-services How To Recognize Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-recognize-speech.md
Title: "How to recognize speech - Speech service"
-description: Learn how to use the Speech SDK to convert speech to text, including object construction, supported audio input formats, and configuration options for speech recognition.
+description: Learn how to convert speech to text, including object construction, supported audio input formats, and configuration options for speech recognition.
keywords: speech to text, speech to text software
## Next steps
-> [!div class="nextstepaction"]
-> [See the quickstart samples on GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/quickstart)
+* [Try the speech to text quickstart](get-started-speech-to-text.md)
+* [Improve recognition accuracy with custom speech](custom-speech-overview.md)
+* [Transcribe audio in batches](batch-transcription.md)
cognitive-services How To Speech Synthesis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-speech-synthesis.md
+
+ Title: "How to synthesize speech from text - Speech service"
+
+description: Learn how to convert text to speech. Learn about object construction and design patterns, supported audio output formats, and custom configuration options for speech synthesis.
++++++ Last updated : 03/14/2022+
+ms.devlang: cpp, csharp, golang, java, javascript, objective-c, python
+
+zone_pivot_groups: programming-languages-speech-services
+keywords: text to speech
++
+# How to synthesize speech from text
+++++++++++
+## Get facial pose events
+
+Speech can be a good way to drive the animation of facial expressions.
+[Visemes](how-to-speech-synthesis-viseme.md) are often used to represent the key poses in observed speech. Key poses include the position of the lips, jaw, and tongue in producing a particular phoneme.
+
+You can subscribe to viseme events in the Speech SDK. Then, you can apply viseme events to animate the face of a character as speech audio plays.
+Learn [how to get viseme events](how-to-speech-synthesis-viseme.md#get-viseme-events-with-the-speech-sdk).
+
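As a rough C# sketch (not taken from this article), a viseme subscription might look like the following; `speechConfig` is assumed to be an already-configured `SpeechConfig`, and event-args member names can vary across SDK versions:

```csharp
// Hypothetical sketch: react to viseme events while speech is synthesized.
using var synthesizer = new SpeechSynthesizer(speechConfig);
synthesizer.VisemeReceived += (sender, e) =>
{
    // AudioOffset is in 100-nanosecond units; divide by 10,000 for milliseconds.
    Console.WriteLine($"Viseme {e.VisemeId} at {e.AudioOffset / 10000} ms");
};
await synthesizer.SpeakTextAsync("Hello world");
```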
+## Get position information
+
+Your project might need to know when a word is spoken by text-to-speech so that it can take specific action based on that timing. For example, if you want to highlight words as they're spoken, you need to know what to highlight, when to highlight it, and for how long to highlight it.
+
+You can accomplish this by using the `WordBoundary` event within `SpeechSynthesizer`. This event is raised at the beginning of each new spoken word. It provides a time offset within the spoken stream and a text offset within the input prompt:
+
+* `AudioOffset` reports the output audio's elapsed time between the beginning of synthesis and the start of the next word. This is measured in hundred-nanosecond units (HNS), with 10,000 HNS equivalent to 1 millisecond.
+* `WordOffset` reports the character position in the input string (original text or [SSML](speech-synthesis-markup.md)) immediately before the word that's about to be spoken.
+
+> [!NOTE]
+> `WordBoundary` events are raised as the output audio data becomes available, which will be faster than playback to an output device. The caller must appropriately synchronize streaming and real time.
+
+You can find examples of using `WordBoundary` in the [text-to-speech samples](https://aka.ms/csspeech/samples) on GitHub.
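For orientation, a minimal C# sketch of subscribing to `WordBoundary` (again assuming a configured `SpeechConfig`; note that the text-offset property is exposed as `TextOffset` in some SDK versions):

```csharp
// Hypothetical sketch: log each word's audio offset as it becomes available.
using var synthesizer = new SpeechSynthesizer(speechConfig);
synthesizer.WordBoundary += (sender, e) =>
{
    // AudioOffset is in 100-nanosecond units (HNS); 10,000 HNS == 1 millisecond.
    Console.WriteLine($"Word boundary at {e.AudioOffset / 10000} ms, text offset {e.TextOffset}");
};
await synthesizer.SpeakTextAsync("The quick brown fox jumps over the lazy dog.");
```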
+
+## Next steps
+
+* [Get started with Custom Neural Voice](how-to-custom-voice.md)
+* [Improve synthesis with SSML](speech-synthesis-markup.md)
+* [Synthesize from long-form text](long-audio-api.md) like books and news articles
cognitive-services Disconnected Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/containers/disconnected-containers.md
Previously updated : 03/11/2022 Last updated : 03/28/2022

# Use Docker containers in disconnected environments
-Containers enable you to run Cognitive Services APIs in your own environment, and are great for your specific security and data governance requirements. Disconnected containers enable you to use several of these APIs completely disconnected from the internet. Currently, the following containers can be run in this manner:
+Containers enable you to run Cognitive Services APIs in your own environment, and are great for your specific security and data governance requirements. Disconnected containers enable you to use several of these APIs disconnected from the internet. Currently, the following containers can be run in this manner:
* [Speech to Text (Standard)](../speech-service/speech-container-howto.md?tabs=stt)
* [Neural Text to Speech](../speech-service/speech-container-howto.md?tabs=ntts)
* [Text Translation (Standard)](../translator/containers/translator-how-to-install-container.md#host-computer)
* [Language Understanding (LUIS)](../LUIS/luis-container-howto.md)
* Azure Cognitive Service for Language
- * [Sentiment Analysis](../language-service/sentiment-opinion-mining/how-to/use-containers.md)
- * [Key Phrase Extraction](../language-service/key-phrase-extraction/how-to/use-containers.md)
- * [Language Detection](../language-service/language-detection/how-to/use-containers.md)
+ * [Sentiment Analysis](../language-service/sentiment-opinion-mining/how-to/use-containers.md)
+ * [Key Phrase Extraction](../language-service/key-phrase-extraction/how-to/use-containers.md)
+ * [Language Detection](../language-service/language-detection/how-to/use-containers.md)
* [Computer Vision - Read](../computer-vision/computer-vision-how-to-install-containers.md)

Disconnected container usage is also available for the following Applied AI service:
-* [Form Recognizer - Custom/Invoice](../../applied-ai-services/form-recognizer/containers/form-recognizer-container-install-run.md)
+
+* [Form Recognizer](../../applied-ai-services/form-recognizer/containers/form-recognizer-container-install-run.md#required-containers)
Before attempting to run a Docker container in an offline environment, make sure you know the steps to successfully download and use the container. For example:
+
* Host computer requirements and recommendations.
-* The Docker `pull` command you will use to download the container.
+* The Docker `pull` command you'll use to download the container.
* How to validate that a container is running.
* How to send queries to the container's endpoint, once it's running.
Fill out and submit the [request form](https://aka.ms/csdisconnectedcontainers)
[!INCLUDE [Request access to public preview](../../../includes/cognitive-services-containers-request-access.md)]
-Access is limited to customers that meet the following requirements:
-* Your organization must have a Microsoft Enterprise Agreement or an equivalent agreement and should identified as strategic customer or partner with Microsoft.
+Access is limited to customers that meet the following requirements:
+
+* Your organization must have a Microsoft Enterprise Agreement or an equivalent agreement and should be identified as strategic customer or partner with Microsoft.
* Disconnected containers are expected to run fully offline, hence your use cases must meet one of the following or similar requirements:
- * Environment or device(s) with zero connectivity to internet.
- * Remote location that occasionally has internet access.
- * Organization under strict regulation of not sending any kind of data back to cloud.
+ * Environment or device(s) with zero connectivity to internet.
+ * Remote location that occasionally has internet access.
+ * Organization under strict regulation of not sending any kind of data back to cloud.
* Application completed as instructed - Please pay close attention to the guidance provided throughout the application to ensure you provide all the necessary information required for approval.

## Purchase a commitment plan to use containers in disconnected environments

### Create a new resource
-1. Sign into the [Azure portal](https://portal.azure.com/) and select **Create a new resource** for one of the applicable Cognitive Services or Applied AI services listed above.
+1. Sign into the [Azure portal](https://portal.azure.com/) and select **Create a new resource** for one of the applicable Cognitive Services or Applied AI services listed above.
2. Enter the applicable information to create your resource. Be sure to select **Commitment tier disconnected containers** as your pricing tier.

    > [!NOTE]
+ >
    > * You will only see the option to purchase a commitment tier if you have been approved by Microsoft.
    > * Pricing details are for example only.

    :::image type="content" source="media/offline-container-signup.png" alt-text="A screenshot showing resource creation on the Azure portal." lightbox="media/offline-container-signup.png":::
-3. Select **Review + Create** at the bottom of the page. Review the information, and select **Create**.
+3. Select **Review + Create** at the bottom of the page. Review the information, and select **Create**.
## Gather required parameters
docker pull mcr.microsoft.com/azure-cognitive-services/form-recognizer/invoice:l
## Configure the container to be run in a disconnected environment
-Now that you've downloaded your container, you will need to run the container with the `DownloadLicense=True` parameter in your `docker run` command. This parameter will download a license file that will enable your Docker container to run when it isn't connected to the internet. It also contains an expiration date, after which the license file will be invalid to run the container. You can only use a license file with the appropriate container that you've been approved for. For example, you cannot use a license file for a speech-to-text container with a form recognizer container.
+Now that you've downloaded your container, you'll need to run the container with the `DownloadLicense=True` parameter in your `docker run` command. This parameter will download a license file that will enable your Docker container to run when it isn't connected to the internet. It also contains an expiration date, after which the license file will be invalid to run the container. You can only use a license file with the appropriate container that you've been approved for. For example, you can't use a license file for a speech-to-text container with a form recognizer container.
> [!IMPORTANT]
-> * [**Translator container only**](../translator/containers/translator-how-to-install-container.md):
-> * You must include a parameter to download model files for the [languages](../translator/language-support.md) you want to translate. For example: `-e Languages=en,es`
-> * The container will generate a `docker run` template that you can use to run the container, containing parameters you will need for the downloaded models and configuration file. Make sure you save this template.
+>
+> * [**Translator container only**](../translator/containers/translator-how-to-install-container.md):
+> * You must include a parameter to download model files for the [languages](../translator/language-support.md) you want to translate. For example: `-e Languages=en,es`
+> * The container will generate a `docker run` template that you can use to run the container, containing parameters you will need for the downloaded models and configuration file. Make sure you save this template.
The following example shows the formatting of the `docker run` command you'll use, with placeholder values. Replace these placeholder values with your own values.
DownloadLicense=True \
Mounts:License={LICENSE_MOUNT}
```
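To make the template concrete, here's what a filled-in download command could look like for the invoice container pulled earlier. This is a sketch with assumed values: the host path, container path, and port mapping are illustrative, and the full template may include additional parameters (such as a billing endpoint and API key), since the initial license download happens while connected.

```bash
# Illustrative values only; substitute your own paths and image.
docker run --rm -it -p 5000:5000 \
  -v /volume/license:/path/to/license/directory \
  mcr.microsoft.com/azure-cognitive-services/form-recognizer/invoice \
  DownloadLicense=True \
  Mounts:License=/path/to/license/directory
```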
-After you have configured the container, use the next section to run the container in your environment with the license, and appropriate memory and CPU allocations.
+After you've configured the container, use the next section to run the container in your environment with the license, and appropriate memory and CPU allocations.
## Run the container in a disconnected environment

> [!IMPORTANT]
> If you're using the Translator, Neural text-to-speech, or Speech-to-text containers, read the **Additional parameters** section below for information on commands or additional parameters you'll need to use.
-Once the license file has been downloaded, you can run the container in a disconnected environment. The following example shows the formatting of the `docker run` command you'll use, with placeholder values. Replace these placeholder values with your own values.
+Once the license file has been downloaded, you can run the container in a disconnected environment. The following example shows the formatting of the `docker run` command you'll use, with placeholder values. Replace these placeholder values with your own values.
Wherever the container is run, the license file must be mounted to the container, and the location of the license folder on the container's local filesystem must be specified with `Mounts:License=`. An output mount must also be specified so that billing usage records can be written.

| Placeholder | Value | Format or example |
|---|---|---|
| `{IMAGE}` | The container image you want to use. | `mcr.microsoft.com/azure-cognitive-services/form-recognizer/invoice` |
| `{MEMORY_SIZE}` | The appropriate size of memory to allocate for your container. | `4g` |
| `{NUMBER_CPUS}` | The appropriate number of CPUs to allocate for your container. | `4` |
| `{LICENSE_MOUNT}` | The path where the license will be located and mounted. | `/volume/license:/path/to/license/directory` |
| `{OUTPUT_PATH}` | The output path for logging [usage records](#usage-records). | `/host/output:/path/to/output/directory` |
Mounts:Output={OUTPUT_PATH}
```
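Putting the table values together, a filled-in run command might look like the following sketch. The image, port, memory, CPU, and paths are the illustrative values from the table, not requirements.

```bash
# Illustrative values from the table above; adjust for your container and hardware.
docker run --rm -it -p 5000:5000 --memory 4g --cpus 4 \
  -v /volume/license:/path/to/license/directory \
  -v /host/output:/path/to/output/directory \
  mcr.microsoft.com/azure-cognitive-services/form-recognizer/invoice \
  Mounts:License=/path/to/license/directory \
  Mounts:Output=/path/to/output/directory
```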
See the following sections for additional parameters and commands you may need to run the container.
-#### Translator container
+#### Translator container
+
+If you're using the [Translator container](../translator/containers/translator-how-to-install-container.md), you'll need to add parameters for the downloaded translation models and container configuration. These values are generated and displayed in the container output when you [configure the container](#configure-the-container-to-be-run-in-a-disconnected-environment) as described above. For example:
-If you're using the [Translator container](../translator/containers/translator-how-to-install-container.md), you will need to add parameters for the downloaded translation models and container configuration. These values are generated and displayed in the container output when you [configure the container](#configure-the-container-to-be-run-in-a-disconnected-environment) as described above. For example:
```bash
-e MODELS= /path/to/model1/, /path/to/model2/
-e TRANSLATORSYSTEMCONFIG=/path/to/model/config/translatorsystemconfig.json
If you're using the [Translator container](../translator/containers/translator-h
#### Speech-to-text and Neural text-to-speech containers
-The [speech-to-text](../speech-service/speech-container-howto.md?tabs=stt) and [neural text-to-speech](../speech-service/speech-container-howto.md?tabs=ntts) containers provide a default directory for writing the license file and billing log at runtime. When you're mounting these directories to the container with the `docker run -v` command, make sure the local machine directory is set ownership to `user:group nonroot:nonroot` before running the container.
-
+The [speech-to-text](../speech-service/speech-container-howto.md?tabs=stt) and [neural text-to-speech](../speech-service/speech-container-howto.md?tabs=ntts) containers provide a default directory for writing the license file and billing log at runtime. When you mount these directories to the container with the `docker run -v` command, make sure the ownership of the local machine directory is set to `user:group nonroot:nonroot` before running the container.
+
Below is a sample command to set file/directory ownership.

```bash
sudo chown -R nonroot:nonroot <YOUR_LOCAL_MACHINE_PATH_1> <YOUR_LOCAL_MACHINE_PA
## Usage records
-When operating Docker containers in a disconnected environment, the container will write usage records to a volume where they are collected over time. You can also call a REST endpoint to generate a report about service usage.
+When operating Docker containers in a disconnected environment, the container will write usage records to a volume where they're collected over time. You can also call a REST endpoint to generate a report about service usage.
### Arguments for storing logs
docker run -v /host/output:{OUTPUT_PATH} ... <image> ... Mounts:Output={OUTPUT_P
The container provides two endpoints for returning records about its usage.
-#### Get all records
+#### Get all records
The following endpoint will provide a report summarizing all of the usage collected in the mounted billing record directory.
The following endpoint will provide a report summarizing all of the usage collec
https://<service>/records/usage-logs/
```
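For example, if the container's API is exposed on the host at port 5000 (an assumed mapping; substitute your own host and port for `<service>`), you can fetch the report with any HTTP client:

```bash
# Assumes the container was started with -p 5000:5000.
curl http://localhost:5000/records/usage-logs/
```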
-It will return JSON similar to the example below.
+It will return JSON similar to the example below.
```json
{
It will return JSON similar to the example below.
  ]
}
```

#### Get records for a specific month

The following endpoint will provide a report summarizing usage over a specific month and year.
It will return a JSON response similar to the example below:
## Purchase a different commitment plan for disconnected containers
-Commitment plans for disconnected containers have a calendar year commitment period. When you purchase a plan, you will be charged the full price immediately. During the commitment period, you cannot change your commitment plan, however you can purchase additional unit(s) at a pro-rated price for the remaining days in the year. You have until midnight (UTC) on the last day of your commitment, to end a commitment plan.
+Commitment plans for disconnected containers have a calendar year commitment period. When you purchase a plan, you'll be charged the full price immediately. During the commitment period, you can't change your commitment plan; however, you can purchase additional unit(s) at a pro-rated price for the remaining days in the year. For example, a unit added with roughly a quarter of the year remaining costs roughly a quarter of the annual unit price. You have until midnight (UTC) on the last day of your commitment to end a commitment plan.
-You can choose a different commitment plan in the **Commitment Tier pricing** settings of your resource.
+You can choose a different commitment plan in the **Commitment Tier pricing** settings of your resource.
## End a commitment plan
-If you decide that you don't want to continue purchasing a commitment plan, you can set your resource's auto-renewal to **Do not auto-renew**. Your commitment plan will expire on the displayed commitment end date. After this date, you won't be charged for the commitment plan. You will be able to continue using the Azure resource to make API calls, charged at pay-as-you-go pricing. You have until midnight (UTC) on the last day of the year to end a commitment plan for disconnected containers, and not be charged for the following year.
+If you decide that you don't want to continue purchasing a commitment plan, you can set your resource's auto-renewal to **Do not auto-renew**. Your commitment plan will expire on the displayed commitment end date. After this date, you won't be charged for the commitment plan. You'll be able to continue using the Azure resource to make API calls, charged at pay-as-you-go pricing. You have until midnight (UTC) on the last day of the year to end a commitment plan for disconnected containers and avoid being charged for the following year.
## Troubleshooting
If you run the container with an output mount and logging enabled, the container
> [!TIP]
> For more troubleshooting information and guidance, see [Disconnected containers Frequently asked questions (FAQ)](disconnected-container-faq.yml).

## Next steps
-[Azure Cognitive Services containers overview](../cognitive-services-container-support.md)
+[Azure Cognitive Services containers overview](../cognitive-services-container-support.md)
communication-services Media Comp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/media-comp.md
These media streams are typically arrayed in a grid and broadcast to call partic
- Connect devices and services using streaming protocols such as [RTMP](https://datatracker.ietf.org/doc/html/rfc7016) or [SRT](https://datatracker.ietf.org/doc/html/draft-sharabayko-srt)
- Compose media streams into complex scenes
-RTMP & SRT connectivity can be used for both input and output. Using RTMP/SRT input, a videography studio that emits RTMP/SRT can join an Azure Communication Services call. RTMP/SRT output allows you to stream media from Azure Communication Services into [Azure Media Services](../../../media-services/latest/concepts-overview.md), YouTube Live, and many other broadcasting channels. The ability to attach industry standard RTMP/SRT emitters and to output content to RTMP/SRT subscribers for broadcasting transforms a small group call into a virtual event that reaches millions of people in real time.
+RTMP & SRT connectivity can be used for both input and output. Using RTMP/SRT input, a videography studio that emits RTMP/SRT can join an Azure Communication Services call. RTMP/SRT output allows you to stream media from Azure Communication Services into [Azure Media Services](/media-services/latest/concepts-overview), YouTube Live, and many other broadcasting channels. The ability to attach industry standard RTMP/SRT emitters and to output content to RTMP/SRT subscribers for broadcasting transforms a small group call into a virtual event that reaches millions of people in real time.
Media Composition REST APIs (and open-source SDKs) allow you to command the Azure service to cloud compose these media streams. For example, a **presenter layout** can be used to compose a speaker and a translator together in a classic picture-in-picture style. Media Composition allows for all clients and services connected to the media data plane to enjoy a particular dynamic layout without local processing or application complexity.
- In the diagram below, three endpoints are participating actively in a group call and uploading media. Two users, one of which is using Microsoft Teams, are composed using a *presenter layout.* The third endpoint is a television studio that emits RTMP into the call. The Azure Calling client and Teams client will receive the composed media stream instead of a typical grid. Additionally, Azure Media Services is shown here subscribing to the call's RTMP channel and broadcasting content externally.
+ In the diagram below, three endpoints are participating actively in a group call and uploading media. Two users, one of which is using Microsoft Teams, are composed using a *presenter layout.* The third endpoint is a television studio that emits RTMP into the call. The Azure Calling client and Teams client will receive the composed media stream instead of a typical grid. Additionally, Azure Media Services is shown here subscribing to the call's RTMP channel and broadcasting content externally.
:::image type="content" source="../media/media-comp.svg" alt-text="Diagram showing how media input is processed by the Azure Communication Services Media Composition services":::
connectors Built In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/built-in.md
Azure Logic Apps provides the following built-in triggers and actions:
:::row-end::: :::row::: :::column:::
- [![STFP-SSH icon][sftp-ssh-icon]][sftp-ssh-doc]
+ [![SFTP-SSH icon][sftp-ssh-icon]][sftp-ssh-doc]
\ \
- [**STFP-SSH**][sftp-ssh-doc]<br>(*Standard logic app only*)
+ [**SFTP-SSH**][sftp-ssh-doc]<br>(*Standard logic app only*)
\ \ Connect to SFTP servers that you can access from the internet by using SSH so that you can work with your files and folders.
container-apps Azure Resource Manager Api Spec https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/azure-resource-manager-api-spec.md
Previously updated : 11/02/2021 Last updated : 03/28/2022 # Container Apps Preview ARM template API specification
-Azure Container Apps deployments are powered by an Azure Resource Manager (ARM) template. The following tables describe the properties available in the container app ARM template.
+Azure Container Apps deployments are powered by an Azure Resource Manager (ARM) template. Some Container Apps CLI commands also support using a YAML template to specify a resource.
-The [sample ARM template for usage examples](#examples).
+> [!NOTE]
+> Azure Container Apps resources are in the process of migrating from the `Microsoft.Web` namespace to the `Microsoft.App` namespace. Refer to [Namespace migration from Microsoft.Web to Microsoft.App in March 2022](https://github.com/microsoft/azure-container-apps/issues/109) for more details.
+
+## Container Apps environment
+
+The following tables describe the properties available in the Container Apps environment resource.
+
+### Resource
+
+The Container Apps environment resource of the ARM template has the following properties:
+
+| Property | Description | Data type |
+|||--|
+| `name` | The Container Apps environment name. | string |
+| `location` | The Azure region where the Container Apps environment is deployed. | string |
+| `type` | `Microsoft.App/managedEnvironments` – the ARM resource type | string |
+
+#### `properties`
+
+A resource's `properties` object has the following properties:
+
+| Property | Description | Data type | Read only |
+|||||
+| `daprAIInstrumentationKey` | The Application Insights instrumentation key used by Dapr. | string | No |
+| `appLogsConfiguration` | The environment's logging configuration. | Object | No |
+
+### <a name="container-apps-environment-examples"></a>Examples
+
+# [ARM template](#tab/arm-template)
+
+The following example ARM template deploys a Container Apps environment.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "location": {
+ "defaultValue": "canadacentral",
+ "type": "String"
+ },
+ "dapr_ai_instrumentation_key": {
+ "defaultValue": "",
+ "type": "String"
+ },
+ "environment_name": {
+ "defaultValue": "myenvironment",
+ "type": "String"
+ },
+ "log_analytics_customer_id": {
+ "type": "String"
+ },
+ "log_analytics_shared_key": {
+ "type": "SecureString"
+ }
+ },
+ "variables": {},
+ "resources": [
+ {
+ "type": "Microsoft.App/managedEnvironments",
+ "apiVersion": "2022-01-01-preview",
+ "name": "[parameters('environment_name')]",
+ "location": "[parameters('location')]",
+ "properties": {
+ "daprAIInstrumentationKey": "[parameters('dapr_ai_instrumentation_key')]",
+ "appLogsConfiguration": {
+ "destination": "log-analytics",
+ "logAnalyticsConfiguration": {
+ "customerId": "[parameters('log_analytics_customer_id')]",
+ "sharedKey": "[parameters('log_analytics_shared_key')]"
+ }
+ }
+ }
+ }
+ ]
+}
+```
+
+# [YAML](#tab/yaml)
+
+YAML input isn't currently used by Azure CLI commands to specify a Container Apps environment.
+
+---
+## Container app
-## Resources
+The following tables describe the properties available in the container app resource.
-Entries in the `resources` array of the ARM template have the following properties:
+### Resource
+
+A container app resource of the ARM template has the following properties:
| Property | Description | Data type |
|---|---|---|
| `name` | The Container Apps application name. | string |
| `location` | The Azure region where the Container Apps instance is deployed. | string |
| `tags` | Collection of Azure tags associated with the container app. | array |
-| `type` | Always `Microsoft.Web/containerApps` ARM endpoint determines which API to forward to | string |
-
-> [!NOTE]
-> Azure Container Apps resources are in the process of migrating from the `Microsoft.Web` namespace to the `Microsoft.App` namespace. Refer to [Namespace migration from Microsoft.Web to Microsoft.App in March 2022](https://github.com/microsoft/azure-container-apps/issues/109) for more details.
+| `type` | `Microsoft.App/containerApps` – the ARM resource type | string |
In this example, you put your values in place of the placeholder tokens surrounded by `<>` brackets.
-## properties
+#### `properties`
A resource's `properties` object has the following properties:
A resource's `properties` object has the following properties:
The `environmentId` value takes the following form:

```console
-/subscriptions/<SUBSCRIPTION_ID>/resourcegroups/<RESOURCE_GROUP_NAME>/providers/Microsoft.Web/environmentId/<ENVIRONMENT_NAME>
+/subscriptions/<SUBSCRIPTION_ID>/resourcegroups/<RESOURCE_GROUP_NAME>/providers/Microsoft.App/managedEnvironments/<ENVIRONMENT_NAME>
```

In this example, you put your values in place of the placeholder tokens surrounded by `<>` brackets.
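For instance, a fully resolved value might look like this (hypothetical subscription ID and resource names):

```console
/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/my-resource-group/providers/Microsoft.App/managedEnvironments/my-environment
```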
-## properties.configuration
+#### `properties.configuration`
A resource's `properties.configuration` object has the following properties:

| Property | Description | Data type |
|---|---|---|
-| `activeRevisionsMode` | Setting to `multiple` allows you to maintain multiple revisions. Setting to `single` automatically deactivates old revisions, and only keeps the latest revision active. | string |
+| `activeRevisionsMode` | Setting to `single` automatically deactivates old revisions, and only keeps the latest revision active. Setting to `multiple` allows you to maintain multiple revisions. | string |
| `secrets` | Defines secret values in your container app. | object |
| `ingress` | Object that defines public accessibility configuration of a container app. | object |
| `registries` | Configuration object that references credentials for private container registries. Entries defined with `secretref` reference the secrets configuration object. | object |
+| `dapr` | Configuration object that defines the Dapr settings for the container app. | object |
Changes made to the `configuration` section are [application-scope changes](revisions.md#application-scope-changes), which don't trigger a new revision.
-## properties.template
+#### `properties.template`
A resource's `properties.template` object has the following properties:
A resource's `properties.template` object has the following properties:
| `revisionSuffix` | A friendly name for a revision. This value must be unique as the runtime rejects any conflicts with existing revision name suffix values. | string |
| `containers` | Configuration object that defines what container images are included in the container app. | object |
| `scale` | Configuration object that defines scale rules for the container app. | object |
-| `dapr` | Configuration object that defines the Dapr settings for the container app. | object |
Changes made to the `template` section are [revision-scope changes](revisions.md#revision-scope-changes), which trigger a new revision.
-## Examples
+### <a name="container-app-examples"></a>Examples
-The following is an example ARM template used to deploy a container app.
+# [ARM template](#tab/arm-template)
+
+The following example ARM template deploys a container app.
```json
{
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "containerappName": {
- "defaultValue": "mycontainerapp",
- "type": "String"
- },
- "location": {
- "defaultValue": "canadacentral",
- "type": "String"
- },
- "environment_name": {
- "defaultValue": "myenvironment",
- "type": "String"
- }
+ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "containerappName": {
+ "defaultValue": "mycontainerapp",
+ "type": "String"
+ },
+ "location": {
+ "defaultValue": "canadacentral",
+ "type": "String"
+ },
+ "environment_name": {
+ "defaultValue": "myenvironment",
+ "type": "String"
+ },
+ "container_image": {
+ "type": "String"
},
- "variables": {},
- "resources": [
- {
- "apiVersion": "2021-03-01",
- "type": "Microsoft.Web/containerApps",
- "name": "[parameters('containerappName')]",
- "location": "[parameters('location')]",
- "properties": {
- "kubeEnvironmentId": "[resourceId('Microsoft.Web/kubeEnvironments', parameters('environment_name'))]",
- "configuration": {
- "secrets": [
- {
- "name": "mysecret",
- "value": "thisismysecret"
- }
- ],
- "ingress": {
- "external": true,
- "targetPort": 80,
- "allowInsecure": false,
- "traffic": [
- {
- "latestRevision": true,
- "weight": 100
- }
- ]
- }
+ "registry_password": {
+ "type": "SecureString"
+ }
+ },
+ "variables": {},
+ "resources": [
+ {
+ "apiVersion": "2022-01-01-preview",
+ "type": "Microsoft.App/containerApps",
+ "name": "[parameters('containerappName')]",
+ "location": "[parameters('location')]",
+ "properties": {
+ "managedEnvironmentId": "[resourceId('Microsoft.App/managedEnvironments', parameters('environment_name'))]",
+ "configuration": {
+ "secrets": [
+ {
+ "name": "mysecret",
+ "value": "thisismysecret"
+ },
+ {
+ "name": "myregistrypassword",
+ "value": "[parameters('registry_password')]"
+ }
+ ],
+ "ingress": {
+ "external": true,
+ "targetPort": 80,
+ "allowInsecure": false,
+ "traffic": [
+ {
+ "latestRevision": true,
+ "weight": 100
+ }
+ ]
+ },
+ "registries": [
+ {
+ "server": "myregistry.azurecr.io",
+ "username": "[parameters('containerappName')]",
+ "passwordSecretRef": "myregistrypassword"
+ }
+ ],
+ "dapr": {
+ "appId": "[parameters('containerappName')]",
+ "appPort": 80,
+ "appProtocol": "http",
+ "enabled": true
+ }
+ },
+ "template": {
+ "revisionSuffix": "myrevision",
+ "containers": [
+ {
+ "name": "main",
+ "image": "[parameters('container_image')]",
+ "env": [
+ {
+ "name": "HTTP_PORT",
+ "value": "80"
},
- "template": {
- "revisionSuffix": "myrevision",
- "containers": [
- {
- "name": "nginx",
- "image": "nginx",
- "env": [
- {
- "name": "HTTP_PORT",
- "value": "80"
- },
- {
- "name": "SECRET_VAL",
- "secretRef": "mysecret"
- }
- ],
- "resources": {
- "cpu": 0.5,
- "memory": "1Gi"
- }
- }
- ],
- "scale": {
- "minReplicas": 1,
- "maxReplicas": 3
- }
+ {
+ "name": "SECRET_VAL",
+ "secretRef": "mysecret"
}
+ ],
+ "resources": {
+ "cpu": 0.5,
+ "memory": "1Gi"
+ }
}
+ ],
+ "scale": {
+ "minReplicas": 1,
+ "maxReplicas": 3
+ }
}
- ]
+ }
+ }
+ ]
}
```
-The following is an example YAML configuration used to deploy a container app.
+# [YAML](#tab/yaml)
+
+The following example YAML configuration deploys a container app when used with the `--yaml` parameter in the following Azure CLI commands:
+
+- [`az containerapp create`](/cli/azure/containerapp?view=azure-cli-latest&preserve-view=true#az-containerapp-create)
+- [`az containerapp update`](/cli/azure/containerapp?view=azure-cli-latest&preserve-view=true#az-containerapp-update)
+- [`az containerapp revision copy`](/cli/azure/containerapp?view=azure-cli-latest&preserve-view=true#az-containerapp-revision-copy)
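For example, once the configuration below is saved to a file (the file name here is an assumption), the create command could be invoked as follows (a sketch):

```azurecli
az containerapp create \
  --name mycontainerapp \
  --resource-group myresourcegroup \
  --yaml containerapp.yaml
```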
```yaml
kind: containerapp
location: northeurope
name: mycontainerapp
resourceGroup: myresourcegroup
-type: Microsoft.Web/containerApps
+type: Microsoft.App/containerApps
tags:
- tagname: value
+ tagname: value
properties:
- kubeEnvironmentId: /subscriptions/mysubscription/resourceGroups/myresourcegroup/providers/Microsoft.Web/kubeEnvironments/myenvironment
- configuration:
- activeRevisionsMode: Multiple
- secrets:
- - name: mysecret
- value: thisismysecret
- ingress:
- external: True
- allowInsecure: false
- targetPort: 80
- traffic:
- - latestRevision: true
- weight: 100
- transport: Auto
- template:
- revisionSuffix: myrevision
- containers:
- - image: nginx
- name: nginx
- env:
+ managedEnvironmentId: /subscriptions/mysubscription/resourceGroups/myresourcegroup/providers/Microsoft.App/managedEnvironments/myenvironment
+ configuration:
+ activeRevisionsMode: Multiple
+ secrets:
+ - name: mysecret
+ value: thisismysecret
+ - name: myregistrypassword
+ value: I<3containerapps
+ ingress:
+ external: true
+ allowInsecure: false
+ targetPort: 80
+ traffic:
+ - latestRevision: true
+ weight: 100
+ transport: Auto
+ registries:
+ - passwordSecretRef: myregistrypassword
+ server: myregistry.azurecr.io
+      username: myregistry
+ dapr:
+ appId: mycontainerapp
+ appPort: 80
+ appProtocol: http
+ enabled: true
+ template:
+ revisionSuffix: myrevision
+ containers:
+ - image: nginx
+ name: nginx
+ env:
        - name: HTTP_PORT
          value: 80
        - name: secret_name
          secretRef: mysecret
- resources:
- cpu: 0.5
- memory: 1Gi
- scale:
- minReplicas: 1
- maxReplicas: 1
+ resources:
+ cpu: 0.5
+ memory: 1Gi
+ scale:
+ minReplicas: 1
+ maxReplicas: 3
```
+
+---
container-apps Background Processing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/background-processing.md
Individual container apps are deployed to an Azure Container Apps environment. T
az containerapp env create \
  --name $CONTAINERAPPS_ENVIRONMENT \
  --resource-group $RESOURCE_GROUP \
- --logs-workspace-id $LOG_ANALYTICS_WORKSPACE_CLIENT_ID \
- --logs-workspace-key $LOG_ANALYTICS_WORKSPACE_CLIENT_SECRET \
  --location "$LOCATION"
```
az containerapp env create \
az containerapp env create `
  --name $CONTAINERAPPS_ENVIRONMENT `
  --resource-group $RESOURCE_GROUP `
- --logs-workspace-id $LOG_ANALYTICS_WORKSPACE_CLIENT_ID `
- --logs-workspace-key $LOG_ANALYTICS_WORKSPACE_CLIENT_SECRET `
  --location $LOCATION
```
Create a file named *queue.json* and paste the following configuration code into
"resources": [ { "name": "queuereader",
- "type": "Microsoft.Web/containerApps",
- "apiVersion": "2021-03-01",
+ "type": "Microsoft.App/containerApps",
+ "apiVersion": "2022-01-01-preview",
"kind": "containerapp", "location": "[parameters('location')]", "properties": {
- "kubeEnvironmentId": "[resourceId('Microsoft.Web/kubeEnvironments', parameters('environment_name'))]",
+ "managedEnvironmentId": "[resourceId('Microsoft.App/managedEnvironments', parameters('environment_name'))]",
"configuration": { "activeRevisionsMode": "single", "secrets": [
Run the following command to see logged messages. This command requires the Log
# [Bash](#tab/bash)

```azurecli
+LOG_ANALYTICS_WORKSPACE_CLIENT_ID=`az containerapp env show --name $CONTAINERAPPS_ENVIRONMENT --resource-group $RESOURCE_GROUP --query properties.appLogsConfiguration.logAnalyticsConfiguration.customerId --out tsv`
+
az monitor log-analytics query \
  --workspace $LOG_ANALYTICS_WORKSPACE_CLIENT_ID \
  --analytics-query "ContainerAppConsoleLogs_CL | where ContainerAppName_s == 'queuereader' and Log_s contains 'Message ID'" \
az monitor log-analytics query \
# [PowerShell](#tab/powershell)

```powershell
+$LOG_ANALYTICS_WORKSPACE_CLIENT_ID=(az containerapp env show --name $CONTAINERAPPS_ENVIRONMENT --resource-group $RESOURCE_GROUP --query properties.appLogsConfiguration.logAnalyticsConfiguration.customerId --out tsv)
+
$queryResults = Invoke-AzOperationalInsightsQuery -WorkspaceId $LOG_ANALYTICS_WORKSPACE_CLIENT_ID -Query "ContainerAppConsoleLogs_CL | where ContainerAppName_s == 'queuereader' and Log_s contains 'Message ID'"
$queryResults.Results
```
container-apps Get Started Existing Container Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/get-started-existing-container-image.md
Previously updated : 12/16/2021 Last updated : 03/21/2022 zone_pivot_groups: container-apps-registry-types
To create the environment, run the following command:
az containerapp env create \
  --name $CONTAINERAPPS_ENVIRONMENT \
  --resource-group $RESOURCE_GROUP \
- --logs-workspace-id $LOG_ANALYTICS_WORKSPACE_CLIENT_ID \
- --logs-workspace-key $LOG_ANALYTICS_WORKSPACE_CLIENT_SECRET \
  --location $LOCATION
```
az containerapp env create \
az containerapp env create `
  --name $CONTAINERAPPS_ENVIRONMENT `
  --resource-group $RESOURCE_GROUP `
- --logs-workspace-id $LOG_ANALYTICS_WORKSPACE_CLIENT_ID `
- --logs-workspace-key $LOG_ANALYTICS_WORKSPACE_CLIENT_SECRET `
  --location $LOCATION
```
For details on how to provide values for any of these parameters to the `create`
```bash
CONTAINER_IMAGE_NAME=<CONTAINER_IMAGE_NAME>
-REGISTRY_LOGIN_SERVER=<REGISTRY_LOGIN_URL>
+REGISTRY_SERVER=<REGISTRY_SERVER>
REGISTRY_USERNAME=<REGISTRY_USERNAME>
REGISTRY_PASSWORD=<REGISTRY_PASSWORD>
```
az containerapp create \
--resource-group $RESOURCE_GROUP \ --image $CONTAINER_IMAGE_NAME \ --environment $CONTAINERAPPS_ENVIRONMENT \
- --registry-login-server $REGISTRY_LOGIN_SERVER \
+ --registry-server $REGISTRY_SERVER \
--registry-username $REGISTRY_USERNAME \ --registry-password $REGISTRY_PASSWORD ```
az containerapp create \
```powershell
$CONTAINER_IMAGE_NAME=<CONTAINER_IMAGE_NAME>
-$REGISTRY_LOGIN_SERVER=<REGISTRY_LOGIN_URL>
+$REGISTRY_SERVER=<REGISTRY_SERVER>
$REGISTRY_USERNAME=<REGISTRY_USERNAME>
$REGISTRY_PASSWORD=<REGISTRY_PASSWORD>
```
az containerapp create `
--resource-group $RESOURCE_GROUP ` --image $CONTAINER_IMAGE_NAME ` --environment $CONTAINERAPPS_ENVIRONMENT `
- --registry-login-server $REGISTRY_LOGIN_SERVER `
+ --registry-server $REGISTRY_SERVER `
--registry-username $REGISTRY_USERNAME ` --registry-password $REGISTRY_PASSWORD ```
az containerapp create `
```azurecli
az containerapp create \
- --image <REGISTRY_CONTAINER_URL> \
+ --image <REGISTRY_CONTAINER_NAME> \
  --name my-container-app \
  --resource-group $RESOURCE_GROUP \
  --environment $CONTAINERAPPS_ENVIRONMENT
az containerapp create \
```azurecli
az containerapp create `
- --image <REGISTRY_CONTAINER_URL> `
+ --image <REGISTRY_CONTAINER_NAME> `
  --name my-container-app `
  --resource-group $RESOURCE_GROUP `
  --environment $CONTAINERAPPS_ENVIRONMENT
az containerapp create `
-Before you run this command, replace `<REGISTRY_CONTAINER_URL>` with the URL to the public container registry location including tag.
+Before you run this command, replace `<REGISTRY_CONTAINER_NAME>` with the full name of the public container registry location, including the registry path and tag. For example, a valid container name is `mcr.microsoft.com/azuredocs/containerapps-helloworld:latest`.
::: zone-end
-If you have enabled ingress on your container app, you can add `--query configuration.ingress.fqdn` to the `create` command to return the public URL for the application.
+If you have enabled ingress on your container app, you can add `--query properties.configuration.ingress.fqdn` to the `create` command to return the public URL for the application.
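For example, combined with the flags shown elsewhere in this article, the full command might look like the following sketch, which uses the public hello-world image:

```azurecli
az containerapp create \
  --name my-container-app \
  --resource-group $RESOURCE_GROUP \
  --environment $CONTAINERAPPS_ENVIRONMENT \
  --image mcr.microsoft.com/azuredocs/containerapps-helloworld:latest \
  --target-port 80 \
  --ingress 'external' \
  --query properties.configuration.ingress.fqdn
```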
## Verify deployment
After about 5-10 minutes has passed, use the following steps to view logged mess
# [Bash](#tab/bash)

```azurecli
+LOG_ANALYTICS_WORKSPACE_CLIENT_ID=`az containerapp env show --name $CONTAINERAPPS_ENVIRONMENT --resource-group $RESOURCE_GROUP --query properties.appLogsConfiguration.logAnalyticsConfiguration.customerId --out tsv`
+
az monitor log-analytics query \
  --workspace $LOG_ANALYTICS_WORKSPACE_CLIENT_ID \
  --analytics-query "ContainerAppConsoleLogs_CL | where ContainerAppName_s == 'my-container-app' | project ContainerAppName_s, Log_s, TimeGenerated" \
az monitor log-analytics query \
# [PowerShell](#tab/powershell)

```powershell
+$LOG_ANALYTICS_WORKSPACE_CLIENT_ID=(az containerapp env show --name $CONTAINERAPPS_ENVIRONMENT --resource-group $RESOURCE_GROUP --query properties.appLogsConfiguration.logAnalyticsConfiguration.customerId --out tsv)
+
$queryResults = Invoke-AzOperationalInsightsQuery -WorkspaceId $LOG_ANALYTICS_WORKSPACE_CLIENT_ID -Query "ContainerAppConsoleLogs_CL | where ContainerAppName_s == 'nodeapp' and (Log_s contains 'persisted' or Log_s contains 'order') | project ContainerAppName_s, Log_s, TimeGenerated | take 5"
$queryResults.Results
container-apps Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/get-started.md
Previously updated : 11/02/2021 Last updated : 03/21/2022 ms.devlang: azurecli
To create the environment, run the following command:
az containerapp env create \
  --name $CONTAINERAPPS_ENVIRONMENT \
  --resource-group $RESOURCE_GROUP \
- --logs-workspace-id $LOG_ANALYTICS_WORKSPACE_CLIENT_ID \
- --logs-workspace-key $LOG_ANALYTICS_WORKSPACE_CLIENT_SECRET \
  --location $LOCATION
```
az containerapp env create \
az containerapp env create `
  --name $CONTAINERAPPS_ENVIRONMENT `
  --resource-group $RESOURCE_GROUP `
- --logs-workspace-id $LOG_ANALYTICS_WORKSPACE_CLIENT_ID `
- --logs-workspace-key $LOG_ANALYTICS_WORKSPACE_CLIENT_SECRET `
  --location $LOCATION
```
az containerapp create \
  --image mcr.microsoft.com/azuredocs/containerapps-helloworld:latest \
  --target-port 80 \
  --ingress 'external' \
- --query configuration.ingress.fqdn
+ --query properties.configuration.ingress.fqdn
```

# [PowerShell](#tab/powershell)
az containerapp create `
  --image mcr.microsoft.com/azuredocs/containerapps-helloworld:latest `
  --target-port 80 `
  --ingress 'external' `
- --query configuration.ingress.fqdn
+ --query properties.configuration.ingress.fqdn
```
container-apps Manage Secrets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/manage-secrets.md
az containerapp create \
  --environment "my-environment-name" \
  --image demos/myQueueApp:v1 \
  --secrets "queue-connection-string=$CONNECTIONSTRING" \
- --environment-variables "QueueName=myqueue,ConnectionString=secretref:queue-connection-string"
+ --env-vars "QueueName=myqueue" "ConnectionString=secretref:queue-connection-string"
```

Here, the environment variable named `ConnectionString` gets its value from the application-level `queue-connection-string` secret by using `secretref`.
az containerapp create `
  --environment "my-environment-name" `
  --image demos/myQueueApp:v1 `
  --secrets "queue-connection-string=$CONNECTIONSTRING" `
- --environment-variables "QueueName=myqueue,ConnectionString=secretref:queue-connection-string"
+ --env-vars "QueueName=myqueue" "ConnectionString=secretref:queue-connection-string"
```

Here, the environment variable named `ConnectionString` gets its value from the application-level `queue-connection-string` secret by using `secretref`.
container-apps Microservices Dapr Azure Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/microservices-dapr-azure-resource-manager.md
Title: 'Tutorial: Deploy a Dapr application to Azure Container Apps with an ARM or Bicep template'
+ Title: "Tutorial: Deploy a Dapr application to Azure Container Apps with an ARM or Bicep template"
description: Deploy a Dapr application to Azure Container Apps with an ARM or Bicep template. Last updated 01/31/2022-+ zone_pivot_groups: container-apps
zone_pivot_groups: container-apps
You learn how to:

> [!div class="checklist"]
-> * Create a Container Apps environment for your container apps
-> * Create an Azure Blob Storage state store for the container app
-> * Deploy two apps that a produce and consume messages and persist them with the state store
+> * Create an Azure Blob Storage account for use as a Dapr state store
+> * Deploy a Container Apps environment to host container apps
+> * Deploy two Dapr-enabled container apps: one that produces orders and one that consumes orders and stores them
> * Verify the interaction between the two microservices.

With Azure Container Apps, you get a fully managed version of the Dapr APIs when building microservices. When you use Dapr in Azure Container Apps, you can enable sidecars to run next to your microservices that provide a rich set of capabilities. Available Dapr APIs include [Service to Service calls](https://docs.dapr.io/developing-applications/building-blocks/service-invocation/), [Pub/Sub](https://docs.dapr.io/developing-applications/building-blocks/pubsub/), [Event Bindings](https://docs.dapr.io/developing-applications/building-blocks/bindings/), [State Stores](https://docs.dapr.io/developing-applications/building-blocks/state-management/), and [Actors](https://docs.dapr.io/developing-applications/building-blocks/actors/).
-In this tutorial, you deploy the same applications from the Dapr [Hello World](https://github.com/dapr/quickstarts/tree/master/tutorials/hello-kubernetes) quickstart.
+In this tutorial, you deploy the same applications from the Dapr [Hello World](https://github.com/dapr/quickstarts/tree/master/tutorials/hello-world) quickstart.
The application consists of:
-* A client (Python) container app to generate messages.
-* A service (Node) container app to consume and persist those messages in a state store
+- A client (Python) container app that generates messages.
+- A service (Node) container app that consumes and persists those messages in a state store
The following architecture diagram illustrates the components that make up this tutorial:
The following architecture diagram illustrates the components that make up this
## Prerequisites
-* Install [Azure CLI](/cli/azure/install-azure-cli)
+- Install [Azure CLI](/cli/azure/install-azure-cli)
::: zone pivot="container-apps-bicep"
-* [Bicep](../azure-resource-manager/bicep/install.md)
+- [Bicep](../azure-resource-manager/bicep/install.md)
::: zone-end
-* An Azure account with an active subscription is required. If you don't already have one, you can [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- An Azure account with an active subscription is required. If you don't already have one, you can [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
## Before you begin
This guide uses the following environment variables:
RESOURCE_GROUP="my-containerapps"
LOCATION="canadacentral"
CONTAINERAPPS_ENVIRONMENT="containerapps-env"
-LOG_ANALYTICS_WORKSPACE="containerapps-logs"
STORAGE_ACCOUNT_CONTAINER="mycontainer" ```
STORAGE_ACCOUNT_CONTAINER="mycontainer"
$RESOURCE_GROUP="my-containerapps"
$LOCATION="canadacentral"
$CONTAINERAPPS_ENVIRONMENT="containerapps-env"
-$LOG_ANALYTICS_WORKSPACE="containerapps-logs"
$STORAGE_ACCOUNT_CONTAINER="mycontainer"
```

# [Bash](#tab/bash)

```bash
$STORAGE_ACCOUNT="<storage account name>"
-Choose a name for `STORAGE_ACCOUNT`. Storage account names must be *unique within Azure*. Be from 3 to 24 characters in length and contain numbers and lowercase letters only.
+Choose a name for `STORAGE_ACCOUNT`. Storage account names must be _unique within Azure_, from 3 to 24 characters in length, and contain only numbers and lowercase letters.
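For example, one quick way to pick a name that's likely to be unique is to append a random suffix (a sketch using Bash's built-in `$RANDOM`; any scheme that satisfies the naming rules works):

```bash
# Produces a name like "containerappsstore12345": under 24 characters, lowercase letters and digits only.
STORAGE_ACCOUNT="containerappsstore$RANDOM"
```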
## Setup
az upgrade
Next, install the Azure Container Apps extension for the Azure CLI.
+> [!NOTE]
+> If you have worked with earlier versions of Container Apps, make sure to first remove the old extension version by running `az extension remove -n containerapp`.
+
# [Bash](#tab/bash)

```azurecli
-az extension add \
- --source https://workerappscliextension.blob.core.windows.net/azure-cli-extension/containerapp-0.2.4-py2.py3-none-any.whl
+az extension add --name containerapp
```

# [PowerShell](#tab/powershell)

```azurecli
-az extension add `
- --source https://workerappscliextension.blob.core.windows.net/azure-cli-extension/containerapp-0.2.4-py2.py3-none-any.whl
+az extension add --name containerapp
```
-Now that the extension is installed, register the `Microsoft.Web` namespace.
+Now that the extension is installed, register the `Microsoft.App` namespace.
> [!NOTE]
> Azure Container Apps resources are in the process of migrating from the `Microsoft.Web` namespace to the `Microsoft.App` namespace. Refer to [Namespace migration from Microsoft.Web to Microsoft.App in March 2022](https://github.com/microsoft/azure-container-apps/issues/109) for more details.
Now that the extension is installed, register the `Microsoft.Web` namespace.
# [Bash](#tab/bash)

```azurecli
-az provider register --namespace Microsoft.Web
+az provider register --namespace Microsoft.App
```

# [PowerShell](#tab/powershell)

```powershell
-Register-AzResourceProvider -ProviderNamespace Microsoft.Web
+Register-AzResourceProvider -ProviderNamespace Microsoft.App
```
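Registration can take a few minutes. If you want to confirm that it finished before continuing (an optional check, not part of the original walkthrough), query the provider's state:

```azurecli
az provider show --namespace Microsoft.App --query registrationState --out tsv
```

The command prints `Registered` once the namespace is available.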
-Create a resource group to organize the services related to your new container app.
+Create a resource group to organize the services related to your container apps.
# [Bash](#tab/bash)
New-AzResourceGroup -Name $RESOURCE_GROUP -Location $LOCATION
-With the CLI upgraded and a new resource group available, you can create a Container Apps environment and deploy your container app.
-
-## Create an environment
-
-The Azure Container Apps environment acts as a secure boundary around a group of container apps. Container Apps deployed to the same environment share a virtual network and write logs to the same Log Analytics workspace.
-
-Your container apps are monitored with Azure Log Analytics, which is required when you create a Container Apps environment.
-
-Create a Log Analytics workspace with the following command:
-
-# [Bash](#tab/bash)
-
-```azurecli
-az monitor log-analytics workspace create \
- --resource-group $RESOURCE_GROUP \
- --workspace-name $LOG_ANALYTICS_WORKSPACE
-```
-
-# [PowerShell](#tab/powershell)
-
-```powershell
-New-AzOperationalInsightsWorkspace `
- -Location $LOCATION `
- -Name $LOG_ANALYTICS_WORKSPACE `
- -ResourceGroupName $RESOURCE_GROUP
-```
---
-Next, retrieve the Log Analytics Client ID and client secret.
-
-# [Bash](#tab/bash)
-
-Make sure to run each query separately to give enough time for the request to complete.
-
-```azurecli
-LOG_ANALYTICS_WORKSPACE_CLIENT_ID=`az monitor log-analytics workspace show --query customerId -g $RESOURCE_GROUP -n $LOG_ANALYTICS_WORKSPACE -o tsv | tr -d '[:space:]'`
-```
-
-```azurecli
-LOG_ANALYTICS_WORKSPACE_CLIENT_SECRET=`az monitor log-analytics workspace get-shared-keys --query primarySharedKey -g $RESOURCE_GROUP -n $LOG_ANALYTICS_WORKSPACE -o tsv | tr -d '[:space:]'`
-```
-
-# [PowerShell](#tab/powershell)
-
-Make sure to run each query separately to give enough time for the request to complete.
-
-```powershell
-$LOG_ANALYTICS_WORKSPACE_CLIENT_ID=(Get-AzOperationalInsightsWorkspace -ResourceGroupName $RESOURCE_GROUP -Name $LOG_ANALYTICS_WORKSPACE).CustomerId
-```
-
-<! This was taken out because of a breaking changes warning. We should put it back after it's fixed. Until then we'll go with the az command
-$LOG_ANALYTICS_WORKSPACE_CLIENT_SECRET=(Get-AzOperationalInsightsWorkspaceSharedKey -ResourceGroupName $RESOURCE_GROUP -Name $LOG_ANALYTICS_WORKSPACE).PrimarySharedKey
->
-
-```azurecli
-$LOG_ANALYTICS_WORKSPACE_CLIENT_SECRET=(az monitor log-analytics workspace get-shared-keys --query primarySharedKey -g $RESOURCE_GROUP -n $LOG_ANALYTICS_WORKSPACE --out tsv)
-```
---
-Individual container apps are deployed to an Azure Container Apps environment. To create the environment, run the following command:
-
-# [Bash](#tab/bash)
-
-```azurecli
-az containerapp env create \
- --name $CONTAINERAPPS_ENVIRONMENT \
- --resource-group $RESOURCE_GROUP \
- --logs-workspace-id $LOG_ANALYTICS_WORKSPACE_CLIENT_ID \
- --logs-workspace-key $LOG_ANALYTICS_WORKSPACE_CLIENT_SECRET \
- --location "$LOCATION"
-```
-
-# [PowerShell](#tab/powershell)
-
-```azurecli
-az containerapp env create `
- --name $CONTAINERAPPS_ENVIRONMENT `
- --resource-group $RESOURCE_GROUP `
- --logs-workspace-id $LOG_ANALYTICS_WORKSPACE_CLIENT_ID `
- --logs-workspace-key $LOG_ANALYTICS_WORKSPACE_CLIENT_SECRET `
- --location "$LOCATION"
-```
---

## Set up a state store

### Create an Azure Blob Storage account
New-AzStorageAccount -ResourceGroupName $RESOURCE_GROUP `
Once your Azure Blob Storage account is created, the following values are needed for subsequent steps in this tutorial.
-* `storage_account_name` is the value of the `STORAGE_ACCOUNT` variable.
+- `storage_account_name` is the value of the `STORAGE_ACCOUNT` variable.
-* `storage_container_name` is the value of the `STORAGE_ACCOUNT_CONTAINER`variable.
+- `storage_container_name` is the value of the `STORAGE_ACCOUNT_CONTAINER` variable.
Dapr creates a container with this name when it doesn't already exist in your Azure Storage account.
-Get the storage account key with the following command:
-
-# [Bash](#tab/bash)
-
-```azurecli
-STORAGE_ACCOUNT_KEY=`az storage account keys list --resource-group $RESOURCE_GROUP --account-name $STORAGE_ACCOUNT --query '[0].value' --out tsv`
-```
-
-# [PowerShell](#tab/powershell)
-
-```powershell
-$STORAGE_ACCOUNT_KEY=(Get-AzStorageAccountKey -ResourceGroupName $RESOURCE_GROUP -AccountName $STORAGE_ACCOUNT)| Where-Object -Property KeyName -Contains 'key1' | Select-Object -ExpandProperty Value
-```
---

::: zone pivot="container-apps-arm"
-### Create Azure Resource Manager (ARM) templates
-
-Create two ARM templates.
+### Create Azure Resource Manager (ARM) template
-Each ARM template has a container app definition and a Dapr component definition.
+Create an ARM template to deploy a Container Apps environment, including the associated Log Analytics workspace and Application Insights resource for distributed tracing, a Dapr component for the state store, and the two Dapr-enabled container apps.
-The following example shows how your ARM template should look when configured for your Azure Blob Storage account.
-
-Save the following file as *serviceapp.json*:
+Save the following file as _hello-world.json_:
```json
{
- "$schema": "https://schema.management.azure.com/schemas/2019-08-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "location": {
- "defaultValue": "canadacentral",
- "type": "String"
- },
- "environment_name": {
- "type": "String"
- },
- "storage_account_name": {
- "type": "String"
- },
- "storage_account_key": {
- "type": "String"
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "environment_name": {
+ "type": "string"
+ },
+ "location": {
+ "defaultValue": "canadacentral",
+ "type": "string"
+ },
+ "storage_account_name": {
+ "type": "string"
+ },
+ "storage_container_name": {
+ "type": "string"
+ }
+ },
+ "variables": {
+ "logAnalyticsWorkspaceName": "[concat('logs-', parameters('environment_name'))]",
+ "appInsightsName": "[concat('appins-', parameters('environment_name'))]"
+ },
+ "resources": [
+ {
+ "type": "Microsoft.OperationalInsights/workspaces",
+ "apiVersion": "2020-03-01-preview",
+ "name": "[variables('logAnalyticsWorkspaceName')]",
+ "location": "[parameters('location')]",
+ "properties": {
+ "retentionInDays": 30,
+ "features": {
+ "searchVersion": 1
},
- "storage_container_name": {
- "type": "String"
+ "sku": {
+ "name": "PerGB2018"
}
+ }
},
- "variables": {},
- "resources": [
+ {
+ "type": "Microsoft.Insights/components",
+ "apiVersion": "2020-02-02",
+ "name": "[variables('appInsightsName')]",
+ "location": "[parameters('location')]",
+ "kind": "web",
+ "dependsOn": [
+ "[resourceId('Microsoft.OperationalInsights/workspaces/', variables('logAnalyticsWorkspaceName'))]"
+ ],
+ "properties": {
+ "Application_Type": "web",
+ "WorkspaceResourceId": "[resourceId('Microsoft.OperationalInsights/workspaces/', variables('logAnalyticsWorkspaceName'))]"
+ }
+ },
+ {
+ "type": "Microsoft.App/managedEnvironments",
+ "apiVersion": "2022-01-01-preview",
+ "name": "[parameters('environment_name')]",
+ "location": "[parameters('location')]",
+ "dependsOn": [
+ "[resourceId('Microsoft.Insights/components/', variables('appInsightsName'))]"
+ ],
+ "properties": {
+ "daprAIInstrumentationKey": "[reference(resourceId('Microsoft.Insights/components/', variables('appInsightsName')), '2020-02-02').InstrumentationKey]",
+ "appLogsConfiguration": {
+ "destination": "log-analytics",
+ "logAnalyticsConfiguration": {
+ "customerId": "[reference(resourceId('Microsoft.OperationalInsights/workspaces/', variables('logAnalyticsWorkspaceName')), '2020-03-01-preview').customerId]",
+ "sharedKey": "[listKeys(resourceId('Microsoft.OperationalInsights/workspaces/', variables('logAnalyticsWorkspaceName')), '2020-03-01-preview').primarySharedKey]"
+ }
+ }
+ },
+ "resources": [
{
- "name": "nodeapp",
- "type": "Microsoft.Web/containerApps",
- "apiVersion": "2021-03-01",
- "kind": "containerapp",
- "location": "[parameters('location')]",
- "properties": {
- "kubeEnvironmentId": "[resourceId('Microsoft.Web/kubeEnvironments', parameters('environment_name'))]",
- "configuration": {
- "ingress": {
- "external": true,
- "targetPort": 3000
- },
- "secrets": [
- {
- "name": "storage-key",
- "value": "[parameters('storage_account_key')]"
- }
- ]
- },
- "template": {
- "containers": [
- {
- "image": "dapriosamples/hello-k8s-node:latest",
- "name": "hello-k8s-node",
- "resources": {
- "cpu": 0.5,
- "memory": "1Gi"
- }
- }
- ],
- "scale": {
- "minReplicas": 1,
- "maxReplicas": 1
- },
- "dapr": {
- "enabled": true,
- "appPort": 3000,
- "appId": "nodeapp",
- "components": [
- {
- "name": "statestore",
- "type": "state.azure.blobstorage",
- "version": "v1",
- "metadata": [
- {
- "name": "accountName",
- "value": "[parameters('storage_account_name')]"
- },
- {
- "name": "accountKey",
- "secretRef": "storage-key"
- },
- {
- "name": "containerName",
- "value": "[parameters('storage_container_name')]"
- }
- ]
- }
- ]
- }
- }
+ "type": "daprComponents",
+ "name": "statestore",
+ "apiVersion": "2022-01-01-preview",
+ "dependsOn": [
+ "[resourceId('Microsoft.App/managedEnvironments/', parameters('environment_name'))]"
+ ],
+ "properties": {
+ "componentType": "state.azure.blobstorage",
+ "version": "v1",
+ "ignoreErrors": false,
+ "initTimeout": "5s",
+ "secrets": [
+ {
+ "name": "storageaccountkey",
+ "value": "[listKeys(resourceId('Microsoft.Storage/storageAccounts/', parameters('storage_account_name')), '2021-09-01').keys[0].value]"
+ }
+ ],
+ "metadata": [
+ {
+ "name": "accountName",
+ "value": "[parameters('storage_account_name')]"
+ },
+ {
+ "name": "containerName",
+ "value": "[parameters('storage_container_name')]"
+ },
+ {
+ "name": "accountKey",
+ "secretRef": "storageaccountkey"
+ }
+ ],
+ "scopes": ["nodeapp"]
+ }
+ }
+ ]
+ },
+ {
+ "type": "Microsoft.App/containerApps",
+ "apiVersion": "2022-01-01-preview",
+ "name": "nodeapp",
+ "location": "[parameters('location')]",
+ "dependsOn": [
+ "[resourceId('Microsoft.App/managedEnvironments/', parameters('environment_name'))]"
+ ],
+ "properties": {
+ "managedEnvironmentId": "[resourceId('Microsoft.App/managedEnvironments/', parameters('environment_name'))]",
+ "configuration": {
+ "ingress": {
+ "external": true,
+ "targetPort": 3000
+ },
+ "dapr": {
+ "enabled": true,
+ "appId": "nodeapp",
+            "appProtocol": "http",
+ "appPort": 3000
+ }
+ },
+ "template": {
+ "containers": [
+ {
+ "image": "dapriosamples/hello-k8s-node:latest",
+ "name": "hello-k8s-node",
+ "resources": {
+ "cpu": 0.5,
+ "memory": "1.0Gi"
+ }
}
+ ],
+ "scale": {
+ "minReplicas": 1,
+ "maxReplicas": 1
+ }
}
- ]
+ }
+ },
+ {
+ "type": "Microsoft.App/containerApps",
+ "apiVersion": "2022-01-01-preview",
+ "name": "pythonapp",
+ "location": "[parameters('location')]",
+ "dependsOn": [
+ "[resourceId('Microsoft.App/managedEnvironments/', parameters('environment_name'))]",
+ "[resourceId('Microsoft.App/containerApps/', 'nodeapp')]"
+ ],
+ "properties": {
+ "managedEnvironmentId": "[resourceId('Microsoft.App/managedEnvironments/', parameters('environment_name'))]",
+ "configuration": {
+ "dapr": {
+ "enabled": true,
+ "appId": "pythonapp"
+ }
+ },
+ "template": {
+ "containers": [
+ {
+ "image": "dapriosamples/hello-k8s-python:latest",
+ "name": "hello-k8s-python",
+ "resources": {
+ "cpu": 0.5,
+ "memory": "1.0Gi"
+ }
+ }
+ ],
+ "scale": {
+ "minReplicas": 1,
+ "maxReplicas": 1
+ }
+ }
+ }
+ }
+ ]
}
```
Save the following file as *serviceapp.json*:
### Create Azure Bicep templates
-Create two Bicep templates.
-
-Each Bicep template contains a container app definition and a Dapr component definition.
-
-The following example shows how your Bicep template should look when configured for your Azure Blob Storage account.
+Create a Bicep template to deploy a Container Apps environment, including the associated Log Analytics workspace and Application Insights resource for distributed tracing, a Dapr component for the state store, and the two Dapr-enabled container apps.
-Save the following file as *serviceapp.bicep*:
+Save the following file as _hello-world.bicep_:
```bicep
-param location string = 'canadacentral'
param environment_name string
+param location string = 'canadacentral'
param storage_account_name string
-param storage_account_key string
param storage_container_name string
-resource nodeapp 'Microsoft.Web/containerapps@2021-03-01' = {
+var logAnalyticsWorkspaceName = 'logs-${environment_name}'
+var appInsightsName = 'appins-${environment_name}'
+
+resource logAnalyticsWorkspace 'Microsoft.OperationalInsights/workspaces@2020-03-01-preview' = {
+ name: logAnalyticsWorkspaceName
+ location: location
+ properties: any({
+ retentionInDays: 30
+ features: {
+ searchVersion: 1
+ }
+ sku: {
+ name: 'PerGB2018'
+ }
+ })
+}
+
+resource appInsights 'Microsoft.Insights/components@2020-02-02' = {
+ name: appInsightsName
+ location: location
+ kind: 'web'
+ properties: {
+ Application_Type: 'web'
+ WorkspaceResourceId: logAnalyticsWorkspace.id
+ }
+}
+
+resource environment 'Microsoft.App/managedEnvironments@2022-01-01-preview' = {
+ name: environment_name
+ location: location
+ properties: {
+ daprAIInstrumentationKey: reference(appInsights.id, '2020-02-02').InstrumentationKey
+ appLogsConfiguration: {
+ destination: 'log-analytics'
+ logAnalyticsConfiguration: {
+ customerId: reference(logAnalyticsWorkspace.id, '2020-03-01-preview').customerId
+ sharedKey: listKeys(logAnalyticsWorkspace.id, '2020-03-01-preview').primarySharedKey
+ }
+ }
+ }
+ resource daprComponent 'daprComponents@2022-01-01-preview' = {
+ name: 'statestore'
+ properties: {
+ componentType: 'state.azure.blobstorage'
+ version: 'v1'
+ ignoreErrors: false
+ initTimeout: '5s'
+ secrets: [
+ {
+ name: 'storageaccountkey'
+ value: listKeys(resourceId('Microsoft.Storage/storageAccounts/', storage_account_name), '2021-09-01').keys[0].value
+ }
+ ]
+ metadata: [
+ {
+ name: 'accountName'
+ value: storage_account_name
+ }
+ {
+ name: 'containerName'
+ value: storage_container_name
+ }
+ {
+ name: 'accountKey'
+ secretRef: 'storageaccountkey'
+ }
+ ]
+ scopes: [
+ 'nodeapp'
+ ]
+ }
+ }
+}
+
+resource nodeapp 'Microsoft.App/containerApps@2022-01-01-preview' = {
name: 'nodeapp'
- kind: 'containerapp'
  location: location
  properties: {
- kubeEnvironmentId: resourceId('Microsoft.Web/kubeEnvironments', environment_name)
+ managedEnvironmentId: environment.id
    configuration: {
      ingress: {
        external: true
        targetPort: 3000
      }
- secrets: [
- {
- name: 'storage-key'
- value: storage_account_key
- }
- ]
+ dapr: {
+ enabled: true
+ appId: 'nodeapp'
+ appProtocol: 'http'
+ appPort: 3000
+ }
    }
    template: {
      containers: [
resource nodeapp 'Microsoft.Web/containerapps@2021-03-01' = {
          name: 'hello-k8s-node'
          resources: {
            cpu: '0.5'
- memory: '1Gi'
+ memory: '1.0Gi'
          }
        }
      ]
      scale: {
        minReplicas: 1
        maxReplicas: 1
      }
- dapr: {
- enabled: true
- appPort: 3000
- appId: 'nodeapp'
- components: [
- {
- name: 'statestore'
- type: 'state.azure.blobstorage'
- version: 'v1'
- metadata: [
- {
- name: 'accountName'
- value: storage_account_name
- }
- {
- name: 'accountKey'
- secretRef: 'storage-key'
- }
- {
- name: 'containerName'
- value: storage_container_name
- }
- ]
- }
- ]
- }
    }
  }
}
-```
--
-> [!NOTE]
-> Container Apps does not currently support the native [Dapr components schema](https://docs.dapr.io/operations/components/component-schema/). The above example uses the supported schema.
->
-> In a production-grade application, follow [secret management](https://docs.dapr.io/operations/components/component-secrets) instructions to securely manage your secrets.
--
-Save the following file as *clientapp.json*:
-
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2019-08-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "location": {
- "defaultValue": "canadacentral",
- "type": "String"
- },
- "environment_name": {
- "type": "String"
- }
- },
- "variables": {},
- "resources": [
- {
- "name": "pythonapp",
- "type": "Microsoft.Web/containerApps",
- "apiVersion": "2021-03-01",
- "kind": "containerapp",
- "location": "[parameters('location')]",
- "properties": {
- "kubeEnvironmentId": "[resourceId('Microsoft.Web/kubeEnvironments', parameters('environment_name'))]",
- "configuration": {},
- "template": {
- "containers": [
- {
- "image": "dapriosamples/hello-k8s-python:latest",
- "name": "hello-k8s-python",
- "resources": {
- "cpu": 0.5,
- "memory": "1Gi"
- }
- }
- ],
- "scale": {
- "minReplicas": 1,
- "maxReplicas": 1
- },
- "dapr": {
- "enabled": true,
- "appId": "pythonapp"
- }
- }
- }
- }
- ]
-}
-```
---
-Save the following file as *clientapp.bicep*:
-
-```bicep
-param location string = 'canadacentral'
-param environment_name string
-
-resource pythonapp 'Microsoft.Web/containerApps@2021-03-01' = {
+resource pythonapp 'Microsoft.App/containerApps@2022-01-01-preview' = {
name: 'pythonapp'
- kind: 'containerapp'
  location: location
  properties: {
- kubeEnvironmentId: resourceId('Microsoft.Web/kubeEnvironments', environment_name)
- configuration: {}
+ managedEnvironmentId: environment.id
+ configuration: {
+ dapr: {
+ enabled: true
+ appId: 'pythonapp'
+ }
+ }
    template: {
      containers: [
        {
          image: 'dapriosamples/hello-k8s-python:latest'
          name: 'hello-k8s-python'
          resources: {
            cpu: '0.5'
- memory: '1Gi'
+ memory: '1.0Gi'
          }
        }
      ]
      scale: {
        minReplicas: 1
        maxReplicas: 1
      }
- dapr: {
- enabled: true
- appId: 'pythonapp'
- }
    }
  }
+ dependsOn: [
+ nodeapp
+ ]
}
-
```

::: zone-end
-## Deploy the service application (HTTP web server)
+> [!NOTE]
+> Container Apps does not currently support the native [Dapr components schema](https://docs.dapr.io/operations/components/component-schema/). The above example uses the supported schema.
+
+## Deploy
::: zone pivot="container-apps-arm"
-Now deploy the service Container App. Navigate to the directory in which you stored the ARM template file and run the following command:
+Navigate to the directory in which you stored the ARM template file and run the following command:
# [Bash](#tab/bash)

```azurecli
az deployment group create \
  --resource-group "$RESOURCE_GROUP" \
- --template-file ./serviceapp.json \
+ --template-file ./hello-world.json \
  --parameters \
    environment_name="$CONTAINERAPPS_ENVIRONMENT" \
    location="$LOCATION" \
    storage_account_name="$STORAGE_ACCOUNT" \
- storage_account_key="$STORAGE_ACCOUNT_KEY" \
    storage_container_name="$STORAGE_ACCOUNT_CONTAINER"
```

# [PowerShell](#tab/powershell)

```powershell
$params = @{
  environment_name = $CONTAINERAPPS_ENVIRONMENT
  location = $LOCATION
  storage_account_name = $STORAGE_ACCOUNT
- storage_account_key = $STORAGE_ACCOUNT_KEY
  storage_container_name = $STORAGE_ACCOUNT_CONTAINER
}

New-AzResourceGroupDeployment `
  -ResourceGroupName $RESOURCE_GROUP `
  -TemplateParameterObject $params `
- -TemplateFile ./serviceapp.json `
- -SkipTemplateParameterPrompt
+ -TemplateFile ./hello-world.json `
+ -SkipTemplateParameterPrompt
```

::: zone-end

::: zone pivot="container-apps-bicep"
-Now deploy the service container. Navigate to the directory in which you stored the Bicep template file and run the following command:
+Navigate to the directory in which you stored the Bicep template file and run the following command:
A warning (BCP081) might be displayed. This warning has no effect on the successful deployment of the application.
# [Bash](#tab/bash)

```azurecli
az deployment group create \
  --resource-group "$RESOURCE_GROUP" \
- --template-file ./serviceapp.bicep \
+ --template-file ./hello-world.bicep \
  --parameters \
    environment_name="$CONTAINERAPPS_ENVIRONMENT" \
    location="$LOCATION" \
    storage_account_name="$STORAGE_ACCOUNT" \
- storage_account_key="$STORAGE_ACCOUNT_KEY" \
    storage_container_name="$STORAGE_ACCOUNT_CONTAINER"
```

# [PowerShell](#tab/powershell)

```powershell
$params = @{
  environment_name = $CONTAINERAPPS_ENVIRONMENT
  location = $LOCATION
  storage_account_name = $STORAGE_ACCOUNT
- storage_account_key = $STORAGE_ACCOUNT_KEY
  storage_container_name = $STORAGE_ACCOUNT_CONTAINER
}

New-AzResourceGroupDeployment `
  -ResourceGroupName $RESOURCE_GROUP `
  -TemplateParameterObject $params `
- -TemplateFile ./serviceapp.bicep `
- -SkipTemplateParameterPrompt
+ -TemplateFile ./hello-world.bicep `
+ -SkipTemplateParameterPrompt
```

---

::: zone-end

This command deploys:
-* the service (Node) app server on `targetPort: 3000` (the app port)
-* its accompanying Dapr sidecar configured with `"appId": "nodeapp",` and dapr `"appPort": 3000,` for service discovery and invocation.
-
-Your state store is configured with the `components` object of `"type": "state.azure.blobstorage"`, which enables the sidecar to persist state.
-
-## Deploy the client application (headless client)
-
-Run the following command to deploy the client container.
--
-# [Bash](#tab/bash)
-
-```azurecli
-az deployment group create --resource-group "$RESOURCE_GROUP" \
- --template-file ./clientapp.json \
- --parameters \
- environment_name="$CONTAINERAPPS_ENVIRONMENT" \
- location="$LOCATION"
-```
-
-# [PowerShell](#tab/powershell)
-
-```powershell
-$params = @{
- environment_name = $CONTAINERAPPS_ENVIRONMENT
- location = $LOCATION
-}
-
-New-AzResourceGroupDeployment `
- -ResourceGroupName $RESOURCE_GROUP `
- -TemplateParameterObject $params `
- -TemplateFile ./clientapp.json `
- -SkipTemplateParameterPrompt
-```
---
-A warning (BCP081) might be displayed. This warning has no effect on the successful deployment of the application.
-
-# [Bash](#tab/bash)
-
-```azurecli
-az deployment group create --resource-group "$RESOURCE_GROUP" \
- --template-file ./clientapp.bicep \
- --parameters \
- environment_name="$CONTAINERAPPS_ENVIRONMENT" \
- location="$LOCATION"
-```
-
-# [PowerShell](#tab/powershell)
-
-```powershell
-$params = @{
- environment_name = $CONTAINERAPPS_ENVIRONMENT
- location = $LOCATION
-}
-
-New-AzResourceGroupDeployment `
- -ResourceGroupName $RESOURCE_GROUP `
- -TemplateParameterObject $params `
- -TemplateFile ./clientapp.bicep `
- -SkipTemplateParameterPrompt
-```
----
-This command deploys `pythonapp` that also runs with a Dapr sidecar that is used to look up and securely call the Dapr sidecar for `nodeapp`. As this app is headless there's no `targetPort` to start a server, nor is there a need to enable ingress.
+- the Container Apps environment and associated Log Analytics workspace that host the hello world Dapr solution
+- an Application Insights instance for Dapr distributed tracing
+- the `nodeapp` app server running on `targetPort: 3000` with Dapr enabled and configured with `"appId": "nodeapp"` and `"appPort": 3000`
+- the `daprComponents` object of `"type": "state.azure.blobstorage"`, scoped for use by `nodeapp` to store state
+- the headless `pythonapp` with no ingress and Dapr enabled, which calls the `nodeapp` service via Dapr service-to-service invocation (a quick verification sketch follows this list)
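As a quick check that Dapr was wired up as described, here is a minimal sketch using the Azure CLI, assuming the deployment above succeeded and the `containerapp` CLI extension is installed:

```azurecli
# Show the Dapr configuration of the deployed nodeapp
az containerapp show \
  --name nodeapp \
  --resource-group "$RESOURCE_GROUP" \
  --query properties.configuration.dapr
```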
## Verify the result
You can confirm that the services are working correctly by viewing data in your Azure Storage account.
-1. Open the [Azure portal](https://portal.azure.com) in your browser and navigate to your storage account.
+1. Open the [Azure portal](https://portal.azure.com) in your browser.
+
+1. Navigate to your storage account.
1. Select **Containers** from the menu on the left side.
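If you prefer the CLI over the portal, the following is a minimal sketch that lists the blobs Dapr created, assuming the `STORAGE_ACCOUNT` and `STORAGE_ACCOUNT_KEY` variables from earlier in this tutorial and the default container name `mycontainer`:

```azurecli
# List the state blobs persisted by the Dapr state store component
az storage blob list \
  --account-name "$STORAGE_ACCOUNT" \
  --container-name mycontainer \
  --account-key "$STORAGE_ACCOUNT_KEY" \
  --query "[].name" --out table
```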
Use the following command to view logs in bash or PowerShell.
# [Bash](#tab/bash)
+```azurecli
+LOG_ANALYTICS_WORKSPACE_CLIENT_ID=`az containerapp env show --name $CONTAINERAPPS_ENVIRONMENT --resource-group $RESOURCE_GROUP --query properties.appLogsConfiguration.logAnalyticsConfiguration.customerId --out tsv`
+```
```azurecli
az monitor log-analytics query \
- --workspace $LOG_ANALYTICS_WORKSPACE_CLIENT_ID \
+ --workspace "$LOG_ANALYTICS_WORKSPACE_CLIENT_ID" \
  --analytics-query "ContainerAppConsoleLogs_CL | where ContainerAppName_s == 'nodeapp' and (Log_s contains 'persisted' or Log_s contains 'order') | project ContainerAppName_s, Log_s, TimeGenerated | take 5" \
  --out table
```

# [PowerShell](#tab/powershell)
+```powershell
+$LOG_ANALYTICS_WORKSPACE_CLIENT_ID=(az containerapp env show --name $CONTAINERAPPS_ENVIRONMENT --resource-group $RESOURCE_GROUP --query properties.appLogsConfiguration.logAnalyticsConfiguration.customerId --out tsv)
+```
```powershell
$queryResults = Invoke-AzOperationalInsightsQuery -WorkspaceId $LOG_ANALYTICS_WORKSPACE_CLIENT_ID -Query "ContainerAppConsoleLogs_CL | where ContainerAppName_s == 'nodeapp' and (Log_s contains 'persisted' or Log_s contains 'order') | project ContainerAppName_s, Log_s, TimeGenerated | take 5"

$queryResults.Results
```
container-apps Microservices Dapr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/microservices-dapr.md
Previously updated : 11/02/2021 Last updated : 03/22/2022 ms.devlang: azurecli
Individual container apps are deployed to an Azure Container Apps environment. To create the environment, run the following command:
```azurecli
az containerapp env create \
  --name $CONTAINERAPPS_ENVIRONMENT \
  --resource-group $RESOURCE_GROUP \
- --logs-workspace-id $LOG_ANALYTICS_WORKSPACE_CLIENT_ID \
- --logs-workspace-key $LOG_ANALYTICS_WORKSPACE_CLIENT_SECRET \
--location "$LOCATION" ```
```powershell
az containerapp env create `
  --name $CONTAINERAPPS_ENVIRONMENT `
  --resource-group $RESOURCE_GROUP `
- --logs-workspace-id $LOG_ANALYTICS_WORKSPACE_CLIENT_ID `
- --logs-workspace-key $LOG_ANALYTICS_WORKSPACE_CLIENT_SECRET `
--location "$LOCATION" ```
New-AzStorageAccount -ResourceGroupName $RESOURCE_GROUP `
-Once your Azure Blob Storage account is created, the following values are needed for subsequent steps in this tutorial.
-
-* `storage_account_name` is the value of the `STORAGE_ACCOUNT` variable that you set previously.
-
-* `storage_container_name` is the value of the `STORAGE_ACCOUNT_CONTAINER` variable. Dapr creates a container with this name when it doesn't already exist in your Azure Storage account.
-
Get the storage account key with the following command:

# [Bash](#tab/bash)
$STORAGE_ACCOUNT_KEY=(Get-AzStorageAccountKey -ResourceGroupName $RESOURCE_GROUP
### Configure the state store component
-Create a config file named *components.yaml* with the properties that you sourced from the previous steps. This file helps enable your Dapr app to access your state store. The following example shows how your *components.yaml* file should look when configured for your Azure Blob Storage account:
+Create a config file named *statestore.yaml* with the properties that you sourced from the previous steps. This file helps enable your Dapr app to access your state store. The following example shows how your *statestore.yaml* file should look when configured for your Azure Blob Storage account:
```yaml
-# components.yaml for Azure Blob storage component
-- name: statestore
- type: state.azure.blobstorage
- version: v1
- metadata:
- # Note that in a production scenario, account keys and secrets
- # should be securely stored. For more information, see
- # https://docs.dapr.io/operations/components/component-secrets
- - name: accountName
- secretRef: storage-account-name
- - name: accountKey
- secretRef: storage-account-key
- - name: containerName
- value: mycontainer
+# statestore.yaml for Azure Blob storage component
+componentType: state.azure.blobstorage
+version: v1
+metadata:
+- name: accountName
+ value: "<STORAGE_ACCOUNT>"
+- name: accountKey
+ secretRef: account-key
+- name: containerName
+ value: mycontainer
+secrets:
+- name: account-key
+ value: "<STORAGE_ACCOUNT_KEY>"
+scopes:
+- nodeapp
```
-To use this file, make sure to replace the value of `containerName` with your own value if you have changed `STORAGE_ACCOUNT_CONTAINER` variable from its original value, `mycontainer`.
+To use this file, update the placeholders:
+
+- Replace `<STORAGE_ACCOUNT>` with the value of the `STORAGE_ACCOUNT` variable you defined. To obtain its value, run the following command:
+ ```azurecli
+ echo $STORAGE_ACCOUNT
+ ```
+- Replace `<STORAGE_ACCOUNT_KEY>` with the storage account key. To obtain its value, run the following command:
+ ```azurecli
+ echo $STORAGE_ACCOUNT_KEY
+ ```
+
+If you've changed the `STORAGE_ACCOUNT_CONTAINER` variable from its original value, `mycontainer`, replace the value of `containerName` with your own value.
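Rather than editing the file by hand, you can substitute the placeholders from your shell. A minimal bash sketch (the `sed` invocation is a convenience, not part of the official steps; on macOS use `sed -i ''`):

```azurecli
# Replace the placeholders in statestore.yaml with the shell variable values
sed -i "s|<STORAGE_ACCOUNT>|$STORAGE_ACCOUNT|g; s|<STORAGE_ACCOUNT_KEY>|$STORAGE_ACCOUNT_KEY|g" statestore.yaml
```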
> [!NOTE]
> Container Apps does not currently support the native [Dapr components schema](https://docs.dapr.io/operations/components/component-schema/). The above example uses the supported schema.
+Navigate to the directory in which you stored the *statestore.yaml* file and run the following command to configure the Dapr component in the Container Apps environment.
-## Deploy the service application (HTTP web server)
+# [Bash](#tab/bash)
+
+```azurecli
+az containerapp env dapr-component set \
+ --name $CONTAINERAPPS_ENVIRONMENT --resource-group $RESOURCE_GROUP \
+ --dapr-component-name statestore \
+ --yaml statestore.yaml
+```
-Navigate to the directory in which you stored the *components.yaml* file and run the following command to deploy the service container app.
+# [PowerShell](#tab/powershell)
+
+```powershell
+az containerapp env dapr-component set `
+ --name $CONTAINERAPPS_ENVIRONMENT --resource-group $RESOURCE_GROUP `
+ --dapr-component-name statestore `
+ --yaml statestore.yaml
+```
+++
+Your state store is configured using the Dapr component described in *statestore.yaml*. The component is scoped to a container app named `nodeapp` and is not available to other container apps.
+
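To confirm the component was registered in the environment, here is a minimal sketch, assuming the `az containerapp env dapr-component list` command is available in your version of the `containerapp` CLI extension:

```azurecli
# List the Dapr components configured in the Container Apps environment
az containerapp env dapr-component list \
  --name $CONTAINERAPPS_ENVIRONMENT \
  --resource-group $RESOURCE_GROUP \
  --out table
```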
+## Deploy the service application (HTTP web server)
# [Bash](#tab/bash)
az containerapp create \
  --max-replicas 1 \
  --enable-dapr \
  --dapr-app-port 3000 \
- --dapr-app-id nodeapp \
- --secrets "storage-account-name=${STORAGE_ACCOUNT},storage-account-key=${STORAGE_ACCOUNT_KEY}" \
- --dapr-components ./components.yaml
+ --dapr-app-id nodeapp
```

# [PowerShell](#tab/powershell)
az containerapp create `
  --max-replicas 1 `
  --enable-dapr `
  --dapr-app-port 3000 `
- --dapr-app-id nodeapp `
- --secrets "storage-account-name=${STORAGE_ACCOUNT},storage-account-key=${STORAGE_ACCOUNT_KEY}" `
- --dapr-components ./components.yaml
+ --dapr-app-id nodeapp
```
This command deploys:
* the service (Node) app server on `--target-port 3000` (the app port)
* its accompanying Dapr sidecar configured with `--dapr-app-id nodeapp` and `--dapr-app-port 3000` for service discovery and invocation
-Your state store is configured using `--dapr-components ./components.yaml`, which enables the sidecar to persist state.
---

## Deploy the client application (headless client)

Run the following command to deploy the client container app.
Use the following CLI command to view logs on the command line.
# [Bash](#tab/bash)

```azurecli
+LOG_ANALYTICS_WORKSPACE_CLIENT_ID=`az containerapp env show --name $CONTAINERAPPS_ENVIRONMENT --resource-group $RESOURCE_GROUP --query properties.appLogsConfiguration.logAnalyticsConfiguration.customerId --out tsv`
+
az monitor log-analytics query \
  --workspace $LOG_ANALYTICS_WORKSPACE_CLIENT_ID \
  --analytics-query "ContainerAppConsoleLogs_CL | where ContainerAppName_s == 'nodeapp' and (Log_s contains 'persisted' or Log_s contains 'order') | project ContainerAppName_s, Log_s, TimeGenerated | take 5" \
  --out table
```
# [PowerShell](#tab/powershell)

```powershell
+$LOG_ANALYTICS_WORKSPACE_CLIENT_ID=(az containerapp env show --name $CONTAINERAPPS_ENVIRONMENT --resource-group $RESOURCE_GROUP --query properties.appLogsConfiguration.logAnalyticsConfiguration.customerId --out tsv)
+
$queryResults = Invoke-AzOperationalInsightsQuery -WorkspaceId $LOG_ANALYTICS_WORKSPACE_CLIENT_ID -Query "ContainerAppConsoleLogs_CL | where ContainerAppName_s == 'nodeapp' and (Log_s contains 'persisted' or Log_s contains 'order') | project ContainerAppName_s, Log_s, TimeGenerated | take 5"

$queryResults.Results
```
container-apps Revisions Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/revisions-manage.md
Activate a revision by using `az containerapp revision activate`.
```azurecli
az containerapp revision activate \
- --name <REVISION_NAME> \
- --app <CONTAINER_APP_NAME> \
+ --revision <REVISION_NAME> \
+ --name <CONTAINER_APP_NAME> \
  --resource-group <RESOURCE_GROUP_NAME>
```
```powershell
az containerapp revision activate `
- --name <REVISION_NAME> `
- --app <CONTAINER_APP_NAME> `
+ --revision <REVISION_NAME> `
+ --name <CONTAINER_APP_NAME> `
  --resource-group <RESOURCE_GROUP_NAME>
```
Deactivate revisions that are no longer in use with `az containerapp revision deactivate`.
```azurecli
az containerapp revision deactivate \
- --name <REVISION_NAME> \
- --app <CONTAINER_APP_NAME> \
+ --revision <REVISION_NAME> \
+ --name <CONTAINER_APP_NAME> \
  --resource-group <RESOURCE_GROUP_NAME>
```
```azurecli
az containerapp revision deactivate `
- --name <REVISION_NAME> `
- --app <CONTAINER_APP_NAME> `
+ --revision <REVISION_NAME> `
+ --name <CONTAINER_APP_NAME> `
  --resource-group <RESOURCE_GROUP_NAME>
```
Existing container app revisions will not have access to this secret until they are restarted.
```azurecli
az containerapp revision restart \
- --name <REVISION_NAME> \
- --app <APPLICATION_NAME> \
+ --revision <REVISION_NAME> \
+ --name <APPLICATION_NAME> \
  --resource-group <RESOURCE_GROUP_NAME>
```
```azurecli
az containerapp revision restart `
- --name <REVISION_NAME> `
- --app <APPLICATION_NAME> `
+ --revision <REVISION_NAME> `
+ --name <APPLICATION_NAME> `
  --resource-group <RESOURCE_GROUP_NAME>
```
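The commands above expect a revision name. If you don't have one handy, here is a minimal sketch that enumerates the revisions of an app, assuming `az containerapp revision list` is available in the same CLI extension:

```azurecli
# List all revisions of a container app to find a revision name
az containerapp revision list \
  --name <CONTAINER_APP_NAME> \
  --resource-group <RESOURCE_GROUP_NAME> \
  --query "[].name" --out table
```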
container-apps Vnet Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/vnet-custom.md
Finally, create the Container Apps environment with the VNET and subnets.
az containerapp env create \
  --name $CONTAINERAPPS_ENVIRONMENT \
  --resource-group $RESOURCE_GROUP \
- --logs-workspace-id $LOG_ANALYTICS_WORKSPACE_CLIENT_ID \
- --logs-workspace-key $LOG_ANALYTICS_WORKSPACE_CLIENT_SECRET \
--location "$LOCATION" \ --app-subnet-resource-id $APP_SUBNET \ --controlplane-subnet-resource-id $CONTROL_PLANE_SUBNET
az containerapp env create `
  --name $CONTAINERAPPS_ENVIRONMENT `
  --resource-group $RESOURCE_GROUP `
- --logs-workspace-id $LOG_ANALYTICS_WORKSPACE_CLIENT_ID `
- --logs-workspace-key $LOG_ANALYTICS_WORKSPACE_CLIENT_SECRET `
--location "$LOCATION" ` --app-subnet-resource-id $APP_SUBNET ` --controlplane-subnet-resource-id $CONTROL_PLANE_SUBNET
> [!NOTE] > As you call `az containerapp create` to create the container app inside your environment, make sure the value for the `--image` parameter is in lower case.
-The following table describes the parameters used in for `containerapp env create`.
+The following table describes the parameters used in `containerapp env create`.
| Parameter | Description |
|---|---|
| `name` | Name of the container apps environment. |
| `resource-group` | Name of the resource group. |
-| `logs-workspace-id` | The ID of the Log Analytics workspace. |
-| `logs-workspace-key` | The Log Analytics client secret. |
| `location` | The Azure location where the environment is to deploy. |
| `app-subnet-resource-id` | The resource ID of a subnet where containers are injected into the container app. This subnet must be in the same VNET as the subnet defined in `--control-plane-subnet-resource-id`. |
| `controlplane-subnet-resource-id` | The resource ID of a subnet for control plane infrastructure components. This subnet must be in the same VNET as the subnet defined in `--app-subnet-resource-id`. |
cosmos-db Configure Synapse Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/configure-synapse-link.md
Serverless SQL pool allows you to query and analyze data in your Azure Cosmos DB
## <a id="analyze-with-powerbi"></a>Use serverless SQL pool to analyze and visualize data in Power BI
-You can create a serverless SQL pool database and views over Synapse Link for Azure Cosmos DB. Later you can query the Azure Cosmos DB containers and then build a model with Power BI over those views to reflect that query. There is no performance or cost impact to your transactional workloads, and no complexity of managing ETL pipelines. You can use either [DirectQuery](/power-bi/connect-dat) article.
+You can use the integrated BI experience in the Azure Cosmos DB portal to build BI dashboards using Synapse Link with just a few clicks. To learn more, see [how to build BI dashboards using Synapse Link](integrated-power-bi-synapse-link.md). This integrated experience creates simple T-SQL views in Synapse serverless SQL pools for your Cosmos DB containers. You can build BI dashboards over these views, which query your Azure Cosmos DB containers in real time using [Direct Query](/power-bi/connect-data/service-dataset-modes-understand#directquery-mode) and reflect the latest changes to your data. There is no performance or cost impact to your transactional workloads, and no complexity of managing ETL pipelines.
+
+If you want to use advanced T-SQL views with joins across your containers, or build BI dashboards in [import mode](/power-bi/connect-data/service-dataset-modes-understand#import-mode), see the [Use Power BI and serverless Synapse SQL pool to analyze Azure Cosmos DB data with Synapse Link](synapse-link-power-bi.md) article.
## Configure custom partitioning
cosmos-db How To Always Encrypted https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-always-encrypted.md
Title: Use client-side encryption with Always Encrypted for Azure Cosmos DB
description: Learn how to use client-side encryption with Always Encrypted for Azure Cosmos DB Previously updated : 01/26/2022 Last updated : 03/30/2022
-# Use client-side encryption with Always Encrypted for Azure Cosmos DB (Preview)
+# Use client-side encryption with Always Encrypted for Azure Cosmos DB
[!INCLUDE[appliesto-sql-api](includes/appliesto-sql-api.md)]
+> [!IMPORTANT]
+> A breaking change has been introduced with the 1.0 release of our encryption packages. If you created data encryption keys and encryption-enabled containers with prior versions, you will need to re-create your databases and containers after migrating your client code to 1.0 packages.
+
Always Encrypted is a feature designed to protect sensitive data, such as credit card numbers or national identification numbers (for example, U.S. social security numbers), stored in Azure Cosmos DB. Always Encrypted allows clients to encrypt sensitive data inside client applications and never reveal the encryption keys to the database.

Always Encrypted brings client-side encryption capabilities to Azure Cosmos DB. Encrypting your data client-side can be required in the following scenarios:
- **Protecting sensitive data that has specific confidentiality characteristics**: Always Encrypted allows clients to encrypt sensitive data inside their applications and never reveal the plain text data or encryption keys to the Azure Cosmos DB service.
- **Implementing per-property access control**: Because the encryption is controlled with keys that you own and manage from Azure Key Vault, you can apply access policies to control which sensitive properties each client has access to.
-> [!IMPORTANT]
-> Always Encrypted for Azure Cosmos DB is currently in preview. This preview version is provided without a Service Level Agreement and is not recommended for production workloads. For more information, see [Supplemental terms of use for Microsoft Azure previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
-## Prerequisites
-
-To start using the preview of Always Encrypted for Azure Cosmos DB, you can:
--- Use the 2.11.13.0 or higher version of [Azure Cosmos DB local emulator](local-emulator.md).-- Request the preview to be enabled on your Azure Cosmos DB account by filling [this form](https://ncv.microsoft.com/poTcF52I6N).-
-> [!TIP]
-> Do you have any feedback to share regarding the preview of Always Encrypted for Azure Cosmos DB? Reach out to [azurecosmosdbcse@service.microsoft.com](mailto:azurecosmosdbcse@service.microsoft.com).
-
## Concepts

Always Encrypted for Azure Cosmos DB introduces some new concepts that are involved in the configuration of your client-side encryption.
You can:
#### Customer-managed keys
-Before DEKs get stored in Azure Cosmos DB, they are wrapped by a customer-managed key (CMK). By controlling the wrapping and unwrapping of DEKs, CMKs effectively control the access to the data that's encrypted with their corresponding DEKs. CMK storage is designed as an extensible/plug-in model, with a default implementation that expects them to be stored in Azure Key Vault.
+Before DEKs get stored in Azure Cosmos DB, they are wrapped by a customer-managed key (CMK). By controlling the wrapping and unwrapping of DEKs, CMKs effectively control the access to the data that's encrypted with their corresponding DEKs. CMK storage is designed to be extensible, with a default implementation that expects them to be stored in Azure Key Vault.
:::image type="content" source="./media/how-to-always-encrypted/encryption-keys.png" alt-text="Encryption keys" border="true":::
If you're using an existing Azure Key Vault instance, you can verify that these
> - In **.NET** with the [Microsoft.Azure.Cosmos.Encryption package](https://www.nuget.org/packages/Microsoft.Azure.Cosmos.Encryption).
> - In **Java** with the [azure.cosmos.encryption package](https://mvnrepository.com/artifact/com.azure/azure-cosmos-encryption).
-To use Always Encrypted, an instance of an `EncryptionKeyWrapProvider` must be attached to your Azure Cosmos DB SDK instance. This object is used to interact with the key store hosting your CMKs. The default key store provider for Azure Key Vault is named `AzureKeyVaultKeyWrapProvider`.
-
-The following snippets use the `DefaultAzureCredential` class to retrieve the Azure AD identity to use when accessing your Azure Key Vault instance. You can find examples of creating different kinds of `TokenCredential` classes:
+# [.NET](#tab/dotnet)
-- [In .NET](/dotnet/api/overview/azure/identity-readme#credential-classes)-- [In Java](/java/api/overview/azure/identity-readme#credential-classes)
+To use Always Encrypted, an instance of a `KeyResolver` must be attached to your Azure Cosmos DB SDK instance. This class, defined in the `Azure.Security.KeyVault.Keys.Cryptography` namespace, is used to interact with the key store hosting your CMKs.
-# [.NET](#tab/dotnet)
+The following snippets use the `DefaultAzureCredential` class to retrieve the Azure AD identity to use when accessing your Azure Key Vault instance. You can find examples of creating different kinds of `TokenCredential` classes [here](/dotnet/api/overview/azure/identity-readme#credential-classes).
> [!NOTE]
> You will need the additional [Azure.Identity package](https://www.nuget.org/packages/Azure.Identity/) to access the `TokenCredential` classes.

```csharp
var tokenCredential = new DefaultAzureCredential();
-var keyWrapProvider = new AzureKeyVaultKeyWrapProvider(tokenCredential);
+var keyResolver = new KeyResolver(tokenCredential);
var client = new CosmosClient("<connection-string>")
- .WithEncryption(keyStoreProvider);
+ .WithEncryption(keyResolver, KeyEncryptionKeyResolverName.AzureKeyVault);
```

# [Java](#tab/java)
+To use Always Encrypted, an instance of a `KeyEncryptionKeyClientBuilder` must be attached to your Azure Cosmos DB SDK instance. This class, defined in the `com.azure.security.keyvault.keys.cryptography` namespace, is used to interact with the key store hosting your CMKs.
+
+The following snippets use the `DefaultAzureCredential` class to retrieve the Azure AD identity to use when accessing your Azure Key Vault instance. You can find examples of creating different kinds of `TokenCredential` classes [here](/java/api/overview/azure/identity-readme#credential-classes).
+
```java
TokenCredential tokenCredential = new DefaultAzureCredentialBuilder()
    .build();
-AzureKeyVaultKeyStoreProvider encryptionKeyStoreProvider =
- new AzureKeyVaultKeyStoreProvider(tokenCredential);
+KeyEncryptionKeyClientBuilder keyEncryptionKeyClientBuilder =
+ new KeyEncryptionKeyClientBuilder().credential(tokenCredentials);
CosmosAsyncClient client = new CosmosClientBuilder()
    .endpoint("<endpoint>")
    .key("<primary-key>")
    .buildAsyncClient();
-EncryptionAsyncCosmosClient encryptionClient =
- EncryptionAsyncCosmosClient.buildEncryptionAsyncClient(client, encryptionKeyStoreProvider);
+CosmosEncryptionAsyncClient cosmosEncryptionAsyncClient =
+ new CosmosEncryptionClientBuilder().cosmosAsyncClient(client).keyEncryptionKeyResolver(keyEncryptionKeyClientBuilder)
+ .keyEncryptionKeyResolverName(CosmosEncryptionClientBuilder.KEY_RESOLVER_NAME_AZURE_KEY_VAULT).buildAsyncClient();
```

## Create a data encryption key
-Before data can be encrypted in a container, a [data encryption key](#data-encryption-keys) must be created in the parent database. This operation is done by calling the `CreateClientEncryptionKeyAsync` method and passing:
+Before data can be encrypted in a container, a [data encryption key](#data-encryption-keys) must be created in the parent database.
+
+# [.NET](#tab/dotnet)
+
+Creating a new data encryption key is done by calling the `CreateClientEncryptionKeyAsync` method and passing:
- A string identifier that will uniquely identify the key in the database.
- The encryption algorithm intended to be used with the key. Only one algorithm is currently supported.
-- The key identifier of the [CMK](#customer-managed-keys) stored in Azure Key Vault. This parameter is passed in a generic `EncryptionKeyWrapMetadata` object where the `name` can be any friendly name you want, and the `value` must be the key identifier.
-
-# [.NET](#tab/dotnet)
+- The key identifier of the [CMK](#customer-managed-keys) stored in Azure Key Vault. This parameter is passed in a generic `EncryptionKeyWrapMetadata` object where:
+ - The `type` defines the type of key resolver (for example, Azure Key Vault).
+ - The `name` can be any friendly name you want.
+ - The `value` must be the key identifier.
+ - The `algorithm` defines which algorithm shall be used to wrap the key encryption key with the customer-managed key.
```csharp
var database = client.GetDatabase("my-database");

await database.CreateClientEncryptionKeyAsync(
    "my-key",
- DataEncryptionKeyAlgorithm.AeadAes256CbcHmacSha256,
+ DataEncryptionAlgorithm.AeadAes256CbcHmacSha256,
new EncryptionKeyWrapMetadata(
- keyWrapProvider.ProviderName,
+ KeyEncryptionKeyResolverName.AzureKeyVault,
"akvKey",
- "https://<my-key-vault>.vault.azure.net/keys/<key>/<version>"));
+ "https://<my-key-vault>.vault.azure.net/keys/<key>/<version>",
+ EncryptionAlgorithm.RsaOaep.ToString()));
```

# [Java](#tab/java)
+Creating a new data encryption key is done by calling the `createClientEncryptionKey` method and passing:
+
+- A string identifier that will uniquely identify the key in the database.
+- The encryption algorithm intended to be used with the key. Only one algorithm is currently supported.
+- The key identifier of the [CMK](#customer-managed-keys) stored in Azure Key Vault. This parameter is passed in a generic `EncryptionKeyWrapMetadata` object where:
+ - The `type` defines the type of key resolver (for example, Azure Key Vault).
+ - The `name` can be any friendly name you want.
+ - The `value` must be the key identifier.
+ - The `algorithm` defines which algorithm shall be used to wrap the key encryption key with the customer-managed key.
+ ```java
-EncryptionCosmosAsyncDatabase database =
- client.getEncryptedCosmosAsyncDatabase("my-database");
+CosmosEncryptionAsyncDatabase database =
+ cosmosEncryptionAsyncClient.getCosmosEncryptionAsyncDatabase("my-database");
+EncryptionKeyWrapMetadata metadata = new EncryptionKeyWrapMetadata(
+ cosmosEncryptionAsyncClient.getKeyEncryptionKeyResolverName(),
+ "akvKey",
+ "https://<my-key-vault>.vault.azure.net/keys/<key>/<version>",
+ EncryptionAlgorithm.RSA_OAEP.toString());
database.createClientEncryptionKey(
    "my-key",
- CosmosEncryptionAlgorithm.AEAES_256_CBC_HMAC_SHA_256,
- new EncryptionKeyWrapMetadata(
- "akvKey",
- "https://<my-key-vault>.vault.azure.net/keys/<key>/<version>"));
+ CosmosEncryptionAlgorithm.AEAD_AES_256_CBC_HMAC_SHA256.getName(),
+ metadata);
```
await database.DefineContainer("my-container", "/partition-key")
```java
ClientEncryptionIncludedPath path1 = new ClientEncryptionIncludedPath();
-path1.clientEncryptionKeyId = "my-key":
-path1.path = "/property1";
-path1.encryptionType = CosmosEncryptionType.DETERMINISTIC;
-path1.encryptionAlgorithm = CosmosEncryptionAlgorithm.AEAES_256_CBC_HMAC_SHA_256;
+path1.setClientEncryptionKeyId("my-key");
+path1.setPath("/property1");
+path1.setEncryptionType(CosmosEncryptionType.DETERMINISTIC.getName());
+path1.setEncryptionAlgorithm(CosmosEncryptionAlgorithm.AEAD_AES_256_CBC_HMAC_SHA256.getName());
ClientEncryptionIncludedPath path2 = new ClientEncryptionIncludedPath();
-path2.clientEncryptionKeyId = "my-key":
-path2.path = "/property2";
-path2.encryptionType = CosmosEncryptionType.RANDOMIZED;
-path2.encryptionAlgorithm = CosmosEncryptionAlgorithm.AEAES_256_CBC_HMAC_SHA_256;
+path2.setClientEncryptionKeyId("my-key");
+path2.setPath("/property2");
+path2.setEncryptionType(CosmosEncryptionType.RANDOMIZED.getName());
+path2.setEncryptionAlgorithm(CosmosEncryptionAlgorithm.AEAD_AES_256_CBC_HMAC_SHA256.getName());
List<ClientEncryptionIncludedPath> paths = new ArrayList<>();
paths.add(path1);
Note that the resolution of encrypted properties and their subsequent decryption
### Filter queries on encrypted properties
-When writing queries that filter on encrypted properties, the `AddParameterAsync` method must be used to pass the value of the query parameter. This method takes the following arguments:
+When writing queries that filter on encrypted properties, a specific method must be used to pass the value of the query parameter. This method takes the following arguments:
- The name of the query parameter.
- The value to use in the query.
await queryDefinition.AddParameterAsync(
# [Java](#tab/java)

```java
-EncryptionSqlQuerySpec encryptionSqlQuerySpec = new EncryptionSqlQuerySpec(
- new SqlQuerySpec("SELECT * FROM c where c.property1 = @Property1"), container);
-encryptionSqlQuerySpec.addEncryptionParameterAsync(
- new SqlParameter("@Property1", 1234), "/property1")
+SqlQuerySpecWithEncryption sqlQuerySpecWithEncryption = new SqlQuerySpecWithEncryption(
+ new SqlQuerySpec("SELECT * FROM c where c.property1 = @Property1"));
+sqlQuerySpecWithEncryption.addEncryptionParameter(
+ "/property1", new SqlParameter("@Property1", 1234))
```
You may want to "rotate" your CMK (that is, use a new CMK instead of the current
await database.RewrapClientEncryptionKeyAsync(
    "my-key",
    new EncryptionKeyWrapMetadata(
- keyWrapProvider.ProviderName,
+ KeyEncryptionKeyResolverName.AzureKeyVault,
"akvKey",
- " https://<my-key-vault>.vault.azure.net/keys/<new-key>/<version>"));
+ "https://<my-key-vault>.vault.azure.net/keys/<new-key>/<version>",
+ EncryptionAlgorithm.RsaOaep.ToString()));
```

# [Java](#tab/java)

```java
-database. rewrapClientEncryptionKey(
+EncryptionKeyWrapMetadata metadata = new EncryptionKeyWrapMetadata(
+ cosmosEncryptionAsyncClient.getKeyEncryptionKeyResolverName(),
+ "akvKey",
+ "https://<my-key-vault>.vault.azure.net/keys/<new-key>/<version>",
+ EncryptionAlgorithm.RSA_OAEP.toString());
+database.rewrapClientEncryptionKey(
"my-key",
- new EncryptionKeyWrapMetadata(
- "akvKey", " https://<my-key-vault>.vault.azure.net/keys/<new-key>/<version>"));
+ metadata);
```
+## DEK rotation
+
+Performing a rotation of a data encryption key isn't offered as a turnkey capability. This is because updating a DEK requires a scan of all containers where this key is used and a re-encryption of all properties encrypted with this key. This operation can only happen client-side because the Azure Cosmos DB service never stores or accesses the plain text value of the DEK.
+
+In practice, a DEK rotation can be done by performing a data migration from the impacted containers to new ones. The new containers can be created the exact same way as the original ones. To help you with such a data migration, you can find [a standalone migration tool on GitHub](https://github.com/Azure/azure-cosmos-dotnet-v3/tree/master/Microsoft.Azure.Cosmos.Samples/Usage/ReEncryption).
+
+## Adding additional encrypted properties
+
+Adding additional encrypted properties to an existing encryption policy isn't supported, for the same reasons explained in the preceding section. This operation requires a full scan of the container to ensure that all instances of the properties are properly encrypted, and that scan can only happen client-side. Just like a DEK rotation, adding additional encrypted properties can be done by performing a data migration to a new container with an appropriate encryption policy.
+
+If you have flexibility in the way new encrypted properties can be added from a schema standpoint, you can also leverage the schema-agnostic nature of Azure Cosmos DB. If you use a property defined in your encryption policy as a "property bag", you can add more properties beneath it with no constraint. For example, let's imagine that `property1` is defined in your encryption policy and you initially write `property1.property2` in your documents. If, at a later stage, you need to add `property3` as an encrypted property, you can start writing `property1.property3` in your documents and the new property will automatically be encrypted as well. This approach doesn't require any data migration.
+
## Next steps

-- Get an overview of [secure access to data in Cosmos DB](secure-access-to-data.md).
-- Learn more about [customer-managed keys](how-to-setup-cmk.md)
+- Get an overview of [secure access to data in Azure Cosmos DB](secure-access-to-data.md).
+- Learn more about [customer-managed keys for encryption-at-rest](how-to-setup-cmk.md)
cosmos-db Integrated Power Bi Synapse Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/integrated-power-bi-synapse-link.md
Use the following steps to build a Power BI report from Azure Cosmos DB data in
## Next steps
-* Learn more about [Azure Synapse Link for Azure Cosmos DB](synapse-link.md)
* [Connect serverless SQL pool to Power BI Desktop & create report](../synapse-analytics/sql/tutorial-connect-power-bi-desktop.md#prerequisites)
+* [Use Power BI and serverless Synapse SQL pool to analyze Azure Cosmos DB data with Synapse Link](synapse-link-power-bi.md)
cosmos-db Secure Access To Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/secure-access-to-data.md
Last updated 08/30/2021-+ # Secure access to data in Azure Cosmos DB
CosmosClient client = new CosmosClient(accountEndpoint: "MyEndpoint", authKeyOrR
To add Azure Cosmos DB account reader access to your user account, have a subscription owner perform the following steps in the Azure portal. 1. Open the Azure portal, and select your Azure Cosmos DB account.
-2. Click the **Access control (IAM)** tab, and then click **+ Add role assignment**.
-3. In the **Add role assignment** pane, in the **Role** box, select **Cosmos DB Account Reader Role**.
-4. In the **Assign access to box**, select **Azure AD user, group, or application**.
-5. Select the user, group, or application in your directory to which you wish to grant access. You can search the directory by display name, email address, or object identifiers.
- The selected user, group, or application appears in the selected members list.
-6. Click **Save**.
+
+1. Select **Access control (IAM)**.
+
+1. Select **Add** > **Add role assignment** to open the **Add role assignment** page.
+
+1. Assign the following role. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
+
+ | Setting | Value |
+ | | |
+ | Role | Cosmos DB Account Reader |
+ | Assign access to | User, group, or service principal |
+ | Members | The user, group, or application in your directory to which you wish to grant access. |
+
+ ![Screenshot that shows Add role assignment page in Azure portal.](../../includes/role-based-access-control/media/add-role-assignment-page.png)
The entity can now read Azure Cosmos DB resources.
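If you'd rather script the assignment than use the portal, the following is a minimal Azure CLI sketch; the built-in role name `Cosmos DB Account Reader Role` and the scope format are assumptions you should verify against your subscription:

```azurecli
# Assign the built-in reader role at the Cosmos DB account scope
az role assignment create \
  --role "Cosmos DB Account Reader Role" \
  --assignee "<user-group-or-app-object-id>" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.DocumentDB/databaseAccounts/<account-name>"
```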
cosmos-db Create Real Time Weather Dashboard Powerbi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/create-real-time-weather-dashboard-powerbi.md
- Title: Create a real-time dashboard using Azure Cosmos DB, Azure Analysis Services, and Power BI
-description: Learn how to create a live weather dashboard in Power BI using Azure Cosmos DB and Azure Analysis Services.
----- Previously updated : 09/04/2019----
-# Create a real-time dashboard using Azure Cosmos DB and Power BI
-
-This article describes the steps required to create a live weather dashboard in Power BI using Azure Cosmos DB OLTP connector and Azure Analysis Services. The Power BI dashboard will display charts to show near real-time information about temperature and rainfall in a region.
-
-Another option is to create near real-time reports using [Azure Synapse Link for Azure Cosmos DB](../synapse-link.md). With Azure Synapse Link, you can connect Power BI to analyze your Azure Cosmos DB data, with no performance or cost impact to your transactional workloads, and no ETL pipelines. You can use either [DirectQuery](/power-bi/connect-dat).
--
-## Reporting scenarios
-
-There are multiple ways to set up reporting dashboards on data stored in Azure Cosmos DB. Depending on the staleness requirements and the size of the data, the following table describes the reporting setup for each scenario:
--
-|Scenario |Setup |
-|||
-|1. Generating real time reports on large data sets with aggregates | **Option 1:** [Power BI and Azure Synapse Link with DirectQuery](../synapse-link-power-bi.md)<br /> **Option 2:** [Power BI and Spark connector with DirectQuery + Azure Databricks + Azure Cosmos DB Spark connector.](https://github.com/Azure/azure-cosmosdb-spark/wiki/Connecting-Cosmos-DB-with-PowerBI-using-spark-and-databricks-premium)<br /> **Option 3:** Power BI and Azure Analysis Services connector with DirectQuery + Azure Analysis Services + Azure Databricks + Cosmos DB Spark connector. |
-|2. Generating real time reports on large data sets (>= 10 GB) | **Option 1:** [Power BI and Azure Synapse Link with DirectQuery](../synapse-link-power-bi.md)<br /> **Option 2:** [Power BI and Azure Analysis Services connector with DirectQuery + Azure Analysis Services](create-real-time-weather-dashboard-powerbi.md) |
-|3. Generating ad-hoc reports on large data sets (< 10 GB) | [Power BI Azure Cosmos DB connector with import mode and incremental refresh](create-real-time-weather-dashboard-powerbi.md) |
-|4. Generating ad-hoc reports with periodic refresh | [Power BI Azure Cosmos DB connector with import mode (Scheduled periodic refresh)](powerbi-visualize.md) |
-|5. Generating ad-hoc reports (no refresh) | [Power BI Azure Cosmos DB connector with import mode](powerbi-visualize.md) |
--
-Scenarios 4 and 5 can be easily set up [using the Azure Cosmos DB Power BI connector](powerbi-visualize.md). This article describes below the setups for scenarios 2 (Option 2) and 3.
-
-### Power BI with incremental refresh
-
-Power BI has a mode where incremental refresh can be configured. This mode eliminates the need to create and manage Azure Analysis Services partitions. Incremental refresh can be set up to filter only the latest updates in large datasets. However, this mode works only with Power BI Premium service that has a dataset limitation of 10 GB.
-
-### Power BI Azure Analysis connector + Azure Analysis Services
-
-Azure Analysis Services provides a fully managed platform as a service that hosts enterprise-grade data models in the cloud. Massive data sets can be loaded from Azure Cosmos DB into Azure Analysis Services. To avoid querying the entire dataset all the time, the datasets can be subdivided into Azure Analysis Services partitions, which can be refreshed independently at different frequencies.
-
-## Power BI incremental refresh
-
-### Ingest weather data into Azure Cosmos DB
-
-Set up an ingestion pipeline to load [weather data](https://catalog.data.gov/dataset?groups=climate5434&#topic=climate_navigation) to Azure Cosmos DB. You can set up an [Azure Data Factory (ADF)](../../data-factory/connector-azure-cosmos-db.md) job to periodically load the latest weather data into Azure Cosmos DB using the HTTP Source and Cosmos DB sink.
--
-### Connect Power BI to Azure Cosmos DB
-
-1. **Connect Azure Cosmos account to Power BI** - Open the Power BI Desktop and use the Azure Cosmos DB connector to select the right database and container.
-
- :::image type="content" source="./media/create-real-time-weather-dashboard-powerbi/cosmosdb-powerbi-connector.png" alt-text="Azure Cosmos DB Power BI connector":::
-
-1. **Configure incremental refresh** - Follow the steps in [incremental refresh with Power BI](/power-bi/service-premium-incremental-refresh) article to configure incremental refresh for the dataset. Add the **RangeStart** and **RangeEnd** parameters as shown in the following screenshot:
-
- :::image type="content" source="./media/create-real-time-weather-dashboard-powerbi/configure-range-parameters.png" alt-text="Configure range parameters":::
-
- Since the dataset has a Date column that is in text form, the **RangeStart** and **RangeEnd** parameters should be transformed to use the following filter. In the **Advanced Editor** pane, modify your query add the following text to filter the rows based on the RangeStart and RangeEnd parameters:
-
- ```
- #"Filtered Rows" = Table.SelectRows(#"Expanded Document", each [Document.date] > DateTime.ToText(RangeStart,"yyyy-MM-dd") and [Document.date] < DateTime.ToText(RangeEnd,"yyyy-MM-dd"))
- ```
-
- Depending on which column and data type is present in the source dataset, you can change the RangeStart and RangeEnd fields accordingly
-
-
- |Property |Data type |Filter |
- ||||
- |_ts | Numeric | [_ts] > Duration.TotalSeconds(RangeStart - #datetime(1970, 1, 1, 0, 0, 0)) and [_ts] < Duration.TotalSeconds(RangeEnd - #datetime(1970, 1, 1, 0, 0, 0))) |
- |Date (for example:- 2019-08-19) | String | [Document.date]> DateTime.ToText(RangeStart,"yyyy-MM-dd") and [Document.date] < DateTime.ToText(RangeEnd,"yyyy-MM-dd") |
- |Date (for example:- 2019-08-11 12:00:00) | String | [Document.date]> DateTime.ToText(RangeStart," yyyy-mm-dd HH:mm:ss") and [Document.date] < DateTime.ToText(RangeEnd,"yyyy-mm-dd HH:mm:ss") |
--
-1. **Define the refresh policy** - Define the refresh policy by navigating to the **Incremental refresh** tab on the **context** menu for the table. Set the refresh policy to refresh **every day** and store the last month data.
-
- :::image type="content" source="./media/create-real-time-weather-dashboard-powerbi/define-refresh-policy.png" alt-text="Define refresh policy":::
-
- Ignore the warning that says *the M query cannot be confirmed to be folded*. The Azure Cosmos DB connector folds filter queries.
-
-1. **Load the data and generate the reports** - By using the data you have loaded earlier, create the charts to report on temperature and rainfall.
-
- :::image type="content" source="./media/create-real-time-weather-dashboard-powerbi/load-data-generate-report.png" alt-text="Load data and generate report":::
-
-1. **Publish the report to Power BI premium** - Since incremental refresh is a Premium only feature, the publish dialog only allows selection of a workspace on Premium capacity. The first refresh may take longer to import the historical data. Subsequent data refreshes are much quicker because they use incremental refresh.
--
-## Power BI Azure Analysis connector + Azure Analysis Services
-
-### Ingest weather data into Azure Cosmos DB
-
-Set up an ingestion pipeline to load [weather data](https://catalog.data.gov/dataset?groups=climate5434&#topic=climate_navigation) to Azure Cosmos DB. You can set up an Azure Data Factory(ADF) job to periodically load the latest weather data into Azure Cosmos DB using the HTTP Source and Cosmos DB Sink.
-
-### Connect Azure Analysis Services to Azure Cosmos account
-
-1. **Create a new Azure Analysis Services cluster** - [Create an instance of Azure Analysis services](../../analysis-services/analysis-services-create-server.md) in the same region as the Azure Cosmos account and the Databricks cluster.
-
-1. **Create a new Analysis Services Tabular Project in Visual Studio** - [Install the SQL Server Data Tools (SSDT)](/sql/ssdt/download-sql-server-data-tools-ssdt) and create an Analysis Services Tabular project in Visual Studio.
-
- :::image type="content" source="./media/create-real-time-weather-dashboard-powerbi/create-analysis-services-project.png" alt-text="Create Azure Analysis Services project":::
-
- Choose the **Integrated Workspace** instance and the set the Compatibility Level to **SQL Server 2017 / Azure Analysis Services (1400)**
-
- :::image type="content" source="./media/create-real-time-weather-dashboard-powerbi/tabular-model-designer.png" alt-text="Azure Analysis Services tabular model designer":::
-
-1. **Add the Azure Cosmos DB data source** - Navigate to **Models**> **Data Sources** > **New Data Source** and add the Azure Cosmos DB data source as shown in the following screenshot:
-
- :::image type="content" source="./media/create-real-time-weather-dashboard-powerbi/add-data-source.png" alt-text="Add Cosmos DB data source":::
-
- Connect to Azure Cosmos DB by providing the **account URI**, **database name**, and the **container name**. You can now see the data from Azure Cosmos container is imported into Power BI.
-
- :::image type="content" source="./media/create-real-time-weather-dashboard-powerbi/preview-cosmosdb-data.png" alt-text="Preview Azure Cosmos DB data":::
-
-1. **Construct the Analysis Services model** - Open the query editor, perform the required operations to optimize the loaded data set:
-
- * Extract only the weather-related columns (temperature and rainfall)
-
- * Extract the month information from the table. This data is useful in creating partitions as described in the next section.
-
- * Convert the temperature columns to number
-
- The resulting M expression is as follows:
-
- ```
- let
- Source=#"DocumentDB/https://[ACCOUNTNAME].documents.azure.com:443/",
- #"Expanded Document" = Table.ExpandRecordColumn(Source, "Document", {"id", "_rid", "_self", "_etag", "fogground", "snowfall", "dust", "snowdepth", "mist", "drizzle", "hail", "fastest2minwindspeed", "thunder", "glaze", "snow", "ice", "fog", "temperaturemin", "fastest5secwindspeed", "freezingfog", "temperaturemax", "blowingsnow", "freezingrain", "rain", "highwind", "date", "precipitation", "fogheavy", "smokehaze", "avgwindspeed", "fastest2minwinddir", "fastest5secwinddir", "_attachments", "_ts"}, {"Document.id", "Document._rid", "Document._self", "Document._etag", "Document.fogground", "Document.snowfall", "Document.dust", "Document.snowdepth", "Document.mist", "Document.drizzle", "Document.hail", "Document.fastest2minwindspeed", "Document.thunder", "Document.glaze", "Document.snow", "Document.ice", "Document.fog", "Document.temperaturemin", "Document.fastest5secwindspeed", "Document.freezingfog", "Document.temperaturemax", "Document.blowingsnow", "Document.freezingrain", "Document.rain", "Document.highwind", "Document.date", "Document.precipitation", "Document.fogheavy", "Document.smokehaze", "Document.avgwindspeed", "Document.fastest2minwinddir", "Document.fastest5secwinddir", "Document._attachments", "Document._ts"}),
- #"Select Columns" = Table.SelectColumns(#"Expanded Document",{"Document.temperaturemin", "Document.temperaturemax", "Document.rain", "Document.date"}),
- #"Duplicated Column" = Table.DuplicateColumn(#"Select Columns", "Document.date", "Document.month"),
- #"Extracted First Characters" = Table.TransformColumns(#"Duplicated Column", {{"Document.month", each Text.Start(_, 7), type text}}),
- #"Sorted Rows" = Table.Sort(#"Extracted First Characters",{{"Document.date", Order.Ascending}}),
- #"Changed Type" = Table.TransformColumnTypes(#"Sorted Rows",{{"Document.temperaturemin", type number}, {"Document.temperaturemax", type number}}),
- #"Filtered Rows" = Table.SelectRows(#"Changed Type", each [Document.month] = "2019-07")
- in
- #"Filtered Rows"
- ```
-
- Additionally, change the data type of the temperature columns to Decimal to make sure that these values can be plotted in Power BI.
-
-1. **Create Azure Analysis partitions** - Create partitions in Azure Analysis Services to divide the dataset into logical partitions that can be refreshed independently and at different frequencies. In this example, you create two partitions that would divide the dataset into the most recent month's data and everything else.
-
- :::image type="content" source="./media/create-real-time-weather-dashboard-powerbi/create-analysis-services-partitions.png" alt-text="Create analysis services partitions":::
-
- Create the following two partitions in Azure Analysis
-
- * **Latest Month** - `#"Filtered Rows" = Table.SelectRows(#"Sorted Rows", each [Document.month] = "2019-07")`
- * **Historical** - `#"Filtered Rows" = Table.SelectRows(#"Sorted Rows", each [Document.month] <> "2019-07")`
-
-1. **Deploy the Model to the Azure Analysis Server** - Right click on the Azure Analysis Services project and choose **Deploy**. Add the server name in the **Deployment Server properties** pane.
-
- :::image type="content" source="./media/create-real-time-weather-dashboard-powerbi/analysis-services-deploy-model.png" alt-text="Deploy Azure Analysis Services model":::
-
-1. **Configure partition refreshes and merges** - Azure Analysis Services allows independent processing of partitions. Since we want the **Latest Month** partition to be constantly updated with the most recent data, set the refresh interval to 5 minutes. You can refresh the data by using the [REST API](../../analysis-services/analysis-services-async-refresh.md), [Azure automation](../../analysis-services/analysis-services-refresh-azure-automation.md), or with a [Logic App](../../analysis-services/analysis-services-refresh-logic-app.md). It's not required to refresh the data in historical partition. Additionally, you need to write some code to consolidate the latest month partition to the historical partition and create a new latest month partition.
-
-## Connect Power BI to Analysis Services
-
-1. **Connect to the Azure Analysis Server using the Azure Analysis Services database Connector** - Choose the **Live mode** and connect to the Azure Analysis Services instance as shown in the following screenshot:
-
- :::image type="content" source="./media/create-real-time-weather-dashboard-powerbi/analysis-services-get-data.png" alt-text="Get data from Azure Analysis Services":::
-
-1. **Load the data and generate reports** - By using the data you have loaded earlier, create charts to report on temperature and rainfall. Since you are creating a live connection, the queries should be executed on the data in the Azure Analysis Services model that you have deployed in the previous step. The temperature charts will be updated within five minutes after the new data is loaded into Azure Cosmos DB.
-
- :::image type="content" source="./media/create-real-time-weather-dashboard-powerbi/load-data-generate-report.png" alt-text="Load the data and generate reports":::
-
-## Next steps
-
-* To learn more about Power BI, see [Get started with Power BI](https://powerbi.microsoft.com/documentation/powerbi-service-get-started/).
-
-* [Connect Qlik Sense to Azure Cosmos DB and visualize your data](../visualize-qlik-sense.md)
cosmos-db Troubleshoot Sdk Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-sdk-availability.md
Title: Diagnose and troubleshoot the availability of Azure Cosmos SDKs in multir
description: Learn all about the Azure Cosmos SDK availability behavior when operating in multi regional environments. Previously updated : 02/18/2021 Last updated : 03/28/2022
If you **don't set a preferred region**, the SDK client defaults to the primary
> [!WARNING]
> The failover and availability logic described in this document can be disabled on the client configuration, which is not advised unless the user application is going to handle availability errors itself. This can be achieved by:
>
-> * Setting the [ConnectionPolicy.EnableEndpointRediscovery](/dotnet/api/microsoft.azure.documents.client.connectionpolicy.enableendpointdiscovery) property in .NET V2 SDK to false.
+> * Setting the [ConnectionPolicy.EnableEndpointDiscovery](/dotnet/api/microsoft.azure.documents.client.connectionpolicy.enableendpointdiscovery) property in .NET V2 SDK to false.
> * Setting the [CosmosClientOptions.LimitToEndpoint](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.limittoendpoint) property in .NET V3 SDK to true.
> * Setting the [CosmosClientBuilder.endpointDiscoveryEnabled](/java/api/com.azure.cosmos.cosmosclientbuilder.endpointdiscoveryenabled) method in Java V4 SDK to false.
> * Setting the [CosmosClient.enable_endpoint_discovery](/python/api/azure-cosmos/azure.cosmos.cosmos_client.cosmosclient) parameter in Python SDK to false.
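To see which regions the SDK discovers for your account, you can inspect the account with the Az.CosmosDB PowerShell module. A minimal sketch; the resource group and account names are placeholders:

```powershell
# Sketch: list the write (primary) and read regions that the SDK discovers
# at startup. The resource group and account names are placeholders.
$account = Get-AzCosmosDBAccount -ResourceGroupName "<resource group>" -Name "<account name>"

$account.WriteLocations | Select-Object LocationName   # write (primary) region(s)
$account.ReadLocations | Select-Object LocationName    # regions available for reads
```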
cosmos-db Synapse Link Power Bi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/synapse-link-power-bi.md
In this article, you learn how to build a serverless SQL pool database and views
With Azure Synapse Link, you can build near real-time dashboards in Power BI to analyze your Azure Cosmos DB data. There is no performance or cost impact to your transactional workloads, and no complexity of managing ETL pipelines. You can use either [DirectQuery](/power-bi/connect-data/service-dataset-modes-understand#directquery-mode) or [import](/power-bi/connect-data/service-dataset-modes-understand#import-mode) modes.
+> [!Note]
+> You can build Power BI dashboards with just a few clicks from the Azure Cosmos DB portal. For more information, see [Integrated Power BI experience in Azure Cosmos DB portal for Synapse Link enabled accounts](integrated-power-bi-synapse-link.md). This experience automatically creates T-SQL views in Synapse serverless SQL pools over your Cosmos DB containers. You can then download the .pbids file that connects to these T-SQL views and start building your BI dashboards.
+ In this scenario, you use dummy data about Surface product sales in a partner retail store. You analyze the revenue per store based on the proximity to large households and the impact of advertising for a specific week. In this article, you create two views named **RetailSales** and **StoreDemographics**, and a query that joins them. You can get the sample product data from this [GitHub](https://github.com/Azure-Samples/Synapse/tree/main/Notebooks/PySpark/Synapse%20Link%20for%20Cosmos%20DB%20samples/Retail/RetailData) repo.

## Prerequisites
After you choose these options, you should see a graph like the following screen
## Next steps
+[Integrated Power BI experience in Azure Cosmos DB portal for Synapse Link enabled accounts](integrated-power-bi-synapse-link.md)
+ [Use T-SQL to query Azure Cosmos DB data using Azure Synapse Link](../synapse-analytics/sql/query-cosmos-db-analytical-store.md)
-Use serverless SQL pool to [analyze Azure Open Datasets and visualize the results in Azure Synapse Studio](../synapse-analytics/sql/tutorial-data-analyst.md)
+Use serverless SQL pool to [analyze Azure Open Datasets and visualize the results in Azure Synapse Studio](../synapse-analytics/sql/tutorial-data-analyst.md)
cost-management-billing Programmatically Create Subscription Enterprise Agreement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/programmatically-create-subscription-enterprise-agreement.md
Previously updated : 03/22/2022 Last updated : 03/29/2022
Using one of the following methods, you'll create a subscription alias name. We
- Start with a letter and end with an alphanumeric character
- Don't use periods
+An alias substitutes a user-defined string for the subscription GUID. In other words, you can use it as a shortcut. You can learn more about aliases at [Alias - Create](/rest/api/subscription/2020-09-01/alias/create). In the following examples, `sampleAlias` is created, but you can use any string you like.
### [REST](#tab/rest)
data-factory Data Factory Service Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-factory-service-identity.md
Last updated 01/27/2022 -+ # Managed identity for Azure Data Factory and Azure Synapse
You can find the managed identity information from Azure portal -> your data fac
The managed identity information will also show up when you create linked service, which supports managed identity authentication, like Azure Blob, Azure Data Lake Storage, Azure Key Vault, etc.
-When granting permission, in Azure resource's Access Control (IAM) tab -> Add role assignment -> Assign access to -> select Data Factory under System assigned managed identity -> select by factory name; or in general, you can use object ID or data factory name (as managed identity name) to find this identity. If you need to get managed identity's application ID, you can use PowerShell.
+To grant permissions, follow these steps. For more detail on each step, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
+
+1. Select **Access control (IAM)**.
+
+1. Select **Add** > **Add role assignment**.
+
+ :::image type="content" source="../../includes/role-based-access-control/media/add-role-assignment-menu-generic.png" alt-text="Screenshot that shows Access control (IAM) page with Add role assignment menu open.":::
+
+1. On the **Members** tab, select **Managed identity**, and then select **Select members**.
+
+1. Select your Azure subscription.
+
+1. Under **System-assigned managed identity**, select **Data Factory**, and then select a data factory. You can also use the object ID or data factory name (as the managed-identity name) to find this identity. To get the managed identity's application ID, use PowerShell.
+
+1. On the **Review + assign** tab, select **Review + assign** to assign the role.
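
If you'd rather script the assignment than click through the portal, the following is a minimal Az PowerShell sketch. The resource group, factory name, role, and scope are placeholders:

```powershell
# Sketch: grant the data factory's system-assigned managed identity a role
# on a target resource. All names below are placeholders.
$dataFactory = Get-AzDataFactoryV2 -ResourceGroupName "<resource group>" -Name "<data factory name>"

New-AzRoleAssignment -ObjectId $dataFactory.Identity.PrincipalId `
    -RoleDefinitionName "<role name>" `
    -Scope "<resource ID of the target resource>"
```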
#### Retrieve system-assigned managed identity using PowerShell
data-factory How To Schedule Azure Ssis Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-schedule-azure-ssis-integration-runtime.md
Last updated 02/15/2022 + # How to start and stop Azure-SSIS Integration Runtime on a schedule
If you create a third trigger that is scheduled to run daily at midnight and ass
:::image type="content" source="./media/how-to-schedule-azure-ssis-integration-runtime/adf-until-activity-on-demand-ssis-ir-open.png" alt-text="ADF Until Activity On-Demand SSIS IR Open":::
-7. Assign the managed identity for your ADF a **Contributor** role to itself, so Web activities in its pipelines can call REST API to start/stop Azure-SSIS IRs provisioned in it. On your ADF page in Azure portal, click **Access control (IAM)**, click **+ Add role assignment**, and then on **Add role assignment** blade, do the following actions:
+7. Assign the managed identity for your ADF a **Contributor** role to itself, so Web activities in its pipelines can call the REST API to start and stop Azure-SSIS IRs provisioned in it:
- 1. For **Role**, select **Contributor**.
- 2. For **Assign access to**, select **Azure AD user, group, or service principal**.
- 3. For **Select**, search for your ADF name and select it.
- 4. Click **Save**.
-
- :::image type="content" source="./media/how-to-schedule-azure-ssis-integration-runtime/adf-managed-identity-role-assignment.png" alt-text="ADF Managed Identity Role Assignment":::
+ 1. On your ADF page in the Azure portal, select **Access control (IAM)**.
+ 1. Select **Add** > **Add role assignment** to open the **Add role assignment** page.
+ 1. Assign the following role. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
+
+ | Setting | Value |
+ | | |
+ | Role | Contributor |
+ | Assign access to | User, group, or service principal |
+ | Members | Your ADF username |
+
+ :::image type="content" source="../../includes/role-based-access-control/media/add-role-assignment-page.png" alt-text="Screenshot that shows Add role assignment page in Azure portal.":::
8. Validate your ADF and all pipeline settings by clicking **Validate all/Validate** on the factory/pipeline toolbar. Close **Factory/Pipeline Validation Output** by clicking the **>>** button.
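For reference, the start and stop operations that the Web activities invoke through the REST API are also available as Az.DataFactory cmdlets, which can be handy for testing outside a pipeline. A minimal sketch with placeholder names:

```powershell
# Sketch: start and stop an Azure-SSIS IR from a script instead of a pipeline.
# The resource group, data factory, and IR names are placeholders.
Start-AzDataFactoryV2IntegrationRuntime -ResourceGroupName "<resource group>" `
    -DataFactoryName "<data factory name>" -Name "<Azure-SSIS IR name>" -Force

Stop-AzDataFactoryV2IntegrationRuntime -ResourceGroupName "<resource group>" `
    -DataFactoryName "<data factory name>" -Name "<Azure-SSIS IR name>" -Force
```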
databox-online Azure Stack Edge Gpu Enable Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-enable-azure-monitor.md
Previously updated : 06/03/2021 Last updated : 03/28/2022
Take the following steps to enable Container Insights on your workspace.
`Set-HcsKubernetesAzureMonitorConfiguration -WorkspaceId <> -WorkspaceKey <>`
+ > [!NOTE]
+ > By default, this cmdlet configures the Azure public cloud. To configure a government cloud or non-public cloud, use the parameter `AzureCloudDomainName`.
+ 1. After the Azure Monitor is enabled, you should see logs in the Log Analytics workspace. To view the status of the Kubernetes cluster deployed on your device, go to **Azure Monitor > Insights > Containers**. For the environment option, select **All**. ![Metrics in Log Analytics workspace](media/azure-stack-edge-gpu-enable-azure-monitor/log-analytics-workspace-metrics-1.png)
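Per the note above, non-public clouds need the `AzureCloudDomainName` parameter. A hedged sketch; the workspace values and the cloud domain are placeholders that you should confirm for your environment:

```powershell
# Sketch: configure Azure Monitor for a non-public cloud. The workspace ID,
# workspace key, and cloud domain below are placeholders.
Set-HcsKubernetesAzureMonitorConfiguration -WorkspaceId "<workspace ID>" `
    -WorkspaceKey "<workspace key>" `
    -AzureCloudDomainName "<cloud domain for your cloud>"
```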
ddos-protection Manage Ddos Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/manage-ddos-protection.md
If you want to delete a DDoS protection plan, you must first dissociate all virt
## Next steps
-To learn how to view and configure telemetry for your DDoS protection plan, continue to the tutorials.
+To learn how to view and configure telemetry for your DDoS protection plan, continue to the tutorials.
> [!div class="nextstepaction"]
-> [View and configure DDoS protection telemetry](telemetry.md)
+> [View and configure DDoS protection telemetry](telemetry.md)
defender-for-cloud Alerts Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-reference.md
Microsoft Defender for Containers provides security alerts on the cluster level
|--|--|:-:|--|
| **A file was downloaded and executed (Preview)**<br>(K8S.NODE_LinuxSuspiciousActivity) | Analysis of processes running within a container indicates that a file has been downloaded to the container, given execution privileges and then executed. | Execution | Medium |
| **A history file has been cleared (Preview)**<br>(K8S.NODE_HistoryFileCleared) | Analysis of processes running within a container indicates that the command history log file has been cleared. Attackers may do this to cover their tracks. The operation was performed by the specified user account. | DefenseEvasion | Medium |
+| **Abnormal activity of managed identity associated with Kubernetes (Preview)**<br>(K8S_AbnormalMiAcitivty) | Analysis of Azure Resource Manager operations detected an abnormal behavior of a managed identity used by an AKS addon. The detected activity isn't consistent with the behavior of the associated addon. While this activity can be legitimate, such behavior might indicate that the identity was gained by an attacker, possibly from a compromised container in the Kubernetes cluster. | Lateral Movement | Medium |
+| **Abnormal Kubernetes service account operation detected**<br>(K8S_ServiceAccountRareOperation) | Kubernetes audit log analysis detected abnormal behavior by a service account in your Kubernetes cluster. The service account was used for an operation which isn't common for this service account. While this activity can be legitimate, such behavior might indicate that the service account is being used for malicious purposes. | Lateral Movement, Credential Access | Medium |
| **An uncommon connection attempt detected (Preview)**<br>(K8S.NODE_SuspectConnection) | Analysis of processes running within a container detected an uncommon connection attempt utilizing a socks protocol. This is very rare in normal operations, but a known technique for attackers attempting to bypass network-layer detections. | Execution, Exfiltration, Exploitation | Medium |
| **Anomalous pod deployment (Preview)**<br>(K8S_AnomalousPodDeployment) | Kubernetes audit log analysis detected pod deployment which is anomalous based on previous pod deployment activity. This activity is considered an anomaly when taking into account how the different features seen in the deployment operation are in relations to one another. The features monitored include the container image registry used, the account performing the deployment, day of the week, how often this account performs pod deployments, user agent used in the operation, whether this is a namespace to which pod deployments often occur, and other features. Top contributing reasons for raising this alert as anomalous activity are detailed under the alert's extended properties. | Execution | Medium |
| **Attempt to stop apt-daily-upgrade.timer service detected (Preview)**<br>(K8S.NODE_TimerServiceDisabled) | Analysis of host/device data detected an attempt to stop apt-daily-upgrade.timer service. Attackers have been observed stopping this service to download malicious files and grant execution privileges for their attacks. This activity can also happen if the service is updated through normal administrative actions. | DefenseEvasion | Informational |
defender-for-cloud Defender For Containers Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-introduction.md
Defender for Cloud filters, and classifies findings from the scanner. When an im
:::image type="content" source="./media/defender-for-containers/recommendation-acr-images-with-vulnerabilities.png" alt-text="Sample Microsoft Defender for Cloud recommendation about vulnerabilities discovered in Azure Container Registry (ACR) hosted images." lightbox="./media/defender-for-containers/recommendation-acr-images-with-vulnerabilities.png":::
-### View vulnerabilities for running images
+### View vulnerabilities for running images
-Defender for Containers expands on the registry scanning features by introducing the **preview feature** of run-time visibility of vulnerabilities powered by the Defender profile, or extension.
-
-> [!NOTE]
-> There's no Defender profile for Windows, it's only available on Linux OS.
-
-The new recommendation, **Running container images should have vulnerability findings resolved**, only shows vulnerabilities for running images, and relies on the Defender security profile, or extension to discover which images are currently running. This recommendation groups running images that have vulnerabilities, and provides details about the issues discovered, and how to remediate them. The Defender profile, or extension is used to gain visibility into vulnerable containers that are active.
-
-This recommendation shows running images, and their vulnerabilities based on ACR image. Images that are deployed from a non ACR registry, won't be scanned, and will appear under the Not applicable tab.
+The recommendation **Running container images should have vulnerability findings resolved** shows vulnerabilities for running images by combining the scan results from ACR registries with information about running images from the Defender security profile or extension. Images that are deployed from a non-ACR registry won't be scanned and will appear under the **Not applicable** tab.
:::image type="content" source="media/defender-for-containers/running-image-vulnerabilities-recommendation.png" alt-text="Screenshot showing where the recommendation is viewable" lightbox="media/defender-for-containers/running-image-vulnerabilities-recommendation-expanded.png":::
+> [!NOTE]
+> This recommendation is currently supported for Linux containers only, as there's no Defender profile/extension for Windows.
+>
## Run-time protection for Kubernetes nodes and clusters

Defender for Cloud provides real-time threat protection for your containerized environments and generates alerts for suspicious activities. You can use this information to quickly remediate security issues and improve the security of your containers.
defender-for-cloud Defender For Sql Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-sql-introduction.md
The focus of **Microsoft Defender for SQL on machines** is obviously security. B
The service has a split architecture to balance data uploading and speed with performance:

-- some of our detectors run on the machine for real-time speed advantages
-- others run in the cloud to spare the machine from heavy computational loads
+- Some of our detectors, including an [extended events trace](../azure-sql/database/xevent-db-diff-from-svr.md) named `SQLAdvancedThreatProtectionTraffic`, run on the machine for real-time speed advantages.
+- Other detectors run in the cloud to spare the machine from heavy computational loads.
Lab tests of our solution, comparing it against benchmark loads, showed CPU usage averaging 3% for peak slices. An analysis of the telemetry for our current users shows a negligible impact on CPU and memory usage.
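To verify that the on-machine extended events trace named above is running, you can query the extended events DMV. A minimal sketch using `Invoke-Sqlcmd` from the SqlServer module; the instance name is a placeholder, and the name filter is based on the trace named in this article:

```powershell
# Sketch: check whether the Defender for SQL extended events session is active.
# The SQL instance name is a placeholder.
Invoke-Sqlcmd -ServerInstance "<SQL instance>" -Query @"
SELECT name, create_time
FROM sys.dm_xe_sessions
WHERE name LIKE 'SQLAdvancedThreatProtection%';
"@
```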
defender-for-cloud Export To Siem https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/export-to-siem.md
description: Learn how to stream your security alerts to Microsoft Sentinel, thi
Previously updated : 11/09/2021 Last updated : 03/29/2022 # Stream alerts to a SIEM, SOAR, or IT Service Management solution
Microsoft Sentinel includes built-in connectors for Microsoft Defender for Cloud
- [Stream alerts to Microsoft Sentinel at the subscription level](../sentinel/connect-azure-security-center.md)
- [Connect all subscriptions in your tenant to Microsoft Sentinel](https://techcommunity.microsoft.com/t5/azure-sentinel/azure-security-center-auto-connect-to-sentinel/ba-p/1387539)
-When you connect Defender for Cloud to Microsoft Sentinel, the status of Defender for Cloud alerts that get ingested into Microsoft Sentinel is synchronized between the two services. So, for example, when an alert is closed in Defender for Cloud, that alert will display as closed in Microsoft Sentinel as well. Changing the status of an alert in Defender for Cloud "won't"* affect the status of any Microsoft Sentinel **incidents** that contain the synchronized Microsoft Sentinel alert, only that of the synchronized alert itself.
+When you connect Defender for Cloud to Microsoft Sentinel, the status of Defender for Cloud alerts that get ingested into Microsoft Sentinel is synchronized between the two services. So, for example, when an alert is closed in Defender for Cloud, that alert is also shown as closed in Microsoft Sentinel. If you change the status of an alert in Defender for Cloud, the status of the alert in Microsoft Sentinel is also updated, but the statuses of any Microsoft Sentinel **incidents** that contain the synchronized Microsoft Sentinel alert aren't updated.
-Enabling the preview feature, **bi-directional alert synchronization**, will automatically sync the status of the original Defender for Cloud alerts with Microsoft Sentinel incidents that contain the copies of those Defender for Cloud alerts. So, for example, when a Microsoft Sentinel incident containing a Defender for Cloud alert is closed, Defender for Cloud will automatically close the corresponding original alert.
+You can enable the preview feature **bi-directional alert synchronization** to automatically sync the status of the original Defender for Cloud alerts with Microsoft Sentinel incidents that contain the copies of those Defender for Cloud alerts. So, for example, when a Microsoft Sentinel incident that contains a Defender for Cloud alert is closed, Defender for Cloud automatically closes the corresponding original alert.
Learn more in [Connect alerts from Microsoft Defender for Cloud](../sentinel/connect-azure-security-center.md).
Learn more in [Connect alerts from Microsoft Defender for Cloud](../sentinel/con
### Configure ingestion of all audit logs into Microsoft Sentinel

Another alternative for investigating Defender for Cloud alerts in Microsoft Sentinel is to stream your audit logs into Microsoft Sentinel:
- - [Connect Windows security events](../sentinel/connect-windows-security-events.md)
- - [Collect data from Linux-based sources using Syslog](../sentinel/connect-syslog.md)
- - [Connect data from Azure Activity log](../sentinel/data-connectors-reference.md#azure-activity)
+- [Connect Windows security events](../sentinel/connect-windows-security-events.md)
+- [Collect data from Linux-based sources using Syslog](../sentinel/connect-syslog.md)
+- [Connect data from Azure Activity log](../sentinel/data-connectors-reference.md#azure-activity)
> [!TIP]
-> Microsoft Sentinel is billed based on the volume of data ingested for analysis in Microsoft Sentinel and stored in the Azure Monitor Log Analytics workspace. Microsoft Sentinel offers a flexible and predictable pricing model. [Learn more at the Microsoft Sentinel pricing page](https://azure.microsoft.com/pricing/details/azure-sentinel/).
+> Microsoft Sentinel is billed based on the volume of data that it ingests for analysis in Microsoft Sentinel and stores in the Azure Monitor Log Analytics workspace. Microsoft Sentinel offers a flexible and predictable pricing model. [Learn more at the Microsoft Sentinel pricing page](https://azure.microsoft.com/pricing/details/azure-sentinel/).
## Stream alerts with Azure Monitor
-To stream alerts into **ArcSight**, **Splunk**, **QRadar**, **SumoLogic**, **Syslog servers**, **LogRhythm**, **Logz.io Cloud Observability Platform**, and other monitoring solutions. connect Defender for Cloud with Azure monitor via Azure Event Hubs:
+To stream alerts into **ArcSight**, **Splunk**, **QRadar**, **SumoLogic**, **Syslog servers**, **LogRhythm**, **Logz.io Cloud Observability Platform**, and other monitoring solutions, connect Defender for Cloud to Azure Monitor using Azure Event Hubs:
> [!NOTE]
-> To stream alerts at the tenant level, use this Azure policy and set the scope at the root management group (you'll need permissions for the root management group as explained in [Defender for Cloud permissions](permissions.md)): [Deploy export to event hub for Microsoft Defender for Cloud alerts and recommendations](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fcdfcce10-4578-4ecd-9703-530938e4abcb).
+> To stream alerts at the tenant level, use this Azure policy and set the scope at the root management group. You'll need permissions for the root management group as explained in [Defender for Cloud permissions](permissions.md): [Deploy export to an event hub for Microsoft Defender for Cloud alerts and recommendations](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fcdfcce10-4578-4ecd-9703-530938e4abcb).
1. Enable [continuous export](continuous-export.md) to stream Defender for Cloud alerts into a dedicated event hub at the subscription level. To do this at the Management Group level using Azure Policy, see [Create continuous export automation configurations at scale](continuous-export.md?tabs=azure-policy#configure-continuous-export-at-scale-using-the-supplied-policies).
To stream alerts into **ArcSight**, **Splunk**, **QRadar**, **SumoLogic**, **Sys
1. Optionally, stream the raw logs to the event hub and connect to your preferred solution. Learn more in [Monitoring data available](../azure-monitor/essentials/stream-monitoring-data-event-hubs.md#monitoring-data-available).
-To view the event schemas of the exported data types, visit the [Event hub event schemas](https://aka.ms/ASCAutomationSchemas).
+To view the event schemas of the exported data types, visit the [Event Hubs event schemas](https://aka.ms/ASCAutomationSchemas).
## Other streaming options
As an alternative to Sentinel and Azure Monitor, you can use Defender for Cloud'
You can use this API to stream alerts from your **entire tenant** (and data from many other Microsoft Security products) into third-party SIEMs and other popular platforms:

- **Splunk Enterprise and Splunk Cloud** - [Use the Microsoft Graph Security API Add-On for Splunk](https://splunkbase.splunk.com/app/4564/)
-- **Power BI** - [Connect to the Microsoft Graph Security API in Power BI Desktop](/power-bi/connect-data/desktop-connect-graph-security)
-- **ServiceNow** - [Follow the instructions to install and configure the Microsoft Graph Security API application from the ServiceNow Store](https://docs.servicenow.com/bundle/orlando-security-management/page/product/secops-integration-sir/secops-integration-ms-graph/task/ms-graph-install.html)
-- **QRadar** - [IBM's Device Support Module for Microsoft Defender for Cloud via Microsoft Graph API](https://www.ibm.com/support/knowledgecenter/SS42VS_DSM/com.ibm.dsm.doc/c_dsm_guide_ms_azure_security_center_overview.html)
-- **Palo Alto Networks**, **Anomali**, **Lookout**, **InSpark**, and more - [Microsoft Graph Security API](https://www.microsoft.com/security/business/graph-security-api#office-MultiFeatureCarousel-09jr2ji)
+- **Power BI** - [Connect to the Microsoft Graph Security API in Power BI Desktop](/power-bi/connect-data/desktop-connect-graph-security).
+- **ServiceNow** - [Install and configure the Microsoft Graph Security API application from the ServiceNow Store](https://docs.servicenow.com/bundle/orlando-security-management/page/product/secops-integration-sir/secops-integration-ms-graph/task/ms-graph-install.html).
+- **QRadar** - [Use IBM's Device Support Module for Microsoft Defender for Cloud via Microsoft Graph API](https://www.ibm.com/support/knowledgecenter/SS42VS_DSM/com.ibm.dsm.doc/c_dsm_guide_ms_azure_security_center_overview.html).
+- **Palo Alto Networks**, **Anomali**, **Lookout**, **InSpark**, and more - [Use the Microsoft Graph Security API](https://www.microsoft.com/security/business/graph-security-api#office-MultiFeatureCarousel-09jr2ji).
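
To explore what this API returns before wiring up one of these integrations, you can call it directly. A minimal PowerShell sketch; it assumes you've already acquired a bearer token for `https://graph.microsoft.com` with permission to read security alerts:

```powershell
# Sketch: list a few alerts from the Microsoft Graph Security API.
# The token below is a placeholder.
$token = "<bearer token>"

$response = Invoke-RestMethod -Method Get `
    -Uri "https://graph.microsoft.com/v1.0/security/alerts?`$top=10" `
    -Headers @{ Authorization = "Bearer $token" }

$response.value | Select-Object title, severity, createdDateTime
```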
defender-for-cloud Supported Machines Endpoint Solutions Clouds Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/supported-machines-endpoint-solutions-clouds-containers.md
The **tabs** below show the features that are available, by environment, for Mic
| Domain | Feature | Supported Resources | Release state <sup>[1](#footnote1)</sup> | Windows support | Agentless/Agent-based | Pricing tier |
|--|--| -- | -- | -- | -- | --|
| Compliance | Docker CIS | EC2 | Preview | X | Log Analytics agent | Defender for Servers |
-| Vulnerability Assessment | Registry scan | - | - | - | - | - |
-| Vulnerability Assessment | View vulnerabilities for running images | - | - | - | - | - |
+| Vulnerability assessment | Registry scan | - | - | - | - | - |
+| Vulnerability assessment | View vulnerabilities for running images | - | - | - | - | - |
| Hardening | Control plane recommendations | - | - | - | - | - |
| Hardening | Kubernetes data plane recommendations | EKS | Preview | X | Azure Policy extension | Defender for Containers |
| Runtime protection| Threat detection (control plane)| EKS | Preview | ✓ | Agentless | Defender for Containers |
The **tabs** below show the features that are available, by environment, for Mic
| Domain | Feature | Supported Resources | Release state <sup>[1](#footnote1)</sup> | Windows support | Agentless/Agent-based | Pricing tier |
|--|--| -- | -- | -- | -- | --|
| Compliance | Docker CIS | GCP VMs | Preview | X | Log Analytics agent | Defender for Servers |
-| Vulnerability Assessment | Registry scan | - | - | - | - | - |
-| Vulnerability Assessment | View vulnerabilities for running images | - | - | - | - | - |
+| Vulnerability assessment | Registry scan | - | - | - | - | - |
+| Vulnerability assessment | View vulnerabilities for running images | - | - | - | - | - |
| Hardening | Control plane recommendations | - | - | - | - | - |
| Hardening | Kubernetes data plane recommendations | GKE | Preview | X | Azure Policy extension | Defender for Containers |
| Runtime protection| Threat detection (control plane)| GKE | Preview | ✓ | Agentless | Defender for Containers |
The **tabs** below show the features that are available, by environment, for Mic
| Domain | Feature | Supported Resources | Release state <sup>[1](#footnote1)</sup> | Windows support | Agentless/Agent-based | Pricing tier |
|--|--| -- | -- | -- | -- | --|
| Compliance | Docker CIS | Arc enabled VMs | Preview | X | Log Analytics agent | Defender for Servers |
-| Vulnerability Assessment | Registry scan | ACR, Private ACR | Preview | ✓ (Preview) | Agentless | Defender for Containers |
-| Vulnerability Assessment | View vulnerabilities for running images | Arc enabled K8s clusters | Preview | X | Defender extension | Defender for Containers |
+| Vulnerability assessment | Registry scan | ACR, Private ACR | Preview | ✓ (Preview) | Agentless | Defender for Containers |
+| Vulnerability assessment | View vulnerabilities for running images | Arc enabled K8s clusters | Preview | X | Defender extension | Defender for Containers |
| Hardening | Control plane recommendations | - | - | - | - | - |
| Hardening | Kubernetes data plane recommendations | Arc enabled K8s clusters | Preview | X | Azure Policy extension | Defender for Containers |
| Runtime protection| Threat detection (control plane)| Arc enabled K8s clusters | Preview | ✓ | Defender extension | Defender for Containers |
devtest-labs Devtest Lab Attach Detach Data Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-attach-detach-data-disk.md
Title: Attach an existing data disk to a lab VM
-description: Learn how to attach or detach a lab data disk to a lab virtual machine in Azure DevTest Labs
+ Title: Attach & detach data disks for lab VMs
+description: Learn how to attach or detach a data disk for a lab virtual machine in Azure DevTest Labs.
Previously updated : 10/26/2021 Last updated : 03/29/2022
-# Attach or detach a lab data disk to a lab virtual machine in Azure DevTest Labs
+# Attach or detach a data disk for a lab virtual machine in Azure DevTest Labs
-You can create and attach a new lab [data disk](../virtual-machines/managed-disks-overview.md) for a lab Azure virtual machine (VM). The data disk can then be detached, and either: deleted, reattached, or attached to a different lab VM that you own. This functionality is handy for managing storage or software outside of each individual virtual machine.
-
-In this article, you'll learn how to attach and detach a data disk to a lab virtual machine.
+This article explains how to attach and detach a lab virtual machine (VM) data disk in Azure DevTest Labs. You can create, attach, detach, and reattach [data disks](/azure/virtual-machines/managed-disks-overview) for lab VMs that you own. This functionality is useful for managing storage or software separately from individual VMs.
## Prerequisites
-Your lab virtual machine must be running. The virtual machine size controls how many data disks you can attach. For details, see [Sizes for virtual machines](../virtual-machines/sizes.md).
-
-## Attach a new data disk
-
-Follow these steps to create and attach a new managed data disk to a VM in Azure DevTest Labs.
+To attach or detach a data disk, you need to own the lab VM, and the VM must be running. The VM size determines how many data disks you can attach. For more information, see [Sizes for virtual machines](/azure/virtual-machines/sizes).
-1. Sign in to the [Azure portal](https://portal.azure.com/).
+## Create and attach a new data disk
-1. Navigate to your lab in **DevTest Labs**.
+Follow these steps to create and attach a new managed data disk for a DevTest Labs VM.
-1. Select your running virtual machine.
+1. Select your VM from the **My virtual machines** list on the lab **Overview** page.
-1. From the **virtual machine** page, under **Settings**, select **Disks**.
+1. On the VM **Overview** page, select **Disks** under **Settings** in the left navigation.
-1. Select **Attach new**.
+1. On the **Disks** page, select **Attach new**.
- :::image type="content" source="./media/devtest-lab-attach-detach-data-disk/devtest-lab-attach-new.png" alt-text="Screenshot of attach new data disk to a virtual machine.":::
+ :::image type="content" source="./media/devtest-lab-attach-detach-data-disk/devtest-lab-attach-new.png" alt-text="Screenshot of Attach new on the V M's Disk page.":::
-1. From the **Attach new disk** page, provide the following information:
+1. Fill out the **Attach new disk** form as follows:
- |Property | Description |
- |||
- |Name|Enter a unique name.|
- |Disk type| Select a [disk type](../virtual-machines/disks-types.md) from the drop-down list.|
- |Size (GiB)|Enter a size in gigabytes.|
-
- :::image type="content" source="./media/devtest-lab-attach-detach-data-disk/devtest-lab-attach-new-form.png" alt-text="Screenshot of complete the 'attach new disk' form.":::
+ - For **Name**, enter a unique name.
+ - For **Disk type**, select a [disk type](/azure/virtual-machines/disks-types) from the drop-down list.
+ - For **Size (GiB)**, enter a size in gigabytes.
1. Select **OK**.
-1. You're returned to the **virtual machine** page. View your attached disk under **Data disks**.
-
- :::image type="content" source="./media/devtest-lab-attach-detach-data-disk/devtest-lab-attached-data-disk.png" alt-text="Screenshot of attached disk appears under data disks.":::
+ :::image type="content" source="./media/devtest-lab-attach-detach-data-disk/devtest-lab-attach-new-form.png" alt-text="Screenshot of the Attach new disk form.":::
-## Detach a data disk
+1. After the disk is attached, on the **Disks** page, view the new attached disk under **Data disks**.
-Detaching removes the lab disk from the lab VM, but keeps it in storage for later use.
+ :::image type="content" source="./media/devtest-lab-attach-detach-data-disk/devtest-lab-attached-data-disk.png" alt-text="Screenshot of the new data disk under Data disks on the Disks page.":::
-### Detach from the VM's management page
+## Attach an existing data disk
-1. Navigate to your lab in **DevTest Labs**.
+Follow these steps to attach an existing available data disk to a running VM.
-1. Select your running virtual machine with an attached data disk.
+1. Select your VM from the **My virtual machines** list on the lab **Overview** page.
-1. From the **virtual machine** page, under **Settings**, select **Disks**.
-
-1. Under **Data disks**, select the data disk you want to detach.
-
- :::image type="content" source="./media/devtest-lab-attach-detach-data-disk/devtest-lab-detach-button.png" alt-text="Screenshot of select data disks for a virtual machine.":::
+1. On the VM **Overview** page, select **Disks** under **Settings** in the left navigation.
+
+1. On the **Disks** page, select **Attach existing**.
-1. From the **Data disk** page, select **Detach**.
+ :::image type="content" source="./media/devtest-lab-attach-detach-data-disk/devtest-lab-attach-existing-button.png" alt-text="Screenshot of Attach existing on the V M's Disk page.":::
- :::image type="content" source="./media/devtest-lab-attach-detach-data-disk/devtest-lab-detach-data-disk-2.png" alt-text="Screenshot shows a disk's details pane with the 'Detach' action highlighted.":::
+1. On the **Attach existing disk** page, select a disk, and then select **OK**.
-1. Select **OK** to confirm that you want to detach the data disk. The disk is detached and is available to attach to another VM.
+ :::image type="content" source="./media/devtest-lab-attach-detach-data-disk/devtest-lab-attach-existing.png" alt-text="Screenshot of attach existing data disk to a virtual machine.":::
-### Detach from the lab's management page
+1. After the disk is attached, on the **Disks** page, view the attached disk under **Data disks**.
-1. Navigate to your lab in **DevTest Labs**.
+## Detach a data disk
-1. Under **My Lab**, select **My data disks**.
+Detaching removes the lab disk from the VM, but keeps it in storage for later use.
-1. For the disk you wish to detach, select its ellipsis (**...**) ΓÇô and select **Detach**.
+Follow these steps to detach an attached data disk from a running VM.
- :::image type="content" source="./media/devtest-lab-attach-detach-data-disk/devtest-lab-detach-data-disk.png" alt-text="Screenshot of detach a data disk.":::
+1. Select the VM with the disk from the **My virtual machines** list on the lab **Overview** page.
-1. Select **Yes** to confirm that you want to detach it.
+1. On the VM **Overview** page, select **Disks** under **Settings** in the left navigation.
+
+1. On the **Disks** page, under **Data disks**, select the data disk you want to detach.
- > [!NOTE]
- > If a data disk is already detached, you can choose to remove it from your list of available data disks by selecting **Delete**.
+ :::image type="content" source="./media/devtest-lab-attach-detach-data-disk/devtest-lab-detach-button.png" alt-text="Screenshot of selecting a data disk to detach.":::
-## Attach an existing disk
+1. On the data disk's page, select **Detach**, and then select **OK**.
-Follow these steps to attach an existing available data disk to a running VM.
+ :::image type="content" source="./media/devtest-lab-attach-detach-data-disk/devtest-lab-detach-data-disk-2.png" alt-text="Screenshot showing Detach on the Data disk page.":::
-1. Navigate to your lab in **DevTest Labs**.
+The disk is detached, and is available to reattach to this or another VM.
-1. Select your running virtual machine.
+### Detach or delete a data disk on the lab management page
-1. From the **virtual machine** page, under **Settings**, select **Disks**.
+You can also detach or delete a data disk without navigating to the VM's page.
-1. Select **Attach existing**.
+1. In the left navigation for your lab's **Overview** page, select **My data disks** under **My Lab**.
- :::image type="content" source="./media/devtest-lab-attach-detach-data-disk/devtest-lab-attach-existing-button.png" alt-text="Screenshot that shows the 'Disks' setting selected and 'Attach existing' selected.":::
+1. On the **My data disks** page, either:
-1. From the **Attach existing disk** page, select a disk and then **OK**. After a few moments, the data disk is attached to the VM and appears in the list of **Data disks** for that VM.
+ - Select the disk you want to detach, and then on the data disk's page, select **Detach** and then select **OK**.
- :::image type="content" source="./media/devtest-lab-attach-detach-data-disk/devtest-lab-attach-existing.png" alt-text="Screenshot of attach existing data disk to a virtual machine.":::
+ or
-## Upgrade an unmanaged data disk
+ - Select the ellipsis (**...**) next to the disk you want to detach, select **Detach** from the context menu, and then select **Yes**.
-If you have a VM with unmanaged data disks, you can convert the VM to use managed disks. This process converts both the OS disk and any attached data disks.
+ :::image type="content" source="./media/devtest-lab-attach-detach-data-disk/devtest-lab-detach-data-disk.png" alt-text="Screenshot of detaching a data disk from the listing's context menu.":::
-First [detach the data disk](#detach-a-data-disk) from the unmanaged VM. Then, [reattach the disk](#attach-an-existing-disk) to a managed VM to automatically upgrade the data disk from unmanaged to managed.
+You can also delete a detached data disk by selecting **Delete** from the context menu or from the data disk page. When you delete a data disk, it's removed from storage and can't be reattached.
## Next steps
-Learn how to manage data disks for [claimable virtual machines](devtest-lab-add-claimable-vm.md#unclaim-a-vm).
+For information about transferring data disks for claimable lab VMs, see [Transfer the data disk](devtest-lab-add-claimable-vm.md#transfer-the-data-disk).
devtest-labs Devtest Lab Auto Startup Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-auto-startup-vm.md
Title: Configure auto start settings for a VM
-description: Learn how to configure auto start settings for VMs in a lab. This setting allows VMs in the lab to be automatically started on a schedule.
+ Title: Configure auto-start settings for a VM
+description: Learn how to configure auto-start settings for VMs in a lab. This setting allows VMs in the lab to be automatically started on a schedule.
Previously updated : 12/10/2021 Last updated : 03/29/2022
-# Start up lab virtual machines automatically
+# Automatically start lab VMs with auto-start in Azure DevTest Labs
-Auto start allows you to automatically start virtual machines (VMs) in a lab at a scheduled time each day. You first need to create an auto start policy. Then you must select which VMs to follow the policy. The extra step of affirmatively selecting VMs to auto start is meant to prevent the unintentional starting of VMs that result in increased costs.
+This article shows how to configure and apply an auto-start policy for Azure DevTest Labs virtual machines (VMs). Auto-start automatically starts up lab VMs at specified times and days.
-This article shows you how to configure an auto start policy for a lab. For information on configuring auto shutdown settings, see [Manage auto shutdown policies for a lab in Azure DevTest Labs](devtest-lab-auto-shutdown.md).
+To implement auto-start, you configure an auto-start policy for the lab first. Then, you can enable the policy for individual lab VMs. Requiring individual VMs to enable auto-start helps prevent unnecessary startups that could increase costs.
-## Configure auto start settings for a lab
+You can also configure auto-shutdown policies for lab VMs. For more information, see [Manage auto shutdown policies for a lab in Azure DevTest Labs](devtest-lab-auto-shutdown.md).
-The policy doesn't automatically apply auto start to any VMs in the lab. After configuring the policy, follow the steps from [Enable auto start for a VM in the lab](#enable-auto-start-for-a-vm-in-the-lab).
+## Configure auto-start for the lab
-1. Sign in to the [Azure portal](https://portal.azure.com/).
+To configure auto-start policy for a lab, follow these steps. After configuring the policy, [enable auto-start](#add-vms-to-the-auto-start-schedule) for each VM that you want to auto-start.
-1. Navigate to your lab in **DevTest Labs**.
+1. On your lab **Overview** page, select **Configuration and policies** under **Settings** in the left navigation.
-1. Under **Settings**, select **Configuration and policies**.
+ :::image type="content" source="./media/devtest-lab-auto-startup-vm/configuration-policies-menu.png" alt-text="Screenshot that shows selecting Configuration and policies in the left navigation menu.":::
- :::image type="content" source="./media/devtest-lab-auto-startup-vm/configuration-policies-menu.png" alt-text="Screenshot that shows the 'Configuration and policies' menu in the DevTest Labs.":::
+1. On the **Configuration and policies** page, select **Auto-start** under **Schedules** in the left navigation.
-1. On the **Configuration and policies** page, under **Schedules**, select **Auto-start**.
-
-1. For **Allow auto-start**, select **Yes**. Scheduling information will then appear.
+1. Select **Yes** for **Allow auto-start**.
:::image type="content" source="./media/devtest-lab-auto-startup-vm/portal-lab-auto-start.png" alt-text="Screenshot of Auto-start option under Schedules.":::
-1. Provide the following scheduling information:
-
- |Property | Description |
- |||
- |Scheduled start| Enter a start time.|
- |Time zone| Select a time zone from the drop-down list.|
- |Days of the week| Select each box next to the day you want the schedule to be applied.|
-
- :::image type="content" source="./media/devtest-lab-auto-startup-vm/auto-start-configuration.png" alt-text="Screenshot of Autostart schedule settings.":::
+1. Enter a **Scheduled start** time, select a **Time zone**, and select the checkboxes next to the **Days of the week** that you want to apply the schedule.
-1. Select **Save**.
+1. Select **Save**.
-## Enable auto start for a VM in the lab
+ :::image type="content" source="./media/devtest-lab-auto-startup-vm/auto-start-configuration.png" alt-text="Screenshot of auto-start schedule settings.":::
-These steps continue from the prior section. Now that an auto start policy has been created, select the VMs to apply the policy against.
+## Add VMs to the auto-start schedule
-1. Close the **Configuration and policies** page to return to the **DevTest Labs** page.
+After you configure the auto-start policy, follow these steps for each VM that you want to auto-start.
-1. Under **My virtual machines**, select a VM.
+1. On your lab **Overview** page, select the VM under **My virtual machines**.
- :::image type="content" source="./media/devtest-lab-auto-startup-vm/select-vm.png" alt-text="Screenshot of Select VM from list under My virtual machines.":::
+ :::image type="content" source="./media/devtest-lab-auto-startup-vm/select-vm.png" alt-text="Screenshot of selecting a VM from the list under My virtual machines.":::
-1. On the **virtual machine** page, under **Operations**, select **Auto-start**.
+1. On the VM's **Overview** page, select **Auto-start** under **Operations** in the left navigation.
-1. On the **Auto-start** page, select **Yes**, and then **Save**.
+1. On the **Auto-start** page, select **Yes** for **Allow this virtual machine to be scheduled for automatic start**, and then select **Save**.
- :::image type="content" source="./media/devtest-lab-auto-startup-vm/select-auto-start.png" alt-text="Screenshot of Select autostart menu.":::
+ :::image type="content" source="./media/devtest-lab-auto-startup-vm/select-auto-start.png" alt-text="Screenshot of selecting Yes on the Auto-start page.":::
## Next steps - [Manage auto shutdown policies for a lab in Azure DevTest Labs](devtest-lab-auto-shutdown.md)
+- [Use command lines to start and stop DevTest Labs virtual machines](use-command-line-start-stop-virtual-machines.md)
devtest-labs Test App Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/test-app-azure.md
Title: How to test your app in Azure
-description: Learn how to deploy desktop/web applications to a file share and test them.
+ Title: Set up an app for testing on a lab VM
+description: Learn how to publish an app to an Azure file share for testing from a DevTest Labs virtual machine.
Previously updated : 11/03/2021 Last updated : 03/29/2022
-# Test your app in Azure
+# Set up an app for testing on an Azure DevTest Labs VM
-In this guide, you'll learn how to test your application in Azure using DevTest Labs. You use Visual Studio to deploy your app to an Azure file share. Then you'll access the share from a lab virtual machine (VM).
+This article shows how to set up an application for testing from an Azure DevTest Labs virtual machine (VM). In this example, you use Visual Studio to publish an app to an Azure file share. Then you access the file share from a lab VM for testing.
## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- A Windows-based [DevTest Labs VM](devtest-lab-add-vm.md) to use for testing the app.
+- [Visual Studio](https://visualstudio.microsoft.com/free-developer-offers/) installed on a different workstation.
+- A [file share](/azure/storage/files/storage-how-to-create-file-share) created in your lab's [Azure Storage Account](encrypt-storage.md).
+- The [file share mounted](/azure/storage/files/storage-how-to-use-files-windows#mount-the-azure-file-share) to your Visual Studio workstation, and to the lab VM you want to use for testing.
-- A local workstation with [Visual Studio](https://visualstudio.microsoft.com/free-developer-offers/).
-
-- A lab in [DevTest Labs](devtest-lab-overview.md).
-
-- An [Azure virtual machine](devtest-lab-add-vm.md) running Windows in your lab.
-
-- A [file share](../storage/files/storage-how-to-create-file-share.md) in your lab's existing Azure storage account. A storage account is automatically created with a lab.
+## Publish your app from Visual Studio
-- The [Azure file share mounted](../storage/files/storage-how-to-use-files-windows.md#mount-the-azure-file-share) to your local workstation and lab VM.
+First, publish an app from Visual Studio to your Azure file share.
-## Publish your app from Visual Studio
+1. Open Visual Studio, and choose **Create a new project** in the **Start** window.
-In this section, you publish your app from Visual Studio to your Azure file share.
+ :::image type="content" source="./media/test-app-in-azure/launch-visual-studio.png" alt-text="Screenshot of the Visual Studio Start page with Create a new project selected.":::
-1. Open Visual Studio, and choose **Create a new project** in the Start window.
+1. On the **Create a new project** screen, select **Console Application**, and then select **Next**.
- :::image type="content" source="./media/test-app-in-azure/launch-visual-studio.png" alt-text="Screenshot of visual studio start page.":::
+ :::image type="content" source="./media/test-app-in-azure/select-console-application.png" alt-text="Screenshot of choosing Console Application.":::
-1. Select **Console Application** and then **Next**.
+1. On the **Configure your new project** page, keep the defaults and select **Next**.
- :::image type="content" source="./media/test-app-in-azure/select-console-application.png" alt-text="Screenshot of option to choose console application.":::
+1. On the **Additional information** page, keep the defaults and select **Create**.
-1. On the **Configure your new project** page, leave the defaults, and select **Next**.
+1. In Visual Studio **Solution Explorer**, right-click your project name, and select **Build**.
-1. On the **Additional information** page, leave the defaults and select **Create**.
+1. When the build succeeds, in **Solution Explorer**, right-click your project name, and select **Publish**.
-1. From **Solution Explorer**, right-click your project and select **Build**.
+ :::image type="content" source="./media/test-app-in-azure/publish-application.png" alt-text="Screenshot of selecting Publish from Solution Explorer.":::
-1. From **Solution Explorer**, right-click your project and select **Publish**.
+1. On the **Publish** screen, select **Folder**, and then select **Next**.
- :::image type="content" source="./media/test-app-in-azure/publish-application.png" alt-text="Screenshot of option to publish application.":::
+ :::image type="content" source="./media/test-app-in-azure/publish-to-folder.png" alt-text="Screenshot of selecting Folder on the Publish screen.":::
-1. On the **Publish** page, select **Folder** and then **Next**.
+1. For **Specific target**, select **Folder**, and then select **Next**.
- :::image type="content" source="./media/test-app-in-azure/publish-to-folder.png" alt-text="Screenshot of option to publish to folder.":::
+1. For the **Location** option, select **Browse**, and then select the file share you mounted earlier.
-1. For the **Specific target** option, select **Folder** and then **Next**.
+ :::image type="content" source="./media/test-app-in-azure/selecting-file-share.png" alt-text="Screenshot of browsing and selecting the file share.":::
-1. For the **Location** option, select **Browse**, and select the file share you mounted earlier. Then Select **OK**, and then **Finish**.
+1. Select **OK**, and then select **Finish**.
- :::image type="content" source="./media/test-app-in-azure/selecting-file-share.png" alt-text="Screenshot of option to select file share.":::
+1. Select **Publish**.
-1. Select **Publish**. Visual Studio builds your application and publishes it to your file share.
+ :::image type="content" source="./media/test-app-in-azure/final-publish.png" alt-text="Screenshot of selecting Publish.":::
- :::image type="content" source="./media/test-app-in-azure/final-publish.png" alt-text="Screenshot of publish button.":::
+Visual Studio publishes your application to the file share.
-## Test the app on your test VM in the lab
+## Access the app on your lab VM
-1. Connect to your lab virtual machine.
+1. Connect to your lab test VM.
-1. Within the virtual machine, launch **File Explorer**, and select **This PC** to find the file share you mounted earlier.
+1. On the lab VM, start up **File Explorer**, select **This PC**, and find the file share you mounted earlier.
- :::image type="content" source="./media/test-app-in-azure/find-share-on-vm.png" alt-text="Screenshot of file explorer.":::
+ :::image type="content" source="./media/test-app-in-azure/find-share-on-vm.png" alt-text="Screenshot of the file share in the V M's File Explorer.":::
-1. Open the file share and confirm that you see the app you deployed from Visual Studio.
+1. Open the file share, and confirm that you see the app you deployed from Visual Studio.
- :::image type="content" source="./media/test-app-in-azure/open-file-share.png" alt-text="Screenshot of contents of file share.":::
+ :::image type="content" source="./media/test-app-in-azure/open-file-share.png" alt-text="Screenshot of contents of file share.":::
- You can now access and test your app within the test VM you created in Azure.
+You can now test your app on your lab VM.
## Next steps
devtest-labs Use Command Line Start Stop Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/use-command-line-start-stop-virtual-machines.md
Title: Use command-line tools to start and stop VMs
-description: Learn how to use command-line tools to start and stop virtual machines in Azure DevTest Labs.
+ Title: Start & stop lab VMs with command lines
+description: Use Azure PowerShell or Azure CLI command lines and scripts to start and stop Azure DevTest Labs virtual machines.
Previously updated : 10/22/2021 Last updated : 03/29/2022 ms.devlang: azurecli
-# Use command-line tools to start and stop Azure DevTest Labs virtual machines
+# Use command lines to start and stop DevTest Labs virtual machines
-This article shows you how to start or stop a lab virtual machines in Azure DevTest Labs. You can create Azure PowerShell or Azure CLI scripts to automate these operations.
+This article shows how to start or stop Azure DevTest Labs virtual machines (VMs) by using Azure PowerShell or Azure CLI command lines and scripts.
-## Prerequisites
-- If using PowerShell, you'll need the [Az Module](/powershell/azure/new-azureps-module-az) installed on your workstation. Ensure you have the latest version. If necessary, run `Update-Module -Name Az`.
-
-- If wanting to use Azure CLI and you haven't yet installed it, see [Install the Azure CLI](/cli/azure/install-azure-cli).
+You can start, stop, or [restart DevTest Labs VMs](devtest-lab-restart-vm.md) by using the Azure portal. You can also use the portal to configure [automatic startup](devtest-lab-auto-startup-vm.md) and [automatic shutdown](devtest-lab-auto-shutdown.md) schedules and policies for lab VMs.
-- A virtual machine in a DevTest Labs lab.
+When you want to script or automate start or stop for lab VMs, use PowerShell or Azure CLI commands. For example, you can use start or stop commands to:
-## Overview
+- Test a three-tier application, where the tiers need to start in a sequence.
+- Turn off VMs to save costs when they meet custom criteria.
+- Start when a continuous integration and continuous delivery (CI/CD) workflow begins, and stop when it finishes. For an example of this workflow, see [Run an image factory from Azure DevOps](image-factory-set-up-devops-lab.md).
-Azure DevTest Labs provides a way to create fast, easy, and lean dev/test environments. Labs allow you to manage cost, quickly create VMs, and minimize waste. You can use the features in the Azure portal to automatically start and stop VMs at specific times. However, you may want to automate the starting and stopping of VMs from scripts. Here are some situations in which running these tasks by using scripts would be helpful.
+## Prerequisites
-- When using a three-tier application as part of a test environment and the tiers need to be started in a sequence. -- To turn off a VM when a custom criteria is met to save money. -- As a task within a continuous integration and continuous delivery workflow to start at the beginning, and then stop the VMs when the process is complete. An example of this workflow would be the custom image factory with Azure DevTest Labs.
+- A [lab VM in DevTest Labs](devtest-lab-add-vm.md).
+- For Azure PowerShell, the [Az module](/powershell/azure/new-azureps-module-az) installed on your workstation. Make sure you have the latest version. If necessary, run `Update-Module -Name Az` to update the module.
+- For Azure CLI, [Azure CLI](/cli/azure/install-azure-cli) installed on your workstation.
-## Azure PowerShell
+## Azure PowerShell script
-The following PowerShell script can start or stop a VM in a lab. [Invoke-AzResourceAction](/powershell/module/az.resources/invoke-azresourceaction) is the primary focus for this script. The **ResourceId** parameter is the fully qualified resource ID for the VM in the lab. The **Action** parameter is where the **Start** or **Stop** options are set depending on what is needed.
+The following PowerShell script starts or stops a VM in a lab by using [Invoke-AzResourceAction](/powershell/module/az.resources/invoke-azresourceaction). The `ResourceId` parameter is the fully qualified ID for the lab VM you want to start or stop. The `Action` parameter determines whether to start or stop the VM.
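+
+For reference, the fully qualified resource ID of a lab VM follows the pattern sketched below. The segment values are placeholders, and the `az resource show` call is only an optional way to confirm the ID:
+
+```azurecli
+# Illustrative pattern of a DevTest Labs VM resource ID (all segments are placeholders)
+# /subscriptions/<Subscription ID>/resourceGroups/<resource group>/providers/Microsoft.DevTestLab/labs/<lab name>/virtualmachines/<VM name>
+az resource show --ids "/subscriptions/<Subscription ID>/resourceGroups/<resource group>/providers/Microsoft.DevTestLab/labs/<lab name>/virtualmachines/<VM name>" --query "id"
+```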
-1. From your workstation, sign in to your Azure subscription with the PowerShell [Connect-AzAccount](/powershell/module/Az.Accounts/Connect-AzAccount) cmdlet and follow the on-screen directions.
+1. From your workstation, use the PowerShell [Connect-AzAccount](/powershell/module/Az.Accounts/Connect-AzAccount) cmdlet to sign in to your Azure account. If you have multiple Azure subscriptions, uncomment the `Set-AzContext` line and fill in the `<Subscription ID>` you want to use.
```powershell # Sign in to your Azure subscription
The following PowerShell script can start or stop a VM in a lab. [Invoke-AzResou
Connect-AzAccount }
- # If you have multiple subscriptions, set the one to use
- # Set-AzContext -SubscriptionId "<SUBSCRIPTIONID>"
+ # Set-AzContext -SubscriptionId "<Subscription ID>"
```
-1. Provide an appropriate value for the variables and then execute the script.
+1. Provide values for *`<lab name>`* and *`<VM name>`*, and specify the action you want for *`<Start or Stop>`*.
```powershell
- $devTestLabName = "yourlabname"
- $vMToStart = "vmname"
+ $devTestLabName = "<lab name>"
+ $vMToStart = "<VM name>"
# The action on the virtual machine (Start or Stop)
- $vmAction = "Start"
+ $vmAction = "<Start or Stop>"
```
-1. Start or stop the VM based on the value you passed to `$vmAction`.
+1. Start or stop the VM, based on the value you passed to `$vmAction`.
```powershell # Get the lab information
The following PowerShell script can start or stop a VM in a lab. [Invoke-AzResou
Write-Output "##[section] Successfully updated DTL machine: $vMToStart, Action: $vmAction" } else {
- Write-Error "##[error]Failed to update DTL machine: $vMToStart, Action: $vmAction"
+ Write-Error "##[error] Failed to update DTL machine: $vMToStart, Action: $vmAction"
} ```
-## Azure CLI
+## Azure CLI script
-The [Azure CLI](/cli/azure/get-started-with-azure-cli) is another way to automate the starting and stopping of DevTest Labs VMs. The following script gives you commands for starting and stopping a VM in a lab. The use of variables in this section is based on a Windows environment. Slight variations will be needed for bash or other environments.
+The following script provides [Azure CLI](/cli/azure/get-started-with-azure-cli) commands for starting or stopping a lab VM. The variables in this script use Windows `cmd` syntax; bash and other environments need slight variations, as sketched after these steps.
-1. Replace `SubscriptionID`, `yourlabname`, `yourVM`, and `action` with the appropriate values. Then execute the script.
+1. Provide appropriate values for *`<Subscription ID>`*, *`<lab name>`*, *`<VM name>`*, and the *`<start or stop>`* action to take.
- ```azurecli
- set SUBSCIPTIONID=SubscriptionID
- set DEVTESTLABNAME=yourlabname
- set VMNAME=yourVM
-
- REM The action on the virtual machine (Start or Stop)
- set ACTION=action
- ```
+ ```azurecli
+ set SUBSCRIPTIONID=<Subscription ID>
+ set DEVTESTLABNAME=<lab name>
+ set VMNAME=<VM name>
+ set ACTION=<start or stop>
+ ```
-1. Sign in to your Azure subscription and get the name of the resource group that contains the lab.
+1. Sign in to your Azure account. If you have multiple Azure subscriptions, uncomment the `az account set` line to use the subscription ID you provided.
- ```azurecli
- az login
-
- REM If you have multiple subscriptions, set the one to use
- REM az account set --subscription %SUBSCIPTIONID%
+ ```azurecli
+ az login
+
+ REM az account set --subscription %SUBSCRIPTIONID%
+ ```
- az resource list --resource-type "Microsoft.DevTestLab/labs" --name %DEVTESTLABNAME% --query "[0].resourceGroup"
- ```
+1. Get the name of the resource group that contains the lab.
-1. Replace `resourceGroup` with the value obtained from the prior step. Then execute the script.
+ ```azurecli
+ az resource list --resource-type "Microsoft.DevTestLab/labs" --name %DEVTESTLABNAME% --query "[0].resourceGroup"
+ ```
- ```azurecli
- set RESOURCEGROUP=resourceGroup
- ```
+1. Replace *`<resourceGroup>`* with the value you got from the previous step.
-1. Start or stop the VM based on the value you passed to `ACTION`.
+ ```azurecli
+ set RESOURCEGROUP=<resourceGroup>
+ ```
- ```azurecli
- az lab vm %ACTION% --lab-name %DEVTESTLABNAME% --name %VMNAME% --resource-group %RESOURCEGROUP%
- ```
+1. Run the command line to start or stop the VM, based on the value you passed to `ACTION`.
+
+ ```azurecli
+ az lab vm %ACTION% --lab-name %DEVTESTLABNAME% --name %VMNAME% --resource-group %RESOURCEGROUP%
+ ```
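+
+As a rough sketch, the bash equivalent of the preceding steps might look like the following. The placeholder values are assumptions to substitute, and `start` or `stop` is passed as the lowercase subcommand:
+
+```azurecli
+# Bash variant of the same commands (illustrative sketch)
+SUBSCRIPTIONID="<Subscription ID>"
+DEVTESTLABNAME="<lab name>"
+VMNAME="<VM name>"
+ACTION="start"   # or "stop"
+
+az login
+# az account set --subscription $SUBSCRIPTIONID
+
+# Look up the lab's resource group, then run the start or stop action
+RESOURCEGROUP=$(az resource list --resource-type "Microsoft.DevTestLab/labs" --name $DEVTESTLABNAME --query "[0].resourceGroup" --output tsv)
+az lab vm $ACTION --lab-name $DEVTESTLABNAME --name $VMNAME --resource-group $RESOURCEGROUP
+```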
## Next steps
-See the following article for using the Azure portal to do these operations: [Restart a VM](devtest-lab-restart-vm.md).
+- [Azure CLI az lab reference](/cli/azure/lab)
+- [PowerShell Az.DevTestLabs reference](/powershell/module/az.devtestlabs)
+- [Define the startup order for DevTest Labs VMs](start-machines-use-automation-runbooks.md)
digital-twins How To Use Data History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-use-data-history.md
# Mandatory fields. Title: Use data history (preview) with Azure Data Explorer
+ Title: Use data history with Azure Data Explorer (preview)
description: See how to set up and use data history for Azure Digital Twins, using the CLI or Azure portal.
digital-twins Troubleshoot Error Azure Digital Twins Explorer Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/troubleshoot-error-azure-digital-twins-explorer-authentication.md
Previously updated : 03/28/2022 Last updated : 03/29/2022 # Troubleshoot Azure Digital Twins Explorer: Authentication errors
When running Azure Digital Twins Explorer, you encounter the following error mes
:::image type="content" source="media/troubleshoot-error-azure-digital-twins-explorer-authentication/permission-error.png" alt-text="Screenshot of an error message in the Azure Digital Twins Explorer, entitled Make sure you have the right permissions.":::
-If you are running the code locally, you might see this error message instead:
-
## Causes

### Cause #1
-You will see these errors if your Azure account doesn't have the required Azure role-based access control (Azure RBAC) permissions set on your Azure Digital Twins instance. In order to access data in your instance, you must have the *Azure Digital Twins Data Reader* or *Azure Digital Twins Data Owner* role on the instance you are trying to read or manage, respectively.
+This error occurs if your Azure account doesn't have the required Azure role-based access control (Azure RBAC) permissions set on your Azure Digital Twins instance. To access data in your instance, you must have the *Azure Digital Twins Data Reader* or *Azure Digital Twins Data Owner* role on the instance you're trying to read or manage, respectively.
For more information about security and roles in Azure Digital Twins, see [Security for Azure Digital Twins solutions](concepts-security.md).
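As a quick illustration (a sketch, not from the original article; it assumes the azure-iot extension for the Azure CLI and placeholder values), an administrator can grant one of these roles from the command line:

```azurecli
# Assign the Azure Digital Twins Data Owner role on an instance (placeholders assumed)
az dt role-assignment create --dt-name "<instance name>" --assignee "<user, group, or service principal>" --role "Azure Digital Twins Data Owner"
```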
Read the setup steps for creating and authenticating a new Azure Digital Twins i
* [Set up an instance and authentication (CLI)](how-to-set-up-instance-cli.md) Read more about security and permissions on Azure Digital Twins:
-* [Security for Azure Digital Twins solutions](concepts-security.md)
+* [Security for Azure Digital Twins solutions](concepts-security.md)
dms Migration Dms Powershell Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/migration-dms-powershell-cli.md
+
+ Title: Migrate databases at scale using Azure PowerShell / CLI
+description: Learn how to use Azure PowerShell or CLI to migrate databases at scale using the capabilities of Azure SQL Migration extension in Azure Data Studio with Azure Database Migration Service.
++++++++ Last updated : 03/28/2022+++
+# Migrate databases at scale using automation (Preview)
+
+The [Azure SQL Migration extension for Azure Data Studio](/sql/azure-data-studio/extensions/azure-sql-migration-extension) enables you to assess your SQL Server databases, get Azure recommendations, and migrate them to Azure. With automation through [Azure PowerShell](/powershell/module/az.datamigration) or [Azure CLI](/cli/azure/datamigration), you can use the capabilities of the extension with Azure Database Migration Service to migrate one or more databases at scale (including databases across multiple SQL Server instances).
+
+You can refer to the following sample scripts for your migration scenario, using Azure PowerShell or Azure CLI:
+
+|Scripting language |Migration scenario |Azure Samples link |
+||||
+|PowerShell |SQL Server assessment |[Azure-Samples/data-migration-sql/PowerShell/sql-server-assessment](https://github.com/Azure-Samples/data-migration-sql/tree/main/PowerShell/sql-server-assessment.md) |
+|PowerShell |SQL Server to Azure SQL Managed Instance (using file share) |[Azure-Samples/data-migration-sql/PowerShell/sql-server-to-sql-mi-fileshare](https://github.com/Azure-Samples/data-migration-sql/tree/main/PowerShell/sql-server-to-sql-mi-fileshare.md) |
+|PowerShell |SQL Server to Azure SQL Managed Instance (using Azure storage) |[Azure-Samples/data-migration-sql/PowerShell/sql-server-to-sql-mi-blob](https://github.com/Azure-Samples/data-migration-sql/tree/main/PowerShell/sql-server-to-sql-mi-blob.md) |
+|PowerShell |SQL Server to SQL Server on Azure Virtual Machines (using file share) |[Azure-Samples/data-migration-sql/PowerShell/sql-server-to-sql-vm-fileshare](https://github.com/Azure-Samples/data-migration-sql/tree/main/PowerShell/sql-server-to-sql-vm-fileshare.md) |
+|PowerShell |SQL Server to SQL Server on Azure Virtual Machines (using Azure Storage) |[Azure-Samples/data-migration-sql/PowerShell/sql-server-to-sql-vm-blob](https://github.com/Azure-Samples/data-migration-sql/tree/main/PowerShell/sql-server-to-sql-vm-blob.md) |
+|PowerShell |Sample: End-to-End migration automation |[Azure-Samples/data-migration-sql/PowerShell/scripts/](https://github.com/Azure-Samples/data-migration-sql/tree/main/PowerShell/scripts/) |
+|PowerShell |Sample: End-to-End migration automation for multiple databases |[Azure-Samples/data-migration-sql/PowerShell/scripts/multiple%20databases/](https://github.com/Azure-Samples/data-migration-sql/tree/main/PowerShell/scripts/multiple%20databases/) |
+|CLI |SQL Server assessment |[Azure-Samples/data-migration-sql/CLI/sql-server-assessment](https://github.com/Azure-Samples/data-migration-sql/tree/main/CLI/sql-server-assessment.md) |
+|CLI |SQL Server to Azure SQL Managed Instance (using file share) |[Azure-Samples/data-migration-sql/CLI/sql-server-to-sql-mi-fileshare](https://github.com/Azure-Samples/data-migration-sql/tree/main/CLI/sql-server-to-sql-mi-fileshare.md) |
+|CLI |SQL Server to Azure SQL Managed Instance (using Azure storage) |[Azure-Samples/data-migration-sql/CLI/sql-server-to-sql-mi-blob](https://github.com/Azure-Samples/data-migration-sql/tree/main/CLI/sql-server-to-sql-mi-blob.md) |
+|CLI |SQL Server to SQL Server on Azure Virtual Machines (using file share) |[Azure-Samples/data-migration-sql/CLI/sql-server-to-sql-vm-fileshare](https://github.com/Azure-Samples/data-migration-sql/tree/main/CLI/sql-server-to-sql-vm-fileshare.md) |
+|CLI |SQL Server to SQL Server on Azure Virtual Machines (using Azure Storage) |[Azure-Samples/data-migration-sql/CLI/sql-server-to-sql-vm-blob](https://github.com/Azure-Samples/data-migration-sql/tree/main/CLI/sql-server-to-sql-vm-blob.md) |
+|CLI |Sample: End-to-End migration automation |[Azure-Samples/data-migration-sql/CLI/scripts/](https://github.com/Azure-Samples/data-migration-sql/tree/main/CLI/scripts/) |
+|CLI |Sample: End-to-End migration automation for multiple databases |[Azure-Samples/data-migration-sql/CLI/scripts/multiple%20databases/](https://github.com/Azure-Samples/data-migration-sql/tree/main/CLI/scripts/multiple%20databases/) |
+
+## Prerequisites
+
+The following prerequisites are common across all supported migration scenarios using Azure PowerShell or Azure CLI:
+
+* Have an Azure account that is assigned one of the built-in roles listed below:
+ - Contributor for the target Azure SQL Managed Instance (and the storage account used to upload your database backup files from an SMB network share).
+ - Reader role for the Azure resource groups containing the target Azure SQL Managed Instance or the Azure storage account.
+ - Owner or Contributor role for the Azure subscription.
+ > [!IMPORTANT]
+ > An Azure account is required only when running the migration steps; it isn't required for the assessment or Azure recommendation steps.
+* Create a target [Azure SQL Managed Instance](../azure-sql/managed-instance/create-configure-managed-instance-powershell-quickstart.md) or [SQL Server on Azure Virtual Machine](../azure-sql/virtual-machines/windows/sql-vm-create-powershell-quickstart.md).
+
+ > [!IMPORTANT]
+ > If you have an existing Azure Virtual Machine, it should be registered with [SQL IaaS Agent extension in Full management mode](../azure-sql/virtual-machines/windows/sql-server-iaas-agent-extension-automate-management.md#management-modes).
+* Ensure that the logins used to connect the source SQL Server are members of the *sysadmin* server role or have `CONTROL SERVER` permission.
+* Use one of the following storage options for the full database and transaction log backup files:
+ - SMB network share
+ - Azure storage account file share or blob container
+
+ > [!IMPORTANT]
+ > - If your database backup files are provided in an SMB network share, [create an Azure storage account](../storage/common/storage-account-create.md) that allows the DMS service to upload the database backup files. Make sure to create the Azure storage account in the same region as your Azure Database Migration Service instance.
+ > - Azure Database Migration Service does not initiate any backups, and instead uses existing backups, which you may already have as part of your disaster recovery plan, for the migration.
+ > - You should take [backups using the `WITH CHECKSUM` option](/sql/relational-databases/backup-restore/enable-or-disable-backup-checksums-during-backup-or-restore-sql-server).
+ > - Each backup can be written to either a separate backup file or multiple backup files. However, appending multiple backups (for example, full and transaction log backups) into a single backup media isn't supported.
+ > - Use compressed backups to reduce the likelihood of experiencing potential issues associated with migrating large backups.
+* Ensure that the service account running the source SQL Server instance has read and write permissions on the SMB network share that contains database backup files.
+* The source SQL Server instance certificate from a database protected by Transparent Data Encryption (TDE) needs to be migrated to the target Azure SQL Managed Instance or SQL Server on Azure Virtual Machine before migrating data. To learn more, see [Migrate a certificate of a TDE-protected database to Azure SQL Managed Instance](../azure-sql/managed-instance/tde-certificate-migrate.md) and [Move a TDE Protected Database to Another SQL Server](/sql/relational-databases/security/encryption/move-a-tde-protected-database-to-another-sql-server).
+ > [!TIP]
+ > If your database contains sensitive data that is protected by [Always Encrypted](/sql/relational-databases/security/encryption/configure-always-encrypted-using-sql-server-management-studio), the migration process using Azure Data Studio with DMS automatically migrates your Always Encrypted keys to your target Azure SQL Managed Instance or SQL Server on Azure Virtual Machine.
+
+* If your database backups are in a network file share, provide a machine on which to install the [self-hosted integration runtime](../data-factory/create-self-hosted-integration-runtime.md) to access and migrate the database backups. The Azure PowerShell or Azure CLI modules provide the authentication keys to register your self-hosted integration runtime. In preparation for the migration, ensure that the machine where you plan to install the self-hosted integration runtime has the following outbound firewall rules and domain names enabled:
+
+ | Domain names | Outbound ports | Description |
+ | -- | -- | |
+ | Public Cloud: `{datafactory}.{region}.datafactory.azure.net`<br> or `*.frontend.clouddatahub.net` <br> Azure Government: `{datafactory}.{region}.datafactory.azure.us` <br> China: `{datafactory}.{region}.datafactory.azure.cn` | 443 | Required by the self-hosted integration runtime to connect to the Data Migration service. <br>For newly created Data Factory instances in the public cloud, locate the FQDN from your self-hosted integration runtime key, which is in the format `{datafactory}.{region}.datafactory.azure.net`. For older Data Factory instances, if you don't see the FQDN in your self-hosted integration runtime key, use `*.frontend.clouddatahub.net` instead. |
+ | `download.microsoft.com` | 443 | Required by the self-hosted integration runtime for downloading the updates. If you have disabled auto-update, you can skip configuring this domain. |
+ | `*.core.windows.net` | 443 | Used by the self-hosted integration runtime that connects to the Azure storage account for uploading database backups from your network share |
+
+ > [!TIP]
+ > If your database backup files are already provided in an Azure storage account, self-hosted integration runtime is not required during the migration process.
+
+* When using self-hosted integration runtime, make sure that the machine where the runtime is installed can connect to the source SQL Server instance and the network file share where backup files are located. Outbound port 445 should be enabled to allow access to the network file share.
+* If you're using Azure Database Migration Service for the first time, ensure that the Microsoft.DataMigration resource provider is registered in your subscription. You can follow the steps to [register the resource provider](./quickstart-create-data-migration-service-portal.md#register-the-resource-provider), or use the commands sketched below.
+
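+As a quick illustration (a sketch; it assumes you're already signed in to the Azure CLI), you can check and register the resource provider from the command line:
+
+```azurecli
+# Check the registration state of the resource provider
+az provider show --namespace Microsoft.DataMigration --query "registrationState"
+
+# Register it if the state isn't "Registered"
+az provider register --namespace Microsoft.DataMigration
+```
+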
+## Automate database migrations
+Using Azure PowerShell [Az.DataMigration](/powershell/module/az.datamigration) or Azure CLI [az datamigration](/cli/azure/datamigration), you can migrate databases by automating the creation of the Azure Database Migration Service instance, configuring database migrations for online migration, and performing a cutover. Several more commands and capabilities are documented in [Azure Samples](https://github.com/Azure-Samples/data-migration-sql).
+
+Example of automating migration of a SQL Server database using Azure CLI:
+
+**Step 1: Create an Azure Database Migration Service instance, which orchestrates all the migration activities for your database.**
+```azurepowershell-interactive
+#STEP 1: Create Database Migration Service
+az datamigration sql-service create --resource-group "myRG" --sql-migration-service-name "myMigrationService" --location "EastUS2"
+```
+
+**Step 2: Configure and start an online database migration from on-premises SQL Server (with backups in Azure Storage) to Azure SQL Managed Instance.**
+```azurepowershell-interactive
+#STEP 2: Start Migration
+az datamigration sql-managed-instance create `
+--source-location '{\"AzureBlob\":{\"storageAccountResourceId\":\"/subscriptions/mySubscriptionID/resourceGroups/myRG/providers/Microsoft.Storage/storageAccounts/dbbackupssqlbits\",\"accountKey\":\"myAccountKey\",\"blobContainerName\":\"dbbackups\"}}' `
+--migration-service "/subscriptions/mySubscriptionID/resourceGroups/myRG/providers/Microsoft.DataMigration/SqlMigrationServices/myMigrationService" `
+--scope "/subscriptions/mySubscriptionID/resourceGroups/myRG/providers/Microsoft.Sql/managedInstances/mySQLMI" `
+--source-database-name "AdventureWorks2008" `
+--source-sql-connection authentication="SqlAuthentication" data-source="mySQLServer" password="myPassword" user-name="sqluser" `
+--target-db-name "AdventureWorks2008" `
+--resource-group myRG `
+--managed-instance-name mySQLMI
+```
+
+**Step 3: Perform a migration cutover once all backups are restored to Azure SQL Managed Instance.**
+```azurepowershell-interactive
+#STEP 3: Get migration ID and perform Cutover
+$migOpId = az datamigration sql-managed-instance show --managed-instance-name "mySQLMI" --resource-group "myRG" --target-db-name "AdventureWorks2008" --expand=MigrationStatusDetails --query "properties.migrationOperationId"
+az datamigration sql-managed-instance cutover --managed-instance-name "mySQLMI" --resource-group "myRG" --target-db-name "AdventureWorks2008" --migration-operation-id $migOpId
+```
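+
+While backups are being restored, you can poll the migration status before you cut over. This sketch reuses the `show` command from step 3; the `properties.migrationStatus` query path is an assumption that may vary by API version:
+
+```azurepowershell-interactive
+az datamigration sql-managed-instance show --managed-instance-name "mySQLMI" --resource-group "myRG" --target-db-name "AdventureWorks2008" --expand=MigrationStatusDetails --query "properties.migrationStatus"
+```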
+
+## Next steps
+
+- For Azure PowerShell reference documentation for SQL Server database migrations, see [Az.DataMigration](/powershell/module/az.datamigration).
+- For Azure CLI reference documentation for SQL Server database migrations, see [az datamigration](/cli/azure/datamigration).
+- For Azure Samples code repository, see [data-migration-sql](https://github.com/Azure-Samples/data-migration-sql)
event-grid Event Schema Media Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/event-schema-media-services.md
This article provides the schemas and properties for Media Services events.
## Job-related event types
-Media Services emits the **Job-related** event types described below. There are two categories for the **Job-related** events: "Monitoring Job State Changes" and "Monitoring Job Output State Changes".
+Media Services emits the **Job-related** event types described below. There are two categories for the **Job-related** events: "Monitoring Job State Changes" and "Monitoring Job Output State Changes".
-You can register for all of the events by subscribing to the JobStateChange event. Or, you can subscribe for specific events only (for example, final states like JobErrored, JobFinished, and JobCanceled).
+You can register for all of the events by subscribing to the JobStateChange event. Or, you can subscribe for specific events only (for example, final states like JobErrored, JobFinished, and JobCanceled).
### Monitoring Job state changes
See [Schema examples](#event-schema-examples) that follow.
A job may contain multiple job outputs (if you configured the transform to have multiple job outputs.) If you want to track the details of the individual job output, listen for a job output change event.
-Each **Job** is going to be at a higher level than **JobOutput**, thus job output events get fired inside of a corresponding job.
+Each **Job** is at a higher level than **JobOutput**, so job output events fire inside of a corresponding job.
The error messages in `JobFinished`, `JobCanceled`, and `JobError` output the aggregated results for each job output when all of them are finished, whereas the job output events fire as each task finishes. For example, if you have an encoding output, followed by a Video Analytics output, you would get two events firing as job output events before the final JobFinished event fires with the aggregated data.
See [Schema examples](#event-schema-examples) that follow.
## Live event types
-Media Services also emits the **Live** event types described below. There are two categories for the **Live** events: stream-level events and track-level events.
+Media Services also emits the **Live** event types described below. There are two categories for the **Live** events: stream-level events and track-level events.
### Stream-level events
See [Schema examples](#event-schema-examples) that follow.
### Track-level events
-Track-level events are raised per track.
+Track-level events are raised per track.
> [!NOTE] > All track-level events are raised after a live encoder is connected.
See [Schema examples](#event-schema-examples) that follow.
# [Event Grid event schema](#tab/event-grid-event-schema)
-The following example shows the schema of the **JobStateChange** event:
+The following example shows the schema of the **JobStateChange** event:
```json [
The following example shows the schema of the **JobStateChange** event:
# [Cloud event schema](#tab/cloud-event-schema)
-The following example shows the schema of the **JobStateChange** event:
+The following example shows the schema of the **JobStateChange** event:
```json [
The example schema looks similar to the following:
### LiveEventConnectionRejected
-The following example shows the schema of the **LiveEventConnectionRejected** event:
+The following example shows the schema of the **LiveEventConnectionRejected** event:
```json [
The following example shows the schema of the **LiveEventConnectionRejected** ev
"eventType": "Microsoft.Media.LiveEventConnectionRejected", "eventTime": "2018-01-16T01:57:26.005121Z", "id": "b303db59-d5c1-47eb-927a-3650875fded1",
- "data": {
+ "data": {
"streamId":"Mystream1", "ingestUrl": "http://abc.ingest.isml", "encoderIp": "118.238.251.xxx",
The data object has the following properties:
| Property | Type | Description |
| -- | - | -- |
-| `streamId` | string | Identifier of the stream or connection. Encoder or customer is responsible to add this ID in the ingest URL. |
-| `ingestUrl` | string | Ingest URL provided by the live event. |
+| `streamId` | string | Identifier of the stream or connection. The encoder or customer is responsible for adding this ID in the ingest URL. |
+| `ingestUrl` | string | Ingest URL provided by the live event. |
| `encoderIp` | string | IP of the encoder. |
| `encoderPort` | string | Port of the encoder from where this stream is coming. |
| `resultCode` | string | The reason the connection was rejected. The result codes are listed in the following table. |
-You can find the error result codes in [live Event error codes](../media-services/latest/live-event-error-codes-reference.md).
+You can find the error result codes in [live Event error codes](/media-services/latest/live-event-error-codes-reference).
### LiveEventEncoderConnected # [Event Grid event schema](#tab/event-grid-event-schema)
-The following example shows the schema of the **LiveEventEncoderConnected** event:
+The following example shows the schema of the **LiveEventEncoderConnected** event:
```json [
- {
+ {
"topic": "/subscriptions/<subscription-id>/resourceGroups/<rg-name>/providers/Microsoft.Media/mediaservices/<account-name>", "subject": "liveEvent/mle1", "eventType": "Microsoft.Media.LiveEventEncoderConnected",
The following example shows the schema of the **LiveEventEncoderConnected** even
# [Cloud event schema](#tab/cloud-event-schema)
-The following example shows the schema of the **LiveEventEncoderConnected** event:
+The following example shows the schema of the **LiveEventEncoderConnected** event:
```json [
- {
+ {
"source": "/subscriptions/<subscription-id>/resourceGroups/<rg-name>/providers/Microsoft.Media/mediaservices/<account-name>", "subject": "liveEvent/mle1", "type": "Microsoft.Media.LiveEventEncoderConnected",
The data object has the following properties:
# [Event Grid event schema](#tab/event-grid-event-schema)
-The following example shows the schema of the **LiveEventEncoderDisconnected** event:
+The following example shows the schema of the **LiveEventEncoderDisconnected** event:
```json [
- {
+ {
"topic": "/subscriptions/<subscription-id>/resourceGroups/<rg-name>/providers/Microsoft.Media/mediaservices/<account-name>", "subject": "liveEvent/mle1", "eventType": "Microsoft.Media.LiveEventEncoderDisconnected",
The following example shows the schema of the **LiveEventEncoderDisconnected** e
# [Cloud event schema](#tab/cloud-event-schema)
-The following example shows the schema of the **LiveEventEncoderDisconnected** event:
+The following example shows the schema of the **LiveEventEncoderDisconnected** event:
```json [
- {
+ {
"source": "/subscriptions/<subscription-id>/resourceGroups/<rg-name>/providers/Microsoft.Media/mediaservices/<account-name>", "subject": "liveEvent/mle1", "type": "Microsoft.Media.LiveEventEncoderDisconnected",
The data object has the following properties:
| Property | Type | Description |
| -- | - | -- |
-| `streamId` | string | Identifier of the stream or connection. Encoder or customer is responsible to add this ID in the ingest URL. |
-| `ingestUrl` | string | Ingest URL provided by the live event. |
+| `streamId` | string | Identifier of the stream or connection. The encoder or customer is responsible for adding this ID in the ingest URL. |
+| `ingestUrl` | string | Ingest URL provided by the live event. |
| `encoderIp` | string | IP of the encoder. |
| `encoderPort` | string | Port of the encoder from where this stream is coming. |
| `resultCode` | string | The reason for the encoder disconnecting. It could be a graceful disconnect or from an error. The result codes are listed in the following table. |
-You can find the error result codes in [live Event error codes](../media-services/latest/live-event-error-codes-reference.md).
+You can find the error result codes in [live Event error codes](/media-services/latest/live-event-error-codes-reference).
The graceful disconnect result codes are:
The graceful disconnect result codes are:
# [Event Grid event schema](#tab/event-grid-event-schema)
-The following example shows the schema of the **LiveEventIncomingDataChunkDropped** event:
+The following example shows the schema of the **LiveEventIncomingDataChunkDropped** event:
```json [
The following example shows the schema of the **LiveEventIncomingDataChunkDroppe
"eventType": "Microsoft.Media.LiveEventIncomingDataChunkDropped", "eventTime": "2018-01-16T01:57:26.005121Z", "id": "03da9c10-fde7-48e1-80d8-49936f2c3e7d",
- "data": {
+ "data": {
"trackType": "Video", "trackName": "Video", "bitrate": 300000,
The following example shows the schema of the **LiveEventIncomingDataChunkDroppe
# [Cloud event schema](#tab/cloud-event-schema)
-The following example shows the schema of the **LiveEventIncomingDataChunkDropped** event:
+The following example shows the schema of the **LiveEventIncomingDataChunkDropped** event:
```json [
The following example shows the schema of the **LiveEventIncomingDataChunkDroppe
"type": "Microsoft.Media.LiveEventIncomingDataChunkDropped", "time": "2018-01-16T01:57:26.005121Z", "id": "03da9c10-fde7-48e1-80d8-49936f2c3e7d",
- "data": {
+ "data": {
"trackType": "Video", "trackName": "Video", "bitrate": 300000,
The data object has the following properties:
# [Event Grid event schema](#tab/event-grid-event-schema)
-The following example shows the schema of the **LiveEventIncomingStreamReceived** event:
+The following example shows the schema of the **LiveEventIncomingStreamReceived** event:
```json [
The following example shows the schema of the **LiveEventIncomingStreamReceived*
# [Cloud event schema](#tab/cloud-event-schema)
-The following example shows the schema of the **LiveEventIncomingStreamReceived** event:
+The following example shows the schema of the **LiveEventIncomingStreamReceived** event:
```json [
The data object has the following properties:
# [Event Grid event schema](#tab/event-grid-event-schema)
-The following example shows the schema of the **LiveEventIncomingStreamsOutOfSync** event:
+The following example shows the schema of the **LiveEventIncomingStreamsOutOfSync** event:
```json [
The following example shows the schema of the **LiveEventIncomingStreamsOutOfSyn
"typeOfStreamWithMinLastTimestamp": "Audio", "maxLastTimestamp": "366000", "typeOfStreamWithMaxLastTimestamp": "Video",
- "timescaleOfMinLastTimestamp": "10000000",
- "timescaleOfMaxLastTimestamp": "10000000"
+ "timescaleOfMinLastTimestamp": "10000000",
+ "timescaleOfMaxLastTimestamp": "10000000"
}, "dataVersion": "1.0", "metadataVersion": "1"
The following example shows the schema of the **LiveEventIncomingStreamsOutOfSyn
# [Cloud event schema](#tab/cloud-event-schema)
-The following example shows the schema of the **LiveEventIncomingStreamsOutOfSync** event:
+The following example shows the schema of the **LiveEventIncomingStreamsOutOfSync** event:
```json [
The following example shows the schema of the **LiveEventIncomingStreamsOutOfSyn
"typeOfStreamWithMinLastTimestamp": "Audio", "maxLastTimestamp": "366000", "typeOfStreamWithMaxLastTimestamp": "Video",
- "timescaleOfMinLastTimestamp": "10000000",
- "timescaleOfMaxLastTimestamp": "10000000"
+ "timescaleOfMinLastTimestamp": "10000000",
+ "timescaleOfMaxLastTimestamp": "10000000"
}, "specversion": "1.0" }
The data object has the following properties:
# [Event Grid event schema](#tab/event-grid-event-schema)
-The following example shows the schema of the **LiveEventIncomingVideoStreamsOutOfSync** event:
+The following example shows the schema of the **LiveEventIncomingVideoStreamsOutOfSync** event:
```json [
The following example shows the schema of the **LiveEventIncomingVideoStreamsOut
"firstDuration": "2000", "secondTimestamp": "2162057216", "secondDuration": "2000",
- "timescale": "10000000"
+ "timescale": "10000000"
}, "dataVersion": "1.0", "metadataVersion": "1"
The following example shows the schema of the **LiveEventIncomingVideoStreamsOut
# [Cloud event schema](#tab/cloud-event-schema)
-The following example shows the schema of the **LiveEventIncomingVideoStreamsOutOfSync** event:
+The following example shows the schema of the **LiveEventIncomingVideoStreamsOutOfSync** event:
```json [
The following example shows the schema of the **LiveEventIncomingVideoStreamsOut
"firstDuration": "2000", "secondTimestamp": "2162057216", "secondDuration": "2000",
- "timescale": "10000000"
+ "timescale": "10000000"
}, "specversion": "1.0" }
The data object has the following properties:
# [Event Grid event schema](#tab/event-grid-event-schema)
-The following example shows the schema of the **LiveEventIngestHeartbeat** event:
+The following example shows the schema of the **LiveEventIngestHeartbeat** event:
```json [
The following example shows the schema of the **LiveEventIngestHeartbeat** event
# [Cloud event schema](#tab/cloud-event-schema)
-The following example shows the schema of the **LiveEventIngestHeartbeat** event:
+The following example shows the schema of the **LiveEventIngestHeartbeat** event:
```json [
The data object has the following properties:
# [Event Grid event schema](#tab/event-grid-event-schema)
-The following example shows the schema of the **LiveEventTrackDiscontinuityDetected** event:
+The following example shows the schema of the **LiveEventTrackDiscontinuityDetected** event:
```json [
The following example shows the schema of the **LiveEventTrackDiscontinuityDetec
# [Cloud event schema](#tab/cloud-event-schema)
-The following example shows the schema of the **LiveEventTrackDiscontinuityDetected** event:
+The following example shows the schema of the **LiveEventTrackDiscontinuityDetected** event:
```json [
An event has the following top-level data:
## Next steps
-[Register for job state change events](../media-services/latest/monitoring/job-state-events-cli-how-to.md)
+[Register for job state change events](/media-services/latest/monitoring/job-state-events-cli-how-to)
## See also

- [EventGrid .NET SDK that includes Media Service events](https://www.nuget.org/packages/Microsoft.Azure.EventGrid/)
- [Definitions of Media Services events](https://github.com/Azure/azure-rest-api-specs/blob/master/specification/eventgrid/data-plane/Microsoft.Media/stable/2018-01-01/MediaServices.json)
-- [Live Event error codes](../media-services/latest/live-event-error-codes-reference.md)
+- [Live Event error codes](/media-services/latest/live-event-error-codes-reference)
governance Assign Policy Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/assign-policy-terraform.md
for Azure Policy use the
```hcl provider "azurerm" {
- version = "~>2.0"
features {} }-
- resource "azurerm_policy_assignment" "auditvms" {
- name = "audit-vm-manageddisks"
- scope = var.cust_scope
- policy_definition_id = "/providers/Microsoft.Authorization/policyDefinitions/06a78e20-9358-41c9-923c-fb736d382a4d"
- description = "Shows all virtual machines not using managed disks"
- display_name = "Audit VMs without managed disks Assignment"
+
+ terraform {
+ required_providers {
+ azurerm = {
+ source = "hashicorp/azurerm"
+ version = ">= 2.96.0"
+ }
+ }
+ }
+
+ resource "azurerm_resource_policy_assignment" "auditvms" {
+ name = "audit-vm-manageddisks"
+ resource_id = var.cust_scope
+ policy_definition_id = "/providers/Microsoft.Authorization/policyDefinitions/06a78e20-9358-41c9-923c-fb736d382a4d"
+ description = "Shows all virtual machines not using managed disks"
+ display_name = "Audit VMs without managed disks assignment"
} ```
for Azure Policy use the
```hcl output "assignment_id" {
- value = azurerm_policy_assignment.auditvms.id
+ value = azurerm_resource_policy_assignment.auditvms.id
} ```
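
To verify the assignment after `terraform apply` completes, you can optionally query it with the Azure CLI (a sketch; the scope is whatever you passed in `var.cust_scope`):

```azurecli
# Show the policy assignment created by the configuration above
az policy assignment show --name "audit-vm-manageddisks" --scope "<the scope used in var.cust_scope>"
```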
guides Azure Developer Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/guides/developer/azure-developer-guide.md
Azure Spring Cloud is a serverless microservices platform that enables you to bu
* Easily bind connections between your apps and Azure services such as Azure Database for MySQL and Azure Cache for Redis. * Monitor and troubleshoot microservices and applications using enterprise-grade unified monitoring tools that offer deep insights on application dependencies and operational telemetry.
-> **When to use:** As a fully managed service Azure Spring Cloud is a good choice when you're minimizing operational cost running Spring Boot/Spring Cloud based microservices on Azure.
+> **When to use:** As a fully managed service, Azure Spring Cloud is a good choice when you want to minimize the operational cost of running Spring Boot/Spring Cloud-based microservices on Azure.
> > **Get started:** [Deploy your first Spring Boot app in Azure Spring Cloud](../../spring-cloud/quickstart.md).
Along with REST APIs, many Azure services also let you programmatically manage r
* [Go](/azure/go) Services such as [Mobile Apps](/previous-versions/azure/app-service-mobile/app-service-mobile-dotnet-how-to-use-client-library)
-and [Azure Media Services](../../media-services/previous/media-services-dotnet-how-to-use.md) provide client-side SDKs to let you access services from web and mobile client apps.
+and [Azure Media Services](/media-services/previous/media-services-dotnet-how-to-use) provide client-side SDKs to let you access services from web and mobile client apps.
### Azure Resource Manager
industrial-iot Industrial Iot Platform Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/industrial-iot/industrial-iot-platform-versions.md
Last updated 11/10/2021
-# Azure Industrial IoT Platform Release 2.8.1
+# Azure Industrial IoT Platform Release 2.8.2
+We are pleased to announce the release of version 2.8.2 of our Industrial IoT Platform components as the second patch update of the 2.8 Long-Term Support (LTS) release. This release contains important backward compatibility fixes, including Direct Methods API support with version 2.5.x, performance optimizations, security updates, and bug fixes.
+
+## Azure Industrial IoT Platform Release 2.8.1
We are pleased to announce the release of version 2.8.1 of our Industrial IoT Platform components. This is the first patch update of the 2.8 Long-Term Support (LTS) release. It contains important security updates, bug fixes, and performance optimizations.

## Azure Industrial IoT Platform Release 2.8
We are pleased to announce the declaration of Long-Term Support (LTS) for versio
|[2.7.206](https://github.com/Azure/Industrial-IoT/tree/release/2.7.206) |Stable |January 2021 |Configuration through REST API (orchestrated mode), supports Samples telemetry format as well as PubSub - [Release notes](https://github.com/Azure/Industrial-IoT/releases/tag/2.7.206)| |[2.8](https://github.com/Azure/Industrial-IoT/tree/2.8.0) |Long-term support (LTS)|July 2021 |IoT Edge update to 1.1 LTS, OPC stack logging and tracing for better OPC Publisher diagnostics, Security fixes - [Release notes](https://github.com/Azure/Industrial-IoT/releases/tag/2.8.0)| |[2.8.1](https://github.com/Azure/Industrial-IoT/tree/2.8.1) |Patch release for LTS 2.8|November 2021 |Critical bug fixes, security updates, performance optimizations for LTS v2.8|
+|[2.8.2](https://github.com/Azure/Industrial-IoT/tree/2.8.2) |Patch release for LTS 2.8|March 2022 |Backwards compatibility with 2.5.x, bug fixes, security updates, performance optimizations for LTS v2.8|
## Next steps
iot-dps How To Troubleshoot Dps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/how-to-troubleshoot-dps.md
To learn more, see [alerts in Azure Monitor](../azure-monitor/alerts/alerts-over
1. Sign in to the [Azure portal](https://portal.azure.com).
-2. Browse to your IoT hub.
+2. Browse to your Device Provisioning Service.
3. Select **Diagnostics settings**.
-4. Select **Turn on diagnostics**.
+4. Select **Add diagnostic setting**.
-5. Enable the desired logs to be collected.
+5. Configure the desired logs to be collected.
| Log Name | Description |
|-|-|
| DeviceOperations | Logs related to device connection events |
| ServiceOperations | Event logs related to using the service SDK (for example, creating or updating enrollment groups) |
-6. Turn on **Send to Log Analytics** ([see pricing](https://azure.microsoft.com/pricing/details/log-analytics/)).
+6. Select the **Send to Log Analytics** checkbox ([see pricing](https://azure.microsoft.com/pricing/details/log-analytics/)), and then save the setting.
7. Go to **Logs** tab in the Azure portal under Device Provisioning Service resource.
-8. Click **Run** to view recent events.
+8. Enter **AzureDiagnostics** as the query, and select **Run** to view recent events.
9. If there are results, look for `OperationName`, `ResultType`, `ResultSignature`, and `ResultDescription` (error message) to get more detail on the error.
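
To run a similar query from the command line, you can use the Azure CLI (a sketch; it assumes the logs flow to a Log Analytics workspace, the `log-analytics` CLI extension is installed, and the workspace GUID is a placeholder):

```azurecli
# Query recent non-success DPS events from the linked Log Analytics workspace
az monitor log-analytics query --workspace "<workspace GUID>" --timespan "P1D" --analytics-query "AzureDiagnostics | where ResultType != 'Success' | project TimeGenerated, OperationName, ResultType, ResultSignature, ResultDescription"
```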
Use this table to understand and resolve common errors.
| 404 | The Device Provisioning Service instance, or a resource (e.g. an enrollment) does not exist. | 404 Not Found |
| 412 | The ETag in the request does not match the ETag of the existing resource, as per RFC7232. | 412 Precondition failed |
| 429 | Operations are being throttled by the service. For specific service limits, see [IoT Hub Device Provisioning Service limits](../azure-resource-manager/management/azure-subscription-service-limits.md#iot-hub-device-provisioning-service-limits). | 429 Too many requests |
-| 500 | An internal error occurred. | 500 Internal Server Error|
+| 500 | An internal error occurred. | 500 Internal Server Error|
key-vault Tutorial Import Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/tutorial-import-certificate.md
Previously updated : 04/16/2020 Last updated : 03/16/2022 ms.devlang: azurecli #Customer intent:As a security admin who is new to Azure, I want to use Key Vault to securely store certificates in Azure
If you don't have an Azure subscription, create a [free account](https://azure.m
Sign in to the Azure portal at https://portal.azure.com.
-## Create a vault
+## Create a key vault
-1. From the Azure portal menu, or from the **Home** page, select **Create a resource**.
-2. In the Search box, enter **Key Vault**.
-3. From the results list, choose **Key Vault**.
-4. On the Key Vault section, choose **Create**.
-5. On the **Create key vault** section provide the following information:
- - **Name**: A unique name is required. For this quickstart, we use **Example-Vault**.
- - **Subscription**: Choose a subscription.
- - Under **Resource Group**, choose **Create new** and enter a resource group name.
- - In the **Location** pull-down menu, choose a location.
- - Leave the other options to their defaults.
-6. After providing the information above, select **Create**.
+Create a key vault using one of these three methods:
-Take note of the two properties listed below:
+- [Create a key vault using the Azure portal](../general/quick-create-portal.md)
+- [Create a key vault using the Azure CLI](../general/quick-create-cli.md)
+- [Create a key vault using Azure PowerShell](../general/quick-create-powershell.md)
-* **Vault Name**: In the example, this is **Example-Vault**. You will use this name for other steps.
-* **Vault URI**: In the example, this is https://example-vault.vault.azure.net/. Applications that use your vault through its REST API must use this URI.
+## Import a certificate to your key vault
-At this point, your Azure account is the only one authorized to perform operations on this new vault.
-
-![Output after Key Vault creation completes](../media/certificates/tutorial-import-cert/vault-properties.png)
-
-## Import a certificate to Key Vault
-
-To import a certificate to the vault, you need to have a PEM or PFX certificate file to be on disk. In this case, we will import a certificate with file name called **ExampleCertificate**.
+To import a certificate to the vault, you need a PEM or PFX certificate file on disk. If the certificate is in PEM format, the PEM file must contain the key as well as the x509 certificates. This operation requires the certificates/import permission.
> [!IMPORTANT]
-> In Azure Key Vault, supported certificate formats are PFX and PEM.
+> In Azure Key Vault, supported certificate formats are PFX and PEM.
> - The .pem file format contains one or more X509 certificate files.
> - The .pfx file format is an archive file format for storing several cryptographic objects in a single file, that is, a server certificate (issued for your domain), a matching private key, and optionally an intermediate CA.
+In this case, we'll import a certificate called **ExampleCertificate** from the file **/path/to/cert.pem**. You can import a certificate by using the Azure portal, the Azure CLI, or Azure PowerShell.
+
+# [Azure portal](#tab/azure-portal)
1. On the Key Vault properties pages, select **Certificates**.
2. Click on **Generate/Import**.
3. On the **Create a certificate** screen choose the following values:
To import a certificate to the vault, you need to have a PEM or PFX certificate
- **Password**: If you are uploading a password-protected certificate file, provide that password here. Otherwise, leave it blank. Once the certificate file is successfully imported, Key Vault will remove that password.
4. Click **Create**.
-![Certificate properties](../media/certificates/tutorial-import-cert/cert-import.png)
-By adding a certificate using **Import** method, Azure Key vault will automatically populate certificate parameters (i.e. validity period, Issuer name, activation date etc.).
+When importing a certificate, Azure Key Vault automatically populates the certificate parameters (for example, validity period, issuer name, and activation date).
-Once you receive the message that the certificate has been successfully imported, you may click on it on the list to view its properties.
+Once you receive the message that the certificate has been successfully imported, you can select it in the list to view its properties.
-![Screenshot that shows where to view the certificate properties.](../media/certificates/tutorial-import-cert/current-version-hidden.png)
-## Import a certificate using Azure CLI
+# [Azure CLI](#tab/azure-cli)
-Import a certificate into a specified key vault. To
-import an existing valid certificate, containing a private key, into Azure Key Vault, the file to be imported can be in either PFX or PEM format. If the certificate is in PEM format, the PEM file must contain the key as well as x509 certificates. This operation requires the certificates/import permission.
+Import a certificate into your key vault using the Azure CLI [az keyvault certificate import](/cli/azure/keyvault/certificate#az-keyvault-certificate-import) command:
```azurecli
-az keyvault certificate import --file
- --name
- --vault-name
- [--disabled {false, true}]
- [--only-show-errors]
- [--password]
- [--policy]
- [--subscription]
- [--tags]
+az keyvault certificate import --vault-name "<your-key-vault-name>" -n "ExampleCertificate" -f "/path/to/ExampleCertificate.pem"
```
-Learn more about the [parameters](/cli/azure/keyvault/certificate#az-keyvault-certificate-import).
-
-After importing the certificate, you can view the certificate using [Certificate show](/cli/azure/keyvault/certificate#az-keyvault-certificate-show)
-
+After importing the certificate, you can view the certificate using the Azure CLI [az keyvault certificate show](/cli/azure/keyvault/certificate#az-keyvault-certificate-show) command.
```azurecli
-az keyvault certificate show [--id]
- [--name]
- [--only-show-errors]
- [--subscription]
- [--vault-name]
- [--version]
+az keyvault certificate show --vault-name "<your-key-vault-name>" --name "ExampleCertificate"
```
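+
+As a usage note, you can also pull a single property, such as the expiration date. This is a sketch; the `attributes.expires` query path is based on the certificate's `attributes` object in the command's JSON output:
+
+```azurecli
+az keyvault certificate show --vault-name "<your-key-vault-name>" --name "ExampleCertificate" --query "attributes.expires"
+```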
-Now, you have created a Key vault, imported a certificate and viewed Certificate's properties.
-## Import a certificate using Azure PowerShell
+# [Azure PowerShell](#tab/azure-powershell)
+
+You can import a certificate into Key Vault using the Azure PowerShell [Import-AzKeyVaultCertificate](/powershell/module/az.keyvault/import-azkeyvaultcertificate) cmdlet.
+```azurepowershell
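+# Convert the certificate file's password to a SecureString; "123" is a sample value, use your own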
+$Password = ConvertTo-SecureString -String "123" -AsPlainText -Force
+Import-AzKeyVaultCertificate -VaultName "<your-key-vault-name>" -Name "ExampleCertificate" -FilePath "C:\path\to\ExampleCertificate.pem" -Password $Password
```
-Import-AzureKeyVaultCertificate
- [-VaultName] <String>
- [-Name] <String>
- -FilePath <String>
- [-Password <SecureString>]
- [-Tag <Hashtable>]
- [-DefaultProfile <IAzureContextContainer>]
- [-WhatIf]
- [-Confirm]
- [<CommonParameters>]
+
+After importing the certificate, you can view the certificate using the Azure PowerShell [Get-AzKeyVaultCertificate](/powershell/module/az.keyvault/get-azkeyvaultcertificate) cmdlet.
+
+```azurepowershell
+Get-AzKeyVaultCertificate -VaultName "<your-key-vault-name>" -Name "ExampleCertificate"
```
-Learn more about the [parameters](/powershell/module/azurerm.keyvault/import-azurekeyvaultcertificate?).
+
+Now you have created a key vault, imported a certificate, and viewed a certificate's properties.
## Clean up resources
When no longer needed, delete the resource group, which deletes the Key Vault an
2. Select **Delete resource group**.
3. In the **TYPE THE RESOURCE GROUP NAME:** box, type in the name of the resource group and select **Delete**.

## Next steps

In this tutorial, you created a Key Vault and imported a certificate in it. To learn more about Key Vault and how to integrate it with your applications, continue on to the articles below.

- Read more about [Managing certificate creation in Azure Key Vault](./create-certificate-scenarios.md)
- See examples of [Importing Certificates Using REST APIs](/rest/api/keyvault/certificates/import-certificate/import-certificate)
+- Review the [Key Vault security overview](../general/security-features.md)
key-vault Tutorial Rotate Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/tutorial-rotate-certificates.md
Sign in to the Azure portal at https://portal.azure.com.
## Create a vault
-Create an Azure Key Vault using [Azure portal](../general/quick-create-portal.md), [Azure CLI](../general/quick-create-cli.md), or [Azure PowerShell](../general/quick-create-powershell.md). In the example, the key vault name is **Example-Vault**.
+Create a key vault using one of these three methods:
-![Output after key vault creation finishes](../media/certificates/tutorial-import-cert/vault-properties.png)
+- [Create a key vault using the Azure portal](../general/quick-create-portal.md)
+- [Create a key vault using the Azure CLI](../general/quick-create-cli.md)
+- [Create a key vault using Azure PowerShell](../general/quick-create-powershell.md)
## Create a certificate in Key Vault
key-vault Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/overview.md
Previously updated : 06/21/2021 Last updated : 03/28/2022 #Customer intent: As an IT Pro, Decision maker or developer I am trying to learn what Managed HSM is and if it offers anything that could be used in my organization.
For pricing information, please see Managed HSM Pools section on [Azure Key Vaul
### Integrated with Azure and Microsoft PaaS/SaaS services -- Generate (or import using [BYOK](hsm-protected-keys-byok.md)) keys and use them to encrypt your data at rest in Azure services such as [Azure Storage](../../storage/common/customer-managed-keys-overview.md), [Azure SQL](../../azure-sql/database/transparent-data-encryption-byok-overview.md), and [Azure Information Protection](/azure/information-protection/byok-price-restrictions).
+- Generate (or import using [BYOK](hsm-protected-keys-byok.md)) keys and use them to encrypt your data at rest in Azure services such as [Azure Storage](../../storage/common/customer-managed-keys-overview.md), [Azure SQL](../../azure-sql/database/transparent-data-encryption-byok-overview.md), [Azure Information Protection](/azure/information-protection/byok-price-restrictions), and [Customer Key for Microsoft 365](/microsoft-365/compliance/customer-key-set-up). For a more complete list of Azure services which work with Managed HSM, see [Data Encryption Models](/azure/security/fundamentals/encryption-models#supporting-services).
### Uses same API and management interfaces as Key Vault
machine-learning Azure Machine Learning Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/azure-machine-learning-release-notes.md
Previously updated : 02/21/2022 Last updated : 03/28/2022 # Azure Machine Learning Python SDK release notes
In this article, learn about Azure Machine Learning Python SDK releases. For th
__RSS feed__: Get notified when this page is updated by copying and pasting the following URL into your feed reader: `https://docs.microsoft.com/api/search/rss?search=%22Azure+machine+learning+release+notes%22&locale=en-us`
+## 2022-03-28
+
+### Azure Machine Learning SDK for Python v1.40.0
+ + **azureml-automl-dnn-nlp**
+ + The Long Range Text feature is now optional and is enabled only when customers explicitly opt in, using the kwarg "enable_long_range_text".
+ + Adding a data validation layer for the multi-class classification scenario, which leverages the same base class as multilabel for common validations, and a derived class for additional task-specific data validation checks.
+ + **azureml-automl-dnn-vision**
+ + Fixing KeyError while computing class weights.
+ + **azureml-contrib-reinforcementlearning**
+ + SDK warning message for upcoming deprecation of RL service
+ + **azureml-core**
+ + Return logs for runs that went through our new runtime when calling any of the get-logs functions on the run object, including `run.get_details`, `run.get_all_logs`, etc.
+ + Added experimental method Datastore.register_onpremises_hdfs to allow users to create datastores pointing to on-premises HDFS resources.
+ + Updating the CLI documentation in the help command
+ + **azureml-interpret**
+ + For azureml-interpret package, remove shap pin with packaging update. Remove numba and numpy pin after CE env update.
+ + **azureml-mlflow**
+ + Bugfix for MLflow deployment client run_local failing when config object wasn't provided.
+ + **azureml-pipeline-steps**
+ + Remove broken link of deprecated pipeline EstimatorStep
+ + **azureml-responsibleai**
+ + update azureml-responsibleai package to raiwidgets and responsibleai 0.17.0 release
+ + **azureml-train-automl-runtime**
+ + Code generation for automated ML now supports ForecastTCN models (experimental).
+ + Models created via code generation will now have all metrics calculated by default (except normalized mean absolute error, normalized median absolute error, normalized RMSE, and normalized RMSLE in the case of forecasting models). The list of metrics to be calculated can be changed by editing the return value of `get_metrics_names()`. Cross validation will now be used by default for forecasting models created via code generation.
+ + **azureml-training-tabular**
+ + The list of metrics to be calculated can be changed by editing the return value of `get_metrics_names()`. Cross validation will now be used by default for forecasting models created via code generation.
+ + Converted decimal-type y-test values into float so that metrics computation proceeds without errors.
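As referenced in the azureml-core note above, here's a minimal sketch of pulling details and logs from a run object; the workspace config and experiment name are hypothetical.

```python
# Sketch: retrieve run details and logs with azureml-core. Assumes a
# config.json for the workspace and an experiment named "example-experiment";
# both are illustrative placeholders.
from azureml.core import Experiment, Workspace

ws = Workspace.from_config()
exp = Experiment(workspace=ws, name="example-experiment")
run = next(exp.get_runs())  # most recent run in the experiment

details = run.get_details()     # run metadata, including log file locations
log_paths = run.get_all_logs()  # downloads all logs and returns local paths
print(details["status"], log_paths)
```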
+
## 2022-02-28

### Azure Machine Learning SDK for Python v1.39.0
machine-learning How To Secure Training Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-training-vnet.md
Previously updated : 03/01/2022 Last updated : 03/29/2022 ms.devlang: azurecli
When you enable **No public IP**, your compute cluster doesn't use a public IP f
A compute cluster with **No public IP** enabled has **no inbound communication requirements** from the public internet. Specifically, neither inbound NSG rule (`BatchNodeManagement`, `AzureMachineLearning`) is required. You still need to allow inbound traffic from a source of **VirtualNetwork** (any source port) to a destination of **VirtualNetwork** with destination ports **29876 and 29877**, as sketched below.
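For illustration, here's a hedged sketch of that inbound rule using the azure-mgmt-network Python SDK; the subscription, resource group, NSG name, rule name, and priority are all assumed placeholders.

```python
# Sketch: allow inbound VirtualNetwork -> VirtualNetwork traffic on destination
# ports 29876-29877, per the requirement above. Names and priority are
# placeholders (pip install azure-identity azure-mgmt-network).
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

poller = client.security_rules.begin_create_or_update(
    "example-rg",            # resource group that holds the NSG
    "example-nsg",           # NSG attached to the training subnet
    "AllowAmlComputePorts",  # hypothetical rule name
    {
        "protocol": "Tcp",
        "source_address_prefix": "VirtualNetwork",
        "source_port_range": "*",
        "destination_address_prefix": "VirtualNetwork",
        "destination_port_ranges": ["29876", "29877"],
        "access": "Allow",
        "direction": "Inbound",
        "priority": 1040,
    },
)
print(poller.result().provisioning_state)
```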
-> [!IMPORTANT]
-> When creating a compute instance with no public IP, the managed identity for your workspace must be assigned the __Owner__ role on the virtual network. For more information on assigning roles, see [Steps to assign an Azure role](../role-based-access-control/role-assignments-steps.md).
- **No public IP** clusters are dependent on [Azure Private Link](how-to-configure-private-link.md) for Azure Machine Learning workspace. A compute cluster with **No public IP** also requires you to disable private endpoint network policies and private link service network policies. These requirements come from Azure private link service and private endpoints and aren't Azure Machine Learning specific. Follow instruction from [Disable network policies for Private Link service](../private-link/disable-private-link-service-network-policy.md) to set the parameters `disable-private-endpoint-network-policies` and `disable-private-link-service-network-policies` on the virtual network subnet.
machine-learning Tutorial Create Secure Workspace Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-create-secure-workspace-template.md
The template consists of multiple files. The following table describes what each
# [Bicep](#tab/bicep)
-To run the Terraform template, use the following commands from the `machine-learning-end-to-end-secure` where the `main.bicep` file is:
+To run the Bicep template, use the following commands from the `machine-learning-end-to-end-secure` where the `main.bicep` file is:
1. To create a new Azure Resource Group, use the following command. Replace `exampleRG` with your resource group name, and `eastus` with the Azure region you want to use:
After the template completes, use the following steps to connect to the DSVM:
To continue learning how to use the secured workspace from the DSVM, see [Tutorial: Get started with a Python script in Azure Machine Learning](tutorial-1st-experiment-hello-world.md).
-To learn more about common secure workspace configurations and input/output requirements, see [Azure Machine Learning secure workspace traffic flow](concept-secure-network-traffic-flow.md).
+To learn more about common secure workspace configurations and input/output requirements, see [Azure Machine Learning secure workspace traffic flow](concept-secure-network-traffic-flow.md).
marketplace Azure Private Plan Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-private-plan-troubleshooting.md
Title: Troubleshoot private plans in the commercial marketplace description: Troubleshoot private plans in the commercial marketplace-+
While troubleshooting the Azure Subscription Hierarchy, keep these things in min
## Troubleshooting Checklist

-- ISV to ensure the SaaS private plan is using the correct tenant ID for the customer - [How to find your Azure Active Directory tenant ID](../active-directory/fundamentals/active-directory-how-to-find-tenant.md). For VMs use the [Azure Subscription ID. (video guide)](../media-services/latest/setup-azure-subscription-how-to.md?tabs=portal)
+- ISV to ensure the SaaS private plan is using the correct tenant ID for the customer - [How to find your Azure Active Directory tenant ID](../active-directory/fundamentals/active-directory-how-to-find-tenant.md). For VMs use the [Azure Subscription ID. (video guide)](/media-services/latest/setup-azure-subscription-how-to?tabs=portal)
- ISV to ensure that the Customer is not buying through a CSP. Private Plans are not available on a CSP-managed subscription.
- Customer to ensure customer is logging in with an email ID that is registered under the same tenant ID (use the same user ID they used in step #1 above)
- ISV to ask the customer to find the Private Plan in Azure Marketplace: [Private plans in Azure Marketplace](/marketplace/private-plans)
marketplace Marketplace Metering Service Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/marketplace-metering-service-apis.md
description: The usage event API allows you to emit usage events for SaaS offers
Previously updated : 12/06/2021 Last updated : 03/30/2022
Bad request. The batch contained more than 25 usage events.
Code: 403<br> Forbidden. The authorization token isn't provided, is invalid or expired. Or the request is attempting to access a subscription for an offer that was published with a different Azure AD App ID from the one used to create the authorization token.
+## Metered billing retrieve usage events
+
+You can call the usage events API to get the list of usage events. ISVs can use this API to see the usage events that have been posted over a configurable period of time, along with the state each event is in at the time of the API call.
+
+`GET https://marketplaceapi.microsoft.com/api/usageEvents`
+
+*Query parameters*:
+
+| Parameter | Recommendation |
+| - | - |
+| ApiVersion | Use this format: 2018-08-31 |
+| usageStartDate | DateTime in ISO8601 format. For example, 2020-12-03T15:00 or 2020-12-03 |
+| UsageEndDate (optional) | DateTime in ISO8601 format. Default = current date |
+| offerId (optional) | Default = all available |
+| planId (optional) | Default = all available |
+| dimension (optional) | Default = all available |
+| azureSubscriptionId (optional) | Default = all available |
+| reconStatus (optional) | Default = all available |
+|||
+
+*Possible values of reconStatus*:
+
+| ReconStatus | Description |
+| - | - |
+| Submitted | Not yet processed by PC Analytics |
+| Accepted | Matched with PC Analytics |
+| Rejected | Rejected in the pipeline. Contact Microsoft support to investigate the cause. |
+| Mismatch | MarketplaceAPI and Partner Center Analytics quantities are both non-zero but don't match |
+| TestHeaders | Subscription listed with test headers, and therefore not in PC Analytics |
+| DryRun | Submitted with SessionMode=DryRun, and therefore not in PC |
+|||
+
+*Request headers*:
+
+| Header | Value |
+| - | - |
+| content-type | Use `application/json` |
+| x-ms-requestid | Unique string value (preferably a GUID), for tracking the request from the client. If this value is not provided, one will be generated and provided in the response headers. |
+| x-ms-correlationid | Unique string value for operation on the client. This parameter correlates all events from client operation with events on the server side. If this value isn't provided, one will be generated and provided in the response headers. |
+| authorization | A unique access token that identifies the ISV that is making this API call. The format is `Bearer <access_token>` when the token value is retrieved by the publisher. For more information, see:<br><ul><li>SaaS in [Get the token with an HTTP POST](./partner-center-portal/pc-saas-registration.md#get-the-token-with-an-http-post)</li><li>Managed application in [Authentication strategies](marketplace-metering-service-authentication.md)</li></ul> |
+|||
+
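For illustration only, here's a minimal sketch of calling this endpoint with Python's requests library; the token is a placeholder, and the parameter names simply follow the tables above.

```python
# Sketch: list usage events, optionally filtered by reconStatus. The access
# token is a placeholder retrieved as described in the authorization header row.
import requests

access_token = "<access_token>"

response = requests.get(
    "https://marketplaceapi.microsoft.com/api/usageEvents",
    params={
        "ApiVersion": "2018-08-31",
        "usageStartDate": "2020-12-03",  # ISO8601; UsageEndDate defaults to today
        "reconStatus": "Mismatch",       # optional; omit to return all statuses
    },
    headers={
        "content-type": "application/json",
        "authorization": f"Bearer {access_token}",
    },
)
response.raise_for_status()  # 403 means the token is missing, invalid, or expired
for event in response.json():
    print(event["usageDate"], event["dimension"], event["reconStatus"])
```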
+### Responses
+
+Response payload examples:
+
+*Accepted*
+
+```json
+[
+ {
+ "usageDate": "2020-11-30T00:00:00Z",
+ "usageResourceId": "11111111-2222-3333-4444-555555555555",
+ "dimension": "tokens",
+ "planId": "silver",
+ "planName": "Silver",
+ "offerId": "mycooloffer",
+ "offerName": "My Cool Offer",
+ "offerType": "SaaS",
+ "azureSubscriptionId": "12345678-9012-3456-7890-123456789012",
+ "reconStatus": "Accepted",
+ "submittedQuantity": 17.0,
+ "processedQuantity": 17.0,
+ "submittedCount": 17
+ }
+]
+```
+
+*Submitted*
+
+```json
+[
+ {
+ "usageDate": "2020-11-30T00:00:00Z",
+ "usageResourceId": "11111111-2222-3333-4444-555555555555",
+ "dimension": "tokens",
+ "planId": "silver",
+ "planName": "",
+ "offerId": "mycooloffer",
+ "offerName": "",
+ "offerType": "SaaS",
+ "azureSubscriptionId": "12345678-9012-3456-7890-123456789012",
+ "reconStatus": "Submitted",
+ "submittedQuantity": 17.0,
+ "processedQuantity": 0.0,
+ "submittedCount": 17
+ }
+]
+```
+
+*Mismatch*
+
+```json
+[
+ {
+ "usageDate": "2020-11-30T00:00:00Z",
+ "usageResourceId": "11111111-2222-3333-4444-555555555555",
+ "dimension": "tokens",
+ "planId": "silver",
+ "planName": "Silver",
+ "offerId": "mycooloffer",
+ "offerName": "My Cool Offer",
+ "offerType": "SaaS",
+ "azureSubscriptionId": "12345678-9012-3456-7890-123456789012",
+ "reconStatus": "Mismatch",
+ "submittedQuantity": 17.0,
+ "processedQuantity": 16.0,
+ "submittedCount": 17
+ }
+]
+```
+
+*Rejected*
+
+```json
+[
+ {
+ "usageDate": "2020-11-30T00:00:00Z",
+ "usageResourceId": "11111111-2222-3333-4444-555555555555",
+ "dimension": "tokens",
+ "planId": "silver",
+ "planName": "",
+ "offerId": "mycooloffer",
+ "offerName": "",
+ "offerType": "SaaS",
+ "azureSubscriptionId": "12345678-9012-3456-7890-123456789012",
+ "reconStatus": "Rejected",
+ "submittedQuantity": 17.0,
+ "processedQuantity": 0.0,
+ "submittedCount": 17
+ }
+]
+```
+
+**Status codes**
+
+Code: 403<br>
+Forbidden. The authorization token isn't provided, is invalid or expired. Or the request is attempting to access a subscription for an offer that was published with a different Azure AD App ID from the one used to create the authorization token.
+
## Development and testing best practices

To test the custom meter emission, implement the integration with the metering API, create a plan for your published SaaS offer with custom dimensions defined in it at a zero price per unit, and publish this offer as preview so that only a limited set of users can access and test the integration.
Follow the instruction in [Support for the commercial marketplace program in Par
## Next steps
-For more information on metering service APIs , see [Marketplace metering service APIs FAQ](marketplace-metering-service-apis-faq.yml).
+For more information on metering service APIs, see [Marketplace metering service APIs FAQ](marketplace-metering-service-apis-faq.yml).
media-services Azure Media Player Accessibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/azure-media-player/azure-media-player-accessibility.md
- Title: Azure Media Player Accessibility
-description: Learn more about the Azure Media Player's accessibility settings.
---- Previously updated : 04/05/2021--
-# Accessibility #
-
-Azure Media Player works with screen reader capabilities such as Windows Narrator and Apple OSX/iOS VoiceOver. Alternative tags are available for the UI buttons, and the screen reader is capable of reading these alternative tags when the user navigates to them. Additional configurations can be set at the Operating System level.
-
-## Captions and subtitles ##
-
-Azure Media Player currently supports captions only for On-Demand events in the WebVTT format and for live events using CEA 708. The TTML format is currently unsupported. Captions and subtitles allow the player to cater to and empower a broader audience, including people with hearing disabilities or those who want to read along in a different language. Captions also increase video engagement, improve comprehension, and make it possible to view the video in sound-sensitive environments, like a workplace.
-
-## High contrast mode ##
-
-The default UI in Azure Media Player is compliant with most device/browser high contrast view modes. Configurations can be set at the Operating System level.
-
-## Mobility options ##
-
-### Tabbing focus ###
-
-Tabbing focus, provided by general HTML standards, is available in Azure Media Player. In order to enable tab focusing, you must add `tabindex=0` (or another value if you understand how tab ordering is affected in HTML) to the HTML `<video>` tag like so: `<video ... tabindex=0>...</video>`. On some platforms, the focus for the controls may only be present if the controls are visible and if the platform supports these capabilities.
-
-Once tabbing focus is enabled, the end user can effectively navigate and control the video player without depending on their mouse. Each context menu or controllable element can be navigated to by pressing the Tab key and selected with Enter or the spacebar. Pressing Enter or the spacebar on a context menu expands it so the end user can continue tabbing through to select a menu item. Once focus is on the item you wish to select, press Enter or the spacebar again to complete the selection.
-
-### HotKeys ###
-
-Azure Media Player supports control through keyboard hot keys. In a web browser, the only way to control the underlying video element is by having focus on the player. Once the player has focus, hot keys can control the player functionality. The table below describes the various hot keys and their associated behavior:
-
-| Hot key | Behavior |
-|-|-|
-| F/f | Player will enter/exit FullScreen mode |
-| M/m | Player volume will mute/unmute |
-| Up and Down Arrow | Player volume will increase/decrease |
-| Left and Right Arrow | Video progress will increase/decrease |
-| 0,1,2,3,4,5,6,7,8,9 | Video progress will be changed to 0% to 90% depending on the key pressed |
-| Click Action | Video will play/pause |
-
-## Next steps
-
-- [Azure Media Player Quickstart](azure-media-player-quickstart.md)
media-services Azure Media Player Api Methods https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/azure-media-player/azure-media-player-api-methods.md
- Title: Azure Media Player API Methods
-description: The Azure Media Player API allows you to interact with the video through JavaScript, whether the browser is playing the video through HTML5 video, Flash, Silverlight, or any other supported playback technologies.
---- Previously updated : 04/05/2021---
-# API #
-
-The Azure Media Player API allows you to interact with the video through JavaScript, whether the browser is playing the video through HTML5 video, Flash, Silverlight, or any other supported playback technologies.
-
-## Referencing the player ##
-
-To use the API functions, you need access to the player object. Luckily it is easy to get. You just need to make sure your video tag has an ID. The example embed code has an ID of `vid1`. If you have multiple videos on one page, make sure every video tag has a unique ID.
-
-`var myPlayer = amp('vid1');`
-
-> [!NOTE]
-> If the player hasn't been initialized yet via the data-setup attribute or another method, this will also initialize the player.
-
-## Wait until the player is ready ##
-
-The time it takes Azure Media Player to set up the video and API will vary depending on the playback technology being used. HTML5 will often be much faster to load than Flash or Silverlight. For that reason, the player's 'ready' function should be used to trigger any code that requires the player's API.
-
-```javascript
- amp("vid_1").ready(function(){
- var myPlayer = this;
-
- // EXAMPLE: Start playing the video.
- myPlayer.play();
- });
-```
-
-OR
-
-```javascript
- var myPlayer = amp("vid_1", myOptions, function(){
- //this is the ready function and will only execute after the player is loaded
- });
-```
-
-## API methods ##
-
-Now that you have access to a ready player, you can control the video, get values, or respond to video events. The Azure Media Player API function names attempt to follow the [HTML5 media API](http://www.whatwg.org/specs/web-apps/current-work/multipage/the-video-element.html). The main difference is that getter/setter functions are used for video properties.
-
-```javascript
- // setting a property on a bare HTML5 video element
- myVideoElement.currentTime = 120;
-
- // setting a property with Azure Media Player
- myPlayer.currentTime(120);
-```
-
-## Registering for events ##
-Events should be registered directly after initializing the player for the first time to ensure all events are appropriately reported to the application, and should be done outside of the ready event.
-
-```javascript
- var myPlayer = amp("vid_1", myOptions, function(){
- //this is the ready function and will only execute after the player is loaded
- });
- myPlayer.addEventListener(amp.eventName.error, _ampEventHandler);
- //add other event listeners
-```
-
-## Next steps ##
-
-- [Azure Media Player Quickstart](azure-media-player-quickstart.md)
media-services Azure Media Player Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/azure-media-player/azure-media-player-error-codes.md
- Title: Azure Media Player error codes
-description: An error code reference for Azure Media Player.
---- Previously updated : 04/05/2021--
-# Error codes #
-
-When playback can't start or has stopped, an error event will be fired and the `error()` function will return a code and an optional message to help the app developer get more details. `error().message` isn't the message displayed to the user. The message displayed to the user is based on bits 27-20 of `error().code`; see the table below.
-
-```javascript
-
- var myPlayer = amp('vid1');
- myPlayer.addEventListener('error', function() {
- var errorDetails = myPlayer.error();
- var code = errorDetails.code;
- var message = errorDetails.message;
- }
-```
-
-## Error codes, bits [31-28] (4 bits) ##
-
-Describe the area of the error.
-
-- 0 - Unknown
-- 1 - AMP
-- 2 - AzureHtml5JS
-- 3 - FlashSS
-- 4 - SilverlightSS
-- 5 - Html5
-- 6 - Html5FairPlayHLS
-
-## Error codes, bits [27-0] (28 bits) ##
-
-Describe details of the error: bits 27-20 provide a high-level category, and bits 19-0 provide more detail when available.
--
-| amp.errorCode.[name] | Codes, Bits [27-0] (28 bits) | Description |
-||:||
-| **MEDIA_ERR_ABORTED errors range (0x0100000 - 0x01FFFFF)** | | |
-| abortedErrUnknown | 0x0100000 | Generic abort error |
-| abortedErrNotImplemented | 0x0100001 | Abort error, not implemented |
-| abortedErrHttpMixedContentBlocked | 0x0100002 | Abort error, mixed content blocked - generally occurs when loading an `http://` stream from an `https://` page |
-| **MEDIA_ERR_NETWORK errors start value (0x0200000 - 0x02FFFFF)** | | |
-| networkErrUnknown | 0x0200000 | Generic network error |
-| networkErrHttpBadUrlFormat | 0x0200190 | Http 400 error response |
-| networkErrHttpUserAuthRequired | 0x0200191 | Http 401 error response |
-| networkErrHttpUserForbidden | 0x0200193 | Http 403 error response |
-| networkErrHttpUrlNotFound | 0x0200194 | Http 404 error response |
-| networkErrHttpNotAllowed | 0x0200195 | Http 405 error response |
-| networkErrHttpGone | 0x020019A | Http 410 error response |
-| networkErrHttpPreconditionFailed | 0x020019C | Http 412 error response |
-| networkErrHttpInternalServerFailure | 0x02001F4 | Http 500 error response |
-| networkErrHttpBadGateway | 0x02001F6 | Http 502 error response |
-| networkErrHttpServiceUnavailable | 0x02001F7 | Http 503 error response |
-| networkErrHttpGatewayTimeout | 0x02001F8 | Http 504 error response |
-| networkErrTimeout | 0x0200258 | Network timeout error |
-| networkErrErr | 0x0200259 | Network connection error response |
-| **MEDIA_ERR_DECODE errors (0x0300000 - 0x03FFFFF)** | | |
-| decodeErrUnknown | 0x0300000 | Generic decode error |
-| **MEDIA_ERR_SRC_NOT_SUPPORTED errors (0x0400000 - 0x04FFFFF)** | | |
-| srcErrUnknown | 0x0400000 | Generic source not supported error |
-| srcErrParsePresentation | 0x0400001 | Presentation parse error |
-| srcErrParseSegment | 0x0400002 | Segment parse error |
-| srcErrUnsupportedPresentation | 0x0400003 | Presentation not supported |
-| srcErrInvalidSegment | 0x0400004 | Invalid segment |
-| srcErrLiveNoSegments | 0x0400005 | Segments not available yet |
-| **MEDIA_ERR_ENCRYPTED errors start value(0x0500000 - 0x05FFFFF)** | | |
-| encryptErrUnknown | 0x0500000 | Generic encrypted error |
-| encryptErrDecrypterNotFound | 0x0500001 | Decrypter not found |
-| encryptErrDecrypterInit | 0x0500002 | Decrypter initialization error |
-| encryptErrDecrypterNotSupported | 0x0500003 | Decrypter not supported |
-| encryptErrKeyAcquire | 0x0500004 | Key acquire failed |
-| encryptErrDecryption | 0x0500005 | Decryption of segment failed |
-| encryptErrLicenseAcquire | 0x0500006 | License acquire failed |
-| **SRC_PLAYER_MISMATCH errors start value(0x0600000 - 0x06FFFFF)** | | |
-| srcPlayerMismatchUnknown | 0x0600000 | Generic no matching tech player to play the source |
-| srcPlayerMismatchFlashNotInstalled | 0x0600001 | Flash plugin isn't installed; if it were installed, the source may play. *OR* Flash 30 is installed and playing back AES content. If this is the case, please try a different browser. Flash 30 is unsupported as of June 7th. See [known issues](azure-media-player-known-issues.md) for more details. Note: If 0x00600003, both Flash and Silverlight are not installed, if specified in the techOrder.|
-| srcPlayerMismatchSilverlightNotInstalled | 0x0600002 | Silverlight plugin is not installed; if it were installed, the source may play. Note: If 0x00600003, both Flash and Silverlight are not installed, if specified in the techOrder. |
-| | 0x00600003 | Both Flash and Silverlight are not installed, if specified in the techOrder. |
-| **Unknown errors (0x0FF00000)** | | |
-| errUnknown | 0xFF00000 | Unknown errors |
-
-## User error messages ##
-
-User message displayed is based on error code's bits 27-20.
-
-- MEDIA_ERR_ABORTED (1) - "You aborted the video playback"
-- MEDIA_ERR_NETWORK (2) - "A network error caused the video download to fail part-way."
-- MEDIA_ERR_DECODE (3) - "The video playback was aborted due to a corruption problem or because the video used features your browser did not support."
-- MEDIA_ERR_SRC_NOT_SUPPORTED (4) - "The video could not be loaded, either because the server or network failed or because the format is not supported."
-- MEDIA_ERR_ENCRYPTED (5) - "The video is encrypted and we do not have the keys to decrypt it."
-- SRC_PLAYER_MISMATCH (6) - "No compatible source was found for this video."
-- MEDIA_ERR_UNKNOWN (0xFF) - "An unknown error occurred."
-
-## Examples ##
-
-### 0x10600001 ###
-
-"No compatible source was found for this video." is displayed to the end user.
-
-There is no tech player that can play the requested sources, but if the Flash plugin were installed, it is likely that a source could be played.
-
-### 0x20200194 ###
-
-"A network error caused the video download to fail part-way." is displayed to the end user.
-
-AzureHtml5JS failed to playback from an http 404 response.
-
-### Categorizing errors ###
-
-```javascript
- if(myPlayer.error().code & amp.errorCode.abortedErrStart) {
- // MEDIA_ERR_ABORTED errors
- }
- else if(myPlayer.error().code & amp.errorCode.networkErrStart) {
- // MEDIA_ERR_NETWORK errors
- }
- else if(myPlayer.error().code & amp.errorCode.decodeErrStart) {
- // MEDIA_ERR_DECODE errors
- }
- else if(myPlayer.error().code & amp.errorCode.srcErrStart) {
- // MEDIA_ERR_SRC_NOT_SUPPORTED errors
- }
- else if(myPlayer.error().code & amp.errorCode.encryptErrStart) {
- // MEDIA_ERR_ENCRYPTED errors
- }
- else if(myPlayer.error().code & amp.errorCode.srcPlayerMismatchStart) {
- // SRC_PLAYER_MISMATCH errors
- }
- else {
- // unknown errors
- }
-```
-
-### Catching a specific error ###
-
-The following code catches just 404 errors:
-
-```javascript
- if(myPlayer.error().code & amp.errorCode.networkErrHttpUrlNotFound) {
- // all http 404 errors
- }
-```
-
-## Next steps ##
--- [Azure Media Player Quickstart](azure-media-player-quickstart.md)
media-services Azure Media Player Feature List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/azure-media-player/azure-media-player-feature-list.md
- Title: Azure Media Player feature list
-description: A feature reference for Azure Media Player.
---- Previously updated : 04/05/2021--
-# Feature list #
-Here is the list of tested features and unsupported features:
-
-| Feature | TESTED | PARTIALLY TESTED | UNTESTED | UNSUPPORTED | NOTES |
-| - | - | - | - | - | - |
-| **Playback** | | | | | |
-| Basic On-Demand Playback | X | | | | Supports streams from Azure Media Services only |
-| Basic Live Playback | X | | | | Supports streams from Azure Media Services only |
-| AES | X | | | | Supports Azure Media Services Key Delivery Service |
-| Multi-DRM | | X | | | |
-| PlayReady | X | | | | Supports Azure Media Services Key Delivery Service |
-| Widevine | | X | | | Supports Widevine PSSH boxes outlined in manifest |
-| FairPlay | | X | | | Supports Azure Media Services Key Delivery Service |
-| **Techs** | | | | | |
-| MSE/EME (AzureHtml5JS) | X | | | | |
-| Flash Fallback (FlashSS) | X | | | | Not all features are available on this tech. |
-| Silverlight Fallback SilverlightSS | X | | | | Not all features are available on this tech. |
-| Native HLS Pass-through (Html5) | | X | | | Not all features are available on this tech due to platform restrictions. |
-| **Features** | | | | | |
-| API Support | X | | | | See known issues list |
-| Basic UI | X | | | |
-| Initialization through JavaScript | X | | | | |
-| Initialization through video tag | | X | | | |
-| Segment addressing - Time Based | X | | | | |
-| Segment addressing - Index Based | | | | X | |
-| Segment addressing - Byte Based | | | | X | |
-| Azure Media Services URL rewriter | | X | | | |
-| Accessibility - Captions and Subtitles | X | | | | WebVTT (on demand), CEA 708 (on demand and live) and IMSC1 (on demand and live) |
-| Accessibility - Hotkeys | X | | | | |
-| Accessibility - High Contrast | | X | | | |
-| Accessibility - Tab Focus | | X | | | |
-| Error Messaging | | X | | | Error messages are inconsistent across techs |
-| Event Triggering | X | | | | |
-| Diagnostics | | X | | | Diagnostic information is only available on the AzureHtml5JS tech and partially available on the SilverlightSS tech. |
-| Customizable Tech Order | | X | | | |
-| Heuristics - Basic | X | | | | |
-| Heuristics - Customization | | | X | | Customization is only available with the AzureHtml5JS tech. |
-| Discontinuities | X | | | | |
-| Select Bitrate | X | | | | This API is only available on the AzureHtml5JS and FlashSS techs. |
-| Multi-Audio Stream | | X | | | Programmatic audio switch is supported on AzureHtml5JS and FlashSS techs, and is available through UI selection on AzureHtml5JS, FlashSS, and native Html5 (in Safari). Most platforms require the same codec private data to switch audio streams (same codec, channel, sampling rate, etc.). |
-| UI Localization | | X | | | |
-| Multi-instance Playback | | | | X | This scenario may work for some techs but is currently unsupported and untested. You may also get this to work using iframes |
-| Ads Support | | X | | | AMP supports the insertion of pre-, mid-, and post-roll linear ads from VAST-compliant ad servers for VOD in the AzureHtml5JS tech |
-| Analytics | | X | | | AMP provides the ability to listen to analytics and diagnostic events in order to send them to an analytics backend of your choice. Not all events and properties are available across techs due to platform limitations. |
-| Custom Skins | | | X | | This scenario can be achieved by setting controls to false in AMP and using your own HTML and CSS. |
-| Seek Bar Scrubbing | | | | X | |
-| Trick-Play | | | | X | |
-| Audio Only | X | | | | Supported in AzureHtml5JS. Progressive MP3 playback can work with the HTML5 tech if the platform supports it. |
-| Video Only | X | | | | Supported in AzureHtml5JS. |
-| Multi-period Presentation | | | | X |
-| Multiple camera angles | | | | X | |
-| Playback Speed | | X | | | Playback speed is supported in most scenarios except the mobile case due to a partial bug in Chrome |
-
-## Next steps ##
-- [Azure Media Player Quickstart](azure-media-player-quickstart.md)
media-services Azure Media Player Full Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/azure-media-player/azure-media-player-full-setup.md
- Title: Azure Media Player Full Setup
-description: Learn how to set up the Azure Media Player.
---- Previously updated : 04/05/2021---
-# Azure Media Player full setup #
-
-Azure Media Player is easy to set up. It only takes a few moments to get basic playback of media content right from your Azure Media Services account. [Samples](https://github.com/Azure-Samples/azure-media-player-samples) are also provided in the samples directory of the release.
-
-## Step 1: Include the JavaScript and CSS files in the head of your page ##
-
-With Azure Media Player, you can access the scripts from the CDN hosted version. It's often recommended now to put JavaScript before the closing body tag (`</body>`) instead of in the `<head>`, but Azure Media Player includes an 'HTML5 Shiv', which needs to be in the head for older IE versions to respect the video tag as a valid element.
-
-> [!NOTE]
-> If you're already using an HTML5 shiv like [Modernizr](https://modernizr.com/) you can include the Azure Media Player JavaScript anywhere. However make sure your version of Modernizr includes the shiv for video.
-
-### CDN Version ###
-
-```html
- <link href="//amp.azure.net/libs/amp/latest/skins/amp-default/azuremediaplayer.min.css" rel="stylesheet">
- <script src= "//amp.azure.net/libs/amp/latest/azuremediaplayer.min.js"></script>
-```
-
-> [!IMPORTANT]
-> You should **NOT** use the `latest` version in production, as this is subject to change on demand. Replace `latest` with a version of Azure Media Player. For example, replace `latest` with `2.1.1`. Azure Media Player versions can be queried from [here](https://amp.azure.net/libs/amp/latest/docs/changelog.html).
-
-> [!NOTE]
-> Since the `1.2.0` release, it is no longer required to include the location of the fallback techs (it will automatically be picked up from the relative path of the azuremediaplayer.min.js file). You can modify the location of the fallback techs by adding the following script in the `<head>` after the above scripts.
-
-> [!NOTE]
-> Due to the nature of Flash and Silverlight plugins, the swf and xap files should be hosted on a domain without any sensitive information or data - this is automatically taken care of for you with the Azure CDN hosted version.
-
-```javascript
- <script>
- amp.options.flashSS.swf = "//amp.azure.net/libs/amp/latest/techs/StrobeMediaPlayback.2.0.swf"
- amp.options.silverlightSS.xap = "//amp.azure.net/libs/amp/latest/techs/SmoothStreamingPlayer.xap"
- </script>
-```
-
-## Step 2: Add an HTML5 video tag to your page ##
-
-With Azure Media Player, you can use an HTML5 video tag to embed a video. Azure Media Player will then read the tag and make it work in all browsers, not just ones that support HTML5 video. Beyond the basic markup, Azure Media Player needs a few extra pieces.
-
-1. The `data-setup` attribute on the `<video>` tag tells Azure Media Player to automatically set up the video when the page is ready, and to read any options (in JSON format) from the attribute.
-1. The `id` attribute: Should be used and unique for every video on the same page.
-1. The `class` attribute contains two classes:
- - `azuremediaplayer` applies styles that are required for Azure Media Player UI functionality
- - `amp-default-skin` applies the default skin to the HTML5 controls
-1. The `<source>` includes two required attributes
- - `src` attribute can include a *.ism/manifest* file from Azure Media Services. When one is added, Azure Media Player automatically adds the URLs for DASH, SMOOTH, and HLS to the player
- - `type` attribute is the required MIME type of the stream. The MIME type associated with *".ism/manifest"* is *"application/vnd.ms-sstr+xml"*
-1. The *optional* `<data-setup>` attribute on the `<source>` tells Azure Media Player if there are any unique delivery policies for the stream from Azure Media Services, including, but not limited to, encryption type (AES or PlayReady, Widevine, or FairPlay) and token.
-
-Include/exclude attributes, settings, sources, and tracks exactly as you would for HTML5 video.
-
-```html
- <video id="vid1" class="azuremediaplayer amp-default-skin" autoplay controls width="640" height="400" poster="poster.jpg" data-setup='{"techOrder": ["azureHtml5JS", "flashSS", "html5FairPlayHLS","silverlightSS", "html5"], "nativeControlsForTouch": false}'>
- <source src="http://amssamples.streaming.mediaservices.windows.net/91492735-c523-432b-ba01-faba6c2206a2/AzureMediaServicesPromo.ism/manifest" type="application/vnd.ms-sstr+xml" />
- <p class="amp-no-js">
- To view this video please enable JavaScript, and consider upgrading to a web browser that supports HTML5 video
- </p>
- </video>
-```
-
-By default, the large play button is located in the upper left-hand corner so it doesn't cover up the interesting parts of the poster. If you'd prefer to center the large play button, you can add an additional `amp-big-play-centered` `class` to your `<video>` element.
-
-### Alternative Setup for dynamically loaded HTML ###
-
-If your web page or application loads the video tag dynamically (ajax, appendChild, etc.), so that it may not exist when the page loads, you'll want to manually set up the player instead of relying on the data-setup attribute. To do this, first remove the data-setup attribute from the tag so there's no confusion around when the player is initialized. Next, run the following JavaScript some time after the Azure Media Player JavaScript has loaded, and after the video tag has been loaded into the DOM.
-
-```javascript
- var myPlayer = amp('vid1', { /* Options */
- techOrder: ["azureHtml5JS", "flashSS", "html5FairPlayHLS","silverlightSS", "html5"],
- "nativeControlsForTouch": false,
- autoplay: false,
- controls: true,
- width: "640",
- height: "400",
- poster: ""
- }, function() {
- console.log('Good to go!');
- // add an event listener
- this.addEventListener('ended', function() {
- console.log('Finished!');
- });
- }
- );
- myPlayer.src([{
- src: "http://samplescdn.origin.mediaservices.windows.net/e0e820ec-f6a2-4ea2-afe3-1eed4e06ab2c/AzureMediaServices_Overview.ism/manifest",
- type: "application/vnd.ms-sstr+xml"
- }]);
-```
-
-The first argument in the `amp` function is the ID of your video tag. Replace it with your own.
-
-The second argument is an options object. It allows you to set additional options like you can with the data-setup attribute.
-
-The third argument is a 'ready' callback. Once Azure Media Player has initialized, it will call this function. In the ready callback, 'this' object refers to the player instance.
-
-Instead of using an element ID, you can also pass a reference to the element itself.
-
-```javascript
-
- var myPlayer = amp(document.getElementById('example_video_1'), {/*Options*/}, function() {
- //This is functionally the same as the previous example.
- });
- myPlayer.src([{ src: "//example/path/to/myVideo.ism/manifest", type: "application/vnd.ms-sstr+xml" }]);
-```
-
-## Next steps ##
--- [Azure Media Player Quickstart](azure-media-player-quickstart.md)
media-services Azure Media Player Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/azure-media-player/azure-media-player-known-issues.md
- Title: Azure Media Player Known Issues
-description: The current release has the following known issues.
---- Previously updated : 04/05/2021--
-# Known Issues #
-
-The current release has the following known issues:
-
-## Azure Media Player ##
-- Incorrectly configured encoders may cause issues with playback
-- Streams with timestamps greater than 2^53 may have playback issues.
 - Mitigation: Use 90-kHz video and 44.1-kHz audio timescales
-- No autoplay on mobile devices without user interaction. It's blocked by the platform.
-- Seeking near discontinuities may cause playback failure.
-- Download of large presentations may cause the UI to lock up.
-- Can't automatically play back the same source again after the presentation has ended.
 - To replay a source after it has ended, it's required to set the source again.
-- Empty manifests may cause issues with the player.
 - This issue can occur when starting a live stream and not enough chunks are found in the manifest.
-- Playback position may be outside the UI seek bar.
-- Event ordering isn't consistent across all techs.
-- Buffered property isn't consistent across techs.
-- nativeControlsForTouch must be false (default) to avoid "Object doesn't support property or method 'setControls'"
-- Posters must now be absolute URLs
-- Minor aesthetic issues may occur in the UI when using high contrast mode of the device
-- URLs containing "%" or "+" in the fully decoded string may have problems setting the source
-
-## Ad insertion ##
-- Ads may have issues being inserted (on demand or live) when an ad-blocker is installed in the browser
-- Mobile devices may have issues playing back ads.
-- MP4 Midroll ads aren't currently supported by Azure Media Player.
-
-## AzureHtml5JS ##
-- In the DVR window of Live content, when content finishes, the timeline will continue to grow until seeking to the area or reaching the end of the presentation.
-- Live presentations in Firefox with MSE enabled have some issues.
-
-- Assets that are audio only will not play back via the AzureHtml5JS tech.
 - If you'd like to play back assets without audio, you can do so by inserting blank audio using the [Azure Media Services Explorer tool](https://aka.ms/amse)
- - Instructions on how to insert silent audio can be found [here](../previous/media-services-advanced-encoding-with-mes.md#silent_audio)
-
-## Flash ##
-- AES content does not play back in Flash version 30.0.0.134 due to a bug in Adobe's caching logic. Adobe has fixed the issue and released it in 30.0.0.154
-- For tech and http failures (such as 404s or network timeouts), the player will take longer to recover than on other techs.
-- Clicking the video area with the flashSS tech won't play/pause the player.
-- If the user has Flash installed but doesn't give permission to load it on the site, infinite spinning can occur. This is because the player thinks the plugin is installed and available and it thinks the plugin is running the content. JavaScript code has been sent but the browser settings have blocked the plugin from executing until the user accepts the prompt to allow the plugin. This can occur in all browsers.
-
-## Silverlight ##
-- Missing features
-- For tech and http failures (such as 404s or network timeouts), the player will take longer to recover than on other techs.
-- Safari and Firefox on Mac playback with Silverlight requires explicitly defining `http://` or `https://` for the source.
-- If an API is missing for this tech, it will generally return null.
-- If the user has the plugin installed but doesn't give permission to load it on the site, infinite spinning can occur. This is because the player thinks the plugin is installed and available and it thinks the plugin is running the content. JavaScript code has been sent but the browser settings have blocked the plugin from executing until the user accepts the prompt to allow the plugin. This can occur in all browsers.
-
-## Native HTML5 ##
-- Html5 tech is only sending the canplaythrough event for the first set source.
-- This tech only supports what the browser has implemented. Some features may be missing in this tech.
-- If an API is missing for this tech, it will generally return null.
-- There are issues with Captions and Subtitles on this tech. They may or may not be available or viewable on this tech.
-- Having limited bandwidth in an HLS/Html5 tech scenario results in audio playing without video.
-
-### Microsoft ###
-- IE8 playback does not currently work due to incompatibility with ECMAScript 5
-- In IE and some versions of Edge, fullscreen cannot be entered by tabbing to the button and selecting it or using the F/f hotkey.
-
-## Google ##
-- Multiple encoding profiles in the same manifest have some playback issues in Chrome and are not recommended.
-- Chrome cannot play back HE-AAC with AzureHtml5JS. Follow details on the [bug tracker](https://bugs.chromium.org/p/chromium/issues/detail?id=534301).
-- As of Chrome v58, Widevine content must be loaded/played back via the https:// protocol, otherwise playback will fail.
-
-## Mozilla ##
-- Audio stream switch requires audio streams to have the same codec private data when using AzureHtml5JS. The Firefox platform requires this.
-
-## Apple ##
-- Safari on Mac often enables Power Saver mode by default with the setting "Stop plug-ins to save power", which blocks plugins like Flash and Silverlight when it believes they don't benefit the user. This block does not block the plugin's existence, only its capabilities. Given the default techOrder, this may cause issues when attempting to play back
- - Mitigation 1: If the video player is 'front and center' (within a 3000 x 3000 pixel boundary starting at the top-left corner of the document), it should still play.
- - Mitigation 2: Change the techOrder for Safari to be ["azureHtml5JS", "html5"]. This mitigation has implication of not getting all the features that are available in the FlashSS tech.
-- PlayReady content via Silverlight may have issues playing back in Safari.
-- AES and restricted token content does not play back using iOS and older Android devices.
- - In order to achieve this scenario, a proxy must be added to your service.
-- Default skin for iOS Phone shows through.
-- iPhone playback always occurs in the native player fullscreen.
- - Features, including captions, may not persist in this non-inline playback.
-- When a live presentation ends, iOS devices will not properly end the presentation.
-- iOS does not allow for DVR capabilities.
-- iOS audio stream switch can only be done through the iOS native player UI and requires audio streams to have the same codec private data
-- Older versions of Safari may potentially have issues playing back live streams.
-
-## Older Android ##
--- AES and restricted token content does not play back using iOS and older Android devices.
- - In order to achieve this scenario, a proxy must be added to your service.
-
-## Next steps ##
--- [Azure Media Player Quickstart](azure-media-player-quickstart.md)
media-services Azure Media Player Localization https://github.com/MicrosoftDo