Updates from: 04/06/2021 03:10:15
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Access Tokens https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/access-tokens.md
In the following example, you replace these values:
- `<tenant-name>` - The name of your Azure AD B2C tenant.
- `<policy-name>` - The name of your custom policy or user flow.
- `<application-ID>` - The application identifier of the web application that you registered to support the user flow.
+- `<application-ID-URI>` - The application identifier URI that you set under the **Expose an API** blade of the client application.
+- `<scope-name>` - The name of the scope that you added under the **Expose an API** blade of the client application.
- `<redirect-uri>` - The **Redirect URI** that you entered when you registered the client application. ```http
GET https://<tenant-name>.b2clogin.com/<tenant-name>.onmicrosoft.com/<policy-nam
client_id=<application-ID> &nonce=anyRandomValue &redirect_uri=https://jwt.ms
-&scope=https://<tenant-name>.onmicrosoft.com/api/read
+&scope=<application-ID-URI>/<scope-name>
&response_type=code ```
Content-Type: application/x-www-form-urlencoded
grant_type=authorization_code &client_id=<application-ID>
-&scope=https://<tenant-name>.onmicrosoft.com/api/read
+&scope=<application-ID-URI>/<scope-name>
&code=eyJraWQiOiJjcGltY29yZV8wOTI1MjAxNSIsInZlciI6IjEuMC... &redirect_uri=https://jwt.ms &client_secret=2hMG2-_:y12n10vwH...
active-directory-b2c Configure Tokens https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/configure-tokens.md
Previously updated : 12/14/2020 Last updated : 04/05/2021
The OutputClaim element contains the following attributes:
::: zone-end
+## Authorization code lifetime
+
+When using the [OAuth 2.0 authorization code flow](authorization-code-flow.md), the app can use the authorization code to request an access token for a target resource. Authorization codes are short-lived and expire after about 10 minutes. The authorization code lifetime cannot be configured. Make sure your application redeems the authorization code within 10 minutes.
+ ## Next steps
-Learn more about how to [request access tokens](access-tokens.md).
+Learn more about how to [request access tokens](access-tokens.md).
active-directory-b2c Configure User Input https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/configure-user-input.md
The application claims are values that are returned to the application. Update y
1. Select **Page layouts**. 1. Select **Local account sign-up page**. 1. Under **User attributes**, select **City**.
- 1. In the **User input type** drop-down, select **DropdownSingleSelect**.
+ 1. In the **User input type** drop-down, select **DropdownSingleSelect**. Optional: Use the **Move up/down** buttons to arrange the order in which the values appear on the sign-up page.
1. In the **Optional** drop-down, select **No**. 1. Select **Save**.
active-directory-b2c Identity Provider Generic Openid Connect https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-generic-openid-connect.md
Previously updated : 03/08/2021 Last updated : 04/05/2021
To allow users to sign in, the identity provider requires developers to register
## Scope
-Scope defines the information and permissions you are looking to gather from your custom identity provider. OpenID Connect requests must contain the `openid` scope value in order to receive the ID token from the identity provider. Without the ID token, users are not able to sign in to Azure AD B2C using the custom identity provider. Other scopes can be appended separated by space. Refer to the custom identity provider's documentation to see what other scopes may be available.
+Scope defines the information and permissions you are looking to gather from your identity provider, for example `openid profile`. The `openid` scope must be specified in order to receive the ID token from the identity provider. Without the ID token, users are not able to sign in to Azure AD B2C using the custom identity provider. Other scopes can be appended, separated by a space. Refer to the custom identity provider's documentation to see what other scopes may be available.
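In a custom policy, the scope is typically supplied as a metadata item on the identity provider's technical profile. The following is a minimal sketch under that assumption; the claims provider name and technical profile ID are illustrative, and only the `scope` item reflects this article:

```xml
<ClaimsProvider>
  <DisplayName>Contoso OpenID Connect</DisplayName>
  <TechnicalProfiles>
    <!-- Illustrative technical profile for a generic OpenID Connect identity provider -->
    <TechnicalProfile Id="Contoso-OpenIdConnect">
      <DisplayName>Contoso</DisplayName>
      <Protocol Name="OpenIdConnect" />
      <Metadata>
        <!-- openid is required to receive the ID token; append other scopes separated by spaces -->
        <Item Key="scope">openid profile</Item>
      </Metadata>
      ...
    </TechnicalProfile>
  </TechnicalProfiles>
</ClaimsProvider>
```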
## Response type
active-directory-b2c Microsoft Graph Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/microsoft-graph-get-started.md
Previously updated : 01/21/2021 Last updated : 04/05/2021
Before your scripts and applications can interact with the [Microsoft Graph API]
1. Select **Register**. 1. Record the **Application (client) ID** that appears on the application overview page. You use this value in a later step.
-### Grant API access
+## Grant API access
-Next, grant the registered application permissions to manipulate tenant resources through calls to the Microsoft Graph API.
+For your application to access data in Microsoft Graph, grant the registered application the relevant [application permissions](https://docs.microsoft.com/graph/permissions-reference). The effective permissions of your application are the full level of privileges implied by the permission. For example, to *create*, *read*, *update*, and *delete* every user in your Azure AD B2C tenant, add the **User.ReadWrite.All** permission.
+> [!NOTE]
+> The **User.ReadWrite.All** permission does not include the ability to update user account passwords. If your application needs to update user account passwords, [grant the user administrator role](#optional-grant-user-administrator-role). When you grant the [user administrator](../active-directory/roles/permissions-reference.md#user-administrator) role, the **User.ReadWrite.All** permission is not required. The user administrator role includes everything needed to manage users.
-### Create client secret
+You can grant your application multiple application permissions. For example, if your application also needs to manage groups in your Azure AD B2C tenant, add the **Group.ReadWrite.All** permission as well.
-You now have an application that has permission to *create*, *read*, *update*, and *delete* users in your Azure AD B2C tenant. Continue to the next section to add *password update* permissions.
-## Enable user delete and password update
+## [Optional] Grant user administrator role
-The *Read and write directory data* permission does **NOT** include the ability delete users or update user account passwords.
+If your application or script needs to update users' passwords, you need to assign the *User administrator* role to your application. The [User administrator](../active-directory/roles/permissions-reference.md#user-administrator) role has a fixed set of permissions you grant to your application.
-If your application or script needs to delete users or update their passwords, assign the *User administrator* role to your application:
+To add the *User administrator* role, follow these steps:
1. Sign in to the [Azure portal](https://portal.azure.com) and use the **Directory + Subscription** filter to switch to your Azure AD B2C tenant. 1. Search for and select **Azure AD B2C**. 1. Under **Manage**, select **Roles and administrators**.
-1. Select the **User administrator** role.
+1. Select the **User administrator** role.
1. Select **Add assignments**.
-1. In the **Select** text box, enter the name of the application you registered earlier, for example, *managementapp1*. Select your application when it appears in the search results.
+1. In the **Select** text box, enter the name or the ID of the application you registered earlier, for example, *managementapp1*. When it appears in the search results, select your application.
1. Select **Add**. It might take a few minutes for the permissions to fully propagate.
+## Create client secret
+
+Your application needs a client secret to prove its identity when requesting a token. To add the client secret, follow these steps:
+++ ## Next steps Now that you've registered your management application and have granted it the required permissions, your applications and services (for example, Azure Pipelines) can use its credentials and permissions to interact with the Microsoft Graph API.
active-directory-b2c Page Layout https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/page-layout.md
Previously updated : 03/22/2021 Last updated : 04/05/2021
Azure AD B2C page layout uses the following version of the [jQuery library](http
## Self-asserted page (selfasserted)
+**2.1.4**
+- Updated jQuery version to 3.5.1.
+- Updated HandlebarJS version to 4.7.6.
+
+**2.1.3**
+- Security fixes.
+ **2.1.2** - Fixed the localization encoding issue for languages such as Spanish and French.
Azure AD B2C page layout uses the following version of the [jQuery library](http
> [!TIP] > If you localize your page to support multiple locales or languages in a user flow, the [localization IDs](localization-string-ids.md) article provides the list of localization IDs that you can use for the page version you select.
+**2.1.4**
+- Updated jQuery version to 3.5.1.
+- Updated HandlebarJS version to 4.7.6.
+
+**2.1.3**
+- Security fixes.
+- Minor bug fixes.
+ **2.1.2** - Fixed the localization encoding issue for languages such as Spanish and French. - Allowing the "forgot password" link to use as claims exchange. For more information, see [Self-service password reset](add-password-reset-policy.md#self-service-password-reset-recommended).
Azure AD B2C page layout uses the following version of the [jQuery library](http
## MFA page (multifactor)
+**1.2.4**
+- Updated jQuery version to 3.5.1.
+- Updated HandlebarJS version to 4.7.6.
+
+**1.2.3**
+- Allowing tooltip string override via language localization.
+- Security fixes.
+- Minor bug fixes.
+ **1.2.2** - Fixed an issue with auto-filling the verification code when using iOS. - Fixed an issue with redirecting a token to the relying party from Android Webview.
Azure AD B2C page layout uses the following version of the [jQuery library](http
## Exception Page (globalexception)
+**1.2.1**
+- Updated jQuery version to 3.5.1.
+- Updated HandlebarJS version to 4.7.6.
+ **1.2.0** - Accessibility fixes
Azure AD B2C page layout uses the following version of the [jQuery library](http
## Other pages (ProviderSelection, ClaimsConsent, UnifiedSSD)
+**1.2.1**
+- Updated jQuery version to 3.5.1.
+- Updated HandlebarJS version to 4.7.6.
+ **1.2.0** - Accessibility fixes
active-directory-b2c Saml Service Provider Options https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/saml-service-provider-options.md
Previously updated : 03/15/2021 Last updated : 04/05/2021
This article describes the configuration options that are available when connect
::: zone pivot="b2c-custom-policy"
-## Encrypted SAML assertions
+
+## SAML response signature
+
+You can specify a certificate to be used to sign the SAML messages. The message is the `<samlp:Response>` element within the SAML response sent to the application.
+
+If you don't already have a policy key, [create one](saml-service-provider.md#create-a-policy-key). Then configure the `SamlMessageSigning` Metadata item in the SAML Token Issuer technical profile. The `StorageReferenceId` must reference the Policy Key name.
+
+```xml
+<ClaimsProvider>
+ <DisplayName>Token Issuer</DisplayName>
+ <TechnicalProfiles>
+ <!-- SAML Token Issuer technical profile -->
+ <TechnicalProfile Id="Saml2AssertionIssuer">
+ <DisplayName>Token Issuer</DisplayName>
+ <Protocol Name="SAML2"/>
+ <OutputTokenFormat>SAML2</OutputTokenFormat>
+ ...
+ <CryptographicKeys>
+ <Key Id="SamlMessageSigning" StorageReferenceId="B2C_1A_SamlMessageCert"/>
+ ...
+ </CryptographicKeys>
+ ...
+ </TechnicalProfile>
+```
+
+### SAML response signature algorithm
+
+You can configure the signature algorithm used to sign the SAML assertion. Possible values are `Sha256`, `Sha384`, `Sha512`, or `Sha1`. Make sure the technical profile and application use the same signature algorithm. Use only the algorithm that your certificate supports.
+
+Configure the signature algorithm using the `XmlSignatureAlgorithm` metadata key within the relying party Metadata element.
+
+```xml
+<RelyingParty>
+ <DefaultUserJourney ReferenceId="SignUpOrSignIn" />
+ <TechnicalProfile Id="PolicyProfile">
+ <DisplayName>PolicyProfile</DisplayName>
+ <Protocol Name="SAML2"/>
+ <Metadata>
+ <Item Key="XmlSignatureAlgorithm">Sha256</Item>
+ </Metadata>
+ ..
+ </TechnicalProfile>
+</RelyingParty>
+```
+
+## SAML assertions signature
+
+When your application expects the SAML assertion section to be signed, make sure the SAML service provider sets `WantAssertionsSigned` to `true`. If it's set to `false` or doesn't exist, the assertion section won't be signed. The following example shows SAML service provider metadata with `WantAssertionsSigned` set to `true`.
+
+```xml
+<EntityDescriptor ID="id123456789" entityID="https://samltestapp2.azurewebsites.net" validUntil="2099-12-31T23:59:59Z" xmlns="urn:oasis:names:tc:SAML:2.0:metadata">
+ <SPSSODescriptor WantAssertionsSigned="true" AuthnRequestsSigned="false" protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
+ ...
+ </SPSSODescriptor>
+</EntityDescriptor>
+```
+
+### SAML assertions signature certificate
+
+Your policy must specify a certificate to be used to sign the SAML assertions section of the SAML response. If you don't already have a policy key, [create one](saml-service-provider.md#create-a-policy-key). Then configure the `SamlAssertionSigning` Metadata item in the SAML Token Issuer technical profile. The `StorageReferenceId` must reference the Policy Key name.
+
+```xml
+<ClaimsProvider>
+ <DisplayName>Token Issuer</DisplayName>
+ <TechnicalProfiles>
+ <!-- SAML Token Issuer technical profile -->
+ <TechnicalProfile Id="Saml2AssertionIssuer">
+ <DisplayName>Token Issuer</DisplayName>
+ <Protocol Name="SAML2"/>
+ <OutputTokenFormat>SAML2</OutputTokenFormat>
+ ...
+ <CryptographicKeys>
+ <Key Id="SamlAssertionSigning" StorageReferenceId="B2C_1A_SamlMessageCert"/>
+ ...
+ </CryptographicKeys>
+ ...
+ </TechnicalProfile>
+```
+
+## SAML assertions encryption
When your application expects SAML assertions to be in an encrypted format, you need to make sure that encryption is enabled in the Azure AD B2C policy.
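A minimal sketch of how this is commonly enabled in a custom policy, assuming the `WantsEncryptedAssertions` metadata key on the relying party technical profile (verify the key name against the article; the surrounding structure mirrors the relying party example shown earlier):

```xml
<RelyingParty>
  <DefaultUserJourney ReferenceId="SignUpOrSignIn" />
  <TechnicalProfile Id="PolicyProfile">
    <DisplayName>PolicyProfile</DisplayName>
    <Protocol Name="SAML2"/>
    <Metadata>
      <!-- Assumed metadata key: instructs Azure AD B2C to encrypt the SAML assertions -->
      <Item Key="WantsEncryptedAssertions">true</Item>
    </Metadata>
    ...
  </TechnicalProfile>
</RelyingParty>
```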
We provide a complete sample policy that you can use for testing with the SAML t
1. Update `TenantId` to match your tenant name, for example *contoso.b2clogin.com*. 1. Keep the policy name *B2C_1A_signup_signin_saml*.
-## SAML response signature algorithm
-
-You can configure the signature algorithm used to sign the SAML assertion. Possible values are `Sha256`, `Sha384`, `Sha512`, or `Sha1`. Make sure the technical profile and application use the same signature algorithm. Use only the algorithm that your certificate supports.
-
-Configure the signature algorithm using the `XmlSignatureAlgorithm` metadata key within the relying party Metadata element.
-
-```xml
-<RelyingParty>
- <DefaultUserJourney ReferenceId="SignUpOrSignIn" />
- <TechnicalProfile Id="PolicyProfile">
- <DisplayName>PolicyProfile</DisplayName>
- <Protocol Name="SAML2"/>
- <Metadata>
- <Item Key="XmlSignatureAlgorithm">Sha256</Item>
- </Metadata>
- ..
- </TechnicalProfile>
-</RelyingParty>
-```
- ## SAML response lifetime You can configure the length of time the SAML response remains valid. Set the lifetime using the `TokenLifeTimeInSeconds` metadata item within the SAML Token Issuer technical profile. This value is the number of seconds that can elapse from the `NotBefore` timestamp calculated at the token issuance time. The default lifetime is 300 seconds (5 minutes).
Example:
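A minimal sketch of setting this item on the SAML token issuer technical profile shown earlier; the 600-second value is illustrative:

```xml
<TechnicalProfile Id="Saml2AssertionIssuer">
  <DisplayName>Token Issuer</DisplayName>
  <Protocol Name="SAML2"/>
  <OutputTokenFormat>SAML2</OutputTokenFormat>
  <Metadata>
    <!-- Number of seconds the SAML response remains valid after the NotBefore timestamp -->
    <Item Key="TokenLifeTimeInSeconds">600</Item>
  </Metadata>
  ...
</TechnicalProfile>
```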
You can manage the session between Azure AD B2C and the SAML relying party application using the `UseTechnicalProfileForSessionManagement` element and the [SamlSSOSessionProvider](custom-policy-reference-sso.md#samlssosessionprovider).
-## Force users to re-authenticate
+## Force users to reauthenticate
-To force users to re-authenticate, the application can include the `ForceAuthn` attribute in the SAML authentication request. The `ForceAuthn` attribute is a Boolean value. When set to true, the users session will be invalidated at Azure AD B2C, and the user is forced to re-authenticate. The following SAML authentication request demonstrates how to set the `ForceAuthn` attribute to true.
+To force users to reauthenticate, the application can include the `ForceAuthn` attribute in the SAML authentication request. The `ForceAuthn` attribute is a Boolean value. When it's set to true, the user's session is invalidated at Azure AD B2C, and the user is forced to reauthenticate. The following SAML authentication request demonstrates how to set the `ForceAuthn` attribute to true.
```xml
To force users to re-authenticate, the application can include the `ForceAuthn`
</samlp:AuthnRequest> ```
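For reference, a hedged sketch of such a request; the ID, timestamp, destination URL, and issuer below are illustrative only:

```xml
<samlp:AuthnRequest xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol"
                    xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
                    ID="_id123456789"
                    Version="2.0"
                    IssueInstant="2021-04-05T12:00:00Z"
                    ForceAuthn="true"
                    Destination="https://<tenant-name>.b2clogin.com/<tenant-name>.onmicrosoft.com/<policy-name>/samlp/sso/login">
  <saml:Issuer>https://samltestapp2.azurewebsites.net</saml:Issuer>
</samlp:AuthnRequest>
```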
+## Sign the Azure AD B2C IdP SAML Metadata
+
+You can instruct Azure AD B2C to sign its SAML IdP metadata document, if required by the application. If you don't already have a policy key, [create one](saml-service-provider.md#create-a-policy-key). Then configure the `MetadataSigning` metadata item in the SAML token issuer technical profile. The `StorageReferenceId` must reference the policy key name.
+
+```xml
+<ClaimsProvider>
+ <DisplayName>Token Issuer</DisplayName>
+ <TechnicalProfiles>
+ <!-- SAML Token Issuer technical profile -->
+ <TechnicalProfile Id="Saml2AssertionIssuer">
+ <DisplayName>Token Issuer</DisplayName>
+ <Protocol Name="SAML2"/>
+ <OutputTokenFormat>SAML2</OutputTokenFormat>
+ ...
+ <CryptographicKeys>
+ <Key Id="MetadataSigning" StorageReferenceId="B2C_1A_SamlMetadataCert"/>
+ ...
+ </CryptographicKeys>
+ ...
+ </TechnicalProfile>
+```
+ ## Debug the SAML protocol To help configure and debug the integration with your service provider, you can use a browser extension for the SAML protocol, for example, [SAML DevTools extension](https://chrome.google.com/webstore/detail/saml-devtools-extension/jndllhgbinhiiddokbeoeepbppdnhhio) for Chrome, [SAML-tracer](https://addons.mozilla.org/es/firefox/addon/saml-tracer/) for FireFox, or [Edge or IE Developer tools](https://techcommunity.microsoft.com/t5/microsoft-sharepoint-blog/gathering-a-saml-token-using-edge-or-ie-developer-tools/ba-p/320957).
active-directory-b2c Saml Service Provider https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/saml-service-provider.md
Previously updated : 03/03/2021 Last updated : 04/05/2021
To build a trust relationship between your application and Azure AD B2C, both se
| Usage | Required | Description |
| ----- | -------- | ----------- |
-| SAML response signing | Yes | A certificate with a private key stored in Azure AD B2C. This certificate is used by Azure AD B2C to sign the SAML response sent to your application. Your application reads the Azure AD B2C metadata public key to validate the signature of the SAML response. |
+| SAML response signing | Yes | A certificate with a private key stored in Azure AD B2C. This certificate is used by Azure AD B2C to sign the SAML response sent to your application. Your application reads the Azure AD B2C metadata public key to validate the signature of the SAML response. |
+| SAML assertion signing | Yes | A certificate with a private key stored in Azure AD B2C. This certificate is used by Azure AD B2C to sign the assertion within the SAML response, the `<saml:Assertion>` part of the SAML response. |
In a production environment, we recommend using certificates issued by a public certificate authority. However, you can also complete this procedure with self-signed certificates.
-### Prepare a self-signed certificate for SAML response signing
+### Create a policy key
-You must create a SAML response signing certificate so that your application can trust the assertion from Azure AD B2C.
+To establish a trust relationship between your application and Azure AD B2C, create a SAML response signing certificate. Azure AD B2C uses this certificate to sign the SAML response sent to your application. Your application reads the Azure AD B2C metadata public key to validate the signature of the SAML response.
+
+> [!TIP]
+> You can use the policy key that you create in this section for other purposes, such as signing the [SAML assertion](saml-service-provider-options.md#saml-assertions-signature).
+
+### Obtain a certificate
[!INCLUDE [active-directory-b2c-create-self-signed-certificate](../../includes/active-directory-b2c-create-self-signed-certificate.md)]
+### Upload the certificate
+
+You need to store your certificate in your Azure AD B2C tenant.
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directory + subscription** filter in the top menu and choose the directory that contains your tenant.
+1. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **Azure AD B2C**.
+1. On the Overview page, select **Identity Experience Framework**.
+1. Select **Policy Keys** and then select **Add**.
+1. For **Options**, choose `Upload`.
+1. Enter a **Name** for the policy key. For example, `SamlIdpCert`. The prefix `B2C_1A_` is added automatically to the name of your key.
+1. Browse to and select your certificate .pfx file with the private key.
+1. Click **Create**.
+ ## Enable your policy to connect with a SAML application To connect to your SAML application, Azure AD B2C must be able to create SAML responses.
Locate the `<ClaimsProviders>` section and add the following XML snippet to impl
</Metadata> <CryptographicKeys> <Key Id="SamlAssertionSigning" StorageReferenceId="B2C_1A_SamlIdpCert"/>
+ <Key Id="SamlMessageSigning" StorageReferenceId="B2C_1A_SamlIdpCert"/>
</CryptographicKeys> <InputClaims/> <OutputClaims/>
You can change the value of the `IssuerUri` metadata item in the SAML token issu
</TechnicalProfile> ```
-#### Sign the Azure AD B2C IdP SAML Metadata (optional)
-
-You can instruct Azure AD B2C to sign its SAML IdP metadata document, if required by the application. To do so, generate and upload a SAML IdP metadata signing policy key as shown in [Prepare a self-signed certificate for SAML response signing](#prepare-a-self-signed-certificate-for-saml-response-signing). Then configure the `MetadataSigning` metadata item in the SAML token issuer technical profile. The `StorageReferenceId` must reference the policy key name.
-
-```xml
-<ClaimsProvider>
- <DisplayName>Token Issuer</DisplayName>
- <TechnicalProfiles>
- <!-- SAML Token Issuer technical profile -->
- <TechnicalProfile Id="Saml2AssertionIssuer">
- <DisplayName>Token Issuer</DisplayName>
- <Protocol Name="SAML2"/>
- <OutputTokenFormat>SAML2</OutputTokenFormat>
- ...
- <CryptographicKeys>
- <Key Id="MetadataSigning" StorageReferenceId="B2C_1A_SamlMetadataCert"/>
- ...
- </CryptographicKeys>
- ...
- </TechnicalProfile>
-```
-
-#### Sign the Azure AD B2C IdP SAML response element (optional)
-
-You can specify a certificate to be used to sign the SAML messages. The message is the `<samlp:Response>` element within the SAML response sent to the application.
-
-To specify a certificate, generate and upload a policy key as shown in [Prepare a self-signed certificate for SAML response signing](#prepare-a-self-signed-certificate-for-saml-response-signing). Then configure the `SamlMessageSigning` Metadata item in the SAML Token Issuer technical profile. The `StorageReferenceId` must reference the Policy Key name.
-
-```xml
-<ClaimsProvider>
- <DisplayName>Token Issuer</DisplayName>
- <TechnicalProfiles>
- <!-- SAML Token Issuer technical profile -->
- <TechnicalProfile Id="Saml2AssertionIssuer">
- <DisplayName>Token Issuer</DisplayName>
- <Protocol Name="SAML2"/>
- <OutputTokenFormat>SAML2</OutputTokenFormat>
- ...
- <CryptographicKeys>
- <Key Id="SamlMessageSigning" StorageReferenceId="B2C_1A_SamlMessageCert"/>
- ...
- </CryptographicKeys>
- ...
- </TechnicalProfile>
-```
## Configure your policy to issue a SAML Response Now that your policy can create SAML responses, you must configure the policy to issue a SAML response instead of the default JWT response to your application.
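A minimal sketch of the resulting relying party section, assuming the structure shown in the signature-algorithm example earlier in this digest; output claims and subject naming details are omitted:

```xml
<RelyingParty>
  <DefaultUserJourney ReferenceId="SignUpOrSignIn" />
  <TechnicalProfile Id="PolicyProfile">
    <DisplayName>PolicyProfile</DisplayName>
    <!-- SAML2 here, instead of the default JWT-issuing protocol, makes the policy return a SAML response -->
    <Protocol Name="SAML2"/>
    ...
  </TechnicalProfile>
</RelyingParty>
```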
active-directory-b2c Troubleshoot With Application Insights https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/troubleshoot-with-application-insights.md
Previously updated : 03/10/2021 Last updated : 04/05/2021
The detailed activity logs described here should be enabled **ONLY** during the
If you don't already have one, create an instance of Application Insights in your subscription.
+> [!TIP]
+> A single instance of Application Insights can be used for multiple Azure AD B2C tenants. In your query, you can then filter by tenant or policy name. For more information, see the [logs in Application Insights](#see-the-logs-in-application-insights) samples.
+
+To use an existing instance of Application Insights in your subscription, follow these steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Select the **Directory + subscription** filter in the top menu, and then select the directory that contains your Azure subscription (not your Azure AD B2C directory).
+1. Open the Application Insights resource that you created earlier.
+1. On the **Overview** page, record the **Instrumentation Key**.
+
+To create an instance of Application Insights in your subscription, follow these steps:
+ 1. Sign in to the [Azure portal](https://portal.azure.com). 1. Select the **Directory + subscription** filter in the top menu, and then select the directory that contains your Azure subscription (not your Azure AD B2C directory). 1. Select **Create a resource** in the left-hand navigation menu.
Here is a list of queries you can use to see the logs:
| Query | Description |
| ----- | ----------- |
-`traces` | See all of the logs generated by Azure AD B2C |
-`traces | where timestamp > ago(1d)` | See all of the logs generated by Azure AD B2C for the last day
+| `traces` | Get all of the logs generated by Azure AD B2C |
+| `traces | where timestamp > ago(1d)` | Get all of the logs generated by Azure AD B2C for the last day.|
+| `traces | where message contains "exception" | where timestamp > ago(2h)`| Get all of the logs with errors from the last two hours.|
+| `traces | where customDimensions.Tenant == "contoso.onmicrosoft.com" and customDimensions.UserJourney == "b2c_1a_signinandup"` | Get all of the logs generated by the Azure AD B2C *contoso.onmicrosoft.com* tenant for the *b2c_1a_signinandup* user journey. |
+| `traces | where customDimensions.CorrelationId == "00000000-0000-0000-0000-000000000000"`| Get all of the logs generated by Azure AD B2C for a correlation ID. Replace the correlation ID with your correlation ID. |
The entries may be long. Export to CSV for a closer look.
active-directory-b2c Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/whats-new-docs.md
Title: "What's new in Azure Active Directory business-to-customer (B2C)" description: "New and updated documentation for the Azure Active Directory business-to-customer (B2C)." Previously updated : 03/08/2021 Last updated : 04/05/2021
Welcome to what's new in Azure Active Directory B2C documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the B2C service, see [What's new in Azure Active Directory](../active-directory/fundamentals/whats-new.md).
+## March 2021
+
+### New articles
+
+- [Enable custom domains for Azure Active Directory B2C](custom-domain.md)
+- [Investigate risk with Identity Protection in Azure AD B2C](identity-protection-investigate-risk.md)
+- [Set up sign-up and sign-in with an Apple ID using Azure Active Directory B2C (Preview)](identity-provider-apple-id.md)
+- [Set up a force password reset flow in Azure Active Directory B2C](force-password-reset.md)
+- [Embedded sign-in experience](embedded-login.md)
+
+### Updated articles
+
+- [Set up sign-up and sign-in with an Amazon account using Azure Active Directory B2C](identity-provider-amazon.md)
+- [Set up sign-in with a Salesforce SAML provider by using SAML protocol in Azure Active Directory B2C](identity-provider-salesforce-saml.md)
+- [Migrate an OWIN-based web API to b2clogin.com or a custom domain](multiple-token-endpoints.md)
+- [Technical profiles](technicalprofiles.md)
+- [Add Conditional Access to user flows in Azure Active Directory B2C](conditional-access-user-flow.md)
+- [Set up a password reset flow in Azure Active Directory B2C](add-password-reset-policy.md)
+- [RelyingParty](relyingparty.md)
++ ## February 2021 ### New articles
active-directory Use Scim To Provision Users And Groups https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/use-scim-to-provision-users-and-groups.md
There are several endpoints defined in the SCIM RFC. You can start with the `/Us
|--|--|
|/User|Perform CRUD operations on a user object.|
|/Group|Perform CRUD operations on a group object.|
-|/ServiceProviderConfig|Provides details about the features of the SCIM standard that are supported, for example the resources that are supported and the authentication method.|
-|/ResourceTypes|Specifies metadata about each resource|
|/Schemas|The set of attributes supported by each client and service provider can vary. One service provider might include `name`, `title`, and `emails`, while another service provider uses `name`, `title`, and `phoneNumbers`. The schemas endpoint allows for discovery of the attributes supported.|
|/Bulk|Bulk operations allow you to perform operations on a large collection of resource objects in a single operation (e.g. update memberships for a large group).|
+|/ServiceProviderConfig|Provides details about the features of the SCIM standard that are supported, for example the resources that are supported and the authentication method.|
+|/ResourceTypes|Specifies metadata about each resource.|
**Example list of endpoints**
Use the checklist to onboard your application quickly and customers have a smoot
> * 3 Non-expiring test credentials for your application (Required) > * Support the OAuth authorization code grant or a long lived token as described below (Required) > * Establish an engineering and support point of contact to support customers post gallery onboarding (Required)
+> * [Support schema discovery (required)](https://tools.ietf.org/html/rfc7643#section-6)
> * Support updating multiple group memberships with a single PATCH > * Document your SCIM endpoint publicly
-> * [Support schema discovery](https://tools.ietf.org/html/rfc7643#section-6)
### Authorization to provisioning connectors in the application gallery The SCIM spec doesn't define a SCIM-specific scheme for authentication and authorization and relies on the use of existing industry standards.
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/whats-new-docs.md
Title: "What's new in Azure Active Directory application provisioning" description: "New and updated documentation for the Azure Active Directory application provisioning." Previously updated : 03/08/2021 Last updated : 04/05/2021
Welcome to what's new in Azure Active Directory application provisioning documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the provisioning service, see [What's new in Azure Active Directory](../fundamentals/whats-new.md).
+## March 2021
+
+### Updated articles
+
+- [Syncing extension attributes for app provisioning](user-provisioning-sync-attributes-for-mapping.md)
+- [Application provisioning in quarantine status](application-provisioning-quarantine-status.md)
+- [Managing user account provisioning for enterprise apps in the Azure portal](configure-automatic-user-provisioning-portal.md)
+- [Reference for writing expressions for attribute mappings in Azure AD](functions-for-customizing-application-data.md)
+- [Tutorial: Develop a sample SCIM endpoint](use-scim-to-build-users-and-groups-endpoints.md)
++ ## February 2021 ### Updated articles
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/whats-new-docs.md
Title: "What's new in Azure Active Directory external identities" description: "New and updated documentation for the Azure Active Directory external identities." Previously updated : 03/08/2021 Last updated : 04/05/2021
Welcome to what's new in Azure Active Directory external identities documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the external identities service, see [What's new in Azure Active Directory](../fundamentals/whats-new.md).
+## March 2021
+
+### New articles
+
+- [Microsoft Account (MSA) identity provider for External Identities (Preview)](microsoft-account.md)
+
+### Updated articles
+
+- [Add an API connector to a user flow](self-service-sign-up-add-api-connector.md)
+- [What is guest user access in Azure Active Directory B2B?](what-is-b2b.md)
+- [Properties of an Azure Active Directory B2B collaboration user](user-properties.md)
+- [Tutorial: Bulk invite Azure AD B2B collaboration users](tutorial-bulk-invite.md)
+- [Add a custom approval workflow to self-service sign-up](self-service-sign-up-add-approvals.md)
+- [Reset redemption status for a guest user](reset-redemption-status.md)
+- [Use API connectors to customize and extend self-service sign-up](api-connectors-overview.md)
+- [Azure Active Directory B2B collaboration FAQs](faq.md)
+- [Azure Active Directory B2B collaboration invitation redemption](redemption-experience.md)
+- [Identity Providers for External Identities](identity-providers.md)
+- [Add a self-service sign-up user flow to an app (Preview)](self-service-sign-up-user-flow.md)
+- [Email one-time passcode authentication](one-time-passcode.md)
+- [Add Azure Active Directory B2B collaboration users in the Azure portal](add-users-administrator.md)
++ ## February 2021 ### New articles
active-directory How To Connect Staged Rollout https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-staged-rollout.md
-# Migrate to cloud authentication using staged rollout (preview)
+# Migrate to cloud authentication using staged rollout
Staged rollout allows you to selectively test groups of users with cloud authentication capabilities like Azure AD Multi-Factor Authentication (MFA), Conditional Access, Identity Protection for leaked credentials, Identity Governance, and others, before cutting over your domains. This article discusses how to make the switch. Before you begin the staged rollout, however, you should consider the implications if one or more of the following conditions is true:
The following scenarios are not supported for staged rollout:
- Admins can roll out cloud authentication by using security groups. To avoid sync latency when you're using on-premises Active Directory security groups, we recommend that you use cloud security groups. The following conditions apply: - You can use a maximum of 10 groups per feature. That is, you can use 10 groups each for *password hash sync*, *pass-through authentication*, and *seamless SSO*.
- - Nested groups are *not supported*. This scope applies to public preview as well.
+ - Nested groups are *not supported*.
- Dynamic groups are *not supported* for staged rollout. - Contact objects inside the group will block the group from being added.
You can roll out one of these options:
Do the following:
-1. To access the preview UX, sign in to the [Azure AD portal](https://aka.ms/stagedrolloutux).
+1. To access the UX, sign in to the [Azure AD portal](https://aka.ms/stagedrolloutux).
-2. Select the **Enable staged rollout for managed user sign-in (Preview)** link.
+2. Select the **Enable staged rollout for managed user sign-in** link.
For example, if you want to enable *Option A*, slide the **Password Hash Sync** and **Seamless single sign-on** controls to **On**, as shown in the following images.
- ![The Azure AD Connect page](./media/how-to-connect-staged-rollout/sr4.png)
+
- ![The "Enable staged rollout features (Preview)" page](./media/how-to-connect-staged-rollout/sr5.png)
+
3. Add the groups to the feature to enable *pass-through authentication* and *seamless SSO*. To avoid a UX time-out, ensure that the security groups contain no more than 200 members initially.
- ![The "Manage groups for Password Hash Sync (Preview)" page](./media/how-to-connect-staged-rollout/sr6.png)
+
>[!NOTE] >The members in a group are automatically enabled for staged rollout. Nested and dynamic groups are not supported for staged rollout.
active-directory Application Proxy Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-proxy-migration.md
Azure AD Application Proxy offers unique benefits when compared to similar produ
## Next steps -- [Use Azure AD Application to provide secure remote access to on-premises applications](application-proxy.md)
+- [Use Azure AD Application Proxy to provide secure remote access to on-premises applications](application-proxy.md)
active-directory Plan An Application Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/plan-an-application-integration.md
Previously updated : 03/19/2021 Last updated : 04/05/2021
You can add any application that already exists in your organization, or any thi
- Self-service integration of any application that supports [Security Assertion Markup Language (SAML) 2.0](https://wikipedia.org/wiki/SAML_2.0) identity providers (SP-initiated or IdP-initiated) - Self-service integration of any web application that has an HTML-based sign-in page using [password-based SSO](sso-options.md#password-based-sso) - Self-service connection of applications that use the [System for Cross-Domain Identity Management (SCIM) protocol for user provisioning](../app-provisioning/use-scim-to-provision-users-and-groups.md)-- Ability to add links to any application in the [Office 365 app launcher](https://www.microsoft.com/microsoft-365/blog/2014/10/16/organize-office-365-new-app-launcher-2/) or [My Apps](sso-options.md#linked-sign-on)
+- Ability to add links to any application in the [Office 365 app launcher](https://support.microsoft.com/office/meet-the-microsoft-365-app-launcher-79f12104-6fed-442f-96a0-eb089a3f476a) or [My Apps](https://myapplications.microsoft.com/)
If you're looking for developer guidance on how to integrate custom apps with Azure AD, see [Authentication Scenarios for Azure AD](../develop/authentication-vs-authorization.md). When you develop an app that uses a modern protocol like [OpenId Connect/OAuth](../develop/active-directory-v2-protocols.md) to authenticate users, you can register it with the Microsoft identity platform by using the [App registrations](../develop/quickstart-register-app.md) experience in the Azure portal.
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/whats-new-docs.md
Title: "What's new in Azure Active Directory application management" description: "New and updated documentation for the Azure Active Directory application management." Previously updated : 03/08/2021 Last updated : 04/04/2021
Welcome to what's new in Azure Active Directory application management documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the application management service, see [What's new in Azure Active Directory](../fundamentals/whats-new.md).
+## March 2021
+
+### New articles
+
+- [Azure Active Directory (Azure AD) Application Management certificates frequently asked questions](application-management-certs-faq.md)
+- [Azure Active Directory PowerShell examples for Application Management](app-management-powershell-samples.md)
+- [Disable auto-acceleration to a federated IDP during user sign-in with Home Realm Discovery policy](prevent-domain-hints-with-home-realm-discovery.md)
+
+### Updated articles
+
+- [Plan Azure Active Directory My Apps configuration](my-apps-deployment-plan.md)
+- [Integrating Azure Active Directory with applications getting started guide](plan-an-application-integration.md)
+- [Integrate with SharePoint (SAML)](application-proxy-integrate-with-sharepoint-server-saml.md)
+- [Migrate application authentication to Azure Active Directory](migrate-application-authentication-to-azure-active-directory.md)
+- [Use the AD FS application activity report to migrate applications to Azure AD](migrate-adfs-application-activity.md)
+- [Plan a single sign-on deployment](plan-sso-deployment.md)
+- [Azure Active Directory PowerShell examples for Application Management](app-management-powershell-samples.md)
+- [Troubleshoot Kerberos constrained delegation configurations for Application Proxy](application-proxy-back-end-kerberos-constrained-delegation-how-to.md)
+- [Quickstart: Set up SAML-based single sign-on (SSO) for an application in your Azure Active Directory (Azure AD) tenant](add-application-portal-setup-sso.md)
+- [Azure Active Directory application management: What's new](whats-new-docs.md)
+- [Active Directory (Azure AD) Application Proxy frequently asked questions](application-proxy-faq.md)
+- [Troubleshoot problems signing in to an application from Azure AD My Apps](application-sign-in-other-problem-access-panel.md)
+- [Tutorial: Add an on-premises application for remote access through Application Proxy in Azure Active Directory](application-proxy-add-on-premises-application.md)
+- [Optimize traffic flow with Azure Active Directory Application Proxy](application-proxy-network-topology.md)
+- [Azure AD Application Proxy: Version release history](application-proxy-release-version-history.md)
+- [Configure Azure Active Directory sign in behavior for an application by using a Home Realm Discovery policy](configure-authentication-for-federated-users-portal.md)
+- [Moving application authentication from Active Directory Federation Services to Azure Active Directory](migrate-adfs-apps-to-azure.md)
++ ## February 2021 ### New articles
active-directory Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/managed-identities-azure-resources/overview.md
ms.devlang: Previously updated : 10/06/2020 Last updated : 04/05/2021
What can a managed identity be used for?
Here are some of the benefits of using Managed identities: - You don't need to manage credentials. Credentials are not even accessible to you.-- You can use managed identities to authenticate to any Azure service that supports Azure AD authentication including Azure Key Vault.
+- You can use managed identities to authenticate to any resource that supports Azure Active Directory authentication including your own applications.
- Managed identities can be used without any additional cost. > [!NOTE]
active-directory Cornerstone Ondemand Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/cornerstone-ondemand-tutorial.md
Previously updated : 03/09/2021 Last updated : 04/02/2021
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
## Configure Cornerstone Single Sign-On SSO
-1. Sign in to the Cornerstone Single Sign-On as an administrator.
-
-1. Go to the **Admin -> Tools**.
-
- ![screeenshot for Admin page.](./media/cornerstone-ondemand-tutorial/admin.png)
-
-1. Select **EDGE** panel in **Configuration Tools**.
-
- ![screeenshot for EDGE panel.](./media/cornerstone-ondemand-tutorial/edge-panel.png)
-
-1. Select Single Sign-On in the **Integrate** section.
-
- ![screeenshot for Single Sign-On option.](./media/cornerstone-ondemand-tutorial/single-sign-on.png)
-
-1. Click on **Add SSO** button. Select **Inbound SAML** in the below shown pop up window and then click **Add**.
-
- ![screeenshot for Inbound SAML.](./media/cornerstone-ondemand-tutorial/inbound.png)
-
-1. Perform the below steps in the following page:
-
- ![screeenshot for Configuration section for Cornerstone.](./media/cornerstone-ondemand-tutorial/configuration.png)
-
- a. In the **General Properties**, click on **Upload File** to upload the **Certificate (Base64)** file, which you have downloaded from the Azure portal.
-
- b. Select the **Enable** checkbox and in the **IDP URL** textbox, paste the **Login URL** value which you have copied from the Azure portal.
-
- c. Click **Save**.
+To configure single sign-on on the **Cornerstone Single Sign-On** side, send the downloaded **Certificate (Base64)** and the appropriate copied URLs from the Azure portal to the [Cornerstone Single Sign-On support team](mailto:moreinfo@csod.com), or contact your partner. The team configures this setting so that the SAML SSO connection is set up properly on both sides.
### Create Cornerstone Single Sign-On test user The objective of this section is to create a user called B.Simon in Cornerstone Single Sign-On. Cornerstone Single Sign-On supports automatic user provisioning, which is by default enabled. You can find more details [here](./cornerstone-ondemand-provisioning-tutorial.md) on how to configure automatic user provisioning.
-**If you need to create user manually, perform following steps:**
-
-1. Sign in to the Cornerstone Single Sign-On as an administrator.
-
-1. Go to the **Admin -> Users** and click on **Add User** in the bottom of the page.
-
- ![screeenshot for test user creation of Cornerstone.](./media/cornerstone-ondemand-tutorial/user-1.png)
-
-1. Fill the required fields in **Add new user** page and click on **Save**.
-
- ![screeenshot for test user creation with the required fields.](./media/cornerstone-ondemand-tutorial/user-2.png)
- ## Test SSO In this section, you test your Azure AD single sign-on configuration with following options.
active-directory Fuze Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/fuze-provisioning-tutorial.md
Previously updated : 07/26/2019 Last updated : 04/05/2021
Once you've configured provisioning, use the following resources to monitor your
## Connector limitations * Fuze supports custom SCIM attributes called **Entitlements**. These attributes are only able to be created and not updated.
+* The Fuze SCIM API does not support filtering on the userName attribute. As a result, you may see failures in the logs when trying to sync an existing user who does not have a userName attribute but exists with an email that matches the userPrincipalName in Azure AD.
## Change log
Once you've configured provisioning, use the following resources to monitor your
## Next steps
-* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md).
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md).
aks Enable Host Encryption https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/enable-host-encryption.md
This feature can only be set at cluster creation or node pool creation time.
- Ensure you have the `aks-preview` CLI extension v0.4.73 or higher version installed. - Ensure you have the `EnableEncryptionAtHostPreview` feature flag under `Microsoft.ContainerService` enabled.
-In order to be able to use encryption at host for your VMs or virtual machine scale sets, you must get the feature enabled on your subscription. Email **encryptionAtHost@microsoft.com** with your subscription IDs to get the feature enabled for your subscriptions.
+You must enable the feature for your subscription before you use the EncryptionAtHost property for your Azure Kubernetes Service cluster. Follow these steps to enable the feature for your subscription:
-> [!IMPORTANT]
-> You must email **encryptionAtHost@microsoft.com** with your subscription IDs to get the feature enabled for compute resources. You cannot enable it yourself for compute resources.
+1. Register the feature for your subscription by running the following command:
+```azurepowershell-interactive
+Register-AzProviderFeature -FeatureName "EncryptionAtHost" -ProviderNamespace "Microsoft.Compute"
+```
+2. Check that the registration state is **Registered** (this takes a few minutes) by running the following command before trying out the feature.
+
+```azurepowershell-interactive
+Get-AzProviderFeature -FeatureName "EncryptionAtHost" -ProviderNamespace "Microsoft.Compute"
+```
### Install aks-preview CLI extension
api-management Api Management Sample Flexible Throttling https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-sample-flexible-throttling.md
When the throttling key is defined using a [policy expression](./api-management-
This enables the developer's client application to choose how they want to create the rate limiting key. The client developers could create their own rate tiers by allocating sets of keys to users and rotating the key usage. ## Summary
-Azure API Management provides rate and quote throttling to both protect and add value to your API service. The new throttling policies with custom scoping rules allow you finer grained control over those policies to enable your customers to build even better applications. The examples in this article demonstrate the use of these new policies by manufacturing rate limiting keys with client IP addresses, user identity, and client generated values. However, there are many other parts of the message that could be used such as user agent, URL path fragments, message size.
+Azure API Management provides rate and quota throttling to both protect and add value to your API service. The new throttling policies with custom scoping rules give you finer-grained control over those policies so that your customers can build even better applications. The examples in this article demonstrate the use of these new policies by manufacturing rate-limiting keys with client IP addresses, user identity, and client-generated values. However, many other parts of the message could be used, such as the user agent, URL path fragments, or message size.
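To make the summary concrete, here is a minimal sketch of the kind of policy the article describes, keying the rate limit to the caller's IP address and the quota to the subscription; the numeric limits are illustrative, not recommendations:

```xml
<policies>
  <inbound>
    <base />
    <!-- Throttle each calling IP address independently: 10 calls per 60 seconds -->
    <rate-limit-by-key calls="10"
                       renewal-period="60"
                       counter-key="@(context.Request.IpAddress)" />
    <!-- Weekly quota per subscription: 10,000 calls or 40,000 KB of bandwidth -->
    <quota-by-key calls="10000"
                  bandwidth="40000"
                  renewal-period="604800"
                  counter-key="@(context.Subscription.Id)" />
  </inbound>
</policies>
```

Swapping the `counter-key` expression for a header value or a JWT claim yields the per-user and client-generated variants mentioned above.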
## Next steps Please give us your feedback as a GitHub issue for this topic. It would be great to hear about other potential key values that have been a logical choice in your scenarios.
app-service Quickstart Python Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/quickstart-python-portal.md
+
+ Title: 'Quickstart: Create a Python app in the Azure portal'
+description: Get started with Azure App Service by deploying your first Python app to a Linux container in App Service by using the Azure portal.
+ Last updated : 04/01/2021+++
+# Quickstart: Create a Python app using Azure App Service on Linux (Azure portal)
+
+In this quickstart, you deploy a Python web app to [App Service on Linux](overview.md#app-service-on-linux), Azure's highly scalable, self-patching web hosting service. You use the Azure portal to deploy a sample with either the Flask or Django frameworks. The web app you configure uses a basic App Service tier that incurs a small cost in your Azure subscription.
+
+## Configure accounts
+
+- If you don't yet have an Azure account with an active subscription, [create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
+
+- If you don't have a GitHub account, visit [github.com](https://github.com) to create one.
+
+## Fork the sample GitHub repository
+
+1. Open [github.com](https://github.com) and sign in.
+
+1. Navigate to one of the following sample repositories:
+ - [Flask Hello World](https://github.com/Azure-Samples/python-docs-hello-world)
+ - [Django Hello World](https://github.com/Azure-Samples/python-docs-hello-django)
+
+1. On the upper right of the GitHub page, select **Fork** to make a copy of the repository in your own GitHub account:
+
+ ![Github fork command](media/quickstart-python-portal/github-fork-command.png)
+
+ Azure requires that you have access to the GitHub organization that contains the repository. By forking the sample to your own GitHub account, you automatically have the necessary access and can also make changes to the code.
+
+## Provision the App Service web app
+
+An App Service web app is the web server to which you deploy your code.
+
+1. Open the Azure portal at [https://portal.azure.com](https://portal.azure.com) and sign in if needed.
+
+1. In the search bar at the top of the Azure portal, enter "App Service", then select **App Services**.
+
+ ![Portal search bar and selecting App Service](media/quickstart-python-portal/portal-search-bar.png)
+
+1. On the **App Services** page, select **+Add**:
+
+ ![Add App Service command](media/quickstart-python-portal/add-app-service.png)
+
+1. On the **Create Web App** page, do the following actions:
+
+ | Field | Action |
+ | | |
+ | Subscription | Select the Azure subscription you want to use. |
+ | Resource Group | Select **Create New** below the drop-down. In the popup, enter "AppService-PythonQuickstart" and select **OK**. |
+ | Name | Enter a name that's unique across all of Azure, typically using a combination of your personal or company names, such as *contoso-testapp-123*. |
+ | Publish | Select **Code**. |
+ | Runtime stack | Select **Python 3.8**. |
+ | Operating System | Select **Linux** (Python is supported only on Linux). |
+ | Region | Select a region near you. |
+ | Linux Plan | Select an existing App Service Plan or use **Create new** to create a new one. We recommend using the **Basic B1** plan. |
+
+ ![Create web app form on the Azure portal](media/quickstart-python-portal/create-web-app.png)
+
+1. At the bottom of the page, select **Review + Create**, review the details, then select **Create**.
+
+1. When provisioning is complete, select **Go to resource** to navigate to the new App Service page. Your web app at this point contains only a default page, so the next step deploys sample code.
+
+Having issues? [Let us know](https://aka.ms/FlaskPortalQuickstartHelp).
+
+## Deploy the sample code
+
+1. On the web app page on the Azure portal, select **Deployment Center**:
+
+ ![Deployment Center command on the App Service menu](media/quickstart-python-portal/deployment-center-command.png)
++
+1. On the **Deployment Center** page, select the **Settings** tab if it's not already open:
+
+ ![Deployment Center settings tab](media/quickstart-python-portal/deployment-center-settings-tab.png)
+
+1. Under **Source**, select **GitHub**, then on the **GitHub** form that appears, do the following actions:
+
+ | Field | Action |
+ | --- | --- |
+ | Signed in as | If you're not signed into GitHub already, sign in now or select **Change Account** if needed. |
+ | Organization | Select your GitHub organization, if needed. |
+ | Repository | Select the name of the sample repository you forked earlier, either **python-docs-hello-world** (Flask) or **python-docs-hello-django** (Django). |
+ | Branch | Select **main**. |
+
+ ![Deployment Center GitHub source configuration](media/quickstart-python-portal/deployment-center-configure-github-source.png)
+
+1. At the top of the page, select **Save** to apply the settings:
+
+ ![Save the GitHub source configuration on Deployment Center](media/quickstart-python-portal/deployment-center-configure-save.png)
+
+1. Select the **Logs** tab to view the status of the deployment. It takes a few minutes to build and deploy the sample and additional logs appear during the process. Upon completion, the logs should reflect a Status of **Success (Active)**:
+
+ ![Deployment Center logs tab](media/quickstart-python-portal/deployment-center-logs.png)
+
+Having issues? [Let us know](https://aka.ms/FlaskPortalQuickstartHelp).
+
+## Browse to the app
+
+1. Once deployment is complete, select **Overview** on the left-hand menu to return to the main page for the web app.
+
+1. Select the **URL** that contains the address of the web app:
+
+ ![Web app URL on the overview page](media/quickstart-python-portal/web-app-url.png)
+
+1. Verify that the output of the app is "Hello, World!":
+
+ ![App running after initial deployment](media/quickstart-python-portal/web-app-first-deploy-results.png)
+
+Having issues? Refer first to the [Troubleshooting guide](configure-language-python.md#troubleshooting); otherwise, [let us know](https://aka.ms/FlaskPortalQuickstartHelp).
+
+## Make a change and redeploy
+
+Because you connected App Service to your repository, changes that you commit to your source repository are automatically deployed to the web app.
+
+1. You can make changes directly in your forked repository on GitHub, or you can clone the repository locally, make and commit changes, and then push those changes to GitHub. Either method results in a change to the repository that's connected to App Service.
+
+1. **In your forked repository**, change the app's message from "Hello, World!" to "Hello, Azure!" as follows (a sketch of the edited Flask file appears after this list):
+ - Flask (python-docs-hello-world sample): Change the text string on line 6 of the *application.py* file.
+ - Django (python-docs-hello-django sample): Change the text string on line 5 of the *views.py* file within the *hello* folder.
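+
+    For reference, here's a minimal sketch of what the edited Flask route in *application.py* could look like after the change; the exact contents of the sample file in your fork may differ slightly:
+
+    ```python
+    from flask import Flask
+
+    app = Flask(__name__)
+
+    @app.route("/")
+    def hello():
+        # Message changed from "Hello, World!" to the new text.
+        return "Hello, Azure!"
+    ```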
+
+1. Commit the change to the repository.
+
+ If you're using a local clone, also push those changes to GitHub.
+
+1. On the Azure portal for the web app, return to the **Deployment Center**, select the **Logs** tab, and note the new deployment activity that should be underway.
+
+1. When the deployment is complete, return to the web app's **Overview** page, open the web app's URL again, and observe the changes in the app:
+
+ ![App running after redeployment](media/quickstart-python-portal/web-app-second-deploy-results.png)
+
+Having issues? Refer first to the [Troubleshooting guide](configure-language-python.md#troubleshooting), otherwise, [let us know](https://aka.ms/FlaskCLIQuickstartHelp).
+
+## Clean up resources
+
+In the preceding steps, you created Azure resources in a resource group named "AppService-PythonQuickstart", which is shown on the web app's **Overview** page. If you keep the web app running, you will incur some ongoing costs (see [App Service pricing](https://azure.microsoft.com/pricing/details/app-service/linux/)).
+
+If you don't expect to need these resources in the future, select the name of the resource group on the web app **Overview** page to navigate to the resource group's overview. There, select **Delete resource group** and follow the prompts.
+
+![Deleting the resource group](media/quickstart-python-portal/delete-resource-group.png)
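+
+If you prefer the command line, the same cleanup can also be done with the Azure CLI. This is a sketch; the resource group name below assumes you used the default "AppService-PythonQuickstart" name from earlier in this quickstart:
+
+```azurecli
+# Deletes the resource group and every resource in it (runs asynchronously; you're prompted to confirm).
+az group delete --name AppService-PythonQuickstart --no-wait
+```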
++
+Having issues? [Let us know](https://aka.ms/FlaskCLIQuickstartHelp).
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Tutorial: Python (Django) web app with PostgreSQL](/azure/developer/python/tutorial-python-postgresql-app-portal)
+
+> [!div class="nextstepaction"]
+> [Configure Python app](configure-language-python.md)
+
+> [!div class="nextstepaction"]
+> [Add user sign-in to a Python web app](../active-directory/develop/quickstart-v2-python-webapp.md)
+
+> [!div class="nextstepaction"]
+> [Tutorial: Run Python app in custom container](tutorial-custom-container.md)
app-service Quickstart Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/quickstart-python.md
adobe-target-content: ./quickstart-python-1
# Quickstart: Create a Python app using Azure App Service on Linux
-In this quickstart, you deploy a Python web app to [App Service on Linux](overview.md#app-service-on-linux), Azure's highly scalable, self-patching web hosting service. You use the local [Azure command-line interface (CLI)](/cli/azure/install-azure-cli) on a Mac, Linux, or Windows computer to deploy a sample with either the Flask or Django frameworks. The web app you configure uses a free App Service tier, so you incur no costs in the course of this article.
-
-> [!TIP]
-> If you prefer using Visual Studio Code instead, follow our **[Visual Studio Code App Service quickstart](/azure/developer/python/tutorial-deploy-app-service-on-linux-01)**.
+In this quickstart, you deploy a Python web app to [App Service on Linux](overview.md#app-service-on-linux), Azure's highly scalable, self-patching web hosting service. You use the local [Azure command-line interface (CLI)](/cli/azure/install-azure-cli) on a Mac, Linux, or Windows computer to deploy a sample with either the Flask or Django frameworks. The web app you configure uses a basic App Service tier that incurs a small cost in your Azure subscription.
## Set up your initial environment
application-gateway Rewrite Http Headers Url https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/rewrite-http-headers-url.md
Title: Rewrite HTTP headers and URL with Azure Application Gateway | Microsoft Docs description: This article provides an overview of rewriting HTTP headers and URL in Azure Application Gateway -+ Previously updated : 07/16/2020- Last updated : 04/05/2021+ # Rewrite HTTP headers and URL with Application Gateway
To learn how to rewrite request and response headers with Application Gateway us
You can rewrite all headers in requests and responses, except for the Connection, and Upgrade headers. You can also use the application gateway to create custom headers and add them to the requests and responses being routed through it.
-### URL path and query string (Preview)
+### URL path and query string
With URL rewrite capability in Application Gateway, you can:
To learn how to rewrite URL with Application Gateway using Azure portal, see [he
![Diagram that describes the process for rewriting a URL with Application Gateway.](./media/rewrite-http-headers-url/url-rewrite-overview.png)
->[!NOTE]
-> URL rewrite feature is in preview and is available only for Standard_v2 and WAF_v2 SKU of Application Gateway. It is not recommended for use in production environment. To learn more about previews, see [terms of use here](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
- ## Rewrite actions You use rewrite actions to specify the URL, request headers or response headers that you want to rewrite and the new value to which you intend to rewrite them to. The value of a URL or a new or existing header can be set to these types of values:
Application gateway supports the following server variables:
| client_port | The client port. | | client_tcp_rtt | Information about the client TCP connection. Available on systems that support the TCP_INFO socket option. | | client_user | When HTTP authentication is used, the user name supplied for authentication. |
-| host | In this order of precedence: the host name from the request line, the host name from the Host request header field, or the server name matching a request. Example: in the request `http://contoso.com:8080/article.aspx?id=123&title=fabrikam`, host value will be is `contoso.com` |
+| host | In this order of precedence: the host name from the request line, the host name from the Host request header field, or the server name matching a request. Example: In the request `http://contoso.com:8080/article.aspx?id=123&title=fabrikam`, the host value will be `contoso.com` |
| cookie_*name* | The *name* cookie. | | http_method | The method used to make the URL request. For example, GET or POST. | | http_status | The session status. For example, 200, 400, or 403. | | http_version | The request protocol. Usually HTTP/1.0, HTTP/1.1, or HTTP/2.0. |
-| query_string | The list of variable/value pairs that follows the "?" in the requested URL. Example: in the request `http://contoso.com:8080/article.aspx?id=123&title=fabrikam`, query_string value will be `id=123&title=fabrikam` |
+| query_string | The list of variable/value pairs that follows the "?" in the requested URL. Example: In the request `http://contoso.com:8080/article.aspx?id=123&title=fabrikam`, query_string value will be `id=123&title=fabrikam` |
| received_bytes | The length of the request (including the request line, header, and request body). | | request_query | The arguments in the request line. | | request_scheme | The request scheme: http or https. |
Application gateway supports the following server variables:
| server_port | The port of the server that accepted a request. | | ssl_connection_protocol | The protocol of an established TLS connection. | | ssl_enabled | "On" if the connection operates in TLS mode. Otherwise, an empty string. |
-| uri_path | Identifies the specific resource in the host that the web client wants to access. This is the part of the request URI without the arguments. Example: in the request `http://contoso.com:8080/article.aspx?id=123&title=fabrikam`, uri_path value will be `/article.aspx` |
+| uri_path | Identifies the specific resource in the host that the web client wants to access. This is the part of the request URI without the arguments. Example: In the request `http://contoso.com:8080/article.aspx?id=123&title=fabrikam`, uri_path value will be `/article.aspx` |
### Mutual authentication server variables (Preview)
A rewrite rule set contains:
* **URL Query String**: The value to which the query string is to be rewritten. * **Re-evaluate path map**: Used to determine whether the URL path map is to be re-evaluated or not. If kept unchecked, the original URL path will be used to match the path-pattern in the URL path map. If set to true, the URL path map will be re-evaluated to check the match with the rewritten path. Enabling this switch helps in routing the request to a different backend pool post rewrite.
+### Using URL rewrite or Host header rewrite with Web Application Firewall (WAF_v2 SKU)
+
+When you configure URL rewrite or host header rewrite, the WAF evaluation happens after the modification to the request header or URL parameters (post-rewrite). When you remove the URL rewrite or host header rewrite configuration from your Application Gateway, the WAF evaluation happens before the header rewrite (pre-rewrite). This order ensures that WAF rules are applied to the final request that would be received by your backend pool.
+
+For example, say you have the following rewrite rule for the header `"Accept" : "text/html"`: if the value of the `"Accept"` header equals `"text/html"`, rewrite the value to `"image/png"`.
+
+Here, with only the header rewrite configured, the WAF evaluation is done on `"Accept" : "text/html"`. But when you configure URL rewrite or host header rewrite, the WAF evaluation is done on `"Accept" : "image/png"`.
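+
+As a rough illustration only, a rewrite rule like the one described above could be configured with the Azure CLI along the following lines. The gateway, resource group, and rule set names are placeholders, and the exact parameters should be verified against the current `az network application-gateway rewrite-rule` reference:
+
+```azurecli
+# Create a rewrite rule set on an existing v2 Application Gateway (names are illustrative).
+az network application-gateway rewrite-rule set create --gateway-name MyAppGateway --resource-group MyResourceGroup --name MyRewriteRuleSet
+
+# Add a rule that sets the Accept request header to image/png.
+az network application-gateway rewrite-rule create --gateway-name MyAppGateway --resource-group MyResourceGroup --rule-set-name MyRewriteRuleSet --name RewriteAcceptHeader --sequence 100 --request-headers Accept=image/png
+
+# Add a condition so the rule applies only when the incoming Accept header equals text/html.
+az network application-gateway rewrite-rule condition create --gateway-name MyAppGateway --resource-group MyResourceGroup --rule-set-name MyRewriteRuleSet --rule-name RewriteAcceptHeader --variable http_req_Accept --pattern "text/html"
+```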
+
+>[!NOTE]
+> URL rewrite operations are expected to cause a minor increase in the CPU utilization of your WAF Application Gateway. It is recommended that you monitor the [CPU utilization metric](high-traffic-support.md) for a brief period of time after enabling the URL rewrite rules on your WAF Application Gateway.
+ ### Common scenarios for header rewrite #### Remove port information from the X-Forwarded-For header
application-gateway Rewrite Url Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/rewrite-url-portal.md
Title: Rewrite URL and query string with Azure Application Gateway - Azure portal description: Learn how to use the Azure portal to configure an Azure Application Gateway to rewrite URL and query string -+ Previously updated : 7/16/2020- Last updated : 4/05/2021+
-# Rewrite URL with Azure Application Gateway - Azure portal (Preview)
+# Rewrite URL with Azure Application Gateway - Azure portal
This article describes how to use the Azure portal to configure an [Application Gateway v2 SKU](application-gateway-autoscaling-zone-redundant.md) instance to rewrite URL. >[!NOTE]
-> URL rewrite feature is in preview and is available only for Standard_v2 and WAF_v2 SKU of Application Gateway. It is not recommended for use in production environment. To learn more about previews, see [terms of use here](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+> URL rewrite feature is available only for Standard_v2 and WAF_v2 SKU of Application Gateway. When URL rewrite is configured on a WAF enabled gateway, WAF evaluation will take place on the rewritten request headers and URL. [Learn more](rewrite-http-headers-url.md#using-url-rewrite-or-host-header-rewrite-with-web-application-firewall-waf_v2-sku).
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
attestation Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/attestation/faq.md
Azure PCK caching service:
## Is SGX attestation supported by Azure Attestation in non-Azure environments
-Azure Attestation depends on the security baseline stated by Azure PCK caching service to validate the TEEs. Azure PCK caching service is currently designed to support only Azure Confidential computing nodes.
+No. Azure Attestation depends on the security baseline stated by Azure PCK caching service to validate the TEEs. Azure PCK caching service is currently designed to support only Azure Confidential computing nodes.
## What validations does Azure Attestation perform for attesting SGX enclaves
attestation Policy Examples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/attestation/policy-examples.md
Issuance rules section is not mandatory. This section can be used by the users t
``` version= 1.0;
-authorizationrules
-{
- c:[type=="$is-debuggable"] => permit();
+authorizationrules {
+ => permit();
};-
-issuancerules
-{
- c:[type=="$is-debuggable"] => issue(type="is-debuggable", value=c.value);
- c:[type=="$sgx-mrsigner"] => issue(type="sgx-mrsigner", value=c.value);
- c:[type=="$sgx-mrenclave"] => issue(type="sgx-mrenclave", value=c.value);
- c:[type=="$product-id"] => issue(type="product-id", value=c.value);
- c:[type=="$svn"] => issue(type="svn", value=c.value);
- c:[type=="$tee"] => issue(type="tee", value=c.value);
+issuancerules {
+ c:[type=="x-ms-sgx-is-debuggable"] => issue(type="is-debuggable", value=c.value);
+ c:[type=="x-ms-sgx-mrsigner"] => issue(type="sgx-mrsigner", value=c.value);
+ c:[type=="x-ms-sgx-mrenclave"] => issue(type="sgx-mrenclave", value=c.value);
+ c:[type=="x-ms-sgx-product-id"] => issue(type="product-id", value=c.value);
+ c:[type=="x-ms-sgx-svn"] => issue(type="svn", value=c.value);
+ c:[type=="x-ms-attestation-type"] => issue(type="tee", value=c.value);
}; ```
eyJhbGciOiJSU0EyNTYiLCJ4NWMiOlsiTUlJQzFqQ0NBYjZnQXdJQkFnSUlTUUdEOUVGakJcdTAwMkJZ
## Next steps - [How to author and sign an attestation policy](author-sign-policy.md)-- [Set up Azure Attestation using PowerShell](quickstart-powershell.md)
+- [Set up Azure Attestation using PowerShell](quickstart-powershell.md)
availability-zones Az Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/availability-zones/az-overview.md
If a service offering is not available in a specific region, you can share your
As mentioned previously, Azure classifies services into three categories: foundational, mainstream, and specialized. Service categories are assigned at general availability. Often, services start their lifecycle as a specialized service and as demand and utilization increases may be promoted to mainstream or foundational. The following table lists the category for services as foundational, mainstream. You should note the following about the table: - Some services are non-regional. For information and a list of non-regional services, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/).-- Older generation of services or virtual machines are not listed. For more information, see documentation at [Previous generations of virtual machine sizes](../virtual-machines/sizes-previous-gen.md)-- .Services are not assigned a category until General Availability (GA). For information, and a list of preview services, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/).
+- Older generation of services or virtual machines are not listed. For more information, see documentation at [Previous generations of virtual machine sizes](../virtual-machines/sizes-previous-gen.md).
+- Services are not assigned a category until General Availability (GA). For information, and a list of preview services, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/).
> [!div class="mx-tableFixed"] > | Foundational | Mainstream |
As mentioned previously, Azure classifies services into three categories: founda
### Specialized Services
-As mentioned previously, Azure classifies services into three categories: foundational, mainstream, and specialized. Service categories are assigned at general availability. Often, services start their lifecycle as a specialized service and as demand and utilization increases may be promoted to mainstream or foundational. The following table lists specialized services.
+As mentioned previously, Azure classifies services into three categories: foundational, mainstream, and specialized. Service categories are assigned at general availability. Often, services start their lifecycle as a specialized service and as demand and utilization increases may be promoted to mainstream or foundational. The following table lists specialized services.
> [!div class="mx-tableFixed"] > | Specialized |
azure-arc Agent Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/agent-upgrade.md
az connectedk8s update --name AzureArcTest1 --resource-group AzureArcTest --auto
If you have disabled auto-upgrade for agents, you can manually initiate upgrades for these agents using the `az connectedk8s upgrade` command as shown below: ```console
-az connectedk8s upgrade -g AzureArcTest1 -n AzureArcTest --agent-version 1.0.1
+az connectedk8s upgrade -g AzureArcTest1 -n AzureArcTest --agent-version 1.1.0
``` Azure Arc enabled Kubernetes follows the standard [semantic versioning scheme](https://semver.org/) of `MAJOR.MINOR.PATCH` for versioning its agents.
azure-arc Azure Rbac https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/azure-rbac.md
+
+ Title: "Azure RBAC for Azure Arc enabled Kubernetes clusters"
++ Last updated : 04/05/2021+++
+description: "Use Azure RBAC for authorization checks on Azure Arc enabled Kubernetes clusters"
++
+# Azure RBAC for Azure Arc enabled Kubernetes clusters
+
+Kubernetes [ClusterRoleBinding and RoleBinding](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding) object types help to define authorization in Kubernetes natively. With Azure RBAC, you can use Azure Active Directory and role assignments in Azure to control authorization checks on the cluster. This means you can now use Azure role assignments to granularly control who can read, write, and delete Kubernetes objects such as Deployment, Pod, and Service.
+
+A conceptual overview of this feature is available in the [Azure RBAC - Azure Arc enabled Kubernetes](conceptual-azure-rbac.md) article.
++
+## Prerequisites
+
+- [Install or upgrade Azure CLI](https://docs.microsoft.com/cli/azure/install-azure-cli) to version >= 2.16.0
+
+- Install the `connectedk8s` Azure CLI extension of version >= 1.1.0:
+
+ ```azurecli
+ az extension add --name connectedk8s
+ ```
+
+ If the `connectedk8s` extension is already installed, you can update it to the latest version using the following command:
+
+ ```azurecli
+ az extension update --name connectedk8s
+ ```
+
+- An existing Azure Arc enabled Kubernetes connected cluster.
+ - If you haven't connected a cluster yet, use our [quickstart](quickstart-connect-cluster.md).
+ - [Upgrade your agents](agent-upgrade.md#manually-upgrade-agents) to version >= 1.1.0.
+
+> [!NOTE]
+> This feature can't be set up for managed Kubernetes offerings of cloud providers like Elastic Kubernetes Service or Google Kubernetes Engine where the user doesn't have access to `apiserver` of the cluster. For Azure Kubernetes Service (AKS) clusters, this [feature is available natively](../../aks/manage-azure-rbac.md) and doesn't require the AKS cluster to be connected to Azure Arc.
+
+## Set up Azure AD applications
+
+### Create server application
+
+1. Create a new Azure AD application and get its `appId` value, which is used in later steps as `serverApplicationId`:
+
+ ```azurecli
+ az ad app create --display-name "<clusterName>Server" --identifier-uris "https://<clusterName>Server" --query appId -o tsv
+ ```
+
+1. Update the application group membership claims:
+
+ ```azurecli
+ az ad app update --id <serverApplicationId> --set groupMembershipClaims=All
+ ```
+
+1. Create a service principal and get its `password` field value, which is required later as `serverApplicationSecret` when enabling this feature on the cluster:
+
+ ```azurecli
+ az ad sp create --id <serverApplicationId>
+ az ad sp credential reset --name <serverApplicationId> --credential-description "ArcSecret" --query password -o tsv
+ ```
+
+1. Grant the application API permissions:
+
+ ```azurecli
+ az ad app permission add --id <serverApplicationId> --api 00000003-0000-0000-c000-000000000000 --api-permissions e1fe6dd8-ba31-4d61-89e7-88639da4683d=Scope
+ az ad app permission grant --id <serverApplicationId> --api 00000003-0000-0000-c000-000000000000
+ ```
+
+ > [!NOTE]
+ > * This step has to be executed by an Azure tenant administrator.
+ > * For usage of this feature in production, it is recommended to create a different server application for every cluster.
+
+### Create client application
+
+1. Create a new Azure AD application and get its `appId` value, which is used in later steps as `clientApplicationId`:
+
+ ```azurecli
+ az ad app create --display-name "<clusterName>Client" --native-app --reply-urls "https://<clusterName>Client" --query appId -o tsv
+ ```
+
+2. Create a service principal for this client application:
+
+ ```azurecli
+ az ad sp create --id <clientApplicationId>
+ ```
+
+3. Get the `oAuthPermissionId` for the server application:
+
+ ```azurecli
+ az ad app show --id <serverApplicationId> --query "oauth2Permissions[0].id" -o tsv
+ ```
+
+4. Grant the required permissions for the client application:
+
+ ```azurecli
+ az ad app permission add --id <clientApplicationId> --api <serverApplicationId> --api-permissions <oAuthPermissionId>=Scope
+ az ad app permission grant --id <clientApplicationId> --api <serverApplicationId>
+ ```
+
+## Create a role assignment for the server application
+
+The server application needs the `Microsoft.Authorization/*/read` permissions to check if the user making the request is authorized on the Kubernetes objects that are a part of the request.
+
+1. Create a file named accessCheck.json with the following contents:
+
+ ```json
+ {
+ "Name": "Read authorization",
+ "IsCustom": true,
+ "Description": "Read authorization",
+ "Actions": ["Microsoft.Authorization/*/read"],
+ "NotActions": [],
+ "DataActions": [],
+ "NotDataActions": [],
+ "AssignableScopes": [
+ "/subscriptions/<subscription-id>"
+ ]
+ }
+ ```
+
+ Replace the `<subscription-id>` with the actual subscription ID.
+
+2. Execute the following command to create the new custom role:
+
+ ```azurecli
+ az role definition create --role-definition ./accessCheck.json
+ ```
+
+3. From the output of the above command, store the value of the `id` field, which is used in later steps as `roleId`.
+
+4. Create a role assignment with the server application as the assignee, using the role created above:
+
+ ```azurecli
+ az role assignment create --role <roleId> --assignee <serverApplicationId> --scope /subscriptions/<subscription-id>
+ ```
+
+## Enable Azure RBAC on cluster
+
+1. Enable Azure RBAC on your Arc enabled Kubernetes cluster by running the following command:
+
+ ```console
+ az connectedk8s enable-features -n <clusterName> -g <resourceGroupName> --features azure-rbac --app-id <serverApplicationId> --app-secret <serverApplicationSecret>
+ ```
+
+ > [!NOTE]
+ > 1. Before running the above command, ensure that the `kubeconfig` file on the machine is pointing to the cluster on which to enable the Azure RBAC feature.
+ > 2. Use `--skip-azure-rbac-list` with the above command for a comma-separated list of usernames/email/oid undergoing authorization checks using Kubernetes native ClusterRoleBinding and RoleBinding objects instead of Azure RBAC.
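+
+    For example, the following sketch enables the feature while exempting two identities from Azure RBAC checks; the usernames passed to `--skip-azure-rbac-list` are illustrative placeholders:
+
+    ```console
+    az connectedk8s enable-features -n <clusterName> -g <resourceGroupName> --features azure-rbac --app-id <serverApplicationId> --app-secret <serverApplicationSecret> --skip-azure-rbac-list "user1@mytenant.onmicrosoft.com,user2@mytenant.onmicrosoft.com"
+    ```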
+
+### For a generic cluster where no reconciler is running on the apiserver specification
+
+1. SSH into every master node of the cluster and execute the following steps:
+
+ 1. Open `apiserver` manifest in edit mode:
+
+ ```console
+ sudo vi /etc/kubernetes/manifests/kube-apiserver.yaml
+ ```
+
+ 1. Add the following specification under `volumes`:
+
+ ```yml
+ - name: azure-rbac
+ secret:
+ secretName: azure-arc-guard-manifests
+ ```
+
+ 1. Add the following specification under `volumeMounts`:
+
+ ```yml
+ - mountPath: /etc/guard
+ name: azure-rbac
+ readOnly: true
+ ```
+
+ 1. Add the following `apiserver` arguments:
+
+ ```yml
+ - --authentication-token-webhook-config-file=/etc/guard/guard-authn-webhook.yaml
+ - --authentication-token-webhook-cache-ttl=5m0s
+ - --authorization-webhook-cache-authorized-ttl=5m0s
+ - --authorization-webhook-config-file=/etc/guard/guard-authz-webhook.yaml
+ - --authorization-webhook-version=v1
+ - --authorization-mode=Node,Webhook,RBAC
+ ```
+
+ If the Kubernetes cluster is of version >= 1.19.0, then the following `apiserver argument` needs to be set as well:
+
+ ```yml
+ - --authentication-token-webhook-version=v1
+ ```
+
+ 1. Save and exit the editor to update the `apiserver` pod.
++
+### For a cluster created using Cluster API
+
+1. Copy the guard secret containing the authentication and authorization webhook config files from the workload cluster onto your machine:
+
+ ```console
+ kubectl get secret azure-arc-guard-manifests -n kube-system -o yaml > azure-arc-guard-manifests.yaml
+ ```
+
+1. Change the `namespace` field in the `azure-arc-guard-manifests.yaml` file to the namespace within the management cluster where you are applying the custom resources for creation of workload clusters.
+
+1. Apply this manifest:
+
+ ```console
+ kubectl apply -f azure-arc-guard-manifests.yaml
+ ```
+
+1. Edit the `KubeadmControlPlane` object by executing `kubectl edit kcp <clustername>-control-plane`:
+
+ 1. Add the following snippet under `files:`:
+
+ ```console
+ - contentFrom:
+ secret:
+ key: guard-authn-webhook.yaml
+ name: azure-arc-guard-manifests
+ owner: root:root
+ path: /etc/kubernetes/guard-authn-webhook.yaml
+ permissions: "0644"
+ - contentFrom:
+ secret:
+ key: guard-authz-webhook.yaml
+ name: azure-arc-guard-manifests
+ owner: root:root
+ path: /etc/kubernetes/guard-authz-webhook.yaml
+ permissions: "0644"
+ ```
+
+ 1. Add the following snippet under `apiServer:` -> `extraVolumes:`:
+
+ ```console
+ - hostPath: /etc/kubernetes/guard-authn-webhook.yaml
+ mountPath: /etc/guard/guard-authn-webhook.yaml
+ name: guard-authn
+ readOnly: true
+ - hostPath: /etc/kubernetes/guard-authz-webhook.yaml
+ mountPath: /etc/guard/guard-authz-webhook.yaml
+ name: guard-authz
+ readOnly: true
+ ```
+
+ 1. Add the following snippet under `apiServer:` -> `extraArgs:`:
+
+ ```console
+ authentication-token-webhook-cache-ttl: 5m0s
+ authentication-token-webhook-config-file: /etc/guard/guard-authn-webhook.yaml
+ authentication-token-webhook-version: v1
+ authorization-mode: Node,Webhook,RBAC
+ authorization-webhook-cache-authorized-ttl: 5m0s
+ authorization-webhook-config-file: /etc/guard/guard-authz-webhook.yaml
+ authorization-webhook-version: v1
+ ```
+
+    1. Save and exit to update the `KubeadmControlPlane` object. Wait for these changes to be realized on the workload cluster.
++
+## Create role assignments for users to access the cluster
+
+Owners of the Azure Arc enabled Kubernetes resource can either use built-in roles or custom roles to grant other users access to the Kubernetes cluster.
+
+### Built-in roles
+
+| Role | Description |
+|||
+| Azure Arc Kubernetes Viewer | Allows read-only access to see most objects in a namespace. This role doesn't allow viewing secrets. This is because `read` permission on secrets would enable access to `ServiceAccount` credentials in the namespace, which would in turn allow API access using that `ServiceAccount` (a form of privilege escalation). |
+| Azure Arc Kubernetes Writer | Allows read/write access to most objects in a namespace. This role doesn't allow viewing or modifying roles or role bindings. However, this role allows accessing secrets and running pods as any `ServiceAccount` in the namespace, so it can be used to gain the API access levels of any `ServiceAccount` in the namespace. |
+| Azure Arc Kubernetes Admin | Allows admin access. Intended to be granted within a namespace using a RoleBinding. If used in a RoleBinding, allows read/write access to most resources in a namespace, including the ability to create roles and role bindings within the namespace. This role doesn't allow write access to resource quota or to the namespace itself. |
+| Azure Arc Kubernetes Cluster Admin | Allows super-user access to execute any action on any resource. When used in a ClusterRoleBinding, it gives full control over every resource in the cluster and in all namespaces. When used in a RoleBinding, it gives full control over every resource in the role binding's namespace, including the namespace itself.|
+
+You can create role assignments scoped to the Arc enabled Kubernetes cluster on the **Access Control (IAM)** blade of the cluster resource in the Azure portal. You can also use Azure CLI commands, as shown below:
+
+```azurecli
+az role assignment create --role "Azure Arc Kubernetes Cluster Admin" --assignee <AZURE-AD-ENTITY-ID> --scope $ARM_ID
+```
+
+where `AZURE-AD-ENTITY-ID` could be a username (for example, testuser@mytenant.onmicrosoft.com) or even the `appId` of a service principal.
+
+Here's another example of creating a role assignment scoped to a specific namespace within the cluster:
+
+```azurecli
+az role assignment create --role "Azure Arc Kubernetes Viewer" --assignee <AZURE-AD-ENTITY-ID> --scope $ARM_ID/namespaces/<namespace-name>
+```
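+
+In the commands above, `$ARM_ID` is the Azure Resource Manager identifier of the connected cluster. As a sketch, it can be populated as follows before creating the role assignments:
+
+```azurecli
+# Look up the Azure Resource Manager ID of the connected cluster and store it for the role assignment commands above.
+ARM_ID=$(az connectedk8s show -n <clusterName> -g <resourceGroupName> --query id -o tsv)
+```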
+
+> [!NOTE]
+> While role assignments scoped to the cluster can be created using either the Azure portal or CLI, role assignments scoped to namespaces can only be created using the CLI.
+
+### Custom roles
+
+You may choose to create your own role definition for usage in role assignments.
+
+Walk through the below example of a role definition that allows a user to only read deployments. For more information, see [the full list of data actions you can use to construct a role definition](../../role-based-access-control/resource-provider-operations.md#microsoftkubernetes).
+
+Copy the below JSON object into a file called custom-role.json. Replace the `<subscription-id>` placeholder with the actual subscription ID. The below custom role uses one of the data actions and lets you view all deployments in the scope (cluster/namespace) where the role assignment is created.
+
+```json
+{
+ "Name": "Arc Deployment Viewer",
+ "Description": "Lets you view all deployments in cluster/namespace.",
+ "Actions": [],
+ "NotActions": [],
+ "DataActions": [
+ "Microsoft.Kubernetes/connectedClusters/apps/deployments/read"
+ ],
+ "NotDataActions": [],
+ "assignableScopes": [
+ "/subscriptions/<subscription-id>"
+ ]
+}
+```
+
+1. Create the role definition by running the below command from the folder where you saved `custom-role.json`:
+
+ ```bash
+ az role definition create --role-definition @custom-role.json
+ ```
+
+1. Create a role assignment using this custom role definition:
+
+ ```bash
+ az role assignment create --role "Arc Deployment Viewer" --assignee <AZURE-AD-ENTITY-ID> --scope $ARM_ID/namespaces/<namespace-name>
+ ```
+
+## Configure kubectl with user credentials
+
+There are two ways to obtain the `kubeconfig` file needed to access the cluster:
+1. Use [Cluster Connect](cluster-connect.md) feature (`az connectedk8s proxy`) of the Azure Arc enabled Kubernetes cluster.
+1. Cluster admin shares `kubeconfig` file with every other user.
+
+### If you are accessing the cluster using the Cluster Connect feature
+
+Execute the following command to start the proxy process:
+
+```console
+az connectedk8s proxy -n <clusterName> -g <resourceGroupName>
+```
+
+After the proxy process is running, you can open another tab in your console to [start sending your requests to cluster](#sending-requests-to-cluster).
+
+### If the cluster admin shared the `kubeconfig` file with you
+
+1. Execute the following command to set credentials for the user:
+
+ ```console
+ kubectl config set-credentials <testuser>@<mytenant.onmicrosoft.com> \
+ --auth-provider=azure \
+ --auth-provider-arg=environment=AzurePublicCloud \
+ --auth-provider-arg=client-id=<clientApplicationId> \
+ --auth-provider-arg=tenant-id=<tenantId> \
+ --auth-provider-arg=apiserver-id=<serverApplicationId>
+ ```
+
+1. Open the `kubeconfig` file you created earlier. Under `contexts`, verify that the context associated with the cluster points to the user credentials created in the previous step.
+
+1. Add the **config-mode** setting under the user config:
+
+ ```console
+ name: testuser@mytenant.onmicrosoft.com
+ user:
+ auth-provider:
+ config:
+ apiserver-id: $SERVER_APP_ID
+ client-id: $CLIENT_APP_ID
+ environment: AzurePublicCloud
+ tenant-id: $TENANT_ID
+ config-mode: "1"
+ name: azure
+ ```
+
+## Sending requests to cluster
+
+1. Run any `kubectl` command. For example:
+ * `kubectl get nodes`
+ * `kubectl get pods`
+
+1. When prompted for browser-based authentication, copy the device login URL `https://microsoft.com/devicelogin` and open it in your web browser.
+
+1. Copy the code printed on your console and paste it into the device authentication input prompt.
+
+1. Enter the username (testuser@mytenant.onmicrosoft.com) and associated password.
+
+1. If you see an error message like this, it means you are unauthorized to access the requested resource:
+
+ ```console
+ Error from server (Forbidden): nodes is forbidden: User "testuser@mytenant.onmicrosoft.com" cannot list resource "nodes" in API group "" at the cluster scope: User doesn't have access to the resource in Azure. Update role assignment to allow access.
+ ```
+
+ An administrator needs to create a new role assignment authorizing this user to have access on the resource.
++
+## Next steps
+
+> [!div class="nextstepaction"]
+> Securely connect to the cluster using [Cluster Connect](cluster-connect.md)
azure-arc Cluster Connect https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/cluster-connect.md
+
+ Title: "Use Cluster Connect to connect to Azure Arc enabled Kubernetes clusters"
++ Last updated : 04/05/2021+++
+description: "Use Cluster Connect to securely connect to Azure Arc enabled Kubernetes clusters"
++
+# Use Cluster Connect to connect to Azure Arc enabled Kubernetes clusters
+
+With Cluster Connect, you can securely connect to Azure Arc enabled Kubernetes clusters without requiring any inbound port to be enabled on the firewall. Access to the `apiserver` of the Arc enabled Kubernetes cluster enables the following scenarios:
+* Enable interactive debugging and troubleshooting.
+* Provide cluster access to Azure services for [custom locations](custom-locations.md) and other resources created on top of it.
+
+A conceptual overview of this feature is available in the [Cluster connect - Azure Arc enabled Kubernetes](conceptual-cluster-connect.md) article.
++
+## Prerequisites
+
+- [Install or upgrade Azure CLI](https://docs.microsoft.com/cli/azure/install-azure-cli) to version >= 2.16.0
+
+- Install the `connectedk8s` Azure CLI extension of version >= 1.1.0:
+
+ ```azurecli
+ az extension add --name connectedk8s
+ ```
+
+ If you've already installed the `connectedk8s` extension, update the extension to the latest version:
+
+ ```azurecli
+ az extension update --name connectedk8s
+ ```
+
+- An existing Azure Arc enabled Kubernetes connected cluster.
+ - If you haven't connected a cluster yet, use our [quickstart](quickstart-connect-cluster.md).
+ - [Upgrade your agents](agent-upgrade.md#manually-upgrade-agents) to version >= 1.1.0.
+
+- Enable Cluster Connect on any Azure Arc enabled Kubernetes cluster by running the following command on a machine where the `kubeconfig` file is pointing to the cluster of concern:
+
+ ```azurecli
+ az connectedk8s enable-features --features cluster-connect -n <clusterName> -g <resourceGroupName>
+ ```
+
+- Enable the below endpoints for outbound access in addition to the ones mentioned under [connecting a Kubernetes cluster to Azure Arc](quickstart-connect-cluster.md#meet-network-requirements):
+
+ | Endpoint | Port |
+ |-|-|
+ |`*.servicebus.windows.net` | 443 |
+ |`*.guestnotificationservice.azure.com` | 443 |
+
+## Usage
+
+Two authentication options are supported with the Cluster Connect feature:
+* Azure Active Directory (Azure AD)
+* Service account token
+
+### Option 1: Azure Active Directory
+
+1. With the `kubeconfig` file pointing to the `apiserver` of your Kubernetes cluster, create a ClusterRoleBinding or RoleBinding to the Azure AD entity (service principal or user) requiring access:
+
+ **For user:**
+
+ ```console
+ kubectl create clusterrolebinding admin-user-binding --clusterrole cluster-admin --user=<testuser>@<mytenant.onmicrosoft.com>
+ ```
+
+ **For Azure AD application:**
+
+ 1. Get the `objectId` associated with your Azure AD application:
+
+ ```azurecli
+ az ad sp show --id <id> --query objectId -o tsv
+ ```
+
+ 1. Create a ClusterRoleBinding or RoleBinding to the Azure AD entity (service principal or user) that needs to access this cluster:
+
+ ```console
+ kubectl create clusterrolebinding admin-user-binding --clusterrole cluster-admin --user=<objectId>
+ ```
+
+1. After signing in to Azure CLI using the Azure AD entity of interest, get the Cluster Connect `kubeconfig` needed to communicate with the cluster from anywhere (even from outside the firewall surrounding the cluster):
+
+ ```azurecli
+ az connectedk8s proxy -n <cluster-name> -g <resource-group-name>
+ ```
+
+1. Use `kubectl` to send requests to the cluster:
+
+ ```console
+ kubectl get pods
+ ```
+
+ You should now see a response from the cluster containing the list of all pods under the `default` namespace.
+
+### Option 2: Service Account Bearer Token
+
+1. With the `kubeconfig` file pointing to the `apiserver` of your Kubernetes cluster, create a service account in any namespace (the following command creates it in the `default` namespace):
+
+ ```console
+ kubectl create serviceaccount admin-user
+ ```
+
+1. Create a ClusterRoleBinding or RoleBinding to grant this [service account the appropriate permissions on the cluster](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#kubectl-create-rolebinding):
+
+ ```console
+ kubectl create clusterrolebinding admin-user-binding --clusterrole cluster-admin --serviceaccount default:admin-user
+ ```
+
+1. Get the service account's token using the following commands:
+
+ ```console
+ SECRET_NAME=$(kubectl get serviceaccount admin-user -o jsonpath='{$.secrets[0].name}')
+ ```
+
+ ```console
+ TOKEN=$(kubectl get secret ${SECRET_NAME} -o jsonpath='{$.data.token}' | base64 -d | sed $'s/$/\\\n/g')
+ ```
+
+1. Get the Cluster Connect `kubeconfig` needed to communicate with the cluster from anywhere (even from outside the firewall surrounding the cluster):
+
+ ```azurecli
+ az connectedk8s proxy -n <cluster-name> -g <resource-group-name> --token $TOKEN
+ ```
+
+1. Use `kubectl` to send requests to the cluster:
+
+ ```console
+ kubectl get pods
+ ```
+
+ You should now see a response from the cluster containing the list of all pods under the `default` namespace.
+
+## Known limitations
+
+When making requests to the Kubernetes cluster, if the Azure AD entity used is a part of more than 200 groups, you'll see the following error. This is a known limitation:
+
+```console
+You must be logged in to the server (Error:Error while retrieving group info. Error:Overage claim (users with more than 200 group membership) is currently not supported.
+```
+
+To get past this error:
+1. Create a [service principal](https://docs.microsoft.com/cli/azure/create-an-azure-service-principal-azure-cli), which is less likely to be a member of more than 200 groups.
+1. [Sign in](https://docs.microsoft.com/cli/azure/create-an-azure-service-principal-azure-cli#sign-in-using-a-service-principal) to Azure CLI with the service principal before running `az connectedk8s proxy` command.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> Set up [Azure AD RBAC](azure-rbac.md) on your clusters
azure-arc Conceptual Agent Architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/conceptual-agent-architecture.md
Most on-prem datacenters enforce strict network rules that prevent inbound commu
| `deployment.apps/cluster-metadata-operator` | Gathers cluster metadata, including cluster version, node count, and Azure Arc agent version. | | `deployment.apps/resource-sync-agent` | Syncs the above-mentioned cluster metadata to Azure. | | `deployment.apps/flux-logs-agent` | Collects logs from the flux operators deployed as a part of source control configuration. |
-
+ | `deployment.apps/extension-manager` | Installs and manages lifecycle of extension helm charts |
+ | `deployment.apps/clusterconnect-agent` | Reverse proxy agent that enables cluster connect feature to provide access to `apiserver` of cluster. This is an optional component deployed only if `cluster-connect` feature is enabled on the cluster |
+ | `deployment.apps/guard` | Authentication and authorization webhook server used for AAD RBAC feature. This is an optional component deployed only if `azure-rbac` feature is enabled on the cluster |
+ 1. Once all the Azure Arc enabled Kubernetes agent pods are in `Running` state, verify that your cluster connected to Azure Arc. You should see: * An Azure Arc enabled Kubernetes resource in [Azure Resource Manager](../../azure-resource-manager/management/overview.md). Azure tracks this resource as a projection of the customer-managed Kubernetes cluster, not the actual Kubernetes cluster itself. * Cluster metadata (like Kubernetes version, agent version, and number of nodes) appears on the Azure Arc enabled Kubernetes resource as metadata.
azure-arc Conceptual Azure Rbac https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/conceptual-azure-rbac.md
+
+ Title: "Azure RBAC - Azure Arc enabled Kubernetes"
++ Last updated : 04/05/2021+++
+description: "This article provides a conceptual overview of Azure RBAC capability on Azure Arc enabled Kubernetes"
++
+# Azure RBAC on Azure Arc enabled Kubernetes
+
+Kubernetes [ClusterRoleBinding and RoleBinding](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding) object types help to define authorization in Kubernetes natively. With Azure RBAC, you can use Azure Active Directory (Azure AD) and role assignments in Azure to control authorization checks on the cluster.
+
+With this feature, all the benefits of Azure role assignments, such as activity logs showing all Azure RBAC changes to an Azure resource, now become applicable for your Azure Arc enabled Kubernetes cluster.
+
+## Architecture - Azure RBAC on Azure Arc enabled Kubernetes
+
+[ ![Azure RBAC architecture](./media/conceptual-azure-rbac.png) ](./media/conceptual-azure-rbac.png#lightbox)
+
+In order to route all authorization access checks to the authorization service in Azure, a webhook server ([guard](https://github.com/appscode/guard)) is deployed on the cluster.
+
+The `apiserver` of the cluster is configured to use [webhook token authentication](https://kubernetes.io/docs/reference/access-authn-authz/authentication/#webhook-token-authentication) and [webhook authorization](https://kubernetes.io/docs/reference/access-authn-authz/webhook/) so that `TokenAccessReview` and `SubjectAccessReview` requests are routed to the guard webhook server. The `TokenAccessReview` and `SubjectAccessReview` requests are triggered by requests for Kubernetes resources sent to the `apiserver`.
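+
+For context, a `SubjectAccessReview` that the `apiserver` sends to the webhook has roughly the following shape; the user and resource values below are illustrative and not taken from this article:
+
+```yml
+apiVersion: authorization.k8s.io/v1
+kind: SubjectAccessReview
+spec:
+  # The Azure AD identity whose access is being checked.
+  user: "user@mytenant.onmicrosoft.com"
+  resourceAttributes:
+    namespace: default
+    verb: list
+    group: apps
+    resource: deployments
+```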
+
+Guard then makes a `checkAccess` call on the authorization service in Azure to see if the requesting Azure AD entity has access to the resource of concern.
+
+If a role assignment that permits this access exists, then an `allowed` response is sent from the authorization service to guard. Guard, in turn, sends an `allowed` response to the `apiserver`, enabling the calling entity to access the requested Kubernetes resource.
++
+If a role assignment permitting this access doesn't exist, then a `denied` response is sent from the authorization service to guard. Guard sends a `denied` response to the `apiserver`, giving the calling entity a 403 forbidden error on the requested resource.
+
+## Next steps
+
+* Use our quickstart to [connect a Kubernetes cluster to Azure Arc](./quickstart-connect-cluster.md).
+* [Set up Azure RBAC](./azure-rbac.md) on your Azure Arc enabled Kubernetes cluster.
azure-arc Conceptual Cluster Connect https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/conceptual-cluster-connect.md
+
+ Title: "Cluster Connect - Azure Arc enabled Kubernetes"
++ Last updated : 04/05/2021+++
+description: "This article provides a conceptual overview of Cluster Connect capability of Azure Arc enabled Kubernetes"
++
+# Cluster connect on Azure Arc enabled Kubernetes
+
+The Azure Arc enabled Kubernetes *cluster connect* feature provides connectivity to the `apiserver` of the cluster without requiring any inbound port to be enabled on the firewall. A reverse proxy agent running on the cluster can securely start a session with the Azure Arc service in an outbound manner.
+
+Cluster connect allows developers to access their clusters from anywhere for interactive development and debugging. It also lets cluster users and administrators access or manage their clusters from anywhere. You can even use hosted agents/runners of Azure Pipelines, GitHub Actions, or any other hosted CI/CD service to deploy applications to on-prem clusters, without requiring self-hosted agents.
++
+## Architecture
+
+[ ![Cluster connect architecture](./media/conceptual-cluster-connect.png) ](./media/conceptual-cluster-connect.png#lightbox)
+
+On the cluster side, a reverse proxy agent called `clusterconnect-agent`, deployed as part of the agent Helm chart, makes outbound calls to the Azure Arc service to establish the session.
+
+When the user calls `az connectedk8s proxy`:
+1. Azure Arc proxy binary is downloaded and spun up as a process on the client machine.
+1. Azure Arc proxy fetches a `kubeconfig` file associated with the Azure Arc enabled Kubernetes cluster on which the `az connectedk8s proxy` is invoked.
+ * Azure Arc proxy uses the caller's Azure access token and the Azure Resource Manager ID name.
+1. The `kubeconfig` file, saved on the machine by Azure Arc proxy, points the server URL to an endpoint on the Azure Arc proxy process.
+
+When a user sends a request using this `kubeconfig` file:
+1. Azure Arc proxy maps the endpoint receiving the request to the Azure Arc service.
+1. Azure Arc service then forwards the request to the `clusterconnect-agent` running on the cluster.
+1. The `clusterconnect-agent` passes on the request to the `kube-aad-proxy` component, which performs Azure AD authentication on the calling entity.
+1. After Azure AD authentication, `kube-aad-proxy` uses Kubernetes [user impersonation](https://kubernetes.io/docs/reference/access-authn-authz/authentication/#user-impersonation) feature to forward the request to the cluster's `apiserver`.
+
+## Next steps
+
+* Use our quickstart to [connect a Kubernetes cluster to Azure Arc](./quickstart-connect-cluster.md).
+* [Access your cluster](./cluster-connect.md) securely from anywhere using Cluster connect.
azure-arc Conceptual Configurations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/conceptual-configurations.md
This at-scale enforcement ensures a common baseline configuration (containing co
## Next steps
-* Walk through our quickstart to [connect a Kubernetes cluster to Azure Arc](./quickstart-connect-cluster.md).
-* Already have a Kubernetes cluster connected Azure Arc? [Create configurations on your Arc enabled Kubernetes cluster](./tutorial-use-gitops-connected-cluster.md).
-* Learn how to [use Azure Policy to apply configurations at scale](./use-azure-policy.md).
+* Use our quickstart to [connect a Kubernetes cluster to Azure Arc](./quickstart-connect-cluster.md).
+* [Create configurations](./tutorial-use-gitops-connected-cluster.md) on your Azure Arc enabled Kubernetes cluster.
+* [Use Azure Policy to apply configurations at scale](./use-azure-policy.md).
azure-arc Conceptual Custom Locations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/conceptual-custom-locations.md
+
+ Title: "Custom Locations - Azure Arc enabled Kubernetes"
++ Last updated : 04/05/2021+++
+description: "This article provides a conceptual overview of Custom Locations capability of Azure Arc enabled Kubernetes"
++
+# Custom locations on top of Azure Arc enabled Kubernetes
+
+As an extension of the Azure location construct, *Custom Locations* provides a way for tenant administrators to use their Azure Arc enabled Kubernetes clusters as target locations for deploying Azure service instances. Examples of Azure resources include Azure Arc enabled SQL Managed Instance and Azure Arc enabled PostgreSQL Hyperscale.
+
+Similar to Azure locations, end users within the tenant with access to Custom Locations can deploy resources there using their company's private compute.
+
+[ ![Arc platform layers](./media/conceptual-arc-platform-layers.png) ](./media/conceptual-arc-platform-layers.png#lightbox)
+
+You can visualize Custom Locations as an abstraction layer on top of Azure Arc enabled Kubernetes cluster, cluster connect, and cluster extensions. Custom Locations creates the granular [RoleBindings and ClusterRoleBindings](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding) necessary for other Azure services to access the cluster. These other Azure services require cluster access to manage resources the customer wants to deploy on their clusters.
++
+## Architecture
+
+When the admin enables the Custom Locations feature on the cluster, a ClusterRoleBinding is created on the cluster, authorizing the Azure AD application used by the Custom Locations Resource Provider (RP). Once authorized, the Custom Locations RP can create the ClusterRoleBindings or RoleBindings needed by other Azure RPs to create custom resources on this cluster. The cluster extensions installed on the cluster determine the list of RPs to authorize.
+
+[ ![Use custom locations](./media/conceptual-custom-locations-usage.png) ](./media/conceptual-custom-locations-usage.png#lightbox)
+
+When the user creates a data service instance on the cluster:
+1. The PUT request is sent to Azure Resource Manager.
+1. The PUT request is forwarded to the Azure Arc enabled Data Services RP.
+1. The RP fetches the `kubeconfig` file associated with the Azure Arc enabled Kubernetes cluster, on which the Custom Location exists.
+ * Custom Location is referenced as `extendedLocation` in the original PUT request.
+1. Azure Arc enabled Data Services RP uses the `kubeconfig` to communicate with the cluster to create a custom resource of the Azure Arc enabled Data Services type on the namespace mapped to the Custom Location.
+ * The Azure Arc enabled Data Services operator was deployed via cluster extension creation before the Custom Location existed.
+1. The Azure Arc enabled Data Services operator reads the new custom resource created on the cluster and creates the data controller, translating into realization of the desired state on the cluster.
+
+The sequence of steps to create the SQL managed instance and PostgreSQL instance is identical to the sequence of steps described above.
+
+## Next steps
+
+* Use our quickstart to [connect a Kubernetes cluster to Azure Arc](./quickstart-connect-cluster.md).
+* [Create a custom location](./custom-locations.md) on your Azure Arc enabled Kubernetes cluster.
azure-arc Conceptual Extensions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/conceptual-extensions.md
+
+ Title: "Cluster extensions - Azure Arc enabled Kubernetes"
++ Last updated : 04/05/2021+++
+description: "This article provides a conceptual overview of cluster extensions capability of Azure Arc enabled Kubernetes"
++
+# Cluster extensions on Azure Arc enabled Kubernetes
+
+[Helm charts](https://helm.sh/) help you manage Kubernetes applications by providing the building blocks needed to define, install, and upgrade even the most complex Kubernetes applications. The cluster extensions feature builds on top of the packaging components of Helm by providing an Azure Resource Manager-driven experience for installation and lifecycle management of cluster extensions such as Azure Monitor and Azure Defender for Kubernetes. The cluster extensions feature provides the following benefits over and above what is already available natively with Helm charts (a sample extension creation is sketched after this list):
+
+- Get an inventory of all clusters and the extensions installed on those clusters.
+- Use Azure Policy to automate at-scale deployment of cluster extensions.
+- Subscribe to release trains of every extension.
+- Set up auto-upgrade for extensions.
+- Support for extension instance creation and for the lifecycle management events of update and delete.
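+
+For example, creating an extension instance on a connected cluster is a single CLI call. The sketch below assumes the Azure Monitor for containers extension type and uses placeholder cluster and resource group names:
+
+```azurecli
+az k8s-extension create --name azuremonitor-containers --extension-type Microsoft.AzureMonitor.Containers --cluster-type connectedClusters -c <clusterName> -g <resourceGroupName>
+```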
++
+## Architecture
+
+[ ![Cluster extensions architecture](./media/conceptual-extensions.png) ](./media/conceptual-extensions.png#lightbox)
+
+The cluster extension instance is created as an extension Azure Resource Manager resource (`Microsoft.KubernetesConfiguration/extensions`) on top of the Azure Arc enabled Kubernetes resource (represented by `Microsoft.Kubernetes/connectedClusters`) in Azure Resource Manager. Representation in Azure Resource Manager allows you to author a policy that checks for all the Azure Arc enabled Kubernetes resources with or without a specific cluster extension. Once you've determined which clusters lack cluster extensions with desired property values, you can remediate these non-compliant resources using Azure Policy.
+
+The `config-agent` running in your cluster tracks new or updated extension resources on the Azure Arc enabled Kubernetes resource. The `extensions-manager` running in your cluster pulls the Helm chart from Azure Container Registry or Microsoft Container Registry and installs it on the cluster.
+
+Both the `config-agent` and `extensions-manager` components running in the cluster handle version updates and extension instance deletion.
+
+> [!NOTE]
+> * `config-agent` monitors for new or updated extension resources to be available on the Arc enabled Kubernetes resource. Thus, agents require connectivity for the desired state to be pulled down to the cluster. If agents are unable to connect to Azure, propagation of the desired state to the cluster is delayed.
+> * Protected configuration settings for an extension are stored for up to 48 hours in the Azure Arc enabled Kubernetes services. As a result, if the cluster remains disconnected during the 48 hours after the extension resource was created on Azure, the extension transitions from a `Pending` state to `Failed` state. We advise bringing the clusters online as regularly as possible.
+
+## Next steps
+
+* Use our quickstart to [connect a Kubernetes cluster to Azure Arc](./quickstart-connect-cluster.md).
+* [Deploy cluster extensions](./extensions.md) on your Azure Arc enabled Kubernetes cluster.
azure-arc Custom Locations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/custom-locations.md
+
+ Title: "Custom locations on Azure Arc enabled Kubernetes"
++ Last updated : 04/05/2021++++
+description: "Use custom locations to deploy Azure PaaS services on Azure Arc enabled Kubernetes clusters"
++
+# Custom locations on Azure Arc enabled Kubernetes
+
+As an Azure location extension, *Custom Locations* provides a way for tenant administrators to use their Azure Arc enabled Kubernetes clusters as target locations for deploying Azure service instances. Examples of Azure resources include Azure Arc enabled SQL Managed Instance and Azure Arc enabled PostgreSQL Hyperscale.
+
+Similar to Azure locations, end users within the tenant with access to Custom Locations can deploy resources there using their company's private compute.
+
+A conceptual overview of this feature is available in the [Custom locations - Azure Arc enabled Kubernetes](conceptual-custom-locations.md) article.
++
+## Prerequisites
+
+- [Install or upgrade Azure CLI](https://docs.microsoft.com/cli/azure/install-azure-cli) to version >= 2.16.0.
+
+- `connectedk8s` (version >= 1.1.0), `k8s-extension` (version >= 0.2.0) and `customlocation` (version >= 0.1.0) Azure CLI extensions. Install these Azure CLI extensions by running the following commands:
+
+ ```azurecli
+ az extension add --name connectedk8s
+ az extension add --name k8s-extension
+ az extension add --name customlocation
+ ```
+
+ If the `connectedk8s`, `k8s-extension`, and `customlocation` extensions are already installed, you can update them to the latest version using the following commands:
+
+ ```azurecli
+ az extension update --name connectedk8s
+ az extension update --name k8s-extension
+ az extension update --name customlocation
+ ```
+
+- Completed provider registration for `Microsoft.ExtendedLocation`.
+ 1. Register the provider by entering the following command:
+
+ ```azurecli
+ az provider register --namespace Microsoft.ExtendedLocation
+ ```
+
+ 2. Monitor the registration process. Registration may take up to 10 minutes.
+
+ ```azurecli
+ az provider show -n Microsoft.ExtendedLocation -o table
+ ```
+
+>[!NOTE]
+>**Supported regions for custom locations:**
+>* East US
+>* West Europe
+
+## Enable custom locations on cluster
+
+To enable this feature on your cluster, execute the following command:
+
+```azurecli
+az connectedk8s enable-features -n <clusterName> -g <resourceGroupName> --features cluster-connect custom-locations
+```
+
+> [!NOTE]
+> 1. The Custom Locations feature depends on the Cluster Connect feature, so both features must be enabled for custom locations to work.
+> 2. `az connectedk8s enable-features` must be run on a machine where the `kubeconfig` file points to the cluster on which the features are to be enabled.
+
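+For example, before running the command you can confirm which cluster your current `kubeconfig` context targets; a minimal sketch, assuming `kubectl` is installed on the same machine:
+
+```azurecli
+# Confirm that the current kubeconfig context points at the intended cluster
+kubectl config current-context
+
+# Then enable the features on that cluster
+az connectedk8s enable-features -n <clusterName> -g <resourceGroupName> --features cluster-connect custom-locations
+```
+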
+## Create custom location
+
+1. Create an Azure Arc enabled Kubernetes cluster.
+ - If you haven't connected a cluster yet, use our [quickstart](quickstart-connect-cluster.md).
+ - [Upgrade your agents](agent-upgrade.md#manually-upgrade-agents) to version >= 1.1.0.
+
+1. Deploy the cluster extension of the Azure service whose instance you eventually want to create on top of the custom location:
+
+ ```azurecli
+ az k8s-extension create --name <extensionInstanceName> --extension-type microsoft.arcdataservices --cluster-type connectedClusters -c <clusterName> -g <resourceGroupName> --scope cluster --release-namespace arc --config Microsoft.CustomLocation.ServiceAccount=sa-bootstrapper
+ ```
+
+ > [!NOTE]
+ > Outbound proxy without authentication and outbound proxy with basic authentication are supported by the Arc enabled Data Services cluster extension. Outbound proxy that expects trusted certificates is currently not supported.
+
+1. Get the Azure Resource Manager identifier of the Azure Arc enabled Kubernetes cluster, referenced in later steps as `connectedClusterId`:
+
+ ```azurecli
+ az connectedk8s show -n <clusterName> -g <resourceGroupName> --query id -o tsv
+ ```
+
+1. Get the Azure Resource Manager identifier of the cluster extension deployed on top of Azure Arc enabled Kubernetes cluster, referenced in later steps as `extensionId`:
+
+ ```azurecli
+ az k8s-extension show --name <extensionInstanceName> --cluster-type connectedClusters -c <clusterName> -g <resourceGroupName> --query id -o tsv
+ ```
+
+1. Create the custom location by referencing the Azure Arc enabled Kubernetes cluster and the extension (a combined sketch of steps 3 through 5 follows this procedure):
+
+ ```azurecli
+ az customlocation create -n <customLocationName> -g <resourceGroupName> --namespace arc --host-resource-id <connectedClusterId> --cluster-extension-ids <extensionId>
+ ```
+
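+Putting steps 3 through 5 together, a minimal end-to-end sketch in a bash shell might look like the following (all names are placeholders):
+
+```azurecli
+# Capture the Azure Resource Manager IDs of the connected cluster and the deployed extension
+connectedClusterId=$(az connectedk8s show -n <clusterName> -g <resourceGroupName> --query id -o tsv)
+extensionId=$(az k8s-extension show --name <extensionInstanceName> --cluster-type connectedClusters -c <clusterName> -g <resourceGroupName> --query id -o tsv)
+
+# Create the custom location that references both
+az customlocation create -n <customLocationName> -g <resourceGroupName> --namespace arc --host-resource-id $connectedClusterId --cluster-extension-ids $extensionId
+```
+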
+## Next steps
+
+> [!div class="nextstepaction"]
+> Securely connect to the cluster using [Cluster Connect](cluster-connect.md)
azure-arc Extensions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/extensions.md
+
+ Title: "Azure Arc enabled Kubernetes cluster extensions"
++ Last updated : 04/05/2021+++
+description: "Deploy and manage lifecycle of extensions on Azure Arc enabled Kubernetes"
++
+# Kubernetes cluster extensions
+
+The Kubernetes extensions feature enables the following on Azure Arc enabled Kubernetes clusters:
+
+* Azure Resource Manager-based deployment of cluster extension.
+* Lifecycle management of extension Helm charts.
+
+A conceptual overview of this feature is available in the [Cluster extensions - Azure Arc enabled Kubernetes](conceptual-extensions.md) article.
++
+## Prerequisites
+
+- [Install or upgrade Azure CLI](https://docs.microsoft.com/cli/azure/install-azure-cli) to version >= 2.16.0.
+- `connectedk8s` (version >= 1.1.0) and `k8s-extension` (version >= 0.2.0) Azure CLI extensions. Install these Azure CLI extensions by running the following commands:
+
+ ```azurecli
+ az extension add --name connectedk8s
+ az extension add --name k8s-extension
+ ```
+
+ If the `connectedk8s` and `k8s-extension` extensions are already installed, you can update them to the latest version using the following commands:
+
+ ```azurecli
+ az extension update --name connectedk8s
+ az extension update --name k8s-extension
+ ```
+
+- An existing Azure Arc enabled Kubernetes connected cluster.
+ - If you haven't connected a cluster yet, use our [quickstart](quickstart-connect-cluster.md).
+ - [Upgrade your agents](agent-upgrade.md#manually-upgrade-agents) to version >= 1.1.0.
+
+## Currently available extensions
+
+| Extension | Description |
+| | -- |
+| [Azure Monitor](../../azure-monitor/containers/container-insights-enable-arc-enabled-clusters.md?toc=/azure/azure-arc/kubernetes/toc.json) | Provides visibility into the performance of workloads deployed on the Kubernetes cluster. Collects memory and CPU utilization metrics from controllers, nodes, and containers. |
+| [Azure Defender](../../security-center/defender-for-kubernetes-azure-arc.md?toc=/azure/azure-arc/kubernetes/toc.json) | Gathers audit log data from control plane nodes of the Kubernetes cluster. Provides recommendations and threat alerts based on gathered data. |
+
+## Usage of cluster extensions
+
+### Create extensions instance
+
+Create a new extension instance with `k8s-extension create`, passing in values for the mandatory parameters. The following command creates an Azure Monitor for containers extension instance on your Azure Arc enabled Kubernetes cluster:
+
+```azurecli
+az k8s-extension create --name azuremonitor-containers --extension-type Microsoft.AzureMonitor.Containers --scope cluster --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type connectedClusters
+```
+
+**Output:**
+
+```json
+{
+ "autoUpgradeMinorVersion": true,
+ "configurationProtectedSettings": null,
+ "configurationSettings": {
+ "logAnalyticsWorkspaceResourceID": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/defaultresourcegroup-eus/providers/microsoft.operationalinsights/workspaces/defaultworkspace-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx-eus"
+ },
+ "creationTime": "2021-04-02T12:13:06.7534628+00:00",
+ "errorInfo": {
+ "code": null,
+ "message": null
+ },
+ "extensionType": "microsoft.azuremonitor.containers",
+ "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/demo/providers/Microsoft.Kubernetes/connectedClusters/demo/providers/Microsoft.KubernetesConfiguration/extensions/azuremonitor-containers",
+ "identity": null,
+ "installState": "Pending",
+ "lastModifiedTime": "2021-04-02T12:13:06.753463+00:00",
+ "lastStatusTime": null,
+ "name": "azuremonitor-containers",
+ "releaseTrain": "Stable",
+ "resourceGroup": "demo",
+ "scope": {
+ "cluster": {
+ "releaseNamespace": "azuremonitor-containers"
+ },
+ "namespace": null
+ },
+ "statuses": [],
+ "systemData": null,
+ "type": "Microsoft.KubernetesConfiguration/extensions",
+ "version": "2.8.2"
+}
+```
+
+> [!NOTE]
+> * The service is unable to retain sensitive information for more than 48 hours. If Azure Arc enabled Kubernetes agents don't have network connectivity for more than 48 hours and cannot determine whether to create an extension on the cluster, then the extension transitions to `Failed` state. Once in `Failed` state, you will need to run `k8s-extension create` again to create a fresh extension Azure resource.
+> * Azure Monitor for containers is a singleton extension (only one instance is needed per cluster). You'll need to clean up any previous Helm chart installations of Azure Monitor for containers (installed without extensions) before installing it via extensions. Follow the instructions for [deleting the Helm chart before running `az k8s-extension create`](https://docs.microsoft.com/azure/azure-monitor/insights/container-insights-optout-hybrid).
+
+**Required parameters**
+
+| Parameter name | Description |
+|-||
+| `--name` | Name of the extension instance |
+| `--extension-type` | The type of extension you want to install on the cluster. For example: Microsoft.AzureMonitor.Containers, microsoft.azuredefender.kubernetes |
+| `--scope` | Scope of installation for the extension - `cluster` or `namespace` |
+| `--cluster-name` | Name of the Azure Arc enabled Kubernetes resource on which the extension instance has to be created |
+| `--resource-group` | The resource group containing the Azure Arc enabled Kubernetes resource |
+| `--cluster-type` | The cluster type on which the extension instance has to be created. Currently, only `connectedClusters`, which corresponds to Azure Arc enabled Kubernetes, is an accepted value |
+
+**Optional parameters**
+
+| Parameter name | Description |
+|--||
+| `--auto-upgrade-minor-version` | Boolean property that specifies whether the extension minor version is upgraded automatically. Default: `true`. If this parameter is set to `true`, you cannot set the `version` parameter, as the version will be dynamically updated. If set to `false`, the extension will not be auto-upgraded even for patch versions. |
+| `--version` | Version of the extension to be installed (specific version to pin the extension instance to). Must not be supplied if `auto-upgrade-minor-version` is set to `true`. |
+| `--configuration-settings` | Settings that can be passed into the extension to control its functionality. They are to be passed in as space separated `key=value` pairs after the parameter name. If this parameter is used in the command, then `--configuration-settings-file` can't be used in the same command. |
+| `--configuration-settings-file` | Path to the JSON file having key value pairs to be used for passing in configuration settings to the extension. If this parameter is used in the command, then `--configuration-settings` can't be used in the same command. |
+| `--configuration-protected-settings` | These settings are not retrievable using `GET` API calls or `az k8s-extension show` commands, and are thus used to pass in sensitive settings. They are to be passed in as space separated `key=value` pairs after the parameter name. If this parameter is used in the command, then `--configuration-protected-settings-file` can't be used in the same command. |
+| `--configuration-protected-settings-file` | Path to the JSON file having key value pairs to be used for passing in sensitive settings to the extension. If this parameter is used in the command, then `--configuration-protected-settings` can't be used in the same command. |
+| `--release-namespace` | This parameter indicates the namespace within which the release is to be created. This parameter is only relevant if the `scope` parameter is set to `cluster`. |
+| `--release-train` | Extension authors can publish versions in different release trains such as `Stable`, `Preview`, etc. If this parameter is not set explicitly, `Stable` is used as default. This parameter can't be used when the `autoUpgradeMinorVersion` parameter is set to `false`. |
+| `--target-namespace` | This parameter indicates the namespace within which the release will be created. Permissions of the system account created for this extension instance will be restricted to this namespace. This parameter is only relevant if the `scope` parameter is set to `namespace`. |
+
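+For illustration, a hypothetical command that combines several of these optional parameters is shown below. The pinned version and the configuration setting key/value pairs are placeholders, not values required by any particular extension:
+
+```azurecli
+az k8s-extension create --name azuremonitor-containers --extension-type Microsoft.AzureMonitor.Containers --scope cluster --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type connectedClusters --auto-upgrade-minor-version false --version 2.8.2 --configuration-settings mySetting1=value1 mySetting2=value2
+```
+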
+### Show details of an extension instance
+
+View details of a currently installed extension instance with `k8s-extension show`, passing in values for the mandatory parameters:
+
+```azurecli
+az k8s-extension show --name azuremonitor-containers --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type connectedClusters
+```
+
+**Output:**
+
+```json
+{
+ "autoUpgradeMinorVersion": true,
+ "configurationProtectedSettings": null,
+ "configurationSettings": {
+ "logAnalyticsWorkspaceResourceID": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/defaultresourcegroup-eus/providers/microsoft.operationalinsights/workspaces/defaultworkspace-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx-eus"
+ },
+ "creationTime": "2021-04-02T12:13:06.7534628+00:00",
+ "errorInfo": {
+ "code": null,
+ "message": null
+ },
+ "extensionType": "microsoft.azuremonitor.containers",
+ "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/demo/providers/Microsoft.Kubernetes/connectedClusters/demo/providers/Microsoft.KubernetesConfiguration/extensions/azuremonitor-containers",
+ "identity": null,
+ "installState": "Installed",
+ "lastModifiedTime": "2021-04-02T12:13:06.753463+00:00",
+ "lastStatusTime": "2021-04-02T12:13:49.636+00:00",
+ "name": "azuremonitor-containers",
+ "releaseTrain": "Stable",
+ "resourceGroup": "demo",
+ "scope": {
+ "cluster": {
+ "releaseNamespace": "azuremonitor-containers"
+ },
+ "namespace": null
+ },
+ "statuses": [],
+ "systemData": null,
+ "type": "Microsoft.KubernetesConfiguration/extensions",
+ "version": "2.8.2"
+}
+```
+
+### List all extensions installed on the cluster
+
+List all extensions installed on a cluster with `k8s-extension list`, passing in values for the mandatory parameters.
+
+```azurecli
+az k8s-extension list --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type connectedClusters
+```
+
+**Output:**
+
+```json
+[
+ {
+ "autoUpgradeMinorVersion": true,
+ "creationTime": "2020-09-15T02:26:03.5519523+00:00",
+ "errorInfo": {
+ "code": null,
+ "message": null
+ },
+ "extensionType": "Microsoft.AzureMonitor.Containers",
+ "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myRg/providers/Microsoft.Kubernetes/connectedClusters/myCluster/providers/Microsoft.KubernetesConfiguration/extensions/myExtInstanceName",
+ "identity": null,
+ "installState": "Pending",
+ "lastModifiedTime": "2020-09-15T02:48:45.6469664+00:00",
+ "lastStatusTime": null,
+ "name": "myExtInstanceName",
+ "releaseTrain": "Stable",
+ "resourceGroup": "myRG",
+ "scope": {
+ "cluster": {
+ "releaseNamespace": "myExtInstanceName1"
+ }
+ },
+ "statuses": [],
+ "type": "Microsoft.KubernetesConfiguration/extensions",
+ "version": "0.1.0"
+ },
+ {
+ "autoUpgradeMinorVersion": true,
+ "creationTime": "2020-09-02T00:41:16.8005159+00:00",
+ "errorInfo": {
+ "code": null,
+ "message": null
+ },
+ "extensionType": "microsoft.azuredefender.kubernetes",
+ "id": "/subscriptions/0e849346-4343-582b-95a3-e40e6a648ae1/resourceGroups/myRg/providers/Microsoft.Kubernetes/connectedClusters/myCluster/providers/Microsoft.KubernetesConfiguration/extensions/defender",
+ "identity": null,
+ "installState": "Pending",
+ "lastModifiedTime": "2020-09-02T00:41:16.8005162+00:00",
+ "lastStatusTime": null,
+ "name": "microsoft.azuredefender.kubernetes",
+ "releaseTrain": "Stable",
+ "resourceGroup": "myRg",
+ "scope": {
+ "cluster": {
+ "releaseNamespace": "myExtInstanceName2"
+ }
+ },
+ "type": "Microsoft.KubernetesConfiguration/extensions",
+ "version": "0.1.0"
+ }
+]
+```
+
+### Update an existing extension instance
+
+Update an extension instance on a cluster with `k8s-extension update`, passing in the values to update. This command only updates the `auto-upgrade-minor-version`, `release-train`, and `version` properties. For example:
+
+- **Update release train:**
+
+ ```azurecli
+ az k8s-extension update --name azuremonitor-containers --cluster-type connectedClusters --cluster-name <clusterName> --resource-group <resourceGroupName> --release-train Preview
+ ```
+
+- **Turn off auto-upgrade and pin extension instance to a specific version:**
+
+ ```azurecli
+ az k8s-extension update --name azuremonitor-containers --cluster-type connectedClusters --cluster-name <clusterName> --resource-group <resourceGroupName> --auto-upgrade-minor-version false --version 2.2.2
+ ```
+
+- **Turn on auto-upgrade for the extension instance:**
+
+ ```azurecli
+ az k8s-extension update --name azuremonitor-containers --cluster-type connectedClusters --cluster-name <clusterName> --resource-group <resourceGroupName> --auto-upgrade-minor-version true
+ ```
+
+> [!NOTE]
+> The `version` parameter can be set only when `--auto-upgrade-minor-version` is set to `false`.
+
+### Delete extension instance
+
+Delete an extension instance on a cluster with `k8s-extension delete`, passing in values for the mandatory parameters.
+
+```azurecli
+az k8s-extension delete --name azuremonitor-containers --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type connectedClusters
+```
+
+>[!NOTE]
+> The Azure resource representing this extension gets deleted immediately. The Helm release on the cluster associated with this extension is only deleted when the agents running on the Kubernetes cluster have network connectivity and can reach out to Azure services again to fetch the desired state.
++
+## Next steps
+
+Learn more about the cluster extensions currently available for Azure Arc enabled Kubernetes:
+> [!div class="nextstepaction"]
+> [Azure Monitor](../../azure-monitor/containers/container-insights-enable-arc-enabled-clusters.md?toc=/azure/azure-arc/kubernetes/toc.json)
+> [Azure Defender](../../security-center/defender-for-kubernetes-azure-arc.md?toc=/azure/azure-arc/kubernetes/toc.json)
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/overview.md
Azure Arc enabled Kubernetes supports the following scenarios:
* Deploy applications and apply configuration using GitOps-based configuration management.
-* View and monitor your clusters using Azure Monitor for containers.
+* View and monitor your clusters using Azure Monitor for containers.
-* Apply policies using Azure Policy for Kubernetes.
+* Enforce threat protection using Azure Defender for Kubernetes.
+
+* Apply policies using Azure Policy for Kubernetes.
[!INCLUDE [azure-lighthouse-supported-service](../../../includes/azure-lighthouse-supported-service.md)]
azure-arc Quickstart Connect Cluster https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/quickstart-connect-cluster.md
Last updated 03/03/2021-+ keywords: "Kubernetes, Arc, Azure, cluster"
In this quickstart, we'll reap the benefits of Azure Arc enabled Kubernetes and
[!INCLUDE [azure-cli-prepare-your-environment.md](../../../includes/azure-cli-prepare-your-environment.md)]
-* Verify you have:
- * An up-and-running Kubernetes cluster.
- * A `kubeconfig` file pointing to the cluster you want to connect to Azure Arc.
- * 'Read' and 'Write' permissions for the user or service principal connecting creating the Azure Arc enabled Kubernetes resource type (`Microsoft.Kubernetes/connectedClusters`).
+* An up-and-running Kubernetes cluster. If you don't have one, you can create a cluster using one of these options:
+ * [Kubernetes in Docker (KIND)](https://kind.sigs.k8s.io/)
+ * Create a Kubernetes cluster using Docker for [Mac](https://docs.docker.com/docker-for-mac/#kubernetes) or [Windows](https://docs.docker.com/docker-for-windows/#kubernetes)
+ * Self-managed Kubernetes cluster using [Cluster API](https://cluster-api.sigs.k8s.io/user/quick-start.html)
+
+ >[!NOTE]
+ > The cluster needs to have at least one node of operating system and architecture type `linux/amd64`. Clusters with only `linux/arm64` nodes aren't yet supported.
+
+* A `kubeconfig` file and context pointing to your cluster.
+* 'Read' and 'Write' permissions on the Azure Arc enabled Kubernetes resource type (`Microsoft.Kubernetes/connectedClusters`).
+* Install the [latest release of Helm 3](https://helm.sh/docs/intro/install).
-* Install the following Azure Arc enabled Kubernetes CLI extensions of versions >= 1.0.0:
+
+* [Install or upgrade Azure CLI](https://docs.microsoft.com/cli/azure/install-azure-cli) to version >= 2.16.0.
+* Install the `connectedk8s` Azure CLI extension of version >= 1.0.0:
```azurecli az extension add --name connectedk8s
- az extension add --name k8s-configuration
- ```
- * To update these extensions to the latest version, run the following commands:
-
- ```azurecli
- az extension update --name connectedk8s
- az extension update --name k8s-configuration
```
+>[!TIP]
+> If the `connectedk8s` extension is already installed, update it to the latest version using the following command - `az extension update --name connectedk8s`
++
+>[!NOTE]
+>The list of regions supported by Azure Arc enabled Kubernetes can be found [here](https://azure.microsoft.com/global-infrastructure/services/?products=azure-arc).
+ >[!NOTE]
->**Supported regions:**
->* East US
->* West Europe
->* West Central US
->* South Central US
->* Southeast Asia
->* UK South
->* West US 2
->* Australia East
->* East US 2
->* North Europe
+> If you want to use custom locations on the cluster, connect your cluster in the East US or West Europe region, as custom locations is currently available only in those regions. All other Azure Arc enabled Kubernetes features are available in all supported regions.
## Meet network requirements
In this quickstart, we'll reap the benefits of Azure Arc enabled Kubernetes and
| Endpoint (DNS) | Description | | -- | - | | `https://management.azure.com` | Required for the agent to connect to Azure and register the cluster. |
-| `https://eastus.dp.kubernetesconfiguration.azure.com`, `https://westeurope.dp.kubernetesconfiguration.azure.com`, `https://westcentralus.dp.kubernetesconfiguration.azure.com`, `https://southcentralus.dp.kubernetesconfiguration.azure.com`, `https://southeastasia.dp.kubernetesconfiguration.azure.com`, `https://uksouth.dp.kubernetesconfiguration.azure.com`, `https://westus2.dp.kubernetesconfiguration.azure.com`, `https://australiaeast.dp.kubernetesconfiguration.azure.com`, `https://eastus2.dp.kubernetesconfiguration.azure.com`, `https://northeurope.dp.kubernetesconfiguration.azure.com` | Data plane endpoint for the agent to push status and fetch configuration information. |
+| `https://<region>.dp.kubernetesconfiguration.azure.com` | Data plane endpoint for the agent to push status and fetch configuration information. |
| `https://login.microsoftonline.com` | Required to fetch and update Azure Resource Manager tokens. | | `https://mcr.microsoft.com` | Required to pull container images for Azure Arc agents. | | `https://eus.his.arc.azure.com`, `https://weu.his.arc.azure.com`, `https://wcus.his.arc.azure.com`, `https://scus.his.arc.azure.com`, `https://sea.his.arc.azure.com`, `https://uks.his.arc.azure.com`, `https://wus2.his.arc.azure.com`, `https://ae.his.arc.azure.com`, `https://eus2.his.arc.azure.com`, `https://ne.his.arc.azure.com` | Required to pull system-assigned Managed Service Identity (MSI) certificates. |
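+
+A quick way to spot-check outbound reachability to some of these endpoints from the machine that runs the agents is shown in the following sketch, assuming `curl` is available (any HTTP status code in the response, even 401, indicates the endpoint is reachable):
+
+```console
+curl -s -o /dev/null -w "%{http_code}\n" https://management.azure.com
+curl -s -o /dev/null -w "%{http_code}\n" https://login.microsoftonline.com
+```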
In this quickstart, we'll reap the benefits of Azure Arc enabled Kubernetes and
```azurecli az provider register --namespace Microsoft.Kubernetes az provider register --namespace Microsoft.KubernetesConfiguration
+ az provider register --namespace Microsoft.ExtendedLocation
``` 2. Monitor the registration process. Registration may take up to 10 minutes. ```azurecli az provider show -n Microsoft.Kubernetes -o table
- az provider show -n Microsoft.KubernetesConfiguration -o table
+ az provider show -n Microsoft.KubernetesConfiguration -o table
+ az provider show -n Microsoft.ExtendedLocation -o table
``` ## Create a resource group
azure-arc Tutorial Use Gitops Connected Cluster https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/tutorial-use-gitops-connected-cluster.md
In this tutorial, you will apply configurations using GitOps on an Azure Arc ena
- An existing Azure Arc enabled Kubernetes connected cluster. - If you haven't connected a cluster yet, walk through our [Connect an Azure Arc enabled Kubernetes cluster quickstart](quickstart-connect-cluster.md). - An understanding of the benefits and architecture of this feature. Read more in [Configurations and GitOps - Azure Arc enabled Kubernetes article](conceptual-configurations.md).
+- Install the `k8s-configuration` Azure CLI extension of version >= 1.0.0:
+
+ ```azurecli
+ az extension add --name k8s-configuration
+ ```
+
+ >[!TIP]
+ > If the `k8s-configuration` extension is already installed, you can update it to the latest version using the following command - `az extension update --name k8s-configuration`
## Create a configuration
azure-arc Use Gitops With Helm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/use-gitops-with-helm.md
Helm is an open-source packaging tool that helps you install and manage the life
This article shows you how to configure and use Helm with Azure Arc enabled Kubernetes.
-## Before you begin
-
-Verify you have an existing Azure Arc enabled Kubernetes connected cluster. If you need a connected cluster, see the [Connect an Azure Arc enabled Kubernetes cluster quickstart](./quickstart-connect-cluster.md).
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- An existing Azure Arc enabled Kubernetes connected cluster.
+ - If you haven't connected a cluster yet, walk through our [Connect an Azure Arc enabled Kubernetes cluster quickstart](quickstart-connect-cluster.md).
+- An understanding of the benefits and architecture of this feature. Read more in [Configurations and GitOps - Azure Arc enabled Kubernetes article](conceptual-configurations.md).
+- Install the `k8s-configuration` Azure CLI extension of version >= 1.0.0:
+
+ ```azurecli
+ az extension add --name k8s-configuration
+ ```
## Overview of using GitOps and Helm with Azure Arc enabled Kubernetes
azure-functions Functions Create Function App Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-create-function-app-portal.md
Next, create a function in the new function app.
![Copy the function URL from the Azure portal](./media/functions-create-first-azure-function/function-app-develop-tab-testing.png)
-1. Paste the function URL into your browser's address bar. Add the query string value `?name=<your_name>` to the end of this URL and press Enter to run the request.
-
- The following example shows the response in the browser:
-
- ![Function response in the browser.](./media/functions-create-first-azure-function/function-app-browser-testing.png)
+1. Paste the function URL into your browser's address bar. Add the query string value `?name=<your_name>` to the end of this URL and press Enter to run the request. The browser should display a response message that echoes back your query string value.
 If the request URL included an [access key](functions-bindings-http-webhook-trigger.md#authorization-keys) (`?code=...`), it means you chose **Function** instead of **Anonymous** access level when creating the function. In this case, you should instead append `&name=<your_name>`.
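+
+You can also exercise the function from a terminal instead of a browser; a minimal sketch, assuming `curl` is available and substituting the function URL you copied earlier:
+
+```console
+curl "<function-URL>?name=<your_name>"
+```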
azure-monitor Alerts Log Webhook https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/alerts-log-webhook.md
Last updated 09/22/2020
[Log alert](alerts-log.md) supports [configuring webhook action groups](./action-groups.md#webhook). In this article, we'll describe what properties are available and how to configure a custom JSON webhook. > [!NOTE]
-> Custom JSON-based webhook is not currently supported in the API version `2020-05-01-preview`
+> Custom JSON-based webhook is not currently supported in the API version `2020-05-01-preview`.
> [!NOTE]
-> It is recommended you use [common alert schema](../alerts/alerts-common-schema.md) for your webhook integrations. The common alert schema provides the advantage of having a single extensible and unified alert payload across all the alert services in Azure Monitor. For log alerts rules that have a custom JSON payload defined, enabling the common schema reverts payload schema to the one described [here](../alerts/alerts-common-schema-definitions.md#log-alerts). Alerts with the common schema enabled have an upper size limit of 256 KB per alert, bigger alert will not include search results. When the search results aren't included, you should use the `LinkToFilteredSearchResultsAPI` or `LinkToSearchResultsAPI` to access query results via the Log Analytics API.
+> It is recommended you use [common alert schema](../alerts/alerts-common-schema.md) for your webhook integrations. The common alert schema provides the advantage of having a single extensible and unified alert payload across all the alert services in Azure Monitor. For log alert rules that have a custom JSON payload defined, enabling the common alert schema reverts the payload schema to the one described [here](../alerts/alerts-common-schema-definitions.md#log-alerts). This means that if you want to have a custom JSON payload defined, the webhook can't use the common alert schema. Alerts with the common schema enabled have an upper size limit of 256 KB per alert; a bigger alert will not include search results. When the search results aren't included, you should use the `LinkToFilteredSearchResultsAPI` or `LinkToSearchResultsAPI` to access query results via the Log Analytics API.
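+
+For example, when the search results aren't included in the alert, the `LinkToFilteredSearchResultsAPI` value from the payload can be called directly. A sketch, assuming `curl` and a valid Azure Active Directory bearer token for the Log Analytics API (all values shown are placeholders):
+
+```console
+curl -H "Authorization: Bearer <access-token>" "https://api.loganalytics.io/v1/workspaces/<workspaceId>/query?query=Heartbeat&timespan=<ISO8601-timespan>"
+```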
## Webhook payload properties
The following sample payload is for a standard webhook action that's used for al
```json {
- "SubscriptionId": "12345a-1234b-123c-123d-12345678e",
- "AlertRuleName": "AcmeRule",
- "SearchQuery": "Perf | where ObjectName == \"Processor\" and CounterName == \"% Processor Time\" | summarize AggregatedValue = avg(CounterValue) by bin(TimeGenerated, 5m), Computer",
- "SearchIntervalStartTimeUtc": "2018-03-26T08:10:40Z",
- "SearchIntervalEndtimeUtc": "2018-03-26T09:10:40Z",
- "AlertThresholdOperator": "Greater Than",
- "AlertThresholdValue": 0,
- "ResultCount": 2,
- "SearchIntervalInSeconds": 3600,
- "LinkToSearchResults": "https://portal.azure.com/#Analyticsblade/search/index?_timeInterval.intervalEnd=2018-03-26T09%3a10%3a40.0000000Z&_timeInterval.intervalDuration=3600&q=Usage",
- "LinkToFilteredSearchResultsUI": "https://portal.azure.com/#Analyticsblade/search/index?_timeInterval.intervalEnd=2018-03-26T09%3a10%3a40.0000000Z&_timeInterval.intervalDuration=3600&q=Usage",
- "LinkToSearchResultsAPI": "https://api.loganalytics.io/v1/workspaces/workspaceID/query?query=Heartbeat&timespan=2020-05-07T18%3a11%3a51.0000000Z%2f2020-05-07T18%3a16%3a51.0000000Z",
- "LinkToFilteredSearchResultsAPI": "https://api.loganalytics.io/v1/workspaces/workspaceID/query?query=Heartbeat&timespan=2020-05-07T18%3a11%3a51.0000000Z%2f2020-05-07T18%3a16%3a51.0000000Z",
- "Description": "log alert rule",
- "Severity": "Warning",
- "AffectedConfigurationItems": [
- "INC-Gen2Alert"
- ],
- "Dimensions": [
- {
- "name": "Computer",
- "value": "INC-Gen2Alert"
- }
- ],
- "SearchResult": {
- "tables": [
+ "schemaId":"Microsoft.Insights/LogAlert",
+ "data":{
+ "SubscriptionId":"12345a-1234b-123c-123d-12345678e",
+ "AlertRuleName":"AcmeRule",
+ "SearchQuery":"Perf | where ObjectName == \"Processor\" and CounterName == \"% Processor Time\" | summarize AggregatedValue = avg(CounterValue) by bin(TimeGenerated, 5m), Computer",
+ "SearchIntervalStartTimeUtc":"2018-03-26T08:10:40Z",
+ "SearchIntervalEndtimeUtc":"2018-03-26T09:10:40Z",
+ "AlertThresholdOperator":"Greater Than",
+ "AlertThresholdValue":0,
+ "ResultCount":2,
+ "SearchIntervalInSeconds":3600,
+ "LinkToSearchResults":"https://portal.azure.com/#Analyticsblade/search/index?_timeInterval.intervalEnd=2018-03-26T09%3a10%3a40.0000000Z&_timeInterval.intervalDuration=3600&q=Usage",
+ "LinkToFilteredSearchResultsUI":"https://portal.azure.com/#Analyticsblade/search/index?_timeInterval.intervalEnd=2018-03-26T09%3a10%3a40.0000000Z&_timeInterval.intervalDuration=3600&q=Usage",
+ "LinkToSearchResultsAPI":"https://api.loganalytics.io/v1/workspaces/workspaceID/query?query=Heartbeat&timespan=2020-05-07T18%3a11%3a51.0000000Z%2f2020-05-07T18%3a16%3a51.0000000Z",
+ "LinkToFilteredSearchResultsAPI":"https://api.loganalytics.io/v1/workspaces/workspaceID/query?query=Heartbeat&timespan=2020-05-07T18%3a11%3a51.0000000Z%2f2020-05-07T18%3a16%3a51.0000000Z",
+ "Description":"log alert rule",
+ "Severity":"Warning",
+ "AffectedConfigurationItems":[
+ "INC-Gen2Alert"
+ ],
+ "Dimensions":[
+ {
+ "name":"Computer",
+ "value":"INC-Gen2Alert"
+ }
+ ],
+ "SearchResult":{
+ "tables":[
{
- "name": "PrimaryResult",
- "columns": [
- {
- "name": "$table",
- "type": "string"
- },
- {
- "name": "Computer",
- "type": "string"
- },
- {
- "name": "TimeGenerated",
- "type": "datetime"
- }
- ],
- "rows": [
- [
- "Fabrikam",
- "33446677a",
- "2018-02-02T15:03:12.18Z"
- ],
- [
- "Contoso",
- "33445566b",
- "2018-02-02T15:16:53.932Z"
- ]
- ]
+ "name":"PrimaryResult",
+ "columns":[
+ {
+ "name":"$table",
+ "type":"string"
+ },
+ {
+ "name":"Computer",
+ "type":"string"
+ },
+ {
+ "name":"TimeGenerated",
+ "type":"datetime"
+ }
+ ],
+ "rows":[
+ [
+ "Fabrikam",
+ "33446677a",
+ "2018-02-02T15:03:12.18Z"
+ ],
+ [
+ "Contoso",
+ "33445566b",
+ "2018-02-02T15:16:53.932Z"
+ ]
+ ]
}
- ]
- },
- "WorkspaceId": "12345a-1234b-123c-123d-12345678e",
- "AlertType": "Metric measurement"
+ ]
+ },
+ "WorkspaceId":"12345a-1234b-123c-123d-12345678e",
+ "AlertType":"Metric measurement"
+ }
} ```
The following sample payload is for a custom webhook action for any log alert:
- Understand how to [manage log alerts in Azure](alerts-log.md). - Create and manage [action groups in Azure](./action-groups.md). - Learn more about [Application Insights](../logs/log-query-overview.md).-- Learn more about [log queries](../logs/log-query-overview.md).
+- Learn more about [log queries](../logs/log-query-overview.md).
azure-monitor Get Metric https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/get-metric.md
Title: Get-Metric in Azure Monitor Application Insights description: Learn how to effectively use the GetMetric() call to capture locally pre-aggregated metrics for .NET and .NET Core applications with Azure Monitor Application Insights -+ Last updated 04/28/2020
azure-monitor Java In Process Agent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/java-in-process-agent.md
The 3.0 agent supports Java 8 and above.
> Please review all the [configuration options](./java-standalone-config.md) carefully, > as the json structure has completely changed, in addition to the file name itself which went all lowercase.
-Download [applicationinsights-agent-3.0.2.jar](https://github.com/microsoft/ApplicationInsights-Java/releases/download/3.0.2/applicationinsights-agent-3.0.2.jar)
+Download [applicationinsights-agent-3.0.3.jar](https://github.com/microsoft/ApplicationInsights-Java/releases/download/3.0.3/applicationinsights-agent-3.0.3.jar)
**2. Point the JVM to the agent**
-Add `-javaagent:path/to/applicationinsights-agent-3.0.2.jar` to your application's JVM args
+Add `-javaagent:path/to/applicationinsights-agent-3.0.3.jar` to your application's JVM args
Typical JVM args include `-Xmx512m` and `-XX:+UseG1GC`. So if you know where to add these, then you already know where to add this.
Point the agent to your Application Insights resource, either by setting an envi
APPLICATIONINSIGHTS_CONNECTION_STRING=InstrumentationKey=... ```
-Or by creating a configuration file named `applicationinsights.json`, and placing it in the same directory as `applicationinsights-agent-3.0.2.jar`, with the following content:
+Or by creating a configuration file named `applicationinsights.json`, and placing it in the same directory as `applicationinsights-agent-3.0.3.jar`, with the following content:
```json {
See [configuration options](./java-standalone-config.md) for full details.
* Micrometer (including Spring Boot Actuator metrics) * JMX Metrics
+### Azure SDKs
+
+* This feature is in preview, see the [configuration options](./java-standalone-config.md#auto-collected-azure-sdk-telemetry) for how to enable it.
+ ## Send custom telemetry from your application Our goal in 3.0+ is to allow you to send your custom telemetry using standard APIs.
requestTelemetry.setName("myname");
### Get the request telemetry id and the operation id using the 2.x SDK > [!NOTE]
-> This feature is only in 3.0.3-BETA and later
+> This feature is only available in 3.0.3 and later
Add `applicationinsights-web-2.6.2.jar` to your application (all 2.x versions are supported by Application Insights Java 3.0, but it's worth using the latest if you have a choice):
azure-monitor Java Standalone Arguments https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/java-standalone-arguments.md
Configure [App Services](../../app-service/configure-language-java.md#set-java-r
## Spring Boot
-Add the JVM arg `-javaagent:path/to/applicationinsights-agent-3.0.2.jar` somewhere before `-jar`, for example:
+Add the JVM arg `-javaagent:path/to/applicationinsights-agent-3.0.3.jar` somewhere before `-jar`, for example:
```
-java -javaagent:path/to/applicationinsights-agent-3.0.2.jar -jar <myapp.jar>
+java -javaagent:path/to/applicationinsights-agent-3.0.3.jar -jar <myapp.jar>
``` ## Spring Boot via Docker entry point
-If you are using the *exec* form, add the parameter `"-javaagent:path/to/applicationinsights-agent-3.0.2.jar"` to the parameter list somewhere before the `"-jar"` parameter, for example:
+If you are using the *exec* form, add the parameter `"-javaagent:path/to/applicationinsights-agent-3.0.3.jar"` to the parameter list somewhere before the `"-jar"` parameter, for example:
```
-ENTRYPOINT ["java", "-javaagent:path/to/applicationinsights-agent-3.0.2.jar", "-jar", "<myapp.jar>"]
+ENTRYPOINT ["java", "-javaagent:path/to/applicationinsights-agent-3.0.3.jar", "-jar", "<myapp.jar>"]
```
-If you are using the *shell* form, add the JVM arg `-javaagent:path/to/applicationinsights-agent-3.0.2.jar` somewhere before `-jar`, for example:
+If you are using the *shell* form, add the JVM arg `-javaagent:path/to/applicationinsights-agent-3.0.3.jar` somewhere before `-jar`, for example:
```
-ENTRYPOINT java -javaagent:path/to/applicationinsights-agent-3.0.2.jar -jar <myapp.jar>
+ENTRYPOINT java -javaagent:path/to/applicationinsights-agent-3.0.3.jar -jar <myapp.jar>
``` ## Tomcat 8 (Linux)
ENTRYPOINT java -javaagent:path/to/applicationinsights-agent-3.0.2.jar -jar <mya
If you installed Tomcat via `apt-get` or `yum`, then you should have a file `/etc/tomcat8/tomcat8.conf`. Add this line to the end of that file: ```
-JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.0.2.jar"
+JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.0.3.jar"
``` ### Tomcat installed via download and unzip
JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.0.2.jar"
If you installed Tomcat via download and unzip from [https://tomcat.apache.org](https://tomcat.apache.org), then you should have a file `<tomcat>/bin/catalina.sh`. Create a new file in the same directory named `<tomcat>/bin/setenv.sh` with the following content: ```
-CATALINA_OPTS="$CATALINA_OPTS -javaagent:path/to/applicationinsights-agent-3.0.2.jar"
+CATALINA_OPTS="$CATALINA_OPTS -javaagent:path/to/applicationinsights-agent-3.0.3.jar"
```
-If the file `<tomcat>/bin/setenv.sh` already exists, then modify that file and add `-javaagent:path/to/applicationinsights-agent-3.0.2.jar` to `CATALINA_OPTS`.
+If the file `<tomcat>/bin/setenv.sh` already exists, then modify that file and add `-javaagent:path/to/applicationinsights-agent-3.0.3.jar` to `CATALINA_OPTS`.
## Tomcat 8 (Windows)
If the file `<tomcat>/bin/setenv.sh` already exists, then modify that file and a
Locate the file `<tomcat>/bin/catalina.bat`. Create a new file in the same directory named `<tomcat>/bin/setenv.bat` with the following content: ```
-set CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.0.2.jar
+set CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.0.3.jar
``` Quotes are not necessary, but if you want to include them, the proper placement is: ```
-set "CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.0.2.jar"
+set "CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.0.3.jar"
```
-If the file `<tomcat>/bin/setenv.bat` already exists, just modify that file and add `-javaagent:path/to/applicationinsights-agent-3.0.2.jar` to `CATALINA_OPTS`.
+If the file `<tomcat>/bin/setenv.bat` already exists, just modify that file and add `-javaagent:path/to/applicationinsights-agent-3.0.3.jar` to `CATALINA_OPTS`.
### Running Tomcat as a Windows service
-Locate the file `<tomcat>/bin/tomcat8w.exe`. Run that executable and add `-javaagent:path/to/applicationinsights-agent-3.0.2.jar` to the `Java Options` under the `Java` tab.
+Locate the file `<tomcat>/bin/tomcat8w.exe`. Run that executable and add `-javaagent:path/to/applicationinsights-agent-3.0.3.jar` to the `Java Options` under the `Java` tab.
## JBoss EAP 7 ### Standalone server
-Add `-javaagent:path/to/applicationinsights-agent-3.0.2.jar` to the existing `JAVA_OPTS` environment variable in the file `JBOSS_HOME/bin/standalone.conf` (Linux) or `JBOSS_HOME/bin/standalone.conf.bat` (Windows):
+Add `-javaagent:path/to/applicationinsights-agent-3.0.3.jar` to the existing `JAVA_OPTS` environment variable in the file `JBOSS_HOME/bin/standalone.conf` (Linux) or `JBOSS_HOME/bin/standalone.conf.bat` (Windows):
```java ...
- JAVA_OPTS="<b>-javaagent:path/to/applicationinsights-agent-3.0.2.jar</b> -Xms1303m -Xmx1303m ..."
+ JAVA_OPTS="<b>-javaagent:path/to/applicationinsights-agent-3.0.3.jar</b> -Xms1303m -Xmx1303m ..."
... ``` ### Domain server
-Add `-javaagent:path/to/applicationinsights-agent-3.0.2.jar` to the existing `jvm-options` in `JBOSS_HOME/domain/configuration/host.xml`:
+Add `-javaagent:path/to/applicationinsights-agent-3.0.3.jar` to the existing `jvm-options` in `JBOSS_HOME/domain/configuration/host.xml`:
```xml ...
Add `-javaagent:path/to/applicationinsights-agent-3.0.2.jar` to the existing `jv
<jvm-options> <option value="-server"/> <!--Add Java agent jar file here-->
- <option value="-javaagent:path/to/applicationinsights-agent-3.0.2.jar"/>
+ <option value="-javaagent:path/to/applicationinsights-agent-3.0.3.jar"/>
<option value="-XX:MetaspaceSize=96m"/> <option value="-XX:MaxMetaspaceSize=256m"/> </jvm-options>
Add these lines to `start.ini`
``` --exec--javaagent:path/to/applicationinsights-agent-3.0.2.jar
+-javaagent:path/to/applicationinsights-agent-3.0.3.jar
``` ## Payara 5
-Add `-javaagent:path/to/applicationinsights-agent-3.0.2.jar` to the existing `jvm-options` in `glassfish/domains/domain1/config/domain.xml`:
+Add `-javaagent:path/to/applicationinsights-agent-3.0.3.jar` to the existing `jvm-options` in `glassfish/domains/domain1/config/domain.xml`:
```xml ... <java-config ...> <!--Edit the JVM options here--> <jvm-options>
- -javaagent:path/to/applicationinsights-agent-3.0.2.jar>
+ -javaagent:path/to/applicationinsights-agent-3.0.3.jar>
</jvm-options> ... </java-config>
Java and Process Management > Process definition > Java Virtual Machine
``` In "Generic JVM arguments" add the following: ```--javaagent:path/to/applicationinsights-agent-3.0.2.jar
+-javaagent:path/to/applicationinsights-agent-3.0.3.jar
``` After that, save and restart the application server.
After that, save and restart the application server.
Create a new file `jvm.options` in the server directory (for example `<openliberty>/usr/servers/defaultServer`), and add this line: ```--javaagent:path/to/applicationinsights-agent-3.0.2.jar
+-javaagent:path/to/applicationinsights-agent-3.0.3.jar
```
azure-monitor Java Standalone Config https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/java-standalone-config.md
You will find more details and additional configuration options below.
## Configuration file path
-By default, Application Insights Java 3.0 expects the configuration file to be named `applicationinsights.json`, and to be located in the same directory as `applicationinsights-agent-3.0.2.jar`.
+By default, Application Insights Java 3.0 expects the configuration file to be named `applicationinsights.json`, and to be located in the same directory as `applicationinsights-agent-3.0.3.jar`.
You can specify your own configuration file path using either * `APPLICATIONINSIGHTS_CONFIGURATION_FILE` environment variable, or * `applicationinsights.configuration.file` Java system property
-If you specify a relative path, it will be resolved relative to the directory where `applicationinsights-agent-3.0.2.jar` is located.
+If you specify a relative path, it will be resolved relative to the directory where `applicationinsights-agent-3.0.3.jar` is located.
## Connection string
Connection string is required. You can find your connection string in your Appli
``` You can also set the connection string using the environment variable `APPLICATIONINSIGHTS_CONNECTION_STRING`
-(which will then take precedence if the connection string is also specified in the json configuration).
+(which will then take precedence over the connection string specified in the json configuration).
Not setting the connection string will disable the Java agent.
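+
+For example, a minimal sketch of setting the connection string through the environment before launching an application with the agent attached (the connection string value and jar paths are placeholders):
+
+```console
+export APPLICATIONINSIGHTS_CONNECTION_STRING="InstrumentationKey=..."
+java -javaagent:path/to/applicationinsights-agent-3.0.3.jar -jar <myapp.jar>
+```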
If you want to set the cloud role name:
If cloud role name is not set, the Application Insights resource's name will be used to label the component on the application map. You can also set the cloud role name using the environment variable `APPLICATIONINSIGHTS_ROLE_NAME`
-(which will then take precedence if the cloud role name is also specified in the json configuration).
+(which will then take precedence over the cloud role name specified in the json configuration).
## Cloud role instance
If you want to set the cloud role instance to something different rather than th
``` You can also set the cloud role instance using the environment variable `APPLICATIONINSIGHTS_ROLE_INSTANCE`
-(which will then take precedence if the cloud role instance is also specified in the json configuration).
+(which will then take precedence over the cloud role instance specified in the json configuration).
## Sampling
Here is an example how to set the sampling to capture approximately **1/3 of all
``` You can also set the sampling percentage using the environment variable `APPLICATIONINSIGHTS_SAMPLING_PERCENTAGE`
-(which will then take precedence if the sampling percentage is also specified in the json configuration).
+(which will then take precedence over the sampling percentage specified in the json configuration).
> [!NOTE] > For the sampling percentage, choose a percentage that is close to 100/N where N is an integer. Currently sampling doesn't support other values. ## Sampling overrides (preview)
-This feature is in preview, starting from 3.0.3-BETA.2.
+This feature is in preview, starting from 3.0.3.
Sampling overrides allow you to override the [default sampling percentage](#sampling), for example: * Set the sampling percentage to 0 (or some small value) for noisy health checks.
The default level configured for Application Insights is `INFO`. If you want to
``` You can also set the level using the environment variable `APPLICATIONINSIGHTS_INSTRUMENTATION_LOGGING_LEVEL`
-(which will then take precedence if the level is also specified in the json configuration).
+(which will then take precedence over the level specified in the json configuration).
These are the valid `level` values that you can specify in the `applicationinsights.json` file, and how they correspond to logging levels in different logging frameworks:
To disable auto-collection of Micrometer metrics (including Spring Boot Actuator
} ```
+## Auto-collected Azure SDK telemetry
+
+This feature is in preview.
+
+Many of the latest Azure SDK libraries emit telemetry.
+
+Starting from version 3.0.3, you can enable collection of this telemetry:
+
+```json
+{
+ "preview": {
+ "instrumentation": {
+ "azureSdk": {
+ "enabled": true
+ }
+ }
+ }
+}
+```
+
+You can also enable this feature using the environment variable
+`APPLICATIONINSIGHTS_PREVIEW_INSTRUMENTATION_AZURE_SDK_ENABLED`
+(which will then take precedence over the `enabled` value specified in the json configuration).
+ ## Suppressing specific auto-collected telemetry
-Starting from version 3.0.2, specific auto-collected telemetry can be suppressed using these configuration options:
+Starting from version 3.0.3, specific auto-collected telemetry can be suppressed using these configuration options:
```json {
Starting from version 3.0.2, specific auto-collected telemetry can be suppressed
"jdbc": { "enabled": false },
+ "jms": {
+ "enabled": false
+ },
"kafka": { "enabled": false },
Starting from version 3.0.2, specific auto-collected telemetry can be suppressed
}, "redis": { "enabled": false
+ },
+ "springScheduling": {
+ "enabled": false
} } } ```
+You can also suppress these instrumentations using these environment variables:
+
+* `APPLICATIONINSIGHTS_INSTRUMENTATION_CASSANDRA_ENABLED`
+* `APPLICATIONINSIGHTS_INSTRUMENTATION_JDBC_ENABLED`
+* `APPLICATIONINSIGHTS_INSTRUMENTATION_JMS_ENABLED`
+* `APPLICATIONINSIGHTS_INSTRUMENTATION_KAFKA_ENABLED`
+* `APPLICATIONINSIGHTS_INSTRUMENTATION_MICROMETER_ENABLED`
+* `APPLICATIONINSIGHTS_INSTRUMENTATION_MONGO_ENABLED`
+* `APPLICATIONINSIGHTS_INSTRUMENTATION_REDIS_ENABLED`
+* `APPLICATIONINSIGHTS_INSTRUMENTATION_SPRING_SCHEDULING_ENABLED`
+
+(which will then take precedence over the `enabled` values specified in the json configuration).
+ > NOTE > If you are looking for more fine-grained control, e.g. to suppress some redis calls but not all redis calls, > see [sampling overrides](./java-standalone-sampling-overrides.md). - ## Heartbeat By default, Application Insights Java 3.0 sends a heartbeat metric once every 15 minutes. If you are using the heartbeat metric to trigger alerts, you can increase the frequency of this heartbeat:
This feature is in preview.
By default, metrics are captured every 60 seconds.
-Starting from version 3.0.3-BETA, you can change this interval:
+Starting from version 3.0.3, you can change this interval:
```json {
and the console, corresponding to this configuration:
`level` can be one of `OFF`, `ERROR`, `WARN`, `INFO`, `DEBUG`, or `TRACE`. `path` can be an absolute or relative path. Relative paths are resolved against the directory where
-`applicationinsights-agent-3.0.2.jar` is located.
+`applicationinsights-agent-3.0.3.jar` is located.
`maxSizeMb` is the max size of the log file before it rolls over. `maxHistory` is the number of rolled over log files that are retained (in addition to the current log file). Starting from version 3.0.2, you can also set the self-diagnostics `level` using the environment variable `APPLICATIONINSIGHTS_SELF_DIAGNOSTICS_LEVEL`
-(which will then take precedence if the self-diagnostics `level` is also specified in the json configuration).
+(which will then take precedence over the self-diagnostics level specified in the json configuration).
## An example
azure-monitor Java Standalone Sampling Overrides https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/java-standalone-sampling-overrides.md
# Sampling overrides (preview) - Azure Monitor Application Insights for Java > [!NOTE]
-> The sampling overrides feature is in preview, starting from 3.0.3-BETA.2.
+> The sampling overrides feature is in preview, starting from 3.0.3.
Sampling overrides allow you to override the [default sampling percentage](./java-standalone-config.md#sampling), for example:
azure-monitor Java Standalone Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/java-standalone-troubleshoot.md
In this article, we cover some of the common issues that you might face while in
## Check the self-diagnostic log file
-By default, the Java 3.0 agent for Application Insights produces a log file named `applicationinsights.log` in the same directory that holds the `applicationinsights-agent-3.0.2.jar` file.
+By default, the Java 3.0 agent for Application Insights produces a log file named `applicationinsights.log` in the same directory that holds the `applicationinsights-agent-3.0.3.jar` file.
This log file is the first place to check for hints to any issues you might be experiencing.
azure-monitor Java Standalone Upgrade From 2X https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/java-standalone-upgrade-from-2x.md
In the 2.x SDK, the operation names were prefixed by the http method (`GET`, `PO
:::image type="content" source="media/java-ipa/upgrade-from-2x/operation-names-prefixed-by-http-method.png" alt-text="Operation names prefixed by http method":::
-The snippet below configures 3 telemetry processors that combine to replicate the previous behavior.
-The telemetry processors perform the following actions (in order):
-
-1. The first telemetry processor is a span processor (has type `span`),
- which means it applies to `requests` and `dependencies`.
-
- It will match any span that has an attribute named `http.method` and has a span name that begins with `/`.
-
- Then it will extract that span name into an attribute named `tempName`.
-
-2. The second telemetry processor is also a span processor.
-
- It will match any span that has an attribute named `tempName`.
-
- Then it will update the span name by concatenating the two attributes `http.method` and `tempName`,
- separated by a space.
-
-3. The last telemetry processor is an attribute processor (has type `attribute`),
- which means it applies to all telemetry which has attributes
- (currently `requests`, `dependencies` and `traces`).
-
- It will match any telemetry that has an attribute named `tempName`.
-
- Then it will delete the attribute named `tempName`, so that it won't be reported as a custom dimension.
+Starting in 3.0.3, you can bring back this 2.x behavior using the following configuration:
-```
+```json
{ "preview": {
- "processors": [
- {
- "type": "span",
- "include": {
- "matchType": "regexp",
- "attributes": [
- { "key": "http.method", "value": "" }
- ],
- "spanNames": [ "^/" ]
- },
- "name": {
- "toAttributes": {
- "rules": [ "^(?<tempName>.*)$" ]
- }
- }
- },
- {
- "type": "span",
- "include": {
- "matchType": "strict",
- "attributes": [
- { "key": "tempName" }
- ]
- },
- "name": {
- "fromAttributes": [ "http.method", "tempName" ],
- "separator": " "
- }
- },
- {
- "type": "attribute",
- "include": {
- "matchType": "strict",
- "attributes": [
- { "key": "tempName" }
- ]
- },
- "actions": [
- { "key": "tempName", "action": "delete" }
- ]
- }
- ]
+ "httpMethodInOperationName": true
} } ```
azure-monitor Mobile Center Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/mobile-center-quickstart.md
Title: Monitor mobile apps with Azure Monitor Application Insights description: Provides instructions to quickly set up a mobile app for monitoring with Azure Monitor Application Insights and App Center-+
azure-monitor Nodejs Quick Start https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/nodejs-quick-start.md
Title: 'Quickstart: Monitor Node.js with Azure Monitor Application Insights' description: Provides instructions to quickly set up a Node.js Web App for monitoring with Azure Monitor Application Insights-+
azure-monitor Resource Manager App Resource https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/resource-manager-app-resource.md
Title: Resource Manager template samples for Application Insights Resources description: Sample Azure Resource Manager templates to deploy Application Insights resources in Azure Monitor.-+
azure-monitor Resource Manager Function App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/resource-manager-function-app.md
Title: Resource Manager template samples for Azure Function App + Application Insights Resources description: Sample Azure Resource Manager templates to deploy an Azure Function App with an Application Insights resource.-+
azure-monitor Resource Manager Web App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/resource-manager-web-app.md
Title: Resource Manager template samples for Azure App Service + Application Insights Resources description: Sample Azure Resource Manager templates to deploy an Azure App Service with an Application Insights resource.-+
azure-monitor Standard Metrics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/standard-metrics.md
description: This article lists Azure Application Insights metrics with supporte
Last updated 07/03/2019-+ # Application Insights standard metrics
azure-monitor Tutorial Alert https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/tutorial-alert.md
Title: Send alerts from Azure Application Insights | Microsoft Docs description: Tutorial to send alerts in response to errors in your application using Azure Application Insights.-+
azure-monitor Tutorial App Dashboards https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/tutorial-app-dashboards.md
Title: Create custom dashboards in Azure Application Insights | Microsoft Docs description: Tutorial to create custom KPI dashboards using Azure Application Insights.-+
azure-monitor Tutorial Performance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/tutorial-performance.md
Title: Diagnose performance issues using Azure Application Insights | Microsoft Docs description: Tutorial to find and diagnose performance issues in your application using Azure Application Insights.-+
azure-monitor Tutorial Runtime Exceptions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/tutorial-runtime-exceptions.md
Title: Diagnose run-time exceptions using Azure Application Insights | Microsoft Docs description: Tutorial to find and diagnose run-time exceptions in your application using Azure Application Insights.-+
azure-monitor Tutorial Users https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/tutorial-users.md
Title: Understand your customers in Azure Application Insights | Microsoft Docs description: Tutorial on using Azure Application Insights to understand how customers are using your application.-+
azure-monitor Container Insights Enable Arc Enabled Clusters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/containers/container-insights-enable-arc-enabled-clusters.md
Title: Configure Azure Arc enabled Kubernetes cluster with Container insights | Microsoft Docs
-description: This article describes how to configure monitoring with Container insights on Azure Arc enabled Kubernetes clusters.
- Previously updated : 09/23/2020
+ Title: "Monitor Azure Arc enabled Kubernetes clusters"
Last updated : 04/05/2021+++
+description: "Collect metrics and logs of Azure Arc enabled Kubernetes clusters using Azure Monitor"
-# Enable monitoring of Azure Arc enabled Kubernetes cluster
+# Azure Monitor Container Insights for Azure Arc enabled Kubernetes clusters
-Container insights provides rich monitoring experience for the Azure Kubernetes Service (AKS) and AKS Engine clusters. This article describes how to enable monitoring of your Kubernetes clusters hosted outside of Azure that are enabled with Azure Arc, to achieve a similar monitoring experience.
+[Azure Monitor Container Insights](container-insights-overview.md) provides a rich monitoring experience for Azure Arc enabled Kubernetes clusters.
-Container insights can be enabled for one or more existing deployments of Kubernetes using either a PowerShell or Bash script.
## Supported configurations
-Container insights supports monitoring Azure Arc enabled Kubernetes (preview) as described in the [Overview](container-insights-overview.md) article, except for the following features:
--- Live Data (preview)-
-The following is officially supported with Container insights:
--- Versions of Kubernetes and support policy are the same as versions of [AKS supported](../../aks/supported-kubernetes-versions.md).--- The following container runtimes are supported: Docker, Moby, and CRI compatible runtimes such CRI-O and ContainerD.--- Linux OS release for master and worker nodes supported are: Ubuntu (18.04 LTS and 16.04 LTS).
+- Azure Monitor Container Insights supports monitoring Azure Arc enabled Kubernetes (preview) as described in the [Overview](container-insights-overview.md) article, except for the live data (preview) feature. Also, users aren't required to have [Owner](../../role-based-access-control/built-in-roles.md#owner) permissions to [enable metrics](container-insights-update-metrics.md).
+- `Docker`, `Moby`, and CRI compatible container runtimes such as `CRI-O` and `containerd` are supported.
+- Outbound proxy without authentication and outbound proxy with basic authentication are supported. Outbound proxy that expects trusted certificates is currently not supported.
## Prerequisites
-Before you start, make sure that you have the following:
--- A Log Analytics workspace.-
- Container insights supports a Log Analytics workspace in the regions listed in Azure [Products by region](https://azure.microsoft.com/global-infrastructure/services/?regions=all&products=monitor). To create your own workspace, it can be created through [Azure Resource Manager](../logs/resource-manager-workspace.md), through [PowerShell](../logs/powershell-sample-create-workspace.md?toc=%2fpowershell%2fmodule%2ftoc.json), or in the [Azure portal](../logs/quick-create-workspace.md).
--- To enable and access the features in Container insights, at a minimum you need to be a member of the Azure *Contributor* role in the Azure subscription, and a member of the [*Log Analytics Contributor*](../logs/manage-access.md#manage-access-using-azure-permissions) role of the Log Analytics workspace configured with Container insights.
+- You've met the prerequisites listed under the [generic cluster extensions documentation](../../azure-arc/kubernetes/extensions.md#prerequisites).
+- A Log Analytics workspace: Azure Monitor Container Insights supports a Log Analytics workspace in the regions listed under Azure [products by region page](https://azure.microsoft.com/global-infrastructure/services/?regions=all&products=monitor). You can create your own workspace through [Azure Resource Manager](../logs/resource-manager-workspace.md), [PowerShell](../logs/powershell-sample-create-workspace.md), or [Azure portal](../logs/quick-create-workspace.md).
+- You need to have [Contributor](../../role-based-access-control/built-in-roles.md#contributor) role assignment on the Azure subscription containing the Azure Arc enabled Kubernetes resource. If the Log Analytics workspace is in a different subscription, then [Log Analytics Contributor](../logs/manage-access.md#manage-access-using-azure-permissions) role assignment is needed on the Log Analytics workspace.
+- To view the monitoring data, you need to have [Log Analytics Reader](../logs/manage-access.md#manage-access-using-azure-permissions) role assignment on the Log Analytics workspace.
+- The following endpoints need to be enabled for outbound access in addition to the ones mentioned under [connecting a Kubernetes cluster to Azure Arc](../../azure-arc/kubernetes/quickstart-connect-cluster.md#meet-network-requirements).
-- You are a member of the [Contributor](../../role-based-access-control/built-in-roles.md#contributor) role on the Azure Arc cluster resource.
+ | Endpoint | Port |
+ |---|---|
+ | `*.ods.opinsights.azure.com` | 443 |
+ | `*.oms.opinsights.azure.com` | 443 |
+ | `dc.services.visualstudio.com` | 443 |
+ | `*.monitoring.azure.com` | 443 |
+ | `login.microsoftonline.com` | 443 |
-- To view the monitoring data, you are a member of the [*Log Analytics reader*](../logs/manage-access.md#manage-access-using-azure-permissions) role permission with the Log Analytics workspace configured with Container insights.
+ If your Arc enabled Kubernetes resource is in the Azure US Government environment, the following endpoints need to be enabled for outbound access:
-- [HELM client](https://helm.sh/docs/using_helm/) to onboard the Container insights chart for the specified Kubernetes cluster.
+ | Endpoint | Port |
+ |---|---|
+ | `*.ods.opinsights.azure.us` | 443 |
+ | `*.oms.opinsights.azure.us` | 443 |
+ | `dc.services.visualstudio.com` | 443 |
+
-- The following proxy and firewall configuration information is required for the containerized version of the Log Analytics agent for Linux to communicate with Azure Monitor:
+- If you had previously deployed Azure Monitor Container Insights on this cluster using a script without cluster extensions, follow the instructions listed [here](container-insights-optout-hybrid.md) to delete this Helm chart. You can then continue with creating a cluster extension instance for Azure Monitor Container Insights.
- |Agent Resource|Ports |
- |||
- |`*.ods.opinsights.azure.com` |Port 443 |
- |`*.oms.opinsights.azure.com` |Port 443 |
- |`*.dc.services.visualstudio.com` |Port 443 |
+ >[!NOTE]
+ > The script-based version of deploying Azure Monitor Container Insights (preview) is being replaced by the [cluster extension](../../azure-arc/kubernetes/extensions.md) form of deployment. Azure Monitor deployed previously via script is only supported until June 2021, so we recommend migrating to the cluster extension form of deployment as soon as possible.
-- The containerized agent requires Kubelet's `cAdvisor secure port: 10250` or `unsecure port :10255` to be opened on all nodes in the cluster to collect performance metrics. We recommend you configure `secure port: 10250` on the Kubelet's cAdvisor if it's not configured already.
+### Identify workspace resource ID
-- The containerized agent requires the following environmental variables to be specified on the container in order to communicate with the Kubernetes API service within the cluster to collect inventory data - `KUBERNETES_SERVICE_HOST` and `KUBERNETES_PORT_443_TCP_PORT`.-
- >[!IMPORTANT]
- >The minimum agent version supported for monitoring Arc-enabled Kubernetes clusters is ciprod04162020 or later.
--- [PowerShell Core](/powershell/scripting/install/installing-powershell?view=powershell-6&preserve-view=true) is required if you enable monitoring using the PowerShell scripted method.--- [Bash version 4](https://www.gnu.org/software/bash/) is required if you enable monitoring using the Bash scripted method.-
-## Identify workspace resource ID
-
-To enable monitoring of your cluster using the PowerShell or bash script you downloaded earlier and integrate with an existing Log Analytics workspace, perform the following steps to first identify the full resource ID of your Log Analytics workspace. This is required for the `workspaceResourceId` parameter when you run the command to enable the monitoring add-on against the specified workspace. If you don't have a workspace to specify, you can skip including the `workspaceResourceId` parameter, and let the script create a new workspace for you.
+Run the following commands to locate the full Azure Resource Manager identifier of the Log Analytics workspace.
1. List all the subscriptions that you have access to using the following command:
To enable monitoring of your cluster using the PowerShell or bash script you dow
az account list --all -o table ```
- The output will resemble the following:
-
- ```azurecli
- Name CloudName SubscriptionId State IsDefault
- -- - --
- Microsoft Azure AzureCloud 0fb60ef2-03cc-4290-b595-e71108e8f4ce Enabled True
- ```
-
- Copy the value for **SubscriptionId**.
- 2. Switch to the subscription hosting the Log Analytics workspace using the following command: ```azurecli
To enable monitoring of your cluster using the PowerShell or bash script you dow
az resource list --resource-type Microsoft.OperationalInsights/workspaces -o json ```
- In the output, find the workspace name, and then copy the full resource ID of that Log Analytics workspace under the field **ID**.
-
-## Enable monitoring using PowerShell
-
-1. Download and save the script to a local folder that configures your cluster with the monitoring add-on using the following commands:
+   In the output, find the workspace name of interest. The `id` field of that entry is the Azure Resource Manager identifier of the Log Analytics workspace.
- ```powershell
- Invoke-WebRequest https://aka.ms/enable-monitoring-powershell-script -OutFile enable-monitoring.ps1
- ```
-
-2. Configure the `$azureArcClusterResourceId` variable by setting the corresponding values for `subscriptionId`, `resourceGroupName` and `clusterName` representing the resource ID of your Azure Arc-enabled Kubernetes cluster resource.
+ >[!TIP]
+ > This `id` can also be found in the *Overview* blade of the Log Analytics workspace through the Azure portal.
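If you prefer a single command, the following is a minimal sketch (assuming Azure CLI with the JMESPath `--query` option available) that lists workspace names alongside their full Azure Resource Manager identifiers:

```azurecli
# List Log Analytics workspaces with their Azure Resource Manager IDs
az resource list \
  --resource-type Microsoft.OperationalInsights/workspaces \
  --query "[].{name:name, id:id}" \
  --output table
```

Copy the `id` value of the workspace you want to use; it is the value passed as `logAnalyticsWorkspaceResourceID` in the sections below.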
- ```powershell
- $azureArcClusterResourceId = "/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.Kubernetes/connectedClusters/<clusterName>"
- ```
+## Create extension instance using Azure CLI
-3. Configure the `$kubeContext` variable with the **kube-context** of your cluster by running the command `kubectl config get-contexts`.
+### Option 1 - With default values
- ```powershell
- $kubeContext = "<kubeContext name of your k8s cluster>"
- ```
+This option uses the following defaults:
-4. If you want to use existing Azure Monitor Log Analytics workspace, configure the variable `$logAnalyticsWorkspaceResourceId` with the corresponding value representing the resource ID of the workspace. Otherwise, set the variable to `""` and the script creates a default workspace in the default resource group of the cluster subscription if one does not already exist in the region. The default workspace created resembles the format of *DefaultWorkspace-\<SubscriptionID>-\<Region>*.
+- Creates or uses an existing default Log Analytics workspace corresponding to the region of the cluster
+- Auto-upgrade is enabled for the Azure Monitor cluster extension
- ```powershell
- $logAnalyticsWorkspaceResourceId = "/subscriptions/<subscriptionId>/resourceGroups/<resourceGroup>/providers/microsoft.operationalinsights/workspaces/<workspaceName>"
- ```
+```console
+az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers
+```
-5. If your Arc-enabled Kubernetes cluster communicates through a proxy server, configure the variable `$proxyEndpoint` with the URL of the proxy server. If the cluster does not communicate through a proxy server, then you can set the value to `""`. For more information, see [Configure proxy endpoint](#configure-proxy-endpoint) later in this article.
+### Option 2 - With existing Azure Log Analytics workspace
-6. Run the following command to enable monitoring.
+You can use an existing Azure Log Analytics workspace in any subscription on which you have *Contributor* or a more permissive role assignment.
- ```powershell
- .\enable-monitoring.ps1 -clusterResourceId $azureArcClusterResourceId -kubeContext $kubeContext -workspaceResourceId $logAnalyticsWorkspaceResourceId -proxyEndpoint $proxyEndpoint
- ```
+```console
+az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers --configuration-settings logAnalyticsWorkspaceResourceID=<armResourceIdOfExistingWorkspace>
+```
-After you've enabled monitoring, it might take about 15 minutes before you can view health metrics for the cluster.
+### Option 3 - With advanced configuration
-### Using service principal
-The script *enable-monitoring.ps1* uses the interactive device login. If you prefer non-interactive login, you can use an existing service principal or create a new one that has the required permissions as described in [Prerequisites](#prerequisites). To use service principal, you will have to pass $servicePrincipalClientId, $servicePrincipalClientSecret and $tenantId parameters with values of service principal you have intended to use to *enable-monitoring.ps1* script.
+If you want to tweak the default resource requests and limits, you can use the advanced configuration settings:
-```powershell
-$subscriptionId = "<subscription Id of the Azure Arc connected cluster resource>"
-$servicePrincipal = New-AzADServicePrincipal -Role Contributor -Scope "/subscriptions/$subscriptionId"
+```console
+az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers --configuration-settings omsagent.resources.daemonset.limits.cpu=150m omsagent.resources.daemonset.limits.memory=600Mi omsagent.resources.deployment.limits.cpu=1 omsagent.resources.deployment.limits.memory=750Mi
```
-The role assignment below is only applicable if you are using existing Log Analytics workspace in a different Azure Subscription than the Arc K8s Connected Cluster resource.
+Check out the [resource requests and limits section of the Helm chart](https://github.com/helm/charts/blob/master/incubator/azuremonitor-containers/values.yaml) for the available configuration settings.
-```powershell
-$logAnalyticsWorkspaceResourceId = "<Azure Resource Id of the Log Analytics Workspace>" # format of the Azure Log Analytics workspace should be /subscriptions/<subId>/resourcegroups/<rgName>/providers/microsoft.operationalinsights/workspaces/<workspaceName>
-New-AzRoleAssignment -RoleDefinitionName 'Log Analytics Contributor' -ObjectId $servicePrincipal.Id -Scope $logAnalyticsWorkspaceResourceId
+### Option 4 - On Azure Stack Edge
-$servicePrincipalClientId = $servicePrincipal.ApplicationId.ToString()
-$servicePrincipalClientSecret = [System.Net.NetworkCredential]::new("", $servicePrincipal.Secret).Password
-$tenantId = (Get-AzSubscription -SubscriptionId $subscriptionId).TenantId
-```
-
-For example:
+If the Azure Arc enabled Kubernetes cluster is on Azure Stack Edge, then a custom mount path `/home/data/docker` needs to be used.
-```powershell
-.\enable-monitoring.ps1 -clusterResourceId $azureArcClusterResourceId -servicePrincipalClientId $servicePrincipalClientId -servicePrincipalClientSecret $servicePrincipalClientSecret -tenantId $tenantId -kubeContext $kubeContext -workspaceResourceId $logAnalyticsWorkspaceResourceId -proxyEndpoint $proxyEndpoint
+```console
+az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers --configuration-settings omsagent.logsettings.custommountpath=/home/data/docker
```
+>[!NOTE]
+> If you are explicitly specifying the version of the extension to be installed in the create command, then ensure that the version specified is >= 2.8.2.
+## Create extension instance using Azure portal
-## Enable using bash script
+>[!IMPORTANT]
+> If you are deploying Azure Monitor on a Kubernetes cluster running on top of Azure Stack Edge, follow the Azure CLI option instead of the Azure portal option, because a custom mount path needs to be set for these clusters.
-Perform the following steps to enable monitoring using the provided bash script.
+### Onboarding from the Azure Arc enabled Kubernetes resource blade
-1. Download and save the script to a local folder that configures your cluster with the monitoring add-on using the following commands:
+1. In the Azure portal, select the Arc enabled Kubernetes cluster that you wish to monitor.
- ```bash
- curl -o enable-monitoring.sh -L https://aka.ms/enable-monitoring-bash-script
- ```
+2. Select the 'Insights (preview)' item under the 'Monitoring' section of the resource blade.
-2. Configure the `azureArcClusterResourceId` variable by setting the corresponding values for `subscriptionId`, `resourceGroupName` and `clusterName` representing the resource ID of your Azure Arc-enabled Kubernetes cluster resource.
+3. On the onboarding page, select the 'Configure Azure Monitor' button
- ```bash
- export azureArcClusterResourceId="/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.Kubernetes/connectedClusters/<clusterName>"
- ```
+4. You can now choose the [Log Analytics workspace](../logs/quick-create-workspace.md) to send your metrics and logs data to.
-3. Configure the `kubeContext` variable with the **kube-context** of your cluster by running the command `kubectl config get-contexts`.
+5. Select the 'Configure' button to deploy the Azure Monitor Container Insights cluster extension.
- ```bash
- export kubeContext="<kubeContext name of your k8s cluster>"
- ```
-
-4. If you want to use existing Azure Monitor Log Analytics workspace, configure the variable `logAnalyticsWorkspaceResourceId` with the corresponding value representing the resource ID of the workspace. Otherwise, set the variable to `""` and the script creates a default workspace in the default resource group of the cluster subscription if one does not already exist in the region. The default workspace created resembles the format of *DefaultWorkspace-\<SubscriptionID>-\<Region>*.
+### Onboarding from Azure Monitor blade
- ```bash
- export logAnalyticsWorkspaceResourceId="/subscriptions/<subscriptionId>/resourceGroups/<resourceGroup>/providers/microsoft.operationalinsights/workspaces/<workspaceName>"
- ```
+1. In the Azure portal, navigate to the 'Monitor' blade, and select the 'Containers' option under the 'Insights' menu.
-5. If your Arc-enabled Kubernetes cluster communicates through a proxy server, configure the variable `proxyEndpoint` with the URL of the proxy server. If the cluster does not communicate through a proxy server, then you can set the value to `""`. For more information, see [Configure proxy endpoint](#configure-proxy-endpoint) later in this article.
+2. Select the 'Unmonitored clusters' tab to view the Azure Arc enabled Kubernetes clusters that you can enable monitoring for.
-6. To enable monitoring on your cluster, there are different commands provided based on your deployment scenario.
+3. Click on the 'Enable' link next to the cluster that you want to enable monitoring for.
- Run the following command to enable monitoring with default options, such as using current kube-context, create a default Log Analytics workspace, and without specifying a proxy server:
+4. Choose the Log Analytics workspace and select the 'Configure' button to continue.
- ```bash
- bash enable-monitoring.sh --resource-id $azureArcClusterResourceId
- ```
+## Create extension instance using Azure Resource Manager
- Run the following command to create a default Log Analytics workspace and without specifying a proxy server:
+1. Download the Azure Resource Manager template and parameter file:
- ```bash
- bash enable-monitoring.sh --resource-id $azureArcClusterResourceId --kube-context $kubeContext
+ ```console
+ curl -L https://aka.ms/arc-k8s-azmon-extension-arm-template -o arc-k8s-azmon-extension-arm-template.json
+ curl -L https://aka.ms/arc-k8s-azmon-extension-arm-template-params -o arc-k8s-azmon-extension-arm-template-params.json
```
- Run the following command to use an existing Log Analytics workspace and without specifying a proxy server:
+2. Update the parameter values in the arc-k8s-azmon-extension-arm-template-params.json file. For the Azure public cloud, use `opinsights.azure.com` as the value of `workspaceDomain`. An example of overriding this value from the command line is shown after these steps.
- ```bash
- bash enable-monitoring.sh --resource-id $azureArcClusterResourceId --kube-context $kubeContext --workspace-id $logAnalyticsWorkspaceResourceId
- ```
-
- Run the following command to use an existing Log Analytics workspace and specify a proxy server:
+3. Deploy the template to create the Azure Monitor Container Insights extension:
- ```bash
- bash enable-monitoring.sh --resource-id $azureArcClusterResourceId --kube-context $kubeContext --workspace-id $logAnalyticsWorkspaceResourceId --proxy $proxyEndpoint
+ ```console
+ az login
+ az account set --subscription "Subscription Name"
+ az deployment group create --resource-group <resource-group> --template-file ./arc-k8s-azmon-extension-arm-template.json --parameters @./arc-k8s-azmon-extension-arm-template-params.json
```
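As an alternative to editing the parameter file in step 2, a minimal sketch (assuming the template exposes the `workspaceDomain` parameter described above) passes the value as an inline override; key=value pairs supplied on the command line take precedence over the parameter file:

```console
az deployment group create \
  --resource-group <resource-group> \
  --template-file ./arc-k8s-azmon-extension-arm-template.json \
  --parameters @./arc-k8s-azmon-extension-arm-template-params.json \
  --parameters workspaceDomain=opinsights.azure.com
```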
-After you've enabled monitoring, it might take about 15 minutes before you can view health metrics for the cluster.
-
-### Using service principal
-The bash script *enable-monitoring.sh* uses the interactive device login. If you prefer non-interactive login, you can use an existing service principal or create a new one that has the required permissions as described in [Prerequisites](#prerequisites). To use service principal, you will have to pass --client-id, --client-secret and --tenant-id values of service principal you have intended to use to *enable-monitoring.sh* bash script.
-
-```bash
-subscriptionId="<subscription Id of the Azure Arc connected cluster resource>"
-servicePrincipal=$(az ad sp create-for-rbac --role="Contributor" --scopes="/subscriptions/${subscriptionId}")
-servicePrincipalClientId=$(echo $servicePrincipal | jq -r '.appId')
-```
-
-The role assignment below is only applicable if you are using existing Log Analytics workspace in a different Azure Subscription than the Arc K8s Connected Cluster resource.
-
-```bash
-logAnalyticsWorkspaceResourceId="<Azure Resource Id of the Log Analytics Workspace>" # format of the Azure Log Analytics workspace should be /subscriptions/<subId>/resourcegroups/<rgName>/providers/microsoft.operationalinsights/workspaces/<workspaceName>
-az role assignment create --role 'Log Analytics Contributor' --assignee $servicePrincipalClientId --scope $logAnalyticsWorkspaceResourceId
-
-servicePrincipalClientSecret=$(echo $servicePrincipal | jq -r '.password')
-tenantId=$(echo $servicePrincipal | jq -r '.tenant')
-```
+## Delete extension instance
-For example:
+The following command only deletes the extension instance, but doesn't delete the Log Analytics workspace. The data within the Log Analytics resource is left intact.
```bash
-bash enable-monitoring.sh --resource-id $azureArcClusterResourceId --client-id $servicePrincipalClientId --client-secret $servicePrincipalClientSecret --tenant-id $tenantId --kube-context $kubeContext --workspace-id $logAnalyticsWorkspaceResourceId --proxy $proxyEndpoint
+az k8s-extension delete --name azuremonitor-containers --cluster-type connectedClusters --cluster-name <cluster-name> --resource-group <resource-group>
```
-## Configure proxy endpoint
-
-With the containerized agent for Container insights, you can configure a proxy endpoint to allow it to communicate through your proxy server. Communication between the containerized agent and Azure Monitor can be an HTTP or HTTPS proxy server, and both anonymous and basic authentication (username/password) are supported.
-
-The proxy configuration value has the following syntax: `[protocol://][user:password@]proxyhost[:port]`
-
-> [!NOTE]
->If your proxy server does not require authentication, you still need to specify a psuedo username/password. This can be any username or password.
-
-|Property| Description |
-|--|-|
-|Protocol | http or https |
-|user | Optional username for proxy authentication |
-|password | Optional password for proxy authentication |
-|proxyhost | Address or FQDN of the proxy server |
-|port | Optional port number for the proxy server |
-
-For example: `http://user01:password@proxy01.contoso.com:3128`
-
-If you specify the protocol as **http**, the HTTP requests are created using SSL/TLS secure connection. Your proxy server must support SSL/TLS protocols.
-
-### Configure using PowerShell
-
-Specify the username and password, IP address or FQDN, and port number for the proxy server. For example:
-
-```powershell
-$proxyEndpoint = https://<user>:<password>@<proxyhost>:<port>
-```
-
-### Configure using bash
-
-Specify the username and password, IP address or FQDN, and port number for the proxy server. For example:
-
-```bash
-export proxyEndpoint=https://<user>:<password>@<proxyhost>:<port>
-```
+## Disconnected cluster
+If your cluster is disconnected from Azure for more than 48 hours, Azure Resource Graph won't have information about your cluster. As a result, the Insights blade may display incorrect information about your cluster state.
## Next steps
export proxyEndpoint=https://<user>:<password>@<proxyhost>:<port>
- By default, the containerized agent collects the stdout/ stderr container logs of all the containers running in all the namespaces except kube-system. To configure container log collection specific to particular namespace or namespaces, review [Container Insights agent configuration](container-insights-agent-config.md) to configure desired data collection settings to your ConfigMap configurations file. - To scrape and analyze Prometheus metrics from your cluster, review [Configure Prometheus metrics scraping](container-insights-prometheus-integration.md)--- To learn how to stop monitoring your Arc enabled Kubernetes cluster with Container insights, see [How to stop monitoring your hybrid cluster](container-insights-optout-hybrid.md#how-to-stop-monitoring-on-arc-enabled-kubernetes).
azure-monitor Container Insights Manage Agent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/containers/container-insights-manage-agent.md
export azureAroV4ClusterResourceId="/subscriptions/<subscriptionId>/resourceGrou
bash upgrade-monitoring.sh --resource-id $ azureAroV4ClusterResourceId ```
-See **Using service principal** in [Enable monitoring of Azure Arc enabled Kubernetes cluster](container-insights-enable-arc-enabled-clusters.md#enable-using-bash-script) for details on using a service principal with this command.
-
-### Upgrade agent on Azure Arc enabled Kubernetes
-
-Perform the following command to upgrade the agent on an Azure Arc enabled Kubernetes cluster.
-
-```console
-curl -o upgrade-monitoring.sh -L https://aka.ms/upgrade-monitoring-bash-script
-export azureArcClusterResourceId="/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.Kubernetes/connectedClusters/<clusterName>"
-bash upgrade-monitoring.sh --resource-id $azureArcClusterResourceId
-```
-
-See **Using service principal** in [Enable monitoring of Azure Arc enabled Kubernetes cluster](container-insights-enable-arc-enabled-clusters.md#enable-using-bash-script) for details on using a service principal with this command.
-- ## How to disable environment variable collection on a container Container insights collects environmental variables from the containers running in a pod and presents them in the property pane of the selected container in the **Containers** view. You can control this behavior by disabling collection for a specific container either during deployment of the Kubernetes cluster, or after by setting the environment variable *AZMON_COLLECT_ENV*. This feature is available from the agent version - ciprod11292018 and higher.
azure-monitor Metrics Aggregation Explained https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/essentials/metrics-aggregation-explained.md
Last updated 03/10/2021- # Azure Monitor Metrics metrics aggregation and display explained
azure-monitor Design Logs Deployment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/design-logs-deployment.md
Title: Designing your Azure Monitor Logs deployment | Microsoft Docs description: This article describes the considerations and recommendations for customers preparing to deploy a workspace in Azure Monitor.-
azure-monitor Logs Data Export https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/logs-data-export.md
Title: Log Analytics workspace data export in Azure Monitor (preview) description: Log Analytics data export allows you to continuously export data of selected tables from your Log Analytics workspace to an Azure storage account or Azure Event Hubs as it's collected. -
azure-monitor Vminsights Health Enable https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/vm/vminsights-health-enable.md
description: Describes how to enable VM insights guest health in your subscripti
Previously updated : 11/16/2020 Last updated : 04/05/2021
VM insights guest health has the following limitations in public preview:
## Supported operating systems Virtual Machine must run one of the following operating systems:
+ - CentOS 7.5, 7.6, 7.7, 7.8, 7.9
+ - RedHat 7.5, 7.6, 7.7, 7.8, 7.9
- Ubuntu 16.04 LTS, Ubuntu 18.04 LTS - Windows Server 2012 or later
azure-netapp-files Azure Netapp Files Faqs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azure-netapp-files-faqs.md
na ms.devlang: na Previously updated : 03/29/2021 Last updated : 04/05/2021 # FAQs About Azure NetApp Files
No, currently you cannot apply Network Security Groups to the delegated subnet o
### Can I use Azure RBAC with Azure NetApp Files?
-Yes, Azure NetApp Files supports Azure RBAC features.
+Yes, Azure NetApp Files supports Azure RBAC features. Along with the built-in Azure roles, you can [create custom roles](../role-based-access-control/custom-roles.md) for Azure NetApp Files.
+
+For the complete list of Azure NetApp Files permissions, see Azure resource provider operations for [`Microsoft.NetApp`](../role-based-access-control/resource-provider-operations.md#microsoftnetapp).
+
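For illustration, here's a minimal sketch of defining a read-only custom role with Azure CLI; the role name and the specific `Microsoft.NetApp` actions shown are assumptions for the example, so verify them against the resource provider operations list linked above:

```bash
# Example custom role definition (adjust actions and scope to your needs)
cat > anf-reader-role.json <<'EOF'
{
  "Name": "Azure NetApp Files Reader (example)",
  "Description": "Example role that can read Azure NetApp Files accounts, pools, and volumes.",
  "Actions": [
    "Microsoft.NetApp/netAppAccounts/read",
    "Microsoft.NetApp/netAppAccounts/capacityPools/read",
    "Microsoft.NetApp/netAppAccounts/capacityPools/volumes/read"
  ],
  "NotActions": [],
  "AssignableScopes": [ "/subscriptions/<subscription-id>" ]
}
EOF

# Create the custom role from the definition file
az role definition create --role-definition @anf-reader-role.json
```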
+### Are Azure Activity Logs supported on Azure NetApp Files?
+
+Azure NetApp Files is an Azure native service. All PUT, POST, and DELETE APIs against Azure NetApp Files are logged. For example, the logs show activities such as who created the snapshot, who modified the volume, and so on.
+
+For the complete list of API operations, see [Azure NetApp Files REST API](/rest/api/netapp/).
+
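For example, a minimal sketch of reviewing the last week of control-plane operations in a resource group that holds Azure NetApp Files resources (the resource group name is a placeholder) with Azure CLI:

```azurecli
# Show who did what against resources in the group over the past 7 days
az monitor activity-log list \
  --resource-group <anf-resource-group> \
  --offset 7d \
  --query "[].{operation:operationName.value, caller:caller, status:status.value}" \
  --output table
```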
+### How do I audit file access on Azure NetApp Files NFS (v3 and v4.1) volumes?
+
+You can configure audit logs on the client side. All read, write, and attribute changes are logged.
+
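For example, on a Linux NFS client you might use the kernel audit framework; the following is a minimal sketch that assumes `auditd` is installed and running, and the mount path and rule key are placeholders:

```bash
# Watch the mounted Azure NetApp Files volume for reads, writes, and attribute changes
sudo auditctl -w /mnt/anf-volume -p rwa -k anf-file-access

# Review the events recorded for that rule key
sudo ausearch -k anf-file-access
```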
+### Can I use Azure policies with Azure NetApp Files?
+
+Yes, you can create [custom Azure policies](../governance/policy/tutorials/create-custom-policy-definition.md).
+
+However, you cannot create Azure policies (custom naming policies) on the Azure NetApp Files interface. See [Guidelines for Azure NetApp Files network planning](azure-netapp-files-network-topologies.md#considerations).
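As an illustration, here's a minimal sketch of a custom policy definition created with Azure CLI that audits Azure NetApp Files volumes missing an `environment` tag; the policy name, tag name, and resource type string are assumptions for the example:

```bash
# Example policy rule (adjust the resource type and tag to your needs)
cat > anf-audit-tag-policy.json <<'EOF'
{
  "if": {
    "allOf": [
      { "field": "type", "equals": "Microsoft.NetApp/netAppAccounts/capacityPools/volumes" },
      { "field": "tags['environment']", "exists": "false" }
    ]
  },
  "then": { "effect": "audit" }
}
EOF

# Create the custom policy definition
az policy definition create \
  --name audit-anf-volume-environment-tag-example \
  --display-name "Audit ANF volumes without an environment tag (example)" \
  --rules @anf-audit-tag-policy.json \
  --mode Indexed
```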
## Performance FAQs
azure-resource-manager Copy Outputs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/copy-outputs.md
Title: Define multiple instances of an output value description: Use copy operation in an Azure Resource Manager template (ARM template) to iterate multiple times when returning a value from a deployment. Previously updated : 04/17/2020 Last updated : 04/01/2021 # Output iteration in ARM templates
-This article shows you how to create more than one value for an output in your Azure Resource Manager template (ARM template). By adding the `copy` element to the outputs section of your template, you can dynamically return a number of items during deployment.
+This article shows you how to create more than one value for an output in your Azure Resource Manager template (ARM template). By adding a copy loop to the outputs section of your template, you can dynamically return a number of items during deployment.
-You can also use copy with [resources](copy-resources.md), [properties in a resource](copy-properties.md), and [variables](copy-variables.md).
+You can also use a copy loop with [resources](copy-resources.md), [properties in a resource](copy-properties.md), and [variables](copy-variables.md).
## Syntax
-The copy element has the following general format:
+# [JSON](#tab/json)
+
+Add the `copy` element to the outputs section of your template to return a number of items. The copy element has the following general format:
```json "copy": {
The `count` property specifies the number of iterations you want for the output
The `input` property specifies the properties that you want to repeat. You create an array of elements constructed from the value in the `input` property. It can be a single property (like a string), or an object with several properties.
+# [Bicep](#tab/bicep)
+
+Loops can be used to return a number of items during deployment:
+
+- Iterating over an array:
+
+ ```bicep
+ output <output-name> array = [for <item> in <collection>: {
+ <properties>
+ }]
+
+ ```
+
+- Iterating over the elements of an array
+
+ ```bicep
+ output <output-name> array = [for (<item>, <index>) in <collection>: {
+ <properties>
+ }]
+ ```
+
+- Using loop index
+
+ ```bicep
+ output <output-name> array = [for <index> in range(<start>, <stop>): {
+ <properties>
+ }]
+ ```
+++ ## Copy limits The count can't exceed 800. The count can't be a negative number. It can be zero if you deploy the template with a recent version of Azure CLI, PowerShell, or REST API. Specifically, you must use:
-* Azure PowerShell **2.6** or later
-* Azure CLI **2.0.74** or later
-* REST API version **2019-05-10** or later
-* [Linked deployments](linked-templates.md) must use API version **2019-05-10** or later for the deployment resource type
+- Azure PowerShell **2.6** or later
+- Azure CLI **2.0.74** or later
+- REST API version **2019-05-10** or later
+- [Linked deployments](linked-templates.md) must use API version **2019-05-10** or later for the deployment resource type
Earlier versions of PowerShell, CLI, and the REST API don't support zero for count.
Earlier versions of PowerShell, CLI, and the REST API don't support zero for cou
The following example creates a variable number of storage accounts and returns an endpoint for each storage account:
+# [JSON](#tab/json)
+ ```json {
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "storageCount": {
- "type": "int",
- "defaultValue": 2
- }
- },
- "variables": {
- "baseName": "[concat('storage', uniqueString(resourceGroup().id))]"
- },
- "resources": [
- {
- "type": "Microsoft.Storage/storageAccounts",
- "apiVersion": "2019-04-01",
- "name": "[concat(copyIndex(), variables('baseName'))]",
- "location": "[resourceGroup().location]",
- "sku": {
- "name": "Standard_LRS"
- },
- "kind": "Storage",
- "properties": {},
- "copy": {
- "name": "storagecopy",
- "count": "[parameters('storageCount')]"
- }
- }
- ],
- "outputs": {
- "storageEndpoints": {
- "type": "array",
- "copy": {
- "count": "[parameters('storageCount')]",
- "input": "[reference(concat(copyIndex(), variables('baseName'))).primaryEndpoints.blob]"
- }
- }
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "storageCount": {
+ "type": "int",
+ "defaultValue": 2
}
+ },
+ "variables": {
+ "baseName": "[concat('storage', uniqueString(resourceGroup().id))]"
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Storage/storageAccounts",
+ "apiVersion": "2019-04-01",
+ "name": "[concat(copyIndex(), variables('baseName'))]",
+ "location": "[resourceGroup().location]",
+ "sku": {
+ "name": "Standard_LRS"
+ },
+ "kind": "Storage",
+ "properties": {},
+ "copy": {
+ "name": "storagecopy",
+ "count": "[parameters('storageCount')]"
+ }
+ }
+ ],
+ "outputs": {
+ "storageEndpoints": {
+ "type": "array",
+ "copy": {
+ "count": "[parameters('storageCount')]",
+ "input": "[reference(concat(copyIndex(), variables('baseName'))).primaryEndpoints.blob]"
+ }
+ }
+ }
} ```
+# [Bicep](#tab/bicep)
+
+```bicep
+param storageCount int = 2
+
+var baseName_var = 'storage${uniqueString(resourceGroup().id)}'
+
+resource baseName 'Microsoft.Storage/storageAccounts@2019-04-01' = [for i in range(0, storageCount): {
+ name: '${i}${baseName_var}'
+ location: resourceGroup().location
+ sku: {
+ name: 'Standard_LRS'
+ }
+ kind: 'Storage'
+ properties: {}
+}]
+
+output storageEndpoints array = [for i in range(0, storageCount): reference('${i}${baseName_var}').primaryEndpoints.blob]
+```
+++ The preceding template returns an array with the following values: ```json [
- "https://0storagecfrbqnnmpeudi.blob.core.windows.net/",
- "https://1storagecfrbqnnmpeudi.blob.core.windows.net/"
+ "https://0storagecfrbqnnmpeudi.blob.core.windows.net/",
+ "https://1storagecfrbqnnmpeudi.blob.core.windows.net/"
] ``` The next example returns three properties from the new storage accounts.
+# [JSON](#tab/json)
+ ```json {
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "storageCount": {
- "type": "int",
- "defaultValue": 2
- }
- },
- "variables": {
- "baseName": "[concat('storage', uniqueString(resourceGroup().id))]"
- },
- "resources": [
- {
- "type": "Microsoft.Storage/storageAccounts",
- "apiVersion": "2019-04-01",
- "name": "[concat(copyIndex(), variables('baseName'))]",
- "location": "[resourceGroup().location]",
- "sku": {
- "name": "Standard_LRS"
- },
- "kind": "Storage",
- "properties": {},
- "copy": {
- "name": "storagecopy",
- "count": "[parameters('storageCount')]"
- }
- }
- ],
- "outputs": {
- "storageInfo": {
- "type": "array",
- "copy": {
- "count": "[parameters('storageCount')]",
- "input": {
- "id": "[reference(concat(copyIndex(), variables('baseName')), '2019-04-01', 'Full').resourceId]",
- "blobEndpoint": "[reference(concat(copyIndex(), variables('baseName'))).primaryEndpoints.blob]",
- "status": "[reference(concat(copyIndex(), variables('baseName'))).statusOfPrimary]"
- }
- }
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "storageCount": {
+ "type": "int",
+ "defaultValue": 2
+ }
+ },
+ "variables": {
+ "baseName": "[concat('storage', uniqueString(resourceGroup().id))]"
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Storage/storageAccounts",
+ "apiVersion": "2019-04-01",
+ "name": "[concat(copyIndex(), variables('baseName'))]",
+ "location": "[resourceGroup().location]",
+ "sku": {
+ "name": "Standard_LRS"
+ },
+ "kind": "Storage",
+ "properties": {},
+ "copy": {
+ "name": "storagecopy",
+ "count": "[parameters('storageCount')]"
+ }
+ }
+ ],
+ "outputs": {
+ "storageInfo": {
+ "type": "array",
+ "copy": {
+ "count": "[parameters('storageCount')]",
+ "input": {
+ "id": "[reference(concat(copyIndex(), variables('baseName')), '2019-04-01', 'Full').resourceId]",
+ "blobEndpoint": "[reference(concat(copyIndex(), variables('baseName'))).primaryEndpoints.blob]",
+ "status": "[reference(concat(copyIndex(), variables('baseName'))).statusOfPrimary]"
}
+ }
}
+ }
} ```
+# [Bicep](#tab/bicep)
+
+```bicep
+param storageCount int = 2
+
+var baseName_var = 'storage${uniqueString(resourceGroup().id)}'
+
+resource baseName 'Microsoft.Storage/storageAccounts@2019-04-01' = [for i in range(0, storageCount): {
+ name: '${i}${baseName_var}'
+ location: resourceGroup().location
+ sku: {
+ name: 'Standard_LRS'
+ }
+ kind: 'Storage'
+ properties: {}
+}]
+
+output storageInfo array = [for i in range(0, storageCount): {
+ id: reference(concat(i, baseName_var), '2019-04-01', 'Full').resourceId
+ blobEndpoint: reference(concat(i, baseName_var)).primaryEndpoints.blob
+ status: reference(concat(i, baseName_var)).statusOfPrimary
+}]
+```
+++ The preceding example returns an array with the following values: ```json [
- {
- "id": "Microsoft.Storage/storageAccounts/0storagecfrbqnnmpeudi",
- "blobEndpoint": "https://0storagecfrbqnnmpeudi.blob.core.windows.net/",
- "status": "available"
- },
- {
- "id": "Microsoft.Storage/storageAccounts/1storagecfrbqnnmpeudi",
- "blobEndpoint": "https://1storagecfrbqnnmpeudi.blob.core.windows.net/",
- "status": "available"
- }
+ {
+ "id": "Microsoft.Storage/storageAccounts/0storagecfrbqnnmpeudi",
+ "blobEndpoint": "https://0storagecfrbqnnmpeudi.blob.core.windows.net/",
+ "status": "available"
+ },
+ {
+ "id": "Microsoft.Storage/storageAccounts/1storagecfrbqnnmpeudi",
+ "blobEndpoint": "https://1storagecfrbqnnmpeudi.blob.core.windows.net/",
+ "status": "available"
+ }
] ``` ## Next steps
-* To go through a tutorial, see [Tutorial: Create multiple resource instances with ARM templates](template-tutorial-create-multiple-instances.md).
-* For other uses of the copy element, see:
- * [Resource iteration in ARM templates](copy-resources.md)
- * [Property iteration in ARM templates](copy-properties.md)
- * [Variable iteration in ARM templates](copy-variables.md)
-* If you want to learn about the sections of a template, see [Understand the structure and syntax of ARM templates](template-syntax.md).
-* To learn how to deploy your template, see [Deploy resources with ARM templates and Azure PowerShell](deploy-powershell.md).
+- To go through a tutorial, see [Tutorial: Create multiple resource instances with ARM templates](template-tutorial-create-multiple-instances.md).
+- For other uses of the copy loop, see:
+ - [Resource iteration in ARM templates](copy-resources.md)
+ - [Property iteration in ARM templates](copy-properties.md)
+ - [Variable iteration in ARM templates](copy-variables.md)
+- If you want to learn about the sections of a template, see [Understand the structure and syntax of ARM templates](template-syntax.md).
+- To learn how to deploy your template, see [Deploy resources with ARM templates and Azure PowerShell](deploy-powershell.md).
azure-resource-manager Copy Properties https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/copy-properties.md
Title: Define multiple instances of a property description: Use copy operation in an Azure Resource Manager template (ARM template) to iterate multiple times when creating a property on a resource. Previously updated : 09/15/2020 Last updated : 04/01/2021 # Property iteration in ARM templates
-This article shows you how to create more than one instance of a property in your Azure Resource Manager template (ARM template). By adding the `copy` element to the properties section of a resource in your template, you can dynamically set the number of items for a property during deployment. You also avoid having to repeat template syntax.
+This article shows you how to create more than one instance of a property in your Azure Resource Manager template (ARM template). By adding a copy loop to the properties section of a resource in your template, you can dynamically set the number of items for a property during deployment. You also avoid having to repeat template syntax.
-You can only use `copy` with top-level resources, even when applying `copy` to a property. To learn about changing a child resource to a top-level resource, see [Iteration for a child resource](copy-resources.md#iteration-for-a-child-resource).
+You can only use a copy loop with top-level resources, even when applying the copy loop to a property. To learn about changing a child resource to a top-level resource, see [Iteration for a child resource](copy-resources.md#iteration-for-a-child-resource).
-You can also use copy with [resources](copy-resources.md), [variables](copy-variables.md), and [outputs](copy-outputs.md).
+You can also use a copy loop with [resources](copy-resources.md), [variables](copy-variables.md), and [outputs](copy-outputs.md).
## Syntax
-The copy element has the following general format:
+# [JSON](#tab/json)
+
+Add the `copy` element to the properties section of a resource in your template to set the number of items for a property. The copy element has the following general format:
```json "copy": [
The `count` property specifies the number of iterations you want for the propert
The `input` property specifies the properties that you want to repeat. You create an array of elements constructed from the value in the `input` property.
+# [Bicep](#tab/bicep)
+
+Loops can be used to declare multiple properties by:
+
+- Iterating over an array:
+
+ ```bicep
+ <property-name>: [for <item> in <collection>: {
+ <properties>
+ }]
+ ```
+
+- Iterating over the elements of an array
+
+ ```bicep
+ <property-name>: [for (<item>, <index>) in <collection>: {
+ <properties>
+ }]
+ ```
+
+- Using loop index
+
+ ```bicep
+ <property-name>: [for <index> in range(<start>, <stop>): {
+ <properties>
+ }]
+ ```
+++ ## Copy limits The count can't exceed 800. The count can't be a negative number. It can be zero if you deploy the template with a recent version of Azure CLI, PowerShell, or REST API. Specifically, you must use:
-* Azure PowerShell **2.6** or later
-* Azure CLI **2.0.74** or later
-* REST API version **2019-05-10** or later
-* [Linked deployments](linked-templates.md) must use API version **2019-05-10** or later for the deployment resource type
+- Azure PowerShell **2.6** or later
+- Azure CLI **2.0.74** or later
+- REST API version **2019-05-10** or later
+- [Linked deployments](linked-templates.md) must use API version **2019-05-10** or later for the deployment resource type
Earlier versions of PowerShell, CLI, and the REST API don't support zero for count. ## Property iteration
-The following example shows how to apply `copy` to the `dataDisks` property on a virtual machine:
+The following example shows how to apply a copy loop to the `dataDisks` property on a virtual machine:
+
+# [JSON](#tab/json)
```json {
The following example shows how to apply `copy` to the `dataDisks` property on a
"resources": [ { "type": "Microsoft.Compute/virtualMachines",
- "apiVersion": "2017-03-30",
+ "apiVersion": "2020-06-01",
... "properties": { "storageProfile": {
The following example shows how to apply `copy` to the `dataDisks` property on a
"name": "dataDisks", "count": "[parameters('numberOfDataDisks')]", "input": {
- "diskSizeGB": 1023,
"lun": "[copyIndex('dataDisks')]",
- "createOption": "Empty"
+ "createOption": "Empty",
+ "diskSizeGB": 1023
} } ] }
+ ...
} } ]
The following example shows how to apply `copy` to the `dataDisks` property on a
Notice that when using `copyIndex` inside a property iteration, you must provide the name of the iteration. Property iteration also supports an offset argument. The offset must come after the name of the iteration, such as `copyIndex('dataDisks', 1)`.
-Resource Manager expands the `copy` array during deployment. The name of the array becomes the name of the property. The input values become the object properties. The deployed template becomes:
+The deployed template becomes:
```json { "name": "examplevm", "type": "Microsoft.Compute/virtualMachines",
- "apiVersion": "2017-03-30",
+ "apiVersion": "2020-06-01",
"properties": { "storageProfile": { "dataDisks": [
The following example template creates a failover group for databases that are p
```json {
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "primaryServerName": {
- "type": "string"
- },
- "secondaryServerName": {
- "type": "string"
- },
- "databaseNames": {
- "type": "array",
- "defaultValue": [
- "mydb1",
- "mydb2",
- "mydb3"
- ]
- }
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "primaryServerName": {
+ "type": "string"
},
- "variables": {
- "failoverName": "[concat(parameters('primaryServerName'),'/', parameters('primaryServerName'),'failovergroups')]"
+ "secondaryServerName": {
+ "type": "string"
},
- "resources": [
- {
- "type": "Microsoft.Sql/servers/failoverGroups",
- "apiVersion": "2015-05-01-preview",
- "name": "[variables('failoverName')]",
- "properties": {
- "readWriteEndpoint": {
- "failoverPolicy": "Automatic",
- "failoverWithDataLossGracePeriodMinutes": 60
- },
- "readOnlyEndpoint": {
- "failoverPolicy": "Disabled"
- },
- "partnerServers": [
- {
- "id": "[resourceId('Microsoft.Sql/servers', parameters('secondaryServerName'))]"
- }
- ],
- "copy": [
- {
- "name": "databases",
- "count": "[length(parameters('databaseNames'))]",
- "input": "[resourceId('Microsoft.Sql/servers/databases', parameters('primaryServerName'), parameters('databaseNames')[copyIndex('databases')])]"
- }
- ]
- }
- }
- ],
- "outputs": {
+ "databaseNames": {
+ "type": "array",
+ "defaultValue": [
+ "mydb1",
+ "mydb2",
+ "mydb3"
+ ]
}
+ },
+ "variables": {
+ "failoverName": "[concat(parameters('primaryServerName'),'/', parameters('primaryServerName'),'failovergroups')]"
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Sql/servers/failoverGroups",
+ "apiVersion": "2015-05-01-preview",
+ "name": "[variables('failoverName')]",
+ "properties": {
+ "readWriteEndpoint": {
+ "failoverPolicy": "Automatic",
+ "failoverWithDataLossGracePeriodMinutes": 60
+ },
+ "readOnlyEndpoint": {
+ "failoverPolicy": "Disabled"
+ },
+ "partnerServers": [
+ {
+ "id": "[resourceId('Microsoft.Sql/servers', parameters('secondaryServerName'))]"
+ }
+ ],
+ "copy": [
+ {
+ "name": "databases",
+ "count": "[length(parameters('databaseNames'))]",
+ "input": "[resourceId('Microsoft.Sql/servers/databases', parameters('primaryServerName'), parameters('databaseNames')[copyIndex('databases')])]"
+ }
+ ]
+ }
+ }
+ ],
+ "outputs": {
+ }
} ```
The `copy` element is an array so you can specify more than one property for the
} ```
+# [Bicep](#tab/bicep)
+
+```bicep
+@minValue(0)
+@maxValue(16)
+@description('The number of dataDisks to be returned in the output array.')
+param numberOfDataDisks int = 16
+
+resource vmName 'Microsoft.Compute/virtualMachines@2020-06-01' = {
+ ...
+ properties: {
+ storageProfile: {
+ ...
+ dataDisks: [for i in range(0, numberOfDataDisks): {
+ lun: i
+ createOption: 'Empty'
+ diskSizeGB: 1023
+ }]
+ }
+ ...
+ }
+}
+```
+
+The deployed template becomes:
+
+```json
+{
+ "name": "examplevm",
+ "type": "Microsoft.Compute/virtualMachines",
+ "apiVersion": "2020-06-01",
+ "properties": {
+ "storageProfile": {
+ "dataDisks": [
+ {
+ "lun": 0,
+ "createOption": "Empty",
+ "diskSizeGB": 1023
+ },
+ {
+ "lun": 1,
+ "createOption": "Empty",
+ "diskSizeGB": 1023
+ },
+ {
+ "lun": 2,
+ "createOption": "Empty",
+ "diskSizeGB": 1023
+ }
+ ],
+ ...
+```
+++ You can use resource and property iteration together. Reference the property iteration by name.
+# [JSON](#tab/json)
+ ```json { "type": "Microsoft.Network/virtualNetworks",
You can use resource and property iteration together. Reference the property ite
} ```
+# [Bicep](#tab/bicep)
+
+```bicep
+resource vnetname_resource 'Microsoft.Network/virtualNetworks@2018-04-01' = [for i in range(0, 2): {
+ name: concat(vnetname, i)
+ location: resourceGroup().location
+ properties: {
+ addressSpace: {
+ addressPrefixes: [
+ addressPrefix
+ ]
+ }
+ subnets: [for j in range(0, 2): {
+ name: 'subnet-${j}'
+ properties: {
+ addressPrefix: subnetAddressPrefix[j]
+ }
+ }]
+ }
+}]
+```
+++ ## Example templates The following example shows a common scenario for creating more than one value for a property.
The following example shows a common scenario for creating more than one value f
## Next steps
-* To go through a tutorial, see [Tutorial: Create multiple resource instances with ARM templates](template-tutorial-create-multiple-instances.md).
-* For other uses of the copy element, see:
- * [Resource iteration in ARM templates](copy-resources.md)
- * [Variable iteration in ARM templates](copy-variables.md)
- * [Output iteration in ARM templates](copy-outputs.md)
-* If you want to learn about the sections of a template, see [Understand the structure and syntax of ARM templates](template-syntax.md).
-* To learn how to deploy your template, see [Deploy resources with ARM templates and Azure PowerShell](deploy-powershell.md).
+- To go through a tutorial, see [Tutorial: Create multiple resource instances with ARM templates](template-tutorial-create-multiple-instances.md).
+- For other uses of the copy loop, see:
+ - [Resource iteration in ARM templates](copy-resources.md)
+ - [Variable iteration in ARM templates](copy-variables.md)
+ - [Output iteration in ARM templates](copy-outputs.md)
+- If you want to learn about the sections of a template, see [Understand the structure and syntax of ARM templates](template-syntax.md).
+- To learn how to deploy your template, see [Deploy resources with ARM templates and Azure PowerShell](deploy-powershell.md).
azure-resource-manager Copy Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/copy-resources.md
Title: Deploy multiple instances of resources description: Use copy operation and arrays in an Azure Resource Manager template (ARM template) to deploy resource type many times. Previously updated : 12/21/2020 Last updated : 04/01/2021 # Resource iteration in ARM templates
-This article shows you how to create more than one instance of a resource in your Azure Resource Manager template (ARM template). By adding the `copy` element to the resources section of your template, you can dynamically set the number of resources to deploy. You also avoid having to repeat template syntax.
+This article shows you how to create more than one instance of a resource in your Azure Resource Manager template (ARM template). By adding a copy loop to the resources section of your template, you can dynamically set the number of resources to deploy. You also avoid having to repeat template syntax.
-You can also use `copy` with [properties](copy-properties.md), [variables](copy-variables.md), and [outputs](copy-outputs.md).
+You can also use a copy loop with [properties](copy-properties.md), [variables](copy-variables.md), and [outputs](copy-outputs.md).
If you need to specify whether a resource is deployed at all, see [condition element](conditional-resource-deployment.md). ## Syntax
-The `copy` element has the following general format:
+# [JSON](#tab/json)
+
+Add the `copy` element to the resources section of your template to deploy multiple instances of the resource. The `copy` element has the following general format:
```json "copy": {
The `name` property is any value that identifies the loop. The `count` property
Use the `mode` and `batchSize` properties to specify if the resources are deployed in parallel or in sequence. These properties are described in [Serial or Parallel](#serial-or-parallel).
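Put together, a filled-in `copy` element might look like the following sketch (the loop name and the `storageCount` parameter are illustrative; `mode` and `batchSize` are needed only for serial deployment):

```json
"copy": {
  "name": "storagecopy",
  "count": "[parameters('storageCount')]",
  "mode": "serial",
  "batchSize": 2
}
```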
+# [Bicep](#tab/bicep)
+
+Loops can be used to declare multiple resources by:
+
+- Iterating over an array:
+
+ ```bicep
+ @batchSize(<number>)
+ resource <resource-symbolic-name> '<resource-type>@<api-version>' = [for <item> in <collection>: {
+ <resource-properties>
+ }]
+ ```
+
+- Iterating over the elements of an array and their index:
+
+ ```bicep
+ @batchSize(<number>)
+ resource <resource-symbolic-name> '<resource-type>@<api-version>' = [for (<item>, <index>) in <collection>: {
+ <resource-properties>
+ }]
+ ```
+
+- Using a loop index:
+
+ ```bicep
+ @batchSize(<number>)
+ resource <resource-symbolic-name> '<resource-type>@<api-version>' = [for <index> in range(<start>, <stop>): {
+ <resource-properties>
+ }]
+ ```
+++ ## Copy limits The count can't exceed 800. The count can't be a negative number. It can be zero if you deploy the template with a recent version of Azure CLI, PowerShell, or REST API. Specifically, you must use:
-* Azure PowerShell **2.6** or later
-* Azure CLI **2.0.74** or later
-* REST API version **2019-05-10** or later
-* [Linked deployments](linked-templates.md) must use API version **2019-05-10** or later for the deployment resource type
+- Azure PowerShell **2.6** or later
+- Azure CLI **2.0.74** or later
+- REST API version **2019-05-10** or later
+- [Linked deployments](linked-templates.md) must use API version **2019-05-10** or later for the deployment resource type
Earlier versions of PowerShell, CLI, and the REST API don't support zero for count.
-Be careful using [complete mode deployment](deployment-modes.md) with copy. If you redeploy with complete mode to a resource group, any resources that aren't specified in the template after resolving the copy loop are deleted.
+Be careful when using [complete mode deployment](deployment-modes.md) with a copy loop. If you redeploy with complete mode to a resource group, any resources that aren't specified in the template after resolving the copy loop are deleted.
## Resource iteration The following example creates the number of storage accounts specified in the `storageCount` parameter.
+# [JSON](#tab/json)
+ ```json {
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "storageCount": {
- "type": "int",
- "defaultValue": 2
- }
- },
- "resources": [
- {
- "type": "Microsoft.Storage/storageAccounts",
- "apiVersion": "2019-04-01",
- "name": "[concat(copyIndex(),'storage', uniqueString(resourceGroup().id))]",
- "location": "[resourceGroup().location]",
- "sku": {
- "name": "Standard_LRS"
- },
- "kind": "Storage",
- "properties": {},
- "copy": {
- "name": "storagecopy",
- "count": "[parameters('storageCount')]"
- }
- }
- ],
- "outputs": {}
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "storageCount": {
+ "type": "int",
+ "defaultValue": 2
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Storage/storageAccounts",
+ "apiVersion": "2019-04-01",
+ "name": "[concat(copyIndex(),'storage', uniqueString(resourceGroup().id))]",
+ "location": "[resourceGroup().location]",
+ "sku": {
+ "name": "Standard_LRS"
+ },
+ "kind": "Storage",
+ "properties": {},
+ "copy": {
+ "name": "storagecopy",
+ "count": "[parameters('storageCount')]"
+ }
+ }
+ ]
} ```
Notice that the name of each resource includes the `copyIndex()` function, which
Creates these names:
-* storage0
-* storage1
-* storage2.
+- storage0
+- storage1
+- storage2.
To offset the index value, you can pass a value in the `copyIndex()` function. The number of iterations is still specified in the copy element, but the value of `copyIndex` is offset by the specified value. So, the following example:
To offset the index value, you can pass a value in the `copyIndex()` function. T
Creates these names:
-* storage1
-* storage2
-* storage3
+- storage1
+- storage2
+- storage3
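Those names come from offsetting the index in the name expression. A minimal sketch, assuming a hard-coded count of 3:

```json
"name": "[concat('storage', copyIndex(1))]",
"copy": {
  "name": "storagecopy",
  "count": 3
}
```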
The copy operation is helpful when working with arrays because you can iterate through each element in the array. Use the `length` function on the array to specify the count for iterations, and `copyIndex` to retrieve the current index in the array.
+# [Bicep](#tab/bicep)
+
+```bicep
+param storageCount int = 2
+
+resource storage_id 'Microsoft.Storage/storageAccounts@2019-04-01' = [for i in range(0, storageCount): {
+ name: '${i}storage${uniqueString(resourceGroup().id)}'
+ location: resourceGroup().location
+ sku: {
+ name: 'Standard_LRS'
+ }
+ kind: 'Storage'
+ properties: {}
+}]
+```
+
+Notice that the index `i` is used in creating the storage account resource name.
+++ The following example creates one storage account for each name provided in the parameter.
+# [JSON](#tab/json)
+ ```json { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
The following example creates one storage account for each name provided in the
} ```
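In the JSON version, the `length` function supplies the count and `copyIndex()` indexes into the array. The relevant parts look roughly like this sketch, assuming a `storageNames` array parameter like the one in the Bicep tab:

```json
"name": "[concat(parameters('storageNames')[copyIndex()], uniqueString(resourceGroup().id))]",
"copy": {
  "name": "storagecopy",
  "count": "[length(parameters('storageNames'))]"
}
```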
+# [Bicep](#tab/bicep)
+
+```bicep
+param storageNames array = [
+ 'contoso'
+ 'fabrikam'
+ 'coho'
+]
+
+resource storageNames_id 'Microsoft.Storage/storageAccounts@2019-04-01' = [for name in storageNames: {
+ name: concat(name, uniqueString(resourceGroup().id))
+ location: resourceGroup().location
+ sku: {
+ name: 'Standard_LRS'
+ }
+ kind: 'Storage'
+ properties: {}
+}]
+```
+++ If you want to return values from the deployed resources, you can use [copy in the outputs section](copy-outputs.md). ## Serial or Parallel By default, Resource Manager creates the resources in parallel. It applies no limit to the number of resources deployed in parallel, other than the total limit of 800 resources in the template. The order in which they're created isn't guaranteed.
-However, you may want to specify that the resources are deployed in sequence. For example, when updating a production environment, you may want to stagger the updates so only a certain number are updated at any one time. To serially deploy more than one instance of a resource, set `mode` to **serial** and `batchSize` to the number of instances to deploy at a time. With serial mode, Resource Manager creates a dependency on earlier instances in the loop, so it doesn't start one batch until the previous batch completes.
-
-The value for `batchSize` can't exceed the value for `count` in the copy element.
+However, you may want to specify that the resources are deployed in sequence. For example, when updating a production environment, you may want to stagger the updates so only a certain number are updated at any one time.
For example, to serially deploy storage accounts two at a time, use:
+# [JSON](#tab/json)
+
+To serially deploy more than one instance of a resource, set `mode` to **serial** and `batchSize` to the number of instances to deploy at a time. With serial mode, Resource Manager creates a dependency on earlier instances in the loop, so it doesn't start one batch until the previous batch completes.
+
+The value for `batchSize` can't exceed the value for `count` in the copy element.
+ ```json { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
For example, to serially deploy storage accounts two at a time, use:
The `mode` property also accepts **parallel**, which is the default value.
+# [Bicep](#tab/bicep)
+
+To serially deploy more than one instance of a resource, set the `batchSize` [decorator](./bicep-file.md#resource-and-module-decorators) to the number of instances to deploy at a time. With serial mode, Resource Manager creates a dependency on earlier instances in the loop, so it doesn't start one batch until the previous batch completes.
+
+```bicep
+@batchSize(2)
+resource storage_id 'Microsoft.Storage/storageAccounts@2019-04-01' = [for i in range(0, 4): {
+ name: '${i}storage${uniqueString(resourceGroup().id)}'
+ location: resourceGroup().location
+ sku: {
+ name: 'Standard_LRS'
+ }
+ kind: 'Storage'
+ properties: {}
+}]
+```
+++ ## Iteration for a child resource You can't use a copy loop for a child resource. To create more than one instance of a resource that you typically define as nested within another resource, you must instead create that resource as a top-level resource. You define the relationship with the parent resource through the type and name properties.
For example, suppose you typically define a dataset as a child resource within a
```json "resources": [ {
- "type": "Microsoft.DataFactory/datafactories",
+ "type": "Microsoft.DataFactory/factories",
"name": "exampleDataFactory", ... "resources": [
To establish a parent/child relationship with an instance of the data factory, p
The following example shows the implementation:
+# [JSON](#tab/json)
+ ```json "resources": [ {
- "type": "Microsoft.DataFactory/datafactories",
+ "type": "Microsoft.DataFactory/factories",
"name": "exampleDataFactory", ... }, {
- "type": "Microsoft.DataFactory/datafactories/datasets",
+ "type": "Microsoft.DataFactory/factories/datasets",
"name": "[concat('exampleDataFactory', '/', 'exampleDataSet', copyIndex())]", "dependsOn": [ "exampleDataFactory"
The following example shows the implementation:
}] ```
+# [Bicep](#tab/bicep)
+
+```bicep
+resource dataFactoryName_resource 'Microsoft.DataFactory/factories@2018-06-01' = {
+ name: "exampleDataFactory"
+ ...
+}
+
+resource dataFactoryName_ArmtemplateTestDatasetIn 'Microsoft.DataFactory/factories/datasets@2018-06-01' = [for i in range(0, 3): {
+ name: 'exampleDataFactory/exampleDataset${i}'
+ ...
+}]
+```
+++ ## Example templates The following examples show common scenarios for creating more than one instance of a resource or property.
The following examples show common scenarios for creating more than one instance
## Next steps
-* To set dependencies on resources that are created in a copy loop, see [Define the order for deploying resources in ARM templates](define-resource-dependency.md).
-* To go through a tutorial, see [Tutorial: Create multiple resource instances with ARM templates](template-tutorial-create-multiple-instances.md).
-* For a Microsoft Learn module that covers resource copy, see [Manage complex cloud deployments by using advanced ARM template features](/learn/modules/manage-deployments-advanced-arm-template-features/).
-* For other uses of the copy element, see:
- * [Property iteration in ARM templates](copy-properties.md)
- * [Variable iteration in ARM templates](copy-variables.md)
- * [Output iteration in ARM templates](copy-outputs.md)
-* For information about using copy with nested templates, see [Using copy](linked-templates.md#using-copy).
+- To set dependencies on resources that are created in a copy loop, see [Define the order for deploying resources in ARM templates](define-resource-dependency.md).
+- To go through a tutorial, see [Tutorial: Create multiple resource instances with ARM templates](template-tutorial-create-multiple-instances.md).
+- For a Microsoft Learn module that covers resource copy, see [Manage complex cloud deployments by using advanced ARM template features](/learn/modules/manage-deployments-advanced-arm-template-features/).
+- For other uses of the copy loop, see:
+ - [Property iteration in ARM templates](copy-properties.md)
+ - [Variable iteration in ARM templates](copy-variables.md)
+ - [Output iteration in ARM templates](copy-outputs.md)
+- For information about using copy with nested templates, see [Using copy](linked-templates.md#using-copy).
azure-resource-manager Copy Variables https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/copy-variables.md
The count can't exceed 800.
The count can't be a negative number. It can be zero if you deploy the template with a recent version of Azure CLI, PowerShell, or REST API. Specifically, you must use:
-* Azure PowerShell **2.6** or later
-* Azure CLI **2.0.74** or later
-* REST API version **2019-05-10** or later
-* [Linked deployments](linked-templates.md) must use API version **2019-05-10** or later for the deployment resource type
+- Azure PowerShell **2.6** or later
+- Azure CLI **2.0.74** or later
+- REST API version **2019-05-10** or later
+- [Linked deployments](linked-templates.md) must use API version **2019-05-10** or later for the deployment resource type
Earlier versions of PowerShell, CLI, and the REST API don't support zero for count.
The following example shows how to create an array of string values:
```json {
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "itemCount": {
- "type": "int",
- "defaultValue": 5
- }
- },
- "variables": {
- "copy": [
- {
- "name": "stringArray",
- "count": "[parameters('itemCount')]",
- "input": "[concat('item', copyIndex('stringArray', 1))]"
- }
- ]
- },
- "resources": [],
- "outputs": {
- "arrayResult": {
- "type": "array",
- "value": "[variables('stringArray')]"
- }
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "itemCount": {
+ "type": "int",
+ "defaultValue": 5
+ }
+ },
+ "variables": {
+ "copy": [
+ {
+ "name": "stringArray",
+ "count": "[parameters('itemCount')]",
+ "input": "[concat('item', copyIndex('stringArray', 1))]"
+ }
+ ]
+ },
+ "resources": [],
+ "outputs": {
+ "arrayResult": {
+ "type": "array",
+ "value": "[variables('stringArray')]"
}
+ }
} ```
The preceding template returns an array with the following values:
```json [
- "item1",
- "item2",
- "item3",
- "item4",
- "item5"
+ "item1",
+ "item2",
+ "item3",
+ "item4",
+ "item5"
] ```
The next example shows how to create an array of objects with three properties -
```json {
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "itemCount": {
- "type": "int",
- "defaultValue": 5
- }
- },
- "variables": {
- "copy": [
- {
- "name": "objectArray",
- "count": "[parameters('itemCount')]",
- "input": {
- "name": "[concat('myDataDisk', copyIndex('objectArray', 1))]",
- "diskSizeGB": "1",
- "diskIndex": "[copyIndex('objectArray')]"
- }
- }
- ]
- },
- "resources": [],
- "outputs": {
- "arrayResult": {
- "type": "array",
- "value": "[variables('objectArray')]"
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "itemCount": {
+ "type": "int",
+ "defaultValue": 5
+ }
+ },
+ "variables": {
+ "copy": [
+ {
+ "name": "objectArray",
+ "count": "[parameters('itemCount')]",
+ "input": {
+ "name": "[concat('myDataDisk', copyIndex('objectArray', 1))]",
+ "diskSizeGB": "1",
+ "diskIndex": "[copyIndex('objectArray')]"
}
+ }
+ ]
+ },
+ "resources": [],
+ "outputs": {
+ "arrayResult": {
+ "type": "array",
+ "value": "[variables('objectArray')]"
}
+ }
} ```
The preceding example returns an array with the following values:
```json [
- {
- "name": "myDataDisk1",
- "diskSizeGB": "1",
- "diskIndex": 0
- },
- {
- "name": "myDataDisk2",
- "diskSizeGB": "1",
- "diskIndex": 1
- },
- {
- "name": "myDataDisk3",
- "diskSizeGB": "1",
- "diskIndex": 2
- },
- {
- "name": "myDataDisk4",
- "diskSizeGB": "1",
- "diskIndex": 3
- },
- {
- "name": "myDataDisk5",
- "diskSizeGB": "1",
- "diskIndex": 4
- }
+ {
+ "name": "myDataDisk1",
+ "diskSizeGB": "1",
+ "diskIndex": 0
+ },
+ {
+ "name": "myDataDisk2",
+ "diskSizeGB": "1",
+ "diskIndex": 1
+ },
+ {
+ "name": "myDataDisk3",
+ "diskSizeGB": "1",
+ "diskIndex": 2
+ },
+ {
+ "name": "myDataDisk4",
+ "diskSizeGB": "1",
+ "diskIndex": 3
+ },
+ {
+ "name": "myDataDisk5",
+ "diskSizeGB": "1",
+ "diskIndex": 4
+ }
] ```
You can also use the `copy` element within a variable. The following example cre
```json {
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "itemCount": {
- "type": "int",
- "defaultValue": 5
- }
- },
- "variables": {
- "topLevelObject": {
- "sampleProperty": "sampleValue",
- "copy": [
- {
- "name": "disks",
- "count": "[parameters('itemCount')]",
- "input": {
- "name": "[concat('myDataDisk', copyIndex('disks', 1))]",
- "diskSizeGB": "1",
- "diskIndex": "[copyIndex('disks')]"
- }
- }
- ]
- }
- },
- "resources": [],
- "outputs": {
- "objectResult": {
- "type": "object",
- "value": "[variables('topLevelObject')]"
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "itemCount": {
+ "type": "int",
+ "defaultValue": 5
+ }
+ },
+ "variables": {
+ "topLevelObject": {
+ "sampleProperty": "sampleValue",
+ "copy": [
+ {
+ "name": "disks",
+ "count": "[parameters('itemCount')]",
+ "input": {
+ "name": "[concat('myDataDisk', copyIndex('disks', 1))]",
+ "diskSizeGB": "1",
+ "diskIndex": "[copyIndex('disks')]"
+ }
}
+ ]
}
+ },
+ "resources": [],
+ "outputs": {
+ "objectResult": {
+ "type": "object",
+ "value": "[variables('topLevelObject')]"
+ }
+ }
} ```
The preceding example returns an object with the following values:
```json {
- "sampleProperty": "sampleValue",
- "disks": [
- {
- "name": "myDataDisk1",
- "diskSizeGB": "1",
- "diskIndex": 0
- },
- {
- "name": "myDataDisk2",
- "diskSizeGB": "1",
- "diskIndex": 1
- },
- {
- "name": "myDataDisk3",
- "diskSizeGB": "1",
- "diskIndex": 2
- },
- {
- "name": "myDataDisk4",
- "diskSizeGB": "1",
- "diskIndex": 3
- },
- {
- "name": "myDataDisk5",
- "diskSizeGB": "1",
- "diskIndex": 4
- }
- ]
+ "sampleProperty": "sampleValue",
+ "disks": [
+ {
+ "name": "myDataDisk1",
+ "diskSizeGB": "1",
+ "diskIndex": 0
+ },
+ {
+ "name": "myDataDisk2",
+ "diskSizeGB": "1",
+ "diskIndex": 1
+ },
+ {
+ "name": "myDataDisk3",
+ "diskSizeGB": "1",
+ "diskIndex": 2
+ },
+ {
+ "name": "myDataDisk4",
+ "diskSizeGB": "1",
+ "diskIndex": 3
+ },
+ {
+ "name": "myDataDisk5",
+ "diskSizeGB": "1",
+ "diskIndex": 4
+ }
+ ]
} ```
The following examples show common scenarios for creating more than one value fo
## Next steps
-* To go through a tutorial, see [Tutorial: Create multiple resource instances with ARM templates](template-tutorial-create-multiple-instances.md).
-* For other uses of the copy element, see:
- * [Resource iteration in ARM templates](copy-resources.md)
- * [Property iteration in ARM templates](copy-properties.md)
- * [Output iteration in ARM templates](copy-outputs.md)
-* If you want to learn about the sections of a template, see [Understand the structure and syntax of ARM templates](template-syntax.md).
-* To learn how to deploy your template, see [Deploy resources with ARM templates and Azure PowerShell](deploy-powershell.md).
+- To go through a tutorial, see [Tutorial: Create multiple resource instances with ARM templates](template-tutorial-create-multiple-instances.md).
+- For other uses of the copy element, see:
+ - [Resource iteration in ARM templates](copy-resources.md)
+ - [Property iteration in ARM templates](copy-properties.md)
+ - [Output iteration in ARM templates](copy-outputs.md)
+- If you want to learn about the sections of a template, see [Understand the structure and syntax of ARM templates](template-syntax.md).
+- To learn how to deploy your template, see [Deploy resources with ARM templates and Azure PowerShell](deploy-powershell.md).
azure-sql Dns Alias Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/dns-alias-overview.md
Presently, a DNS alias has the following limitations:
- [Overview of business continuity with Azure SQL Database](business-continuity-high-availability-disaster-recover-hadr-overview.md), including disaster recovery. - [Azure REST API reference](/rest/api/azure/)-- [Server Dns Aliases API](/rest/api/sql/serverdnsaliases)
+- [Server Dns Aliases API](/rest/api/sql/2020-11-01-preview/serverdnsaliases)
## Next steps
azure-sql Dh2i High Availability Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/linux/dh2i-high-availability-tutorial.md
Title: "Setup Always On availability group with DH2i DxEnterprise running on Linux-based Azure Virtual Machines" description: Use DH2i DxEnterprise as the cluster manager to achieve high availability with an availability group on SQL Server on Linux Azure Virtual Machines Last updated 03/04/2021-+
azure-sql Rhel High Availability Listener Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/linux/rhel-high-availability-listener-tutorial.md
Title: Configure an availability group listener for SQL Server on RHEL virtual machines in Azure - Linux virtual machines | Microsoft Docs description: Learn about setting up an availability group listener in SQL Server on RHEL virtual machines in Azure-+
azure-sql Rhel High Availability Stonith Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/linux/rhel-high-availability-stonith-tutorial.md
Title: Configure availability groups for SQL Server on RHEL virtual machines in Azure - Linux virtual machines | Microsoft Docs description: Learn about setting up high availability in an RHEL cluster environment and set up STONITH-+
azure-sql Sql Server On Linux Vm What Is Iaas Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/linux/sql-server-on-linux-vm-what-is-iaas-overview.md
Title: Overview of SQL Server on Azure Virtual Machines for Linux| Microsoft Docs description: Learn about how to run full SQL Server editions on Azure Virtual Machines for Linux. Get direct links to all Linux SQL Server VM images and related content.-+ documentationcenter: '' tags: azure-service-management
azure-sql Sql Vm Create Portal Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/linux/sql-vm-create-portal-quickstart.md
Title: "Quickstart: Create a Linux SQL Server VM in Azure" description: This tutorial shows how to create a Linux SQL Server 2017 virtual machine in the Azure portal.-+ Last updated 10/22/2019 tags: azure-service-management
azure-vmware Tutorial Expressroute Global Reach Private Cloud https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/tutorial-expressroute-global-reach-private-cloud.md
Now that you've created an authorization key for the private cloud ExpressRoute
1. Create an on-premises cloud connection. Do one of the following and then select **Create**: - Select the **ExpressRoute circuit** from the list, or
- - If you have the circuit ID, paste it in the field and and provide the authorization key.
+ - If you have the circuit ID, paste it in the field and provide the authorization key you just created.
:::image type="content" source="./media/expressroute-global-reach/on-premises-cloud-connections.png" alt-text="Enter the ExpressRoute ID and the authorization key, and then select Create.":::
Continue to the next tutorial to learn how to deploy and configure VMware HCX so
<!-- LINKS - external-->
-<!-- LINKS - internal -->
+<!-- LINKS - internal -->
cloud-services-extended-support Deploy Prerequisite https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/deploy-prerequisite.md
# Prerequisites for deploying Azure Cloud Services (extended support)
-> [!IMPORTANT]
-> Cloud Services (extended support) is currently in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
- To ensure a successful Cloud Services (extended support) deployment review the below steps and complete each item prior to attempting any deployments. ## Required Service Configuration (.cscfg) file updates
cloud-services-extended-support Deploy Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/deploy-template.md
If you are using a Static IP, it needs to be referenced as a Reserved IP in Serv
} }, {
- "apiVersion": "2020-10-01-preview",
+ "apiVersion": "2021-03-01",
"type": "Microsoft.Compute/cloudServices", "name": "[variables('cloudServiceName')]", "location": "[parameters('location')]",
cloud-services-extended-support Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/faq.md
Cloud Services (extended support) deployment only supports the Stopped- Allocate
### Do Cloud Services (extended support) deployments support scaling across clusters, availability zones, and regions? Cloud Services (extended support) deployments cannot scale across multiple clusters, availability zones and regions.
+### How can I get the deployment ID for my Cloud Service (extended support)?
+The deployment ID (also known as the private ID) can be accessed by using the [CloudServiceInstanceView](https://docs.microsoft.com/rest/api/compute/cloudservices/getinstanceview#cloudserviceinstanceview) API. It's also available in the Azure portal, under the **Roles and Instances** blade of the Cloud Service (extended support).
+ ### Are there any pricing differences between Cloud Services (classic) and Cloud Services (extended support)? Cloud Services (extended support) uses Azure Key Vault and Basic (ARM) Public IP addresses. Customers requiring certificates need to use Azure Key Vault for certificate management ([learn more](https://azure.microsoft.com/pricing/details/key-vault/) about Azure Key Vault pricing.)  Each Public IP address for Cloud Services (extended support) is charged separately ([learn more](https://azure.microsoft.com/pricing/details/ip-addresses/) about Public IP Address pricing.) ## Resources
cloud-services-extended-support Override Sku https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/override-sku.md
+
+ Title: Override SKU information over CSCFG/CSDEF for Azure Cloud Services (extended support)
+description: Override SKU information over CSCFG/CSDEF for Azure Cloud Services (extended support)
+++++ Last updated : 04/05/2021+++
+# Override SKU information over CSCFG/CSDEF in Cloud Services (extended support)
+
+The **allowModelOverride** property lets you update the role size and instance count in your cloud service without having to update the service configuration (.cscfg) and service definition (.csdef) files. This way, the cloud service can scale up, down, in, or out without repackaging and redeploying.
+
+## Set allowModelOverride property
+The `allowModelOverride` property can be set in the following ways:
+* When `allowModelOverride` is `true`, the API call updates the role size and instance count for the cloud service without validating the values against the .csdef and .cscfg files.
+> [!Note]
+> The .cscfg file is updated to reflect the role instance count, but the .csdef file (within the .cspkg) retains the old values.
+* When `allowModelOverride` is `false`, the API call throws an error when the role size and instance count values don't match the .csdef and .cscfg files, respectively.
+
+The default value is `false`. If the property is reset from `true` back to `false`, the .csdef and .cscfg files are validated again.
+
+The following samples show how to apply the property by using an Azure Resource Manager template, PowerShell, and the SDK.
+
+### Azure Resource Manager template
+Setting the `allowModelOverride` property to `true` here updates the cloud service with the role properties defined in the `roleProfile` section.
+```json
+"properties": {
+ "packageUrl": "[parameters('packageSasUri')]",
+ "configurationUrl": "[parameters('configurationSasUri')]",
+ "upgradeMode": "[parameters('upgradeMode')]",
+ "allowModelOverride": true,
+ "roleProfile": {
+ "roles": [
+ {
+ "name": "WebRole1",
+ "sku": {
+ "name": "Standard_D1_v2",
+ "capacity": "1"
+ }
+ },
+ {
+ "name": "WorkerRole1",
+ "sku": {
+ "name": "Standard_D1_v2",
+ "capacity": "1"
+ }
+ }
+ ]
+ },
+
+```
+### PowerShell
+Setting the `AllowModelOverride` switch on the `New-AzCloudService` cmdlet updates the cloud service with the SKU properties defined in the role profile.
+```powershell
+New-AzCloudService `
+-Name "ContosoCS" `
+-ResourceGroupName "ContosOrg" `
+-Location "East US" `
+-AllowModelOverride `
+-PackageUrl $cspkgUrl `
+-ConfigurationUrl $cscfgUrl `
+-UpgradeMode 'Auto' `
+-RoleProfile $roleProfile `
+-NetworkProfile $networkProfile `
+-ExtensionProfile $extensionProfile `
+-OSProfile $osProfile `
+-Tag $tag
+```
+### SDK
+Setting `AllowModelOverride` to `true` updates the cloud service with the SKU properties defined in the role profile.
+
+```csharp
+CloudService cloudService = new CloudService
+ {
+ Properties = new CloudServiceProperties
+ {
+ RoleProfile = cloudServiceRoleProfile,
+ Configuration = <Add cscfg XML content here>,
+ PackageUrl = <Add cspkg SAS URL here>,
+ ExtensionProfile = cloudServiceExtensionProfile,
+ OsProfile = cloudServiceOsProfile,
+ NetworkProfile = cloudServiceNetworkProfile,
+ UpgradeMode = "Auto",
+ AllowModelOverride = true
+ },
+ Location = m_location
+ };
+CloudService createOrUpdateResponse = m_CrpClient.CloudServices.CreateOrUpdate("ContosOrg", "ContosoCS", cloudService);
+```
+### Azure portal
+The Azure portal doesn't support the `allowModelOverride` property, so you can't use the portal to override the role size and instance count defined in the .csdef and .cscfg files.
++
+## Next steps
+- Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services (extended support).
+- Review [frequently asked questions](faq.md) for Cloud Services (extended support).
cloud-services-extended-support Role Startup Failure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/role-startup-failure.md
Statuses PlatformFaultDomain PlatformUpdateDomain
``` * Azure portal: Go to your cloud service and select Roles and Instances tab. Click on the role instance to get its status details
+
:::image type="content" source="media/role-startup-failure-1.png" alt-text="Image shows role startup failure on portal."::: Here are some common problems and solutions related to Azure Cloud Services (extended support) roles that fail to start or it cycles between the initializing, busy, and stopping states.
cloud-services Diagnostics Extension To Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/diagnostics-extension-to-storage.md
Last updated 08/01/2016 -+ # Store and view diagnostic data in Azure Storage
cognitive-services Custom Speech Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/custom-speech-overview.md
To create your first project, select **Speech-to-text/Custom speech**, and then
## Model and Endpoint lifecycle
-Older models typically become less useful over time because the newest model usually has higher accuracy. Therefore, base models as well as custom models and endpoints created through the portal are subject to expiration after 1 year for adaptation and 2 years for decoding. See a detailed description in the [Model and Endpoint lifecycle](./how-to-custom-speech-model-and-endpoint-lifecycle.md) article.
+Older models typically become less useful over time because the newest model usually has higher accuracy. Therefore, base models as well as custom models and endpoints created through the portal are subject to expiration after 1 year for adaptation and 2 years for decoding. See a detailed description in the [Model and endpoint lifecycle](./how-to-custom-speech-model-and-endpoint-lifecycle.md) article.
## Next steps
cognitive-services Faq Stt https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/faq-stt.md
The other results are likely worse and might not have full capitalization and pu
**Q: Why are there different base models?**
-**A**: You can choose from more than one base model in the Speech service. Each model name contains the date when it was added. When you start training a custom model, use the latest model to get the best accuracy. Older base models are still available for some time when a new model is made available. You can continue using the model that you have worked with until it is retired (see [Model and Endpoint lifecycle](./how-to-custom-speech-model-and-endpoint-lifecycle.md)). It is still recommended to switch to the latest base model for better accuracy.
+**A**: You can choose from more than one base model in the Speech service. Each model name contains the date when it was added. When you start training a custom model, use the latest model to get the best accuracy. Older base models are still available for some time when a new model is made available. You can continue using the model that you have worked with until it is retired (see [Model and endpoint lifecycle](./how-to-custom-speech-model-and-endpoint-lifecycle.md)). It is still recommended to switch to the latest base model for better accuracy.
**Q: Can I update my existing model (model stacking)?**
The old dataset and the new dataset must be combined in a single .zip file (for
If you have adapted and deployed a model, that deployment will remain as is. You can decommission the deployed model, readapt using the newer version of the base model and redeploy for better accuracy.
-Both base models and custom models will be retired after some time (see [Model and Endpoint lifecycle](./how-to-custom-speech-model-and-endpoint-lifecycle.md)).
+Both base models and custom models will be retired after some time (see [Model and endpoint lifecycle](./how-to-custom-speech-model-and-endpoint-lifecycle.md)).
**Q: Can I download my model and run it locally?**
cognitive-services How To Custom Speech Model And Endpoint Lifecycle https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-custom-speech-model-and-endpoint-lifecycle.md
Previously updated : 03/10/2021 Last updated : 04/2/2021
-# Model and Endpoint lifecycle
+# Model and endpoint lifecycle
-Custom Speech uses both *base models* and *custom models*. Each language has one or more base models. Generally, when a new speech model is released to the regular speech service, it's also imported to the Custom Speech service as a new base model. They're updated every 6 to 12 months. Older models typically become less useful over time because the newest model usually has higher accuracy.
-
-In contrast, custom models are created by adapting a chosen base model with data from your particular customer scenario. You can keep using a particular custom model for a long time after you have one that meets your needs. But we recommend that you periodically update to the latest base model and retrain it over time with additional data.
+Our standard (not customized) speech is built upon AI models that we call base models. In most cases, we train a different base model for each spoken language we support. We update the speech service with new base models every few months to improve accuracy and quality.
+With Custom Speech, custom models are created by adapting a chosen base model with data from your particular customer scenario. Once you create a custom model, that model will not be updated or changed, even if the corresponding base model from which it was adapted gets updated in the standard speech service.
+This policy lets you keep using a custom model that meets your needs for a long time. However, we recommend that you periodically re-create your custom model from the latest base model to take advantage of improved accuracy and quality.
Other key terms related to the model lifecycle include:
And also from the model training detail page:
You can also check the expiration dates via the [`GetModel`](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetModel) and [`GetBaseModel`](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetBaseModel) custom speech APIs under the `deprecationDates` property in the JSON response.
-Here is an example of the expiration data from the GetModel API call. The "DEPRECATIONDATES" show the :
+Here is an example of the expiration data from the GetModel API call. The **DEPRECATIONDATES** show when the model expires:
```json { "SELF": "HTTPS://WESTUS2.API.COGNITIVE.MICROSOFT.COM/SPEECHTOTEXT/V3.0/MODELS/{id}",
Here is an example of the expiration data from the GetModel API call. The "DEPRE
}, "PROPERTIES": { "DEPRECATIONDATES": {
- "ADAPTATIONDATETIME": "2022-01-15T00:00:00Z", // last date this model can be used for adaptation
+ "ADAPTATIONDATETIME": "2022-01-15T00:00:00Z", // last date the base model can be used for adaptation
"TRANSCRIPTIONDATETIME": "2023-03-01T21:27:29Z" // last date this model can be used for decoding } },
Here is an example of the expiration data from the GetModel API call. The "DEPRE
``` Note that you can upgrade the model on a custom speech endpoint without downtime by changing the model used by the endpoint in the deployment section of the Speech Studio, or via the custom speech API.
+## What happens when models expire and how to update them
+What happens when a model expires, and how to update it, depends on how the model is being used.
+### Batch transcription
+If a model that's used with [batch transcription](batch-transcription.md) expires, transcription requests fail with a 4xx error. To prevent this error, update the `model` parameter in the JSON sent in the **Create Transcription** request body to point to either a more recent base model or a more recent custom model. You can also remove the `model` entry from the JSON to always use the latest base model.
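For example, a **Create Transcription** request body that pins a specific model might look like the following sketch (the region, model ID, and content URL are placeholders; omitting the `model` property uses the latest base model):

```json
{
  "displayName": "My transcription",
  "locale": "en-US",
  "contentUrls": [
    "<SAS URL to your audio file>"
  ],
  "model": {
    "self": "https://<region>.api.cognitive.microsoft.com/speechtotext/v3.0/models/<model-id>"
  }
}
```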
+### Custom speech endpoint
+If a model that's used by a [custom speech endpoint](how-to-custom-speech-train-model.md) expires, the service automatically falls back to using the latest base model for the language you're using. To update the model that an endpoint uses, select **Deployment** in the **Custom Speech** menu at the top of the page, and then select the endpoint name to see its details. At the top of the details page, an **Update Model** button lets you seamlessly update the model used by the endpoint, without downtime. You can also make this change programmatically by using the [**Update Model**](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/UpdateModel) REST API.
+ ## Next steps * [Train and deploy a model](how-to-custom-speech-train-model.md)
cognitive-services How To Custom Speech Train Model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-custom-speech-train-model.md
The **Training** table displays a new entry that corresponds to the new model. T
See the [how-to](how-to-custom-speech-evaluate-data.md) on evaluating and improving Custom Speech model accuracy. If you choose to test accuracy, it's important to select an acoustic dataset that's different from the one you used with your model to get a realistic sense of the model's performance. > [!NOTE]
-> Both base models and custom models can be used only up to a certain date (see [Model and Endpoint lifecycle](./how-to-custom-speech-model-and-endpoint-lifecycle.md)). Speech Studio shows this date in the **Expiration** column for each model and endpoint. After that date request to an endpoint or to batch transcription might fail or fall back to base model.
+> Both base models and custom models can be used only up to a certain date (see [Model and endpoint lifecycle](./how-to-custom-speech-model-and-endpoint-lifecycle.md)). Speech Studio shows this date in the **Expiration** column for each model and endpoint. After that date, requests to an endpoint or to batch transcription might fail or fall back to the base model.
> Retrain your model using the latest base model to benefit from accuracy improvements and to keep your model from expiring.
cognitive-services Speech To Text https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-to-text.md
Speech-to-text, also known as speech recognition, enables real-time transcriptio
The speech-to-text service defaults to using the Universal language model. This model was trained using Microsoft-owned data and is deployed in the cloud. It's optimal for conversational and dictation scenarios. When using speech-to-text for recognition and transcription in a unique environment, you can create and train custom acoustic, language, and pronunciation models. Customization is helpful for addressing ambient noise or industry-specific vocabulary.
-With additional reference text as input, speech-to-text service also enables [pronunciation assessment](rest-speech-to-text.md#pronunciation-assessment-parameters) capability to evaluate speech pronunciation and gives speakers feedback on the accuracy and fluency of spoken audio. With pronunciation assessment, language learners can practice, get instant feedback, and improve their pronunciation so that they can speak and present with confidence. Educators can use the capability to evaluate pronunciation of multiple speakers in real-time. The feature currently supports US English, and correlates highly with speech assessments conducted by experts.
+This documentation contains the following article types:
+
+* **Quickstarts** are getting-started instructions to guide you through making requests to the service.
+* **How-to guides** contain instructions for using the service in more specific or customized ways.
+* **Concepts** provide in-depth explanations of the service functionality and features.
+* **Tutorials** are longer guides that show you how to use the service as a component in broader business solutions.
> [!NOTE] > Bing Speech was decommissioned on October 15, 2019. If your applications, tools, or products are using the Bing Speech APIs, we've created guides to help you migrate to the Speech service.
cognitive-services Speech Translation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-translation.md
keywords: speech translation
In this overview, you learn about the benefits and capabilities of the speech translation service, which enables real-time, [multi-language speech-to-speech](language-support.md#speech-translation) and speech-to-text translation of audio streams. With the Speech SDK, your applications, tools, and devices have access to source transcriptions and translation outputs for provided audio. Interim transcription and translation results are returned as speech is detected, and final results can be converted into synthesized speech.
+This documentation contains the following article types:
+
+* **Quickstarts** are getting-started instructions to guide you through making requests to the service.
+* **How-to guides** contain instructions for using the service in more specific or customized ways.
+* **Concepts** provide in-depth explanations of the service functionality and features.
+* **Tutorials** are longer guides that show you how to use the service as a component in broader business solutions.
+ ## Core features * Speech-to-text translation with recognition results.
cognitive-services Text To Speech https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/text-to-speech.md
keywords: text to speech
In this overview, you learn about the benefits and capabilities of the text-to-speech service, which enables your applications, tools, or devices to convert text into human-like synthesized speech. Choose from standard and neural voices, or create a custom voice unique to your product or brand. 75+ standard voices are available in more than 45 languages and locales, and 5 neural voices are available in a select number of languages and locales. For a full list of supported voices, languages, and locales, see [supported languages](language-support.md#text-to-speech).
+This documentation contains the following article types:
+
+* **Quickstarts** are getting-started instructions to guide you through making requests to the service.
+* **How-to guides** contain instructions for using the service in more specific or customized ways.
+* **Concepts** provide in-depth explanations of the service functionality and features.
+* **Tutorials** are longer guides that show you how to use the service as a component in broader business solutions.
+ > [!NOTE] > Bing Speech was decommissioned on October 15, 2019. If your applications, tools, or products are using the Bing Speech APIs or Custom Speech, we've created guides to help you migrate to the Speech service. > - [Migrate from Bing Speech to the Speech service](how-to-migrate-from-bing-speech.md)
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/metrics-advisor/overview.md
Metrics Advisor is a part of Azure Cognitive Services that uses AI to perform da
:::image type="content" source="media/metrics-advisor-overview.png" alt-text="Metrics Advisor overview":::
+This documentation contains the following types of articles:
+* The [quickstarts](./Quickstarts/web-portal.md) are step-by-step instructions that let you make calls to the service and get results in a short period of time.
+* The [how-to guides](./how-tos/onboard-your-data.md) contain instructions for using the service in more specific or customized ways.
+* The [conceptual articles](glossary.md) provide in-depth explanations of the service's functionality and features.
+ ## Connect to a variety of data sources Metrics Advisor can connect to, and [ingest multi-dimensional metric](how-tos/onboard-your-data.md) data from many data stores, including: SQL Server, Azure Blob Storage, MongoDB and more.
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/language-support.md
| English | `en` | ✓ | 2019-10-01 | | | French | `fr` | ✓ | 2019-10-01 | | | German | `de` | ✓ | 2019-10-01 | |
+| Hindi | `hi` | ✓ | 2020-04-01 | |
| Italian | `it` | ✓ | 2019-10-01 | | | Japanese | `ja` | ✓ | 2019-10-01 | | | Korean | `ko` | ✓ | 2019-10-01 | |
connectors Connectors Sftp Ssh https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/connectors/connectors-sftp-ssh.md
ms.suite: integration
Previously updated : 03/08/2021 Last updated : 04/05/2021 tags: connectors
-# Monitor, create, and manage SFTP files by using SSH and Azure Logic Apps
+# Create and manage SFTP files using SSH and Azure Logic Apps
-To automate tasks that monitor, create, send, and receive files on a [Secure File Transfer Protocol (SFTP)](https://www.ssh.com/ssh/sftp/) server by using the [Secure Shell (SSH)](https://www.ssh.com/ssh/protocol/) protocol, you can build and automate integration workflows by using Azure Logic Apps and the SFTP-SSH connector. SFTP is a network protocol that provides file access, file transfer, and file management over any reliable data stream.
+To automate tasks that create and manage files on a [Secure File Transfer Protocol (SFTP)](https://www.ssh.com/ssh/sftp/) server using the [Secure Shell (SSH)](https://www.ssh.com/ssh/protocol/) protocol, you can create automated integration workflows by using Azure Logic Apps and the SFTP-SSH connector. SFTP is a network protocol that provides file access, file transfer, and file management over any reliable data stream.
Here are some example tasks you can automate:
Here are some example tasks you can automate:
* Get file content and metadata. * Extract archives to folders.
-You can use triggers that monitor events on your SFTP server and make output available to other actions. You can use actions that perform various tasks on your SFTP server. You can also have other actions in your logic app use the output from SFTP actions. For example, if you regularly retrieve files from your SFTP server, you can send email alerts about those files and their content by using the Office 365 Outlook connector or Outlook.com connector. If you're new to logic apps, review [What is Azure Logic Apps?](../logic-apps/logic-apps-overview.md)
+In your workflow, you can use a trigger that monitors events on your SFTP server and makes output available to other actions. You can then use actions to perform various tasks on your SFTP server. You can also include other actions that use the output from SFTP-SSH actions. For example, if you regularly retrieve files from your SFTP server, you can send email alerts about those files and their content using the Office 365 Outlook connector or Outlook.com connector. If you're new to logic apps, review [What is Azure Logic Apps?](../logic-apps/logic-apps-overview.md)
For differences between the SFTP-SSH connector and the SFTP connector, review the [Compare SFTP-SSH versus SFTP](#comparison) section later in this topic.
For differences between the SFTP-SSH connector and the SFTP connector, review th
* OpenText Secure MFT * OpenText GXS
-* The SFTP-SSH connector supports either private key authentication or password authentication, not both.
-
-* SFTP-SSH actions that support [chunking](../logic-apps/logic-apps-handle-large-messages.md) can handle files up to 1 GB, while SFTP-SSH actions that don't support chunking can handle files up to 50 MB. Although the default chunk size is 15 MB, this size can dynamically change, starting from 5 MB and gradually increasing to the 50-MB maximum, based on factors such as network latency, server response time, and so on.
+* SFTP-SSH actions that support [chunking](../logic-apps/logic-apps-handle-large-messages.md) can handle files up to 1 GB, while SFTP-SSH actions that don't support chunking can handle files up to 50 MB. The default chunk size is 15 MB. However, this size can dynamically change, starting from 5 MB and gradually increasing to the 50-MB maximum. Dynamic sizing is based on factors such as network latency, server response time, and so on.
> [!NOTE] > For logic apps in an [integration service environment (ISE)](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md),
For differences between the SFTP-SSH connector and the SFTP connector, review th
You can override this adaptive behavior when you [specify a constant chunk size](#change-chunk-size) to use instead. This size can range from 5 MB to 50 MB. For example, suppose you have a 45-MB file and a network that can support that file size without latency. Adaptive chunking results in several calls, rather than one call. To reduce the number of calls, you can try setting a 50-MB chunk size. In a different scenario, if your logic app is timing out, for example, when using 15-MB chunks, you can try reducing the size to 5 MB.
- Chunk size is associated with a connection, which means that you can use the same connection for actions that support chunking and then for actions that don't support chunking. In this case, the chunk size for actions that don't support chunking ranges from 5 MB to 50 MB. This table shows which SFTP-SSH actions support chunking:
+ Chunk size is associated with a connection. This attribute means you can use the same connection for both actions that support chunking and actions that don't support chunking. In this case, the chunk size for actions that don't support chunking ranges from 5 MB to 50 MB. This table shows which SFTP-SSH actions support chunking:
| Action | Chunking support | Override chunk size support | |--||--|
For differences between the SFTP-SSH connector and the SFTP connector, review th
* SFTP-SSH triggers don't support message chunking. When requesting file content, triggers select only files that are 15 MB or smaller. To get files larger than 15 MB, follow this pattern instead:
- 1. Use an SFTP-SSH trigger that returns only file properties, such as **When a file is added or modified (properties only)**.
+ 1. Use an SFTP-SSH trigger that returns only file properties. These triggers have names that include the description, **(properties only)**.
- 1. Follow the trigger with the SFTP-SSH **Get file content** action, which reads the complete file and implicitly uses message chunking.
+ 1. Follow the trigger with the SFTP-SSH **Get file content** action. This action reads the complete file and implicitly uses message chunking.
<a name="comparison"></a> ## Compare SFTP-SSH versus SFTP
-Here are other key differences between the SFTP-SSH connector and the SFTP connector where the SFTP-SSH connector has these capabilities:
+The following list describes key SFTP-SSH capabilities that differ from the SFTP connector:
* Uses the [SSH.NET library](https://github.com/sshnet/SSH.NET), which is an open-source Secure Shell (SSH) library that supports .NET.
Here are other key differences between the SFTP-SSH connector and the SFTP conne
* Provides the **Rename file** action, which renames a file on the SFTP server.
-* Caches the connection to SFTP server *for up to 1 hour*, which improves performance and reduces the number of attempts at connecting to the server. To set the duration for this caching behavior, edit the [**ClientAliveInterval**](https://man.openbsd.org/sshd_config#ClientAliveInterval) property in the SSH configuration on your SFTP server.
+* Caches the connection to SFTP server *for up to 1 hour*. This capability improves performance and reduces how often the connector tries connecting to the server. To set the duration for this caching behavior, edit the [**ClientAliveInterval** property](https://man.openbsd.org/sshd_config#ClientAliveInterval) in the SSH configuration on your SFTP server.
## Prerequisites * An Azure subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/).
-* Your SFTP server address and account credentials, which let your logic app access your SFTP account. You also need access to an SSH private key and the SSH private key password. To use chunking when uploading large files, you need both read and write permissions for the root folder on your SFTP server. Otherwise, you get a "401 Unauthorized" error.
-
- > [!IMPORTANT]
- >
- > The SFTP-SSH connector supports *only* these private key formats, algorithms, and fingerprints:
- >
- > * **Private key formats**: RSA (Rivest Shamir Adleman) and DSA (Digital Signature Algorithm) keys in both OpenSSH and ssh.com formats. If your private key is in PuTTY (.ppk) file format, first [convert the key to the OpenSSH (.pem) file format](#convert-to-openssh).
- >
- > * **Encryption algorithms**: DES-EDE3-CBC, DES-EDE3-CFB, DES-CBC, AES-128-CBC, AES-192-CBC, and AES-256-CBC
- >
- > * **Fingerprint**: MD5
- >
- > After you add the SFTP-SSH trigger or action you want to your logic app,
- > you have to provide connection information for your SFTP server. When you
- > provide your SSH private key for this connection, ***don't manually enter or edit the key***,
- > which might cause the connection to fail. Instead, make sure that you ***copy the key*** from
- > your SSH private key file, and ***paste*** that key into the connection details.
- > For more information, see the [Connect to SFTP with SSH](#connect) section later this article.
+* Your SFTP server address and account credentials, so your workflow can access your SFTP account. You also need access to an SSH private key and the SSH private key password. To upload large files using chunking, you need both read and write access for the root folder on your SFTP server. Otherwise, you get a "401 Unauthorized" error.
+
+ The SFTP-SSH connector supports both private key authentication and password authentication. However, the SFTP-SSH connector supports *only* these private key formats, algorithms, and fingerprints:
+
+ * **Private key formats**: RSA (Rivest Shamir Adleman) and DSA (Digital Signature Algorithm) keys in both OpenSSH and ssh.com formats. If your private key is in PuTTY (.ppk) file format, first [convert the key to the OpenSSH (.pem) file format](#convert-to-openssh).
+ * **Encryption algorithms**: DES-EDE3-CBC, DES-EDE3-CFB, DES-CBC, AES-128-CBC, AES-192-CBC, and AES-256-CBC
+ * **Fingerprint**: MD5
+
+ After you add an SFTP-SSH trigger or action to your workflow, you have to provide connection information for your SFTP server. When you provide your SSH private key for this connection, ***don't manually enter or edit the key***, which might cause the connection to fail. Instead, make sure that you ***copy the key*** from your SSH private key file, and ***paste*** that key into the connection details. For more information, see the [Connect to SFTP with SSH](#connect) section later in this article.
* Basic knowledge about [how to create logic apps](../logic-apps/quickstart-create-first-logic-app-workflow.md)
-* The logic app where you want to access your SFTP account. To start with an SFTP-SSH trigger, [create a blank logic app](../logic-apps/quickstart-create-first-logic-app-workflow.md). To use an SFTP-SSH action, start your logic app with another trigger, for example, the **Recurrence** trigger.
+* The logic app workflow where you want to access your SFTP account. To start with an SFTP-SSH trigger, [create a blank logic app workflow](../logic-apps/quickstart-create-first-logic-app-workflow.md). To use an SFTP-SSH action, start your workflow with another trigger, for example, the **Recurrence** trigger.
## How SFTP-SSH triggers work
When a trigger finds a new file, the trigger checks that the new file is complet
### Trigger recurrence shift and drift
-Connection-based triggers where you need to create a connection first, such as the SFTP-SSH trigger, differ from built-in triggers that run natively in Azure Logic Apps, such as the [Recurrence trigger](../connectors/connectors-native-recurrence.md). In recurring connection-based triggers, the recurrence schedule isn't the only driver that controls execution, and the time zone only determines the initial start time. Subsequent runs depend on the recurrence schedule, the last trigger execution, *and* other factors that might cause run times to drift or produce unexpected behavior, for example, not maintaining the specified schedule when daylight saving time (DST) starts and ends. To make sure that the recurrence time doesn't shift when DST takes effect, manually adjust the recurrence so that your logic app continues to run at the expected time. Otherwise, the start time shifts one hour forward when DST starts and one hour backward when DST ends. For more information, see [Recurrence for connection-based triggers](../connectors/apis-list.md#recurrence-connection-based).
+Connection-based triggers where you need to create a connection first, such as the SFTP-SSH trigger, differ from built-in triggers that run natively in Azure Logic Apps, such as the [Recurrence trigger](../connectors/connectors-native-recurrence.md). In recurring connection-based triggers, the recurrence schedule isn't the only driver that controls execution, and the time zone only determines the initial start time. Subsequent runs depend on the recurrence schedule, the last trigger execution, *and* other factors that might cause run times to drift or produce unexpected behavior. For example, unexpected behavior can include failure to maintain the specified schedule when daylight saving time (DST) starts and ends. To make sure that the recurrence time doesn't shift when DST takes effect, manually adjust the recurrence. That way, your workflow continues to run at the expected time. Otherwise, the start time shifts one hour forward when DST starts and one hour backward when DST ends. For more information, see [Recurrence for connection-based triggers](../connectors/apis-list.md#recurrence-connection-based).
<a name="convert-to-openssh"></a> ## Convert PuTTY-based key to OpenSSH
-If your private key is in PuTTY format, which uses the .ppk (PuTTY Private Key) file name extension, first convert the key to the OpenSSH format, which uses the .pem (Privacy Enhanced Mail) file name extension.
+The PuTTY format and OpenSSH format use different file name extensions. The PuTTY format uses the .ppk, or PuTTY Private Key, file name extension. The OpenSSH format uses the .pem, or Privacy Enhanced Mail, file name extension. If your private key is in PuTTY format, and you have to use OpenSSH format, first convert the key to the OpenSSH format by following these steps:
### Unix-based OS
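As a minimal sketch, assuming the `puttygen` command-line tool from the putty-tools package is installed and using placeholder file names, the conversion looks like this:

```bash
# Convert a PuTTY .ppk private key to an OpenSSH-format .pem file.
puttygen id_rsa.ppk -O private-openssh -o id_rsa.pem
```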
This section describes considerations to review when you use this connector's tr
### Use different SFTP folders for file upload and processing
-On your SFTP server, make sure that you use separate folders for where you store uploaded files and where the trigger monitors those files for processing, which means that you need a way to move files between those folders. Otherwise, the trigger won't fire and behaves unpredictably, for example, skipping a random number of files that the trigger processes.
+On your SFTP server, use separate folders for storing uploaded files and for the trigger to monitor those files for processing. Otherwise, the trigger won't fire and might behave unpredictably, for example, skipping a random number of files during processing. However, this requirement means that you need a way to move files between those folders.
-If this problem happens, remove the files from the folder that the trigger monitors, and use a different folder to store the uploaded files.
+If this trigger problem happens, remove the files from the folder that the trigger monitors, and use a different folder to store the uploaded files.
<a name="create-file"></a>
To create a file on your SFTP server, you can use the SFTP-SSH **Create file** a
> [!IMPORTANT] >
- > When you enter your SSH private key in the **SSH private key** property,
- > follow these additional steps, which help make sure you provide the
- > complete and correct value for this property. An invalid key causes the connection to fail.
+ > When you enter your SSH private key in the **SSH private key** property, follow these additional steps, which help
+ > make sure you provide the complete and correct value for this property. An invalid key causes the connection to fail.
Although you can use any text editor, here are sample steps that show how to correctly copy and paste your key by using Notepad.exe as an example.
To create a file on your SFTP server, you can use the SFTP-SSH **Create file** a
1. Select **Edit** > **Copy**.
- 1. In the SFTP-SSH trigger or action you added, paste the *complete* key you copied into the **SSH private key** property, which supports multiple lines. ***Make sure you paste*** the key. ***Don't manually enter or edit the key***.
+ 1. In the SFTP-SSH trigger or action, paste the *complete* copied key into the **SSH private key** property, which supports multiple lines. ***Don't manually enter or edit the key***.
1. After you finish entering the connection details, select **Create**.
To override the default adaptive behavior that chunking uses, you can specify a
### SFTP - SSH trigger: When a file is added or modified
-This trigger starts a logic app workflow when a file is added or changed on an SFTP server. For example, you can add a condition that checks the file's content and gets the content based on whether the content meets a specified condition. You can then add an action that gets the file's content, and puts that content in a folder on the SFTP server.
+This trigger starts a workflow when a file is added or changed on an SFTP server. As example follow-up actions, the workflow can use a condition to check whether the file content meets specified criteria. If the content meets the condition, the **Get file content** SFTP-SSH action can get the content, and then another SFTP-SSH action can put that file in a different folder on the SFTP server.
-**Enterprise example**: You can use this trigger to monitor an SFTP folder for new files that represent customer orders. You can then use an SFTP action such as **Get file content** so you get the order's contents for further processing and store that order in an orders database.
+**Enterprise example**: You can use this trigger to monitor an SFTP folder for new files that represent customer orders. You can then use an SFTP-SSH action such as **Get file content** so you get the order's contents for further processing and store that order in an orders database.
<a name="get-content"></a>
This error can happen when your logic app can't successfully establish a connect
### 404 error: "A reference was made to a file or folder which does not exist"
-This error can happen when your logic app creates a new file on your SFTP server through the SFTP-SSH **Create file** action, but immediately moves the newly created file before the Logic Apps service can get the file's metadata. When your logic app runs the **Create file** action, the Logic Apps service also automatically calls your SFTP server to get the file's metadata. However, if your logic app moves the file, the Logic Apps service can no longer find the file so you get the `404` error message.
+This error can happen when your workflow creates a file on your SFTP server with the SFTP-SSH **Create file** action, but immediately moves that file before the Logic Apps service can get the file's metadata. When your workflow runs the **Create file** action, the Logic Apps service automatically calls your SFTP server to get the file's metadata. However, if your workflow moves the file, the Logic Apps service can no longer find the file, so you get the `404` error message.
If you can't avoid or delay moving the file, you can skip reading the file's metadata after file creation instead by following these steps:
container-registry Container Registry Concepts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-concepts.md
A basic manifest for a Linux `hello-world` image looks similar to the following:
"schemaVersion": 2, "mediaType": "application/vnd.docker.distribution.manifest.v2+json", "config": {
- "mediaType": "application/vnd.docker.container.image.v1+json",
- "size": 1510,
- "digest": "sha256:fbf289e99eb9bca977dae136fbe2a82b6b7d4c372474c9235adc1741675f587e"
- },
+ "mediaType": "application/vnd.docker.container.image.v1+json",
+ "size": 1510,
+ "digest": "sha256:fbf289e99eb9bca977dae136fbe2a82b6b7d4c372474c9235adc1741675f587e"
+ },
"layers": [
- {
- "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
- "size": 977,
- "digest": "sha256:2c930d010525941c1d56ec53b97bd057a67ae1865eebf042686d2a2d18271ced"
- }
- ]
+ {
+ "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
+ "size": 977,
+ "digest": "sha256:2c930d010525941c1d56ec53b97bd057a67ae1865eebf042686d2a2d18271ced"
+ }
+ ]
} ```
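For reference, here's a hedged Azure CLI sketch, with placeholder registry and repository names, that lists the manifest digests, tags, and sizes for a repository:

```bash
# Hypothetical names; assumes the Azure CLI. Lists manifest metadata for the repository.
az acr repository show-manifests \
  --name myregistry \
  --repository hello-world \
  --detail
```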
cosmos-db Cassandra Import Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/cassandra-import-data.md
Title: 'Migrate your data to a Cassandra API account in Azure Cosmos DB- Tutorial'
-description: In this tutorial, learn how to use the CQL Copy command & Spark to copy data from Apache Cassandra to a Cassandra API account in Azure Cosmos DB
+description: In this tutorial, learn how to copy data from Apache Cassandra to a Cassandra API account in Azure Cosmos DB.
#Customer intent: As a developer, I want to migrate my existing Cassandra workloads to Azure Cosmos DB so that the overhead to manage resources, clusters, and garbage collection is automatically handled by Azure Cosmos DB.
-# Tutorial: Migrate your data to Cassandra API account in Azure Cosmos DB
+# Tutorial: Migrate your data to a Cassandra API account
[!INCLUDE[appliesto-cassandra-api](includes/appliesto-cassandra-api.md)] As a developer, you might have existing Cassandra workloads that are running on-premises or in the cloud, and you might want to migrate them to Azure. You can migrate such workloads to a Cassandra API account in Azure Cosmos DB. This tutorial provides instructions on different options available to migrate Apache Cassandra data into the Cassandra API account in Azure Cosmos DB.
This tutorial covers the following tasks:
> [!div class="checklist"] > * Plan for migration > * Prerequisites for migration
-> * Migrate data using cqlsh COPY command
-> * Migrate data using Spark
+> * Migrate data by using the `cqlsh` `COPY` command
+> * Migrate data by using Spark
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ## Prerequisites for migration
-* **Estimate your throughput needs:** Before migrating data to the Cassandra API account in Azure Cosmos DB, you should estimate the throughput needs of your workload. In general, it's recommended to start with the average throughput required by the CRUD operations and then include the additional throughput required for the Extract Transform Load (ETL) or spiky operations. You need the following details to plan for migration:
+* **Estimate your throughput needs:** Before migrating data to the Cassandra API account in Azure Cosmos DB, you should estimate the throughput needs of your workload. In general, start with the average throughput required by the CRUD operations, and then include the additional throughput required for extract-transform-load (ETL) or spiky operations. You need the following details to plan for migration:
- * **Existing data size or estimated data size:** Defines the minimum database size and throughput requirement. If you are estimating data size for a new application, you can assume that the data is uniformly distributed across the rows and estimate the value by multiplying with the data size.
+ * **Existing data size or estimated data size:** Defines the minimum database size and throughput requirement. If you are estimating data size for a new application, you can assume that the data is uniformly distributed across the rows, and estimate the value by multiplying the average row size by the estimated number of rows.
- * **Required throughput:** Approximate read (query/get) and write (update/delete/insert) throughput rate. This value is required to compute the required request units along with steady state data size.
+ * **Required throughput:** Approximate throughput rate of read (query/get) and write (update/delete/insert) operations. This value is required to compute the required request units, along with steady-state data size.
- * **The schema:** Connect to your existing Cassandra cluster through cqlsh and export the schema from Cassandra:
+ * **The schema:** Connect to your existing Cassandra cluster through `cqlsh`, and export the schema from Cassandra:
```bash cqlsh [IP] "-e DESC SCHEMA" > orig_schema.cql ```
- After you identify the requirements of your existing workload, you should create an Azure Cosmos account, database, and containers according to the gathered throughput requirements.
+ After you identify the requirements of your existing workload, create an Azure Cosmos DB account, database, and containers, according to the gathered throughput requirements.
* **Determine the RU charge for an operation:** You can determine the RUs by using any of the SDKs supported by the Cassandra API. This example shows the .NET version of getting RU charges.
If you don't have an Azure subscription, create a [free account](https://azure
* **Allocate the required throughput:** Azure Cosmos DB can automatically scale storage and throughput as your requirements grow. You can estimate your throughput needs by using the [Azure Cosmos DB request unit calculator](https://www.documentdb.com/capacityplanner).
-* **Create tables in the Cassandra API account:** Before you start migrating data, pre-create all your tables from the Azure portal or from cqlsh. If you are migrating to an Azure Cosmos account that has database level throughput, make sure to provide a partition key when creating the Azure Cosmos containers.
+* **Create tables in the Cassandra API account:** Before you start migrating data, pre-create all your tables from the Azure portal or from `cqlsh`. If you're migrating to an Azure Cosmos DB account that has database-level throughput, make sure to provide a partition key when you create the containers.
-* **Increase throughput:** The duration of your data migration depends on the amount of throughput you provisioned for the tables in Azure Cosmos DB. Increase the throughput for the duration of migration. With the higher throughput, you can avoid rate limiting and migrate in less time. After you've completed the migration, decrease the throughput to save costs. ItΓÇÖs also recommended to have the Azure Cosmos account in the same region as your source database.
+* **Increase throughput:** The duration of your data migration depends on the amount of throughput you provisioned for the tables in Azure Cosmos DB. Increase the throughput for the duration of migration. With the higher throughput, you can avoid rate limiting and migrate in less time. After you've completed the migration, decrease the throughput to save costs. We also recommend that you have the Azure Cosmos DB account in the same region as your source database.
* **Enable TLS:** Azure Cosmos DB has strict security requirements and standards. Be sure to enable TLS when you interact with your account. When you use cqlsh, you have the option to provide TLS information. ## Options to migrate data
-You can move data from existing Cassandra workloads to Azure Cosmos DB by using the following options:
+You can move data from existing Cassandra workloads to Azure Cosmos DB by using the `cqlsh` `COPY` command, or by using Spark.
-* [Using cqlsh COPY command](#migrate-data-using-cqlsh-copy-command)
-* [Using Spark](#migrate-data-using-spark)
+### Migrate data by using the cqlsh COPY command
-## Migrate data using cqlsh COPY command
-
-The [CQL COPY command](https://cassandra.apache.org/doc/latest/tools/cqlsh.html#cqlsh) is used to copy local data to the Cassandra API account in Azure Cosmos DB. Use the following steps to copy data:
+Use the [CQL COPY command](https://cassandra.apache.org/doc/latest/tools/cqlsh.html#cqlsh) to copy local data to the Cassandra API account in Azure Cosmos DB.
1. Get your Cassandra API account's connection string information:
- * Sign in to the [Azure portal](https://portal.azure.com), and navigate to your Azure Cosmos account.
+ * Sign in to the [Azure portal](https://portal.azure.com), and go to your Azure Cosmos DB account.
- * Open the **Connection String** pane that contains all the information that you need to connect to your Cassandra API account from cqlsh.
+ * Open the **Connection String** pane. Here you see all the information that you need to connect to your Cassandra API account from `cqlsh`.
-2. Sign in to cqlsh using the connection information from the portal.
+1. Sign in to `cqlsh` by using the connection information from the portal, as shown in the sketch after these steps.
-3. Use the CQL COPY command to copy local data to the Cassandra API account.
+1. Use the `CQL` `COPY` command to copy local data to the Cassandra API account.
```bash COPY exampleks.tablename FROM filefolderx/*.csv ```
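For example, here's a hedged sketch of the sign-in and copy steps. The contact point, user name, and password placeholders come from the **Connection String** pane, and the exact `COPY` options depend on your dataset and provisioned throughput:

```bash
# Hypothetical values copied from the Connection String pane. The Cassandra API
# requires TLS, and the CQL endpoint listens on port 10350.
export SSL_VERSION=TLSv1_2
export SSL_VALIDATE=false
cqlsh <contact-point-from-portal> 10350 -u <username> -p <password> --ssl

# Then, at the cqlsh prompt, run COPY with conservative batch settings to reduce throttling:
# COPY exampleks.tablename FROM 'filefolderx/*.csv' WITH NUMPROCESSES=4 AND MAXBATCHSIZE=10;
```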
-## Migrate data using Spark
+### Migrate data by using Spark
Use the following steps to migrate data to the Cassandra API account with Spark: -- Provision an [Azure Databricks cluster](cassandra-spark-databricks.md) or an [HDInsight cluster](cassandra-spark-hdinsight.md)
+1. Provision an [Azure Databricks cluster](cassandra-spark-databricks.md) or an [Azure HDInsight cluster](cassandra-spark-hdinsight.md).
-- Move data to the destination Cassandra API endpoint by using the [table copy operation](cassandra-spark-table-copy-ops.md)
+1. Move data to the destination Cassandra API endpoint by using the [table copy operation](cassandra-spark-table-copy-ops.md).
-Migrating data by using Spark jobs is a recommended option if you have data residing in an existing cluster in Azure virtual machines or any other cloud. This option requires Spark to be set up as an intermediary for one time or regular ingestion. You can accelerate this migration by using Azure ExpressRoute connectivity between on-premises and Azure.
+Migrating data by using Spark jobs is a recommended option if you have data residing in an existing cluster in Azure virtual machines or any other cloud. To do this, you must set up Spark as an intermediary for one-time or regular ingestion. You can accelerate this migration by using Azure ExpressRoute connectivity between your on-premises environment and Azure.
## Clean up resources
-When they're no longer needed, you can delete the resource group, the Azure Cosmos account, and all the related resources. To do so, select the resource group for the virtual machine, select **Delete**, and then confirm the name of the resource group to delete.
+When they're no longer needed, you can delete the resource group, the Azure Cosmos DB account, and all the related resources. To do so, select the resource group for the virtual machine, select **Delete**, and then confirm the name of the resource group to delete.
## Next steps
-In this tutorial, you've learned how to migrate your data to Cassandra API account in Azure Cosmos DB. You can now proceed to the following article to learn about other Azure Cosmos DB concepts:
+In this tutorial, you've learned how to migrate your data to a Cassandra API account in Azure Cosmos DB. You can now learn about other concepts in Azure Cosmos DB:
> [!div class="nextstepaction"] > [Tunable data consistency levels in Azure Cosmos DB](../cosmos-db/consistency-levels.md) ++
cosmos-db Cosmosdb Migrationchoices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/cosmosdb-migrationchoices.md
The following factors determine the choice of the migration tool:
|Migration type|Solution|Supported sources|Supported targets|Considerations| ||||||
-|Offline|[cqlsh COPY command](cassandra-import-data.md#migrate-data-using-cqlsh-copy-command)|CSV Files | Azure Cosmos DB Cassandra API| &bull; Easy to set up. <br/>&bull; Not suitable for large datasets. <br/>&bull; Works only when the source is a Cassandra table.|
-|Offline|[Copy table with Spark](cassandra-import-data.md#migrate-data-using-spark) | &bull;Apache Cassandra<br/>&bull;Azure Cosmos DB Cassandra API| Azure Cosmos DB Cassandra API | &bull; Can make use of Spark capabilities to parallelize transformation and ingestion. <br/>&bull; Needs configuration with a custom retry policy to handle throttles.|
+|Offline|[cqlsh COPY command](cassandra-import-data.md#migrate-data-by-using-the-cqlsh-copy-command)|CSV Files | Azure Cosmos DB Cassandra API| &bull; Easy to set up. <br/>&bull; Not suitable for large datasets. <br/>&bull; Works only when the source is a Cassandra table.|
+|Offline|[Copy table with Spark](cassandra-import-data.md#migrate-data-by-using-spark) | &bull;Apache Cassandra<br/>&bull;Azure Cosmos DB Cassandra API| Azure Cosmos DB Cassandra API | &bull; Can make use of Spark capabilities to parallelize transformation and ingestion. <br/>&bull; Needs configuration with a custom retry policy to handle throttles.|
|Online|[Striim (from Oracle DB/Apache Cassandra)](cosmosdb-cassandra-api-migrate-data-striim.md)| &bull;Oracle<br/>&bull;Apache Cassandra<br/><br/> See the [Striim website](https://www.striim.com/sources-and-targets/) for other supported sources.|&bull;Azure Cosmos DB SQL API<br/>&bull;Azure Cosmos DB Cassandra API <br/><br/> See the [Striim website](https://www.striim.com/sources-and-targets/) for other supported targets.| &bull; Works with a large variety of sources like Oracle, DB2, SQL Server. <br/>&bull; Easy to build ETL pipelines and provides a dashboard for monitoring. <br/>&bull; Supports larger datasets. <br/>&bull; Since this is a third-party tool, it needs to be purchased from the marketplace and installed in the user's environment.| |Online|[Blitzz (from Oracle DB/Apache Cassandra)](oracle-migrate-cosmos-db-blitzz.md)|&bull;Oracle<br/>&bull;Apache Cassandra<br/><br/>See the [Blitzz website](https://www.blitzz.io/) for other supported sources. |Azure Cosmos DB Cassandra API. <br/><br/>See the [Blitzz website](https://www.blitzz.io/) for other supported targets. | &bull; Supports larger datasets. <br/>&bull; Since this is a third-party tool, it needs to be purchased from the marketplace and installed in the user's environment.|
cosmos-db Create Sql Api Xamarin Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/create-sql-api-xamarin-dotnet.md
Go back to the Azure portal to get your API key information and copy it into the
```csharp //#error Enter the URL of your Azure Cosmos DB endpoint here
- public static readonly string CosmosEndpointUrl = "[URI Copied from Azure Portal]";
+ public static readonly string CosmosEndpointUrl = "[URI Copied from Azure Portal]";
``` 4. In the Azure Portal, using the copy button, copy the **PRIMARY KEY** value and make it the value of the `Cosmos Auth Key` in APIKeys.cs. ```csharp //#error Enter the read/write authentication key of your Azure Cosmos DB endpoint here
- public static readonly string CosmosAuthKey = "[PRIMARY KEY copied from Azure Portal";
+ public static readonly string CosmosAuthKey = "[PRIMARY KEY copied from Azure Portal]";
``` [!INCLUDE [cosmos-db-auth-key-info](../../includes/cosmos-db-auth-key-info.md)]
The following steps will demonstrate how to run the app using the Visual Studio
In this quickstart, you've learned how to create an Azure Cosmos account, create a container using the Data Explorer, and build and deploy a Xamarin app. You can now import additional data to your Azure Cosmos account. > [!div class="nextstepaction"]
-> [Import data into Azure Cosmos DB](import-data.md)
+> [Import data into Azure Cosmos DB](import-data.md)
cosmos-db How To Query Container https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/how-to-query-container.md
When you query data from containers, if the query has a partition key filter spe
For example, consider the below query with an equality filter on `DeviceId`. If we run this query on a container partitioned on `DeviceId`, this query will filter to a single physical partition. ```sql
- SELECT * FROM c WHERE c.DeviceId = 'XMS-0001'
+SELECT * FROM c WHERE c.DeviceId = 'XMS-0001'
``` As with the earlier example, this query will also filter to a single partition. Adding the additional filter on `Location` does not change this: ```sql
- SELECT * FROM c WHERE c.DeviceId = 'XMS-0001' AND c.Location = 'Seattle'
+SELECT * FROM c WHERE c.DeviceId = 'XMS-0001' AND c.Location = 'Seattle'
``` Here's a query that has a range filter on the partition key and won't be scoped to a single physical partition. In order to be an in-partition query, the query must have an equality filter that includes the partition key: ```sql
- SELECT * FROM c WHERE c.DeviceId > 'XMS-0001'
+SELECT * FROM c WHERE c.DeviceId > 'XMS-0001'
``` ## Cross-partition query
Here's a query that has a range filter on the partition key and won't be scoped
The following query doesn't have a filter on the partition key (`DeviceId`). Therefore, it must fan-out to all physical partitions where it is run against each partition's index: ```sql
- SELECT * FROM c WHERE c.Location = 'Seattle`
+SELECT * FROM c WHERE c.Location = 'Seattle'
``` Each physical partition has its own index. Therefore, when you run a cross-partition query on a container, you are effectively running one query *per* physical partition. Azure Cosmos DB will automatically aggregate results across different physical partitions.
cosmos-db Table Import https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/table-import.md
Title: Migrate existing data to Table API account in Azure Cosmos DB
-description: Learn how migrate or import on-premises or cloud data to Azure Table API account in Azure Cosmos DB.
+ Title: Migrate existing data to a Table API account in Azure Cosmos DB
+description: Learn how to migrate or import on-premises or cloud data to an Azure Table API account in Azure Cosmos DB.
-# Migrate your data to Azure Cosmos DB Table API account
+# Migrate your data to an Azure Cosmos DB Table API account
[!INCLUDE[appliesto-table-api](includes/appliesto-table-api.md)]
-This tutorial provides instructions on importing data for use with the Azure Cosmos DB [Table API](table-introduction.md). If you have data stored in Azure Table storage, you can use either the Data Migration Tool or AzCopy to import your data to Azure Cosmos DB Table API. If you have data stored in an Azure Cosmos DB Table API (preview) account, you must use the Data Migration tool to migrate your data.
+This tutorial provides instructions on importing data for use with the Azure Cosmos DB [Table API](table-introduction.md). If you have data stored in Azure Table Storage, you can use either the data migration tool or AzCopy to import your data to the Azure Cosmos DB Table API.
This tutorial covers the following tasks: > [!div class="checklist"]
-> * Importing data with the Data Migration tool
+> * Importing data with the data migration tool
> * Importing data with AzCopy
-> * Migrating from Table API (preview) to Table API
## Prerequisites
-* **Increase throughput:** The duration of your data migration depends on the amount of throughput you set up for an individual container or a set of containers. Be sure to increase the throughput for larger data migrations. After you've completed the migration, decrease the throughput to save costs. For more information about increasing throughput in the Azure portal, see Performance levels and pricing tiers in Azure Cosmos DB.
+* **Increase throughput:** The duration of your data migration depends on the amount of throughput you set up for an individual container or a set of containers. Be sure to increase the throughput for larger data migrations. After you've completed the migration, decrease the throughput to save costs.
-* **Create Azure Cosmos DB resources:** Before you start the migrating data, pre-create all your tables from the Azure portal. If you are migrating to an Azure Cosmos DB account that has database level throughput, make sure to provide a partition key when creating the Azure Cosmos DB tables.
+* **Create Azure Cosmos DB resources:** Before you start migrating the data, create all your tables from the Azure portal. If you're migrating to an Azure Cosmos DB account that has database-level throughput, make sure to provide a partition key when you create the Azure Cosmos DB tables.
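As a hedged alternative to the portal steps above, with placeholder names, the Azure CLI can pre-create a table and adjust its throughput for the migration window:

```bash
# Hypothetical names; assumes the Azure CLI. Create the target table with provisioned
# throughput, then raise the throughput for the duration of the migration.
az cosmosdb table create \
  --account-name mycosmosaccount \
  --resource-group myresourcegroup \
  --name mytable1 \
  --throughput 400

az cosmosdb table throughput update \
  --account-name mycosmosaccount \
  --resource-group myresourcegroup \
  --name mytable1 \
  --throughput 4000
```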
-## Data Migration tool
+## Data migration tool
-The command-line Azure Cosmos DB Data Migration tool (dt.exe) can be used to import your existing Azure Table storage data to a Table API GA account, or migrate data from a Table API (preview) account into a Table API GA account. Other sources are not currently supported. The UI based Data Migration tool (dtui.exe) is not currently supported for Table API accounts.
+You can use the command-line data migration tool (dt.exe) in Azure Cosmos DB to import your existing Azure Table Storage data to a Table API account.
-To perform a migration of table data, complete the following tasks:
+To migrate table data:
1. Download the migration tool from [GitHub](https://github.com/azure/azure-documentdb-datamigrationtool).
-2. Run `dt.exe` using the command-line arguments for your scenario. `dt.exe` takes a command in the following format:
+2. Run `dt.exe` by using the command-line arguments for your scenario. `dt.exe` takes a command in the following format:
```bash dt.exe [/<option>:<value>] /s:<source-name> [/s.<source-option>:<value>] /t:<target-name> [/t.<target-option>:<value>]
To perform a migration of table data, complete the following tasks:
The supported options for this command are:
-* **/ErrorLog:** Optional. Name of the CSV file to redirect data transfer failures
-* **/OverwriteErrorLog:** Optional. Overwrite error log file
-* **/ProgressUpdateInterval:** Optional, default is 00:00:01. Time interval to refresh on-screen data transfer progress
-* **/ErrorDetails:** Optional, default is None. Specifies that detailed error information should be displayed for the following errors: None, Critical, All
-* **/EnableCosmosTableLog:** Optional. Direct the log to a cosmos table account. If set, this defaults to destination account connection string unless /CosmosTableLogConnectionString is also provided. This is useful if multiple instances of DT are being run simultaneously.
-* **/CosmosTableLogConnectionString:** Optional. ConnectionString to direct the log to a remote cosmos table account.
+* **/ErrorLog:** Optional. Name of the CSV file to redirect data transfer failures.
+* **/OverwriteErrorLog:** Optional. Overwrite the error log file.
+* **/ProgressUpdateInterval:** Optional, default is `00:00:01`. The time interval to refresh on-screen data transfer progress.
+* **/ErrorDetails:** Optional, default is `None`. Specifies that detailed error information should be displayed for the following errors: `None`, `Critical`, or `All`.
+* **/EnableCosmosTableLog:** Optional. Direct the log to an Azure Cosmos DB table account. If set, this defaults to the destination account connection string unless `/CosmosTableLogConnectionString` is also provided. This is useful if multiple instances of the tool are being run simultaneously.
+* **/CosmosTableLogConnectionString:** Optional. The connection string to direct the log to a remote Azure Cosmos DB table account.
### Command-line source settings
-Use the following source options when defining Azure Table Storage or Table API preview as the source of the migration.
+Use the following source options when you define Azure Table Storage as the source of the migration.
-* **/s:AzureTable:** Reads data from Azure Table storage
-* **/s.ConnectionString:** Connection string for the table endpoint. This can be retrieved from the Azure portal
-* **/s.LocationMode:** Optional, default is PrimaryOnly. Specifies which location mode to use when connecting to Azure Table storage: PrimaryOnly, PrimaryThenSecondary, SecondaryOnly, SecondaryThenPrimary
-* **/s.Table:** Name of the Azure Table
-* **/s.InternalFields:** Set to All for table migration as RowKey and PartitionKey are required for import.
-* **/s.Filter:** Optional. Filter string to apply
-* **/s.Projection:** Optional. List of columns to select
+* **/s:AzureTable:** Reads data from Table Storage.
+* **/s.ConnectionString:** Connection string for the table endpoint. You can retrieve this from the Azure portal.
+* **/s.LocationMode:** Optional, default is `PrimaryOnly`. Specifies which location mode to use when connecting to Table Storage: `PrimaryOnly`, `PrimaryThenSecondary`, `SecondaryOnly`, `SecondaryThenPrimary`.
+* **/s.Table:** Name of the Azure table.
+* **/s.InternalFields:** Set to `All` for table migration, because `RowKey` and `PartitionKey` are required for import.
+* **/s.Filter:** Optional. Filter string to apply.
+* **/s.Projection:** Optional. List of columns to select.
-To retrieve the source connection string when importing from Azure Table storage, open the Azure portal and click **Storage accounts** > **Account** > **Access keys**, and then use the copy button to copy the **Connection string**.
+To retrieve the source connection string when you import from Table Storage, open the Azure portal. Select **Storage accounts** > **Account** > **Access keys**, and copy the **Connection string**.
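Alternatively, here's a hedged Azure CLI sketch, with placeholder account and resource group names, that retrieves the same connection string:

```bash
# Hypothetical names; prints the storage account connection string shown under Access keys.
az storage account show-connection-string \
  --name mystorageaccount \
  --resource-group myresourcegroup \
  --output tsv
```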
-
-To retrieve the source connection string when importing from an Azure Cosmos DB Table API (preview) account, open the Azure portal, click **Azure Cosmos DB** > **Account** > **Connection String** and use the copy button to copy the **Connection String**.
--
-[Sample Azure Table Storage command](#azure-table-storage)
-
-[Sample Azure Cosmos DB Table API (preview) command](#table-api-preview)
### Command-line target settings
-Use the following target options when defining Azure Cosmos DB Table API as the target of the migration.
+Use the following target options when you define the Azure Cosmos DB Table API as the target of the migration.
-* **/t:TableAPIBulk:** Uploads data into Azure CosmosDB Table in batches
-* **/t.ConnectionString:** Connection string for the table endpoint
-* **/t.TableName:** Specifies the name of the table to write to
-* **/t.Overwrite:** Optional, default is false. Specifies if existing values should be overwritten
-* **/t.MaxInputBufferSize:** Optional, default is 1GB. Approximate estimate of input bytes to buffer before flushing data to sink
-* **/t.Throughput:** Optional, service defaults if not specified. Specifies throughput to configure for table
-* **/t.MaxBatchSize:** Optional, default is 2MB. Specify the batch size in bytes
+* **/t:TableAPIBulk:** Uploads data into the Azure Cosmos DB Table API in batches.
+* **/t.ConnectionString:** The connection string for the table endpoint.
+* **/t.TableName:** Specifies the name of the table to write to.
+* **/t.Overwrite:** Optional, default is `false`. Specifies if existing values should be overwritten.
+* **/t.MaxInputBufferSize:** Optional, default is `1GB`. Approximate estimate of input bytes to buffer before flushing data to sink.
+* **/t.Throughput:** Optional, uses service defaults if not specified. Specifies the throughput to configure for the table.
+* **/t.MaxBatchSize:** Optional, default is `2MB`. Specify the batch size in bytes.
-<a id="azure-table-storage"></a>
-### Sample command: Source is Azure Table storage
+### Sample command: Source is Table Storage
-Here is a command-line sample showing how to import from Azure Table storage to Table API:
+Here's a command-line sample showing how to import from Table Storage to the Table API:
```bash dt /s:AzureTable /s.ConnectionString:DefaultEndpointsProtocol=https;AccountName=<Azure Table storage account name>;AccountKey=<Account Key>;EndpointSuffix=core.windows.net /s.Table:<Table name> /t:TableAPIBulk /t.ConnectionString:DefaultEndpointsProtocol=https;AccountName=<Azure Cosmos DB account name>;AccountKey=<Azure Cosmos DB account key>;TableEndpoint=https://<Account name>.table.cosmosdb.azure.com:443 /t.TableName:<Table name> /t.Overwrite ```
-<a id="table-api-preview"></a>
-### Sample command: Source is Azure Cosmos DB Table API (preview)
-
-Here is a command-line sample to import from Table API preview to Table API GA:
-
-```bash
-dt /s:AzureTable /s.ConnectionString:DefaultEndpointsProtocol=https;AccountName=<Table API preview account name>;AccountKey=<Table API preview account key>;TableEndpoint=https://<Account Name>.documents.azure.com; /s.Table:<Table name> /t:TableAPIBulk /t.ConnectionString:DefaultEndpointsProtocol=https;AccountName=<Azure Cosmos DB account name>;AccountKey=<Azure Cosmos DB account key>;TableEndpoint=https://<Account name>.table.cosmosdb.azure.com:443 /t.TableName:<Table name> /t.Overwrite
-```
- ## Migrate data by using AzCopy
-Using the AzCopy command-line utility is the other option for migrating data from Azure Table storage to the Azure Cosmos DB Table API. To use AzCopy, you first export your data as described in [Export data from Table storage](/previous-versions/azure/storage/storage-use-azcopy#export-data-from-table-storage), then import the data to Azure Cosmos DB as described in [Azure Cosmos DB Table API](/previous-versions/azure/storage/storage-use-azcopy#import-data-into-table-storage).
+You can also use the AzCopy command-line utility to migrate data from Table Storage to the Azure Cosmos DB Table API. To use AzCopy, you first export your data as described in [Export data from Table Storage](/previous-versions/azure/storage/storage-use-azcopy#export-data-from-table-storage). Then, you import the data to Azure Cosmos DB as described in [Azure Cosmos DB Table API](/previous-versions/azure/storage/storage-use-azcopy#import-data-into-table-storage).
-When performing the import into Azure Cosmos DB, refer to the following sample. Note that the /Dest value uses cosmosdb, not core.
+Refer to the following sample when you're importing into Azure Cosmos DB. Note that the `/Dest` value uses `cosmosdb`, not `core`.
Example import command:
Example import command:
AzCopy /Source:C:\myfolder\ /Dest:https://myaccount.table.cosmosdb.windows.net/mytable1/ /DestKey:key /Manifest:"myaccount_mytable_20140103T112020.manifest" /EntityOperation:InsertOrReplace ```
-## Migrate from Table API (preview) to Table API
-
-> [!WARNING]
-> If you want to immediately enjoy the benefits of the generally available tables then please migrate your existing preview tables as specified in this section, otherwise we will be performing auto-migrations for existing preview customers in the coming weeks, note however that auto-migrated preview tables will have certain restrictions to them that newly created tables will not.
-
-The Table API is now generally available (GA). There are differences between the preview and GA versions of tables both in the code that runs in the cloud as well as in code that runs at the client. Therefore it is not advised to try to mix a preview SDK client with a GA Table API account, and vice versa. Table API preview customers who want to continue to use their existing tables but in a production environment need to migrate from the preview to the GA environment, or wait for auto-migration. If you wait for auto-migration, you will be notified of the restrictions on the migrated tables. After migration, you will be able to create new tables on your existing account without restrictions (only migrated tables will have restrictions).
-
-To migrate from Table API (preview) to the generally available Table API:
-
-1. Create a new Azure Cosmos DB account and set its API type to Azure Table as described in [Create a database account](create-table-dotnet.md#create-a-database-account).
-
-2. Change clients to use a GA release of the [Table API SDKs](table-sdk-dotnet.md).
+## Next steps
-3. Migrate the client data from preview tables to GA tables by using the Data Migration tool. Instructions on using the data migration tool for this purpose are described in [Data Migration tool](#data-migration-tool).
+Learn how to query data by using the Azure Cosmos DB Table API.
-## Next steps
+> [!div class="nextstepaction"]
+>[How to query data?](../cosmos-db/tutorial-query-table.md)
-In this tutorial you learned how to:
-> [!div class="checklist"]
-> * Import data with the Data Migration tool
-> * Import data with AzCopy
-> * Migrate from Table API (preview) to Table API
-You can now proceed to the next tutorial and learn how to query data using the Azure Cosmos DB Table API.
-> [!div class="nextstepaction"]
->[How to query data?](../cosmos-db/tutorial-query-table.md)
data-factory Pipeline Trigger Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/pipeline-trigger-troubleshoot-guide.md
Pipelines may use the Web activity to call ADF REST API methods if and only if t
**Resolution**
-Before using the Azure Data FactoryΓÇÖs REST API in a Web activityΓÇÖs Settings tab, security must be configured. Azure Data Factory pipelines may use the Web activity to call ADF REST API methods if and only if the Azure Data Factory managed identity is assigned the *Contributor* role. Begin by opening the Azure portal and clicking the **All resources** link on the left menu. Select **Azure Data Factory** to add ADF managed identity with Contributor role by clicking the **Add** button in the *Add a role assignment** box.
+Before using the Azure Data Factory's REST API in a Web activity's Settings tab, security must be configured. Azure Data Factory pipelines may use the Web activity to call ADF REST API methods if and only if the Azure Data Factory managed identity is assigned the *Contributor* role. Begin by opening the Azure portal and selecting the **All resources** link on the left menu. Select **Azure Data Factory** to add the ADF managed identity with the Contributor role by selecting the **Add** button in the *Add a role assignment* box.
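For reference, here's a hedged Azure CLI sketch of the same role assignment; the portal steps above are the documented path, and the IDs and names below are placeholders:

```bash
# Hypothetical values: <principal-id> is the data factory's managed identity object ID.
az role assignment create \
  --assignee <principal-id> \
  --role Contributor \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.DataFactory/factories/<factory-name>"
```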
-### How to perform activity-level errors and failures in pipelines
+### How to check and branch on activity-level success and failure in pipelines
**Cause**
data-factory Quickstart Create Data Factory Dot Net https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/quickstart-create-data-factory-dot-net.md
Next, create a C# .NET console application in Visual Studio:
string pipelineName = "Adfv2QuickStartPipeline"; ``` > [!NOTE]
-> For US Azure Gov accounts, you have to use BaseUri of *https://management.usgovcloudapi.net* instead of *https://management.azure.com/*, and then create data factory management client.
->
+> For Sovereign clouds, you must use the appropriate cloud-specific endpoints for ActiveDirectoryAuthority and ResourceManagerUrl (BaseUri).
+> For example, in US Azure Gov, you would use the authority https://login.microsoftonline.us instead of https://login.microsoftonline.com, use https://management.usgovcloudapi.net instead of https://management.azure.com/, and then create the data factory management client.
+> You can use PowerShell to get the endpoint URLs for the various clouds by running `Get-AzEnvironment | Format-List`, which returns a list of endpoints for each cloud environment.
3. Add the following code to the **Main** method that creates an instance of **DataFactoryManagementClient** class. You use this object to create a data factory, a linked service, datasets, and a pipeline. You also use this object to monitor the pipeline run details. ```csharp // Authenticate and create a data factory management client
- var context = new AuthenticationContext("https://login.windows.net/" + tenantID);
+ var context = new AuthenticationContext("https://login.microsoftonline.com/" + tenantID);
ClientCredential cc = new ClientCredential(applicationId, authenticationKey); AuthenticationResult result = context.AcquireTokenAsync( "https://management.azure.com/", cc).Result;
data-factory Quickstart Create Data Factory Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/quickstart-create-data-factory-python.md
ms.devlang: python Previously updated : 01/15/2021 Last updated : 04/06/2021
Pipelines can ingest data from disparate data stores. Pipelines process or trans
``` > [!NOTE] > The "azure-identity" package might have conflicts with "azure-cli" on some common dependencies. If you meet any authentication issue, remove "azure-cli" and its dependencies, or use a clean machine without installing "azure-cli" package to make it work.
+ > For Sovereign clouds, you must use the appropriate cloud-specific constants. For instructions to connect with Python in sovereign clouds, see [Connect to all regions using Azure libraries for Python multi-cloud](https://docs.microsoft.com/azure/developer/python/azure-sdk-sovereign-domain).
+
## Create a data factory client
+
1. Create a file named **datafactory.py**. Add the following statements to add references to namespaces. ```python
Pipelines can ingest data from disparate data stores. Pipelines process or trans
``` 3. Add the following code to the **Main** method that creates an instance of DataFactoryManagementClient class. You use this object to create the data factory, linked service, datasets, and pipeline. You also use this object to monitor the pipeline run details. Set **subscription_id** variable to the ID of your Azure subscription. For a list of Azure regions in which Data Factory is currently available, select the regions that interest you on the following page, and then expand **Analytics** to locate **Data Factory**: [Products available by region](https://azure.microsoft.com/global-infrastructure/services/). The data stores (Azure Storage, Azure SQL Database, etc.) and computes (HDInsight, etc.) used by data factory can be in other regions.
+
```python def main():
Pipelines can ingest data from disparate data stores. Pipelines process or trans
# Specify your Active Directory client ID, client secret, and tenant ID credentials = ClientSecretCredential(client_id='<service principal ID>', client_secret='<service principal key>', tenant_id='<tenant ID>')
+
+ # Specify the following for sovereign clouds: import the right cloud constant, and then use it to connect.
+ # from msrestazure.azure_cloud import AZURE_PUBLIC_CLOUD as CLOUD
+ # credentials = DefaultAzureCredential(authority=CLOUD.endpoints.active_directory, tenant_id=tenant_id)
+
resource_client = ResourceManagementClient(credentials, subscription_id) adf_client = DataFactoryManagementClient(credentials, subscription_id)
data-factory Quickstart Create Data Factory Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/quickstart-create-data-factory-rest-api.md
If you don't have an Azure subscription, create a [free](https://azure.microsoft
* Create a **blob container** in Blob Storage, create an input **folder** in the container, and upload some files to the folder. You can use tools such as [Azure Storage Explorer](https://azure.microsoft.com/features/storage-explorer/) to connect to Azure Blob storage, create a blob container, upload input file, and verify the output file. * Install **Azure PowerShell**. Follow the instructions in [How to install and configure Azure PowerShell](/powershell/azure/install-Az-ps). This quickstart uses PowerShell to invoke REST API calls. * **Create an application in Azure Active Directory** following [this instruction](../active-directory/develop/howto-create-service-principal-portal.md#register-an-application-with-azure-ad-and-create-a-service-principal). Make note of the following values that you use in later steps: **application ID**, **clientSecrets**, and **tenant ID**. Assign application to "**Contributor**" role.-
+>[!NOTE]
+> For Sovereign clouds, you must use the appropriate cloud-specific endpoints for ActiveDirectoryAuthority and ResourceManagerUrl (BaseUri).
+> You can use PowerShell to get the endpoint URLs for the various clouds by running `Get-AzEnvironment | Format-List`, which returns a list of endpoints for each cloud environment.
+>
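As a hedged sketch of the application registration prerequisite, with a placeholder name and scope, the Azure CLI can create the service principal and the Contributor role assignment in one step; the output includes the application ID (`appId`), client secret (`password`), and tenant ID (`tenant`):

```bash
# Hypothetical name and scope; assumes the Azure CLI.
az ad sp create-for-rbac \
  --name adf-quickstart-app \
  --role Contributor \
  --scopes "/subscriptions/<subscription-id>"
```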
## Set global variables 1. Launch **PowerShell**. Keep Azure PowerShell open until the end of this quickstart. If you close and reopen, you need to run the commands again.
dedicated-hsm Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dedicated-hsm/faq.md
The [Thales Luna 7 HSM model A790](https://cpl.thalesgroup.com/encryption/hardwa
### Q: How many keys can be supported in Dedicated HSM?
-The maximum number of keys is a function of the memory available. The SafeNet Luna 7 model A790 in use has 32MB of memory. The following numbers are also applicable to key pairs if using asymmetric keys.
+The maximum number of keys is a function of the memory available. The Thales Luna 7 model A790 in use has 32MB of memory. The following numbers are also applicable to key pairs if using asymmetric keys.
* RSA-2048 - 19,000 * ECC-P256 - 91,000
defender-for-iot Architecture Agent Based https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/architecture-agent-based.md
Title: Agent-based solution architecture
+ Title: What is agent-based solution architecture
description: Learn about Azure Defender for IoT agent-based architecture and information flow. Last updated 1/25/2021
-# Agent-based solution for device builders
+# What is agent-based solution for device builders
This article describes the functional system architecture of the Defender for IoT agent-based solution. Azure Defender for IoT offers two sets of capabilities to fit your environment's needs, agentless solution for organizations, and agent-based solution for device builders.
Defender for IoT recommendations and alerts (analytics pipeline output) is writt
:::image type="content" source="media/architecture/micro-agent-architecture.png" alt-text="The micro agent architecture.":::
-## See also
+## Next steps
[Defender for IoT FAQ](resources-frequently-asked-questions.md)
defender-for-iot Architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/architecture.md
Title: Agentless solution architecture
+ Title: What is agentless solution architecture
description: Learn about Azure Defender for IoT agentless architecture and information flow. Last updated 1/25/2021
The Defender for IoT portal in Azure is used to help you:
- Update Threat Intelligence packages
-## See also
+## Next steps
[Defender for IoT FAQ](resources-frequently-asked-questions.md)
defender-for-iot Concept Agent Portfolio Overview Os Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/concept-agent-portfolio-overview-os-support.md
Title: Agent portfolio overview and OS support (Preview) description: Azure Defender for IoT provides a large portfolio of agents based on the device type. Last updated 1/20/2021-+ # Agent portfolio overview and OS support (Preview)
defender-for-iot Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/getting-started.md
Title: "Quickstart: Getting started"
-description: In this quickstart you will learn how to get started with understanding the basic workflow for Defender for IoT deployment.
+ Title: 'Quickstart: Getting started'
+description: In this quickstart, learn how to get started with understanding the basic workflow for Defender for IoT deployment.
Last updated 2/18/2021
This article provides an overview of the steps you'll take to set up Azure Defen
## Prerequisites
-None
+- None
## Permission requirements
Registration includes:
To register: 1. Go to the Azure Defender for IoT portal.+ 1. Select **Onboard subscription**.+ 1. On the **Pricing** page, select a subscription or create a new one, and add the number of committed devices.+ 1. Select the **Download the on-premises management console** tab and save the downloaded activation file. This file contains the aggregate committed devices that you defined. The file will be uploaded to the management console after initial sign-in.
-For information on how to offboard a subscription, see [Offboard a subscription](how-to-manage-sensors-on-the-cloud.md#offboard-a-subscription).
+For information on how to offboard a subscription, see [Offboard a subscription](how-to-manage-subscriptions.md#offboard-a-subscription).
## Install and set up the on-premises management console
To install and set up:
Onboard a sensor by registering it with Azure Defender for IoT and downloading a sensor activation file: 1. Define a sensor name and associate it with a subscription.+ 1. Choose a sensor management mode: - **Cloud connected sensors**: Information that sensors detect is displayed in the sensor console. In addition, alert information is delivered through an IoT hub and can be shared with other Azure services, such as Azure Sentinel.
For more information, see [Onboard and manage sensors in the Defender for IoT po
Download the ISO package from the Azure Defender for IoT portal, install the software, and set up the sensor. 1. Select **Getting Started** from the Defender for IoT portal.+ 1. Select **Set up sensor**.+ 1. Choose a version and select **Download**.+ 1. Install the sensor software. For more information, see [Defender for IoT installation](how-to-install-software.md).+ 1. Activate and set up your sensor. For more information, see [Sign in and activate a sensor](how-to-activate-and-set-up-your-sensor.md). ## Connect sensors to an on-premises management console
defender-for-iot How To Activate And Set Up Your On Premises Management Console https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-activate-and-set-up-your-on-premises-management-console.md
To sign in to the management console:
If you forgot your password, select the **Recover Password** option, and see [Password recovery](how-to-manage-the-on-premises-management-console.md#password-recovery) for instructions on how to recover your password.
-## Get and upload an activation file
+## Activate the on-premises management console
After you sign in for the first time, you will need to activate the on-premises management console by getting, and uploading an activation file.
-To get an activation file:
+To activate the on-premises management console:
-1. Navigate to the **Pricing** page of the Azure Defender for IoT portal.
-1. Select the subscription to associate the on-premises management console to.
-1. Select the **Download the activation file for the management console** tab. The activation file is downloaded.
+1. Sign in to the on-premises management console.
+
+1. In the alert notification at the top of the screen, select the **Take Action** link.
+
+ :::image type="content" source="media/how-to-manage-sensors-from-the-on-premises-management-console/take-action.png" alt-text="Select the Take Action link from the alert on the top of the screen.":::
+
+1. In the Activation popup screen, select the **Azure portal** link.
+
+ :::image type="content" source="media/how-to-manage-sensors-from-the-on-premises-management-console/azure-portal.png" alt-text="Select the Azure portal link from the popup message.":::
+
+1. Select the subscription to associate with the on-premises management console, and then select **Download on-premises management console activation file**. The activation file is downloaded.
:::image type="content" source="media/how-to-manage-sensors-from-the-on-premises-management-console/cloud_download_opm_activation_file.png" alt-text="Download the activation file.":::
-To upload an activation file:
+ If you have not already onboarded a subscription, see [Onboard a subscription](how-to-manage-subscriptions.md#onboard-a-subscription).
-1. Navigate to the **System Settings** page on the on-premises management console.
-1. Select the **Activation** icon :::image type="icon" source="media/how-to-manage-sensors-from-the-on-premises-management-console/activation-icon.png" border="false":::.
-1. Select **Choose a File**, and select the file that downloaded.
+1. Navigate back to the **Activation** popup screen and select **Choose File**.
+
+1. Select the downloaded file.
After initial activation, the number of monitored devices can exceed the number of committed devices defined during onboarding. This occurs if you connect more sensors to the management console. If there's a discrepancy between the number of monitored devices and the number of committed devices, a warning appears on the management console. If this happens, upload a new activation file.

## Set up a certificate
-Following installation of the management console, a local self-signed certificate is generated and used to access the console. After an administrator signs in to the management console for the first time, that user is prompted to onboard an SSL/TLS certificate.
+After you install the management console, a local self-signed certificate is generated. This certificate is used to access the console. After an administrator signs in to the management console for the first time, that user is prompted to onboard an SSL/TLS certificate.
Two levels of security are available:
The console supports the following types of certificates:
To upload a certificate:

1. When you're prompted after sign-in, define a certificate name.
1. Upload the CRT and key files.
1. Enter a passphrase and upload a PEM file if necessary.
-You might need to refresh your screen after you upload the CA-signed certificate.
+You may need to refresh your screen after you upload the CA-signed certificate.
To disable validation between the management console and connected sensors:

1. Select **Next**.
1. Turn off the **Enable system-wide validation** toggle.

For information about uploading a new certificate, supported certificate files, and related items, see [Manage the on-premises management console](how-to-manage-the-on-premises-management-console.md).
Two options are available for connecting Azure Defender for IoT sensors to the o
After connecting, you must set up a site with these sensors.
-### Connect sensors from the sensor console
+### Connect sensors to the on-premises management console from the sensor console
-To connect specific sensors to the on-premises management console from the sensor console:
+You can connect sensors to the on-premises management console from the sensor console:
-1. On the left pane of the sensor console, select **System Settings**.
+1. On the on-premises management console, select **System Settings**.
-2. Select **Connection to Management**.
+1. Copy the connection string by selecting **Copy Connection String**.
- :::image type="content" source="media/how-to-activate-and-set-up-your-on-premises-management-console/connection-status-window-not-connected.png" alt-text="Screenshot of the status window of an on-premises management console, showing Unconnected.":::
+ :::image type="content" source="media/how-to-manage-sensors-from-the-on-premises-management-console/connection-string.png" alt-text="Copy the connection string for the sensor.":::
-3. In the **Address** text box, enter the IP address of the on-premises management console to which you want to connect.
+1. On the sensor, navigate to **System Settings** and select **Connection to Management Console** :::image type="icon" source="media/how-to-manage-sensors-from-the-on-premises-management-console/connection-to-management-console.png" border="false":::
-4. Select **Connect**. The status changes:
+1. Paste the copied connection string from the on-premises management console into the **Connection string** field.
- :::image type="content" source="media/how-to-activate-and-set-up-your-on-premises-management-console/connection-status-window-connected.png" alt-text="Screenshot of the status window of an on-premises management console, showing Connected.":::
+ :::image type="content" source="media/how-to-manage-sensors-from-the-on-premises-management-console/paste-connection-string.png" alt-text="Paste the copied connection string into the connection string field.":::
+
+1. Select **Connect**.
### Connect sensors by using tunneling
Access groups enable better control over where users manage and analyze devices
### How it works
-For each site, you can define a business unit and a region. Then you can add zones, which are logical entities in your network.
+You can define a business unit and a region for each site in your organization. You can then add zones, which are logical entities that exist in your network.
-For each zone, you should assign at least one sensor. The five-level model provides the flexibility and granularity required to deliver the protection system that reflects the structure of your organization.
+You should assign at least one sensor per zone. The five-level model provides the flexibility and granularity required to deliver the protection system that reflects the structure of your organization.
-You can edit your sites directly from any of the map views. When you're opening a site from a map view, the number of open alerts appears next to each zone.
+Using the Enterprise View, you can edit your sites directly. When you select a site from the Enterprise View, the number of open alerts appears next to each zone.
To set up a site: 1. Add new business units to reflect your organization's logical structure.
-2. Add new regions to reflect your organization's regions.
-
-3. Add a site.
-
-4. Add zones to a site.
-
-5. Connect the sensors.
+ 1. From the Enterprise view, select **All Sites** > **Manage Business Units**.
-6. Assign sensor to site zones.
+ :::image type="content" source="media/how-to-manage-sensors-from-the-on-premises-management-console/manage-business-unit.png" alt-text="Select manage business unit from the all sites drop down menu on the enterprise view screen.":::
-To add business units:
+ 1. Enter the new business unit name and select **ADD**.
-1. From the Enterprise view, select **All Sites** > **Manage Business Units**.
+1. Add new regions to reflect your organization's regions.
- :::image type="content" source="media/how-to-activate-and-set-up-your-on-premises-management-console/manage-business-unit-screen.png" alt-text="Screenshot showing the Manage Business Units view.":::
+ 1. From the Enterprise View, select **All Regions** > **Manage Regions**.
-2. Enter the new business unit name and select **ADD**.
+ :::image type="content" source="media/how-to-manage-sensors-from-the-on-premises-management-console/manage-regions.png" alt-text="Select all regions and then manage regions to manage the regions in your enterprise.":::
-To add a new region:
+ 1. Enter the new region name and select **ADD**.
-1. From the Enterprise view, select **All Regions** > **Manage Regions**.
+1. Add a site.
- :::image type="content" source="media/how-to-activate-and-set-up-your-on-premises-management-console/manage-regions-screen.png" alt-text="Screenshot showing the Manage Regions view.":::
+ 1. From the Enterprise view, select :::image type="icon" source="media/how-to-activate-and-set-up-your-on-premises-management-console/new-site-icon.png" border="false"::: on the top bar. Your cursor appears as a plus sign (**+**).
-2. Enter the new region name and select **ADD**.
+ 1. Position the **+** at the location of the new site and select it. The **Create New Site** dialog box opens.
-To add a new site:
+ :::image type="content" source="media/how-to-activate-and-set-up-your-on-premises-management-console/create-new-site-screen.png" alt-text="Screenshot of the Create New Site view.":::
-1. From the Enterprise view, select :::image type="icon" source="media/how-to-activate-and-set-up-your-on-premises-management-console/new-site-icon.png" border="false"::: on the top bar. Your cursor appears as a plus sign (**+**).
+ 1. Define the name and the physical address for the new site and select **SAVE**. The new site appears on the site map.
-2. Position the **+** at the location of the new site and select it. The **Create New Site** dialog box opens.
+4. [Add zones to a site](#create-enterprise-zones).
- :::image type="content" source="media/how-to-activate-and-set-up-your-on-premises-management-console/create-new-site-screen.png" alt-text="Screenshot of the Create New Site view.":::
+5. [Connect the sensors](how-to-manage-individual-sensors.md#connect-a-sensor-to-the-management-console).
-3. Define the name and the physical address for the new site and select **SAVE**. The new site appears on the site map.
+6. [Assign sensors to site zones](#assign-sensors-to-zones).
To delete a site: 1. In the **Site Management** window, select :::image type="icon" source="media/how-to-activate-and-set-up-your-on-premises-management-console/expand-view-icon.png" border="false"::: from the bar that contains the site name, and then select **Delete Site**. The confirmation box appears, verifying that you want to delete the site.
-2. In the confirmation box, select **YES**. The confirmation box closes, and the **Site Management** window appears without the site that you've deleted.
+2. In the confirmation box, select **CONFIRM**.
## Create enterprise zones
To add a zone to a site:
:::image type="content" source="media/how-to-activate-and-set-up-your-on-premises-management-console/create-new-zone-screen.png" alt-text="Screenshot of the Create New Zone view.":::
-2. Enter the zone name.
+1. Enter the zone name.
-3. Enter a description for the new zone that clearly states the characteristics that you used to divide the site into zones.
+1. Enter a description for the new zone that clearly states the characteristics that you used to divide the site into zones.
-4. Select **SAVE**. The new zone appears in the **Site Management** window under the site that this zone belongs to.
+1. Select **SAVE**. The new zone appears in the **Site Management** window under the site that this zone belongs to.
To edit a zone:
To edit a zone:
:::image type="content" source="media/how-to-activate-and-set-up-your-on-premises-management-console/zone-edit-screen.png" alt-text="Screenshot that shows the Edit Zone dialog box.":::
-2. Edit the zone parameters and select **SAVE**.
+1. Edit the zone parameters and select **SAVE**.
To delete a zone: 1. In the **Site Management** window, select :::image type="icon" source="media/how-to-activate-and-set-up-your-on-premises-management-console/expand-view-icon.png" border="false"::: from the bar that contains the zone name, and then select **Delete Zone**.
-2. In the confirmation box, select **YES**.
+1. In the confirmation box, select **YES**.
To filter according to the connectivity status:
To assign a sensor:
:::image type="content" source="media/how-to-activate-and-set-up-your-on-premises-management-console/unassigned-sensors-view.png" alt-text="Screenshot of the Unassigned Sensors view.":::
-2. Verify that the **Connectivity** status is connected. If not, see [Connect sensors to the on-premises management console](#connect-sensors-to-the-on-premises-management-console) for details about connecting.
+1. Verify that the **Connectivity** status is connected. If not, see [Connect sensors to the on-premises management console](#connect-sensors-to-the-on-premises-management-console) for details about connecting.
-3. Select :::image type="icon" source="media/how-to-activate-and-set-up-your-on-premises-management-console/assign-icon.png" border="false"::: for the sensor that you want to assign.
+1. Select :::image type="icon" source="media/how-to-activate-and-set-up-your-on-premises-management-console/assign-icon.png" border="false"::: for the sensor that you want to assign.
-4. In the **Assign Sensor** dialog box, select the business unit, region, site, and zone to assign.
+1. In the **Assign Sensor** dialog box, select the business unit, region, site, and zone to assign.
:::image type="content" source="media/how-to-activate-and-set-up-your-on-premises-management-console/assign-sensor-screen.png" alt-text="Screenshot of the Assign Sensor view.":::
-5. Select **ASSIGN**.
+1. Select **ASSIGN**.
To unassign and delete a sensor: 1. Disconnect the sensor from the on-premises management console. See [Connect sensors to the on-premises management console](#connect-sensors-to-the-on-premises-management-console) for details.
-2. In the **Site Management** window, select the sensor and select :::image type="icon" source="media/how-to-activate-and-set-up-your-on-premises-management-console/unassign-sensor-icon.png" border="false":::. The sensor appears in the list of unassigned sensors after a few moments.
+1. In the **Site Management** window, select the sensor and select :::image type="icon" source="media/how-to-activate-and-set-up-your-on-premises-management-console/unassign-sensor-icon.png" border="false":::. The sensor appears in the list of unassigned sensors after a few moments.
-3. To delete the unassigned sensor from the site, select the sensor from the list of unassigned sensors and select :::image type="icon" source="media/how-to-activate-and-set-up-your-on-premises-management-console/delete-icon.png" border="false":::.
+1. To delete the unassigned sensor from the site, select the sensor from the list of unassigned sensors and select :::image type="icon" source="media/how-to-activate-and-set-up-your-on-premises-management-console/delete-icon.png" border="false":::.
## See also
defender-for-iot How To Activate And Set Up Your Sensor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-activate-and-set-up-your-sensor.md
Your sensor was onboarded to Azure Defender for IoT in a specific management mod
| **Cloud connected mode** | Information that the sensor detects is displayed in the sensor console. Alert information is also delivered through the IoT hub and can be shared with other Azure services, such as Azure Sentinel. |
| **Locally connected mode** | Information that the sensor detects is displayed in the sensor console. Detection information is also shared with the on-premises management console, if the sensor is connected to it. |
-A locally connected or cloud-connected activation file was generated and downloaded for this sensor during onboarding. The activation file contains instructions for the management mode of the sensor. *A unique activation file should be uploaded to each sensor you deploy.* The first time you sign in, you need to upload the relevant activation file for this sensor.
+A locally connected or cloud-connected activation file was generated and downloaded for this sensor during onboarding. The activation file contains instructions for the management mode of the sensor. *A unique activation file should be uploaded to each sensor you deploy.* The first time you sign in, you need to upload the relevant activation file for this sensor.
:::image type="content" source="media/how-to-activate-and-set-up-your-sensor/azure-defender-for-iot-activation-file-download-button.png" alt-text="Azure Defender for IoT portal, onboard sensor.":::
defender-for-iot How To Manage Sensors On The Cloud https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-manage-sensors-on-the-cloud.md
Title: Onboard and manage sensors and subscriptions in the Defender for IoT portal
+ Title: Manage sensors and subscriptions in the Defender for IoT portal
description: Learn how to onboard, view, and manage sensors in the Defender for IoT portal. Last updated 2/18/2021
-# Onboard and manage sensors and subscriptions in the Defender for IoT portal
+# Manage sensors and subscriptions in the Defender for IoT portal
This article describes how to onboard, view, and manage sensors in the [Defender for IoT portal](https://portal.azure.com/#blade/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/Getting_Started).
To offboard a subscription:
The on-premises environment is not affected, but you should uninstall the sensor from the on-premises environment, or reassign the sensor to another subscription, so as to prevent any related data from flowing to the on-premises management console.
-## See also
+## Next steps
[Activate and set up your sensor](how-to-activate-and-set-up-your-sensor.md)
defender-for-iot How To Manage Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-manage-subscriptions.md
+
+ Title: Manage subscriptions
+description: Subscriptions consist of managed committed devices and can be onboarded or offboarded as needed.
Last updated : 3/30/2021
+# Manage a subscription
+
+Subscriptions are managed on a monthly basis. When you onboard a subscription, you will be billed for that subscription until the end of the month. Similarly, when you offboard a subscription, you will be billed for the remainder of the month in which you offboarded that subscription.
+
+## Onboard a subscription
+
+To get started with Azure Defender for IoT, you must have a Microsoft Azure subscription. If you do not have a subscription, you can sign up for a free account. If you already have access to an Azure subscription, but it isn't listed, check your account details, and confirm your permissions with the subscription owner.
+
+To onboard a subscription:
+
+1. Navigate to the Azure portal's Pricing page.
+
+ :::image type="content" source="media/how-to-manage-subscriptions/no-subscription.png" alt-text="Navigate to the Azure portal's Pricing page.":::
+
+1. Select **Onboard subscription**.
+
+1. In the **Onboard subscription** window, select your subscription and the number of committed devices from the drop-down menus.
+
+ :::image type="content" source="media/how-to-manage-subscriptions/onboard-subscription.png" alt-text="select your subscription and the number of committed devices.":::
+
+1. Select **Onboard**.
+
+## Offboard a subscription
+
+Subscriptions are managed on a monthly basis. When you offboard a subscription, you will be billed for that subscription until the end of the month.
+
+Uninstall all sensors that are associated with the subscription prior to offboarding the subscription. For more information on how to delete a sensor, see [Delete a sensor](how-to-manage-sensors-on-the-cloud.md#delete-a-sensor).
+
+To offboard a subscription:
+
+1. Navigate to the **Pricing** page.
+1. Select the subscription, and then select the **delete** icon :::image type="icon" source="media/how-to-manage-sensors-on-the-cloud/delete-icon.png" border="false":::.
+1. In the confirmation popup, select the checkbox to confirm you have deleted all sensors associated with the subscription.
+
+ :::image type="content" source="media/how-to-manage-sensors-on-the-cloud/offboard-popup.png" alt-text="Select the checkbox and select offboard to offboard your sensor.":::
+
+1. Select the **Offboard** button.
+
+## Next steps
+
+[Activate and set up your on-premises management console](how-to-activate-and-set-up-your-on-premises-management-console.md)
defender-for-iot Quickstart Azure Rtos Security Module https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/quickstart-azure-rtos-security-module.md
Title: "Quickstart: Configure and enable the Defender-IoT-micro-agent for Azure RTOS"
-description: Learn how to onboard and enable the Defender-IoT-micro-agent for Azure RTOS service in your Azure IoT Hub.
+ Title: 'Quickstart: Configure and enable the Defender-IoT-micro-agent for Azure RTOS'
+description: In this quickstart, learn how to onboard and enable the Defender-IoT-micro-agent for Azure RTOS service in your Azure IoT Hub.
Last updated 01/24/2021
defender-for-iot Quickstart Building The Defender Micro Agent From Source https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/quickstart-building-the-defender-micro-agent-from-source.md
Title: Build the Defender micro agent from source code (Preview)
-description: Micro Agent includes an infrastructure, which can be used to customize your distribution.
+ Title: 'Quickstart: Build the Defender micro agent from source code (Preview)'
+description: In this quickstart, learn about the Micro Agent which includes an infrastructure that can be used to customize your distribution.
Last updated 1/18/2021
-# Build the Defender micro agent from source code (Preview)
+# Quickstart: Build the Defender micro agent from source code (Preview)
The Micro Agent includes an infrastructure that you can use to customize your distribution. To see a list of the available configuration parameters, look at the `configs/LINUX_BASE.conf` file.
To override the values:
`cmake -DCMAKE_BUILD_TYPE=Debug -Dlog_level=DEBUG -Dlog_level_cmdline:BOOL=ON -DIOT_SECURITY_MODULE_DIST_TARGET=UBUNTU1804 ..`
-## Baseline Configuration signing
-
-The agent verifies the authenticity of configuration files that are placed on the disk to mitigate tampering, by default.
-
-You can stop this process by defining the preprocessor flag `ASC_BASELINE_CONF_SIGN_CHECK_DISABLE`.
-
-We don't recommend turning off the signature check for production environments.
-
-If you require a different configuration for production scenarios, contact the Defender for IoT team.
-
-## Prerequisites
+## Prerequisites
1. Contact your account manager to ask for access to Defender for IoT source code.
If you require a different configuration for production scenarios, contact the D
1. (Optional) Download and install [VSCode](https://code.visualstudio.com/download )
-1. (Optional) Install the [C/C++ extension](https://code.visualstudio.com/docs/languages/cpp ) for VSCode.
+1. (Optional) Install the [C/C++ extension](https://code.visualstudio.com/docs/languages/cpp) for VSCode.
+
+## Baseline Configuration signing
+
+By default, the agent verifies the authenticity of configuration files that are placed on the disk, to mitigate tampering.
+
+You can stop this process by defining the preprocessor flag `ASC_BASELINE_CONF_SIGN_CHECK_DISABLE`.
+
+We don't recommend turning off the signature check for production environments.
+
+If you require a different configuration for production scenarios, contact the Defender for IoT team.
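As a rough illustration only: the override syntax shown earlier in this article suggests the flag could be defined at build-configuration time, but the exact define mechanism is an assumption; check `configs/LINUX_BASE.conf` and the build scripts before relying on it.

```bash
# Hypothetical sketch: define the preprocessor flag when configuring the build to
# skip baseline configuration signature checks (not recommended for production).
cmake -DCMAKE_BUILD_TYPE=Debug \
      -DASC_BASELINE_CONF_SIGN_CHECK_DISABLE=ON \
      -DIOT_SECURITY_MODULE_DIST_TARGET=UBUNTU1804 ..
```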
## Building the Defender IoT Micro Agent
defender-for-iot Quickstart Configure Your Solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/quickstart-configure-your-solution.md
Title: "Quickstart: Add Azure resources to your IoT solution"
+ Title: 'Quickstart: Add Azure resources to your IoT solution'
description: In this quickstart, learn how to configure your end-to-end IoT solution using Azure Defender for IoT. Last updated 01/25/2021
This article provides an explanation of how to perform initial configuration of
## Prerequisites
-None
+- None
## What is Defender for IoT?
defender-for-iot Quickstart Create Custom Alerts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/quickstart-create-custom-alerts.md
Title: Create custom alerts description: Understand, create, and assign custom device alerts for the Azure Defender for IoT security service.-+ Last updated 09/04/2020
defender-for-iot Quickstart Create Micro Agent Module Twin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/quickstart-create-micro-agent-module-twin.md
Title: Create a Defender IoT micro agent module twin (Preview)
-description: Learn how to create individual DefenderIotMicroAgent module twins for new devices.
+ Title: 'Quickstart: Create a Defender IoT micro agent module twin (Preview)'
+description: In this quickstart, learn how to create individual DefenderIotMicroAgent module twins for new devices.
Last updated 1/20/2021
-# Create a Defender IoT micro agent module twin (Preview)
+# Quickstart: Create a Defender IoT micro agent module twin (Preview)
You can create individual **DefenderIotMicroAgent** module twins for new devices. You can also batch create module twins for all devices in an IoT Hub.
+## Prerequisites
+
+- None
+ ## Device twins For IoT solutions built in Azure, device twins play a key role in both device management and process automation.
To verify if a Defender-IoT-micro-agent twin exists for a specific device:
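One way to perform this check is with the Azure CLI IoT extension. The following is a hedged sketch, not the article's own procedure: the hub and device names are placeholders, and it assumes the module twin was created with the module ID `DefenderIotMicroAgent` described above.

```bash
# Illustrative only: list module identities on a device, then show the
# DefenderIotMicroAgent module twin if it exists (requires the azure-iot CLI extension).
az iot hub module-identity list --hub-name myExampleHub --device-id myExampleDevice --output table
az iot hub module-twin show --hub-name myExampleHub --device-id myExampleDevice --module-id DefenderIotMicroAgent
```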
## Next steps
-Advance to the next article to learn how to [investigate security recommendations](quickstart-investigate-security-recommendations.md).
+> [!div class="nextstepaction"]
+> [investigate security recommendations](quickstart-investigate-security-recommendations.md)
defender-for-iot Quickstart Create Security Twin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/quickstart-create-security-twin.md
Title: "Quickstart: Create a security module twin"
+ Title: 'Quickstart: Create a security module twin'
description: In this quickstart, learn how to create a Defender for IoT module twin for use with Azure Defender for IoT. Last updated 1/21/2021
This quickstart explains how to create individual _azureiotsecurity_ module twin
## Prerequisites
-None
+- None
## Understanding azureiotsecurity module twins
defender-for-iot Quickstart Onboard Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/quickstart-onboard-iot-hub.md
Title: "Quickstart: Onboard Defender for IoT to an agent-based solution"
-description: In this quickstart you will learn how to onboard and enable the Defender for IoT security service in your Azure IoT Hub.
+ Title: 'Quickstart: Onboard Defender for IoT to an agent-based solution'
+description: In this quickstart, you will learn how to onboard and enable the Defender for IoT security service in your Azure IoT Hub.
Last updated 1/20/2021
defender-for-iot Quickstart Standalone Agent Binary Installation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/quickstart-standalone-agent-binary-installation.md
Title: Install Defender for IoT micro agent (Preview)
-description: Learn how to install, and authenticate the Defender Micro Agent.
+ Title: 'Quickstart: Install Defender for IoT micro agent (Preview)'
+description: In this quickstart, learn how to install, and authenticate the Defender Micro Agent.
Last updated 3/9/2021
-# Install Defender for IoT micro agent (Preview)
+# Quickstart: Install Defender for IoT micro agent (Preview)
This article provides an explanation of how to install, and authenticate the Defender micro agent.
sudo apt-get install defender-iot-micro-agent=<version>
## Next steps
-[Building the Defender micro agent from source code](quickstart-building-the-defender-micro-agent-from-source.md)
+> [!div class="nextstepaction"]
+> [Building the Defender micro agent from source code](quickstart-building-the-defender-micro-agent-from-source.md)
defender-for-iot Quickstart System Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/quickstart-system-prerequisites.md
Title: System prerequisites
-description: Get system prerequisites needed to run Azure Defender for IoT.
+ Title: 'Quickstart: System prerequisites'
+description: In this quickstart, get the system prerequisites needed to run Azure Defender for IoT.
Last updated 11/30/2020
-# System prerequisites
+# Quickstart: System prerequisites
+ This article lists the system prerequisites for running Azure Defender for IoT.
+## Prerequisites
+
+- None
+ ## Minimum requirements - Network switches that support traffic monitoring via SPAN port.
Defender for IoT routes all traffic from all European regions to the West Europe
For more information, see [IoT Hub supported regions](https://azure.microsoft.com/global-infrastructure/services/?products=iot-hub).
-## See also
+## Next steps
-- [Identify required appliances](how-to-identify-required-appliances.md)-- [About Azure Defender for IoT network setup](how-to-set-up-your-network.md)
+> [!div class="nextstepaction"]
+> [Identify required appliances](how-to-identify-required-appliances.md)
defender-for-iot Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/release-notes.md
Title: What's new in Azure Defender for IoT description: This article lets you know what's new in the latest release of Defender for IoT.-+ Last updated 03/14/2021
defender-for-iot Security Agent Architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/security-agent-architecture.md
Title: "Quickstart: Security agents overview"
-description: In this quickstart you will learn how to understand security agent architecture for the agents used in the Azure Defender for IoT service.
+ Title: 'Quickstart: Security agents overview'
+description: In this quickstart, learn how to understand security agent architecture for the agents used in the Azure Defender for IoT service.
Previously updated : 01/24/2021 Last updated : 4/4/2021 # Quickstart: Security agent reference architecture
Defender for IoT security agents are developed as open-source projects, and are a
## Prerequisites
-None
+- None
## Agent supported platforms
Defender for IoT offers different installer agents for 32 bit and 64-bit Windows
## Next steps

In this article, you got a high-level overview of the Defender for IoT Defender-IoT-micro-agent architecture and the available installers.
-To continue getting started with Defender for IoT deployment, use the following articles:
+To continue getting started with Defender for IoT deployment, see the following article:
> [!div class="nextstepaction"] > [security agent authentication methods](concept-security-agent-authentication-methods.md)
digital-twins Troubleshoot Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/troubleshoot-diagnostics.md
description: See how to enable logging with diagnostics settings and query the logs for immediate viewing. Previously updated : 11/9/2020 Last updated : 2/24/2021
Here are more details about the categories of logs that Azure Digital Twins coll
| ADTModelsOperation | Log all API calls pertaining to Models |
| ADTQueryOperation | Log all API calls pertaining to Queries |
| ADTEventRoutesOperation | Log all API calls pertaining to Event Routes as well as egress of events from Azure Digital Twins to an endpoint service like Event Grid, Event Hubs and Service Bus |
-| ADTDigitalTwinsOperation | Log all API calls pertaining to Azure Digital Twins |
+| ADTDigitalTwinsOperation | Log all API calls pertaining to individual twins |
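For example, the categories above can be turned on with a diagnostic setting. The following Azure CLI sketch is illustrative only; the instance and workspace resource IDs are placeholders, not values from this article.

```bash
# Illustrative only: enable two Azure Digital Twins log categories and route them
# to a Log Analytics workspace.
az monitor diagnostic-settings create \
  --name adt-diagnostics \
  --resource "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.DigitalTwins/digitalTwinsInstances/<instance-name>" \
  --workspace "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>" \
  --logs '[{"category": "ADTQueryOperation", "enabled": true}, {"category": "ADTEventRoutesOperation", "enabled": true}]'
```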
Each log category consists of operations of write, read, delete, and action. These map to REST API calls as follows:
Here is a comprehensive list of the operations and corresponding [Azure Digital
Each log category has a schema that defines how events in that category are reported. Each individual log entry is stored as text and formatted as a JSON blob. The fields in the log and example JSON bodies are provided for each log type below.
-`ADTDigitalTwinsOperation`, `ADTModelsOperation`, and `ADTQueryOperation` use a consistent API log schema; `ADTEventRoutesOperation` has its own separate schema.
+`ADTDigitalTwinsOperation`, `ADTModelsOperation`, and `ADTQueryOperation` use a consistent API log schema. `ADTEventRoutesOperation` extends the schema to contain an `endpointName` field in properties.
### API log schemas
-This log schema is consistent for `ADTDigitalTwinsOperation`, `ADTModelsOperation`, and `ADTQueryOperation`. It contains information pertinent to API calls to an Azure Digital Twins instance.
+This log schema is consistent for `ADTDigitalTwinsOperation`, `ADTModelsOperation`, and `ADTQueryOperation`. The same schema is also used for `ADTEventRoutesOperation`, with the **exception** of the `Microsoft.DigitalTwins/eventroutes/action` operation name (for more information about that schema, see the next section, [*Egress log schemas*](#egress-log-schemas)).
+
+The schema contains information pertinent to API calls to an Azure Digital Twins instance.
Here are the field and property descriptions for API logs.

| Field name | Data type | Description |
|--|--|--|
| `Time` | DateTime | The date and time that this event occurred, in UTC |
-| `ResourceID` | String | The Azure Resource Manager Resource ID for the resource where the event took place |
+| `ResourceId` | String | The Azure Resource Manager Resource ID for the resource where the event took place |
| `OperationName` | String | The type of action being performed during the event |
| `OperationVersion` | String | The API Version utilized during the event |
| `Category` | String | The type of resource being emitted |
Here are the field and property descriptions for API logs.
| `DurationMs` | String | How long it took to perform the event in milliseconds |
| `CallerIpAddress` | String | A masked source IP address for the event |
| `CorrelationId` | Guid | Customer provided unique identifier for the event |
-| `Level` | String | The logging severity of the event |
+| `ApplicationId` | Guid | Application ID used in bearer authorization |
+| `Level` | Int | The logging severity of the event |
| `Location` | String | The region where the event took place |
| `RequestUri` | Uri | The endpoint utilized during the event |
+| `TraceId` | String | `TraceId`, as part of [W3C's Trace Context](https://www.w3.org/TR/trace-context/). The ID of the whole trace used to uniquely identify a distributed trace across systems. |
+| `SpanId` | String | `SpanId` as part of [W3C's Trace Context](https://www.w3.org/TR/trace-context/). The ID of this request in the trace. |
+| `ParentId` | String | `ParentId` as part of [W3C's Trace Context](https://www.w3.org/TR/trace-context/). A request without a parent ID is the root of the trace. |
+| `TraceFlags` | String | `TraceFlags` as part of [W3C's Trace Context](https://www.w3.org/TR/trace-context/). Controls tracing flags such as sampling, trace level, etc. |
+| `TraceState` | String | `TraceState` as part of [W3C's Trace Context](https://www.w3.org/TR/trace-context/). Additional vendor-specific trace identification information to span across different distributed tracing systems. |
Below are example JSON bodies for these types of logs.
Below are example JSON bodies for these types of logs.
"resultType": "Success", "resultSignature": "200", "resultDescription": "",
- "durationMs": "314",
+ "durationMs": 8,
"callerIpAddress": "13.68.244.*", "correlationId": "2f6a8e64-94aa-492a-bc31-16b9f0b16ab3",
+ "identity": {
+ "claims": {
+ "appId": "872cd9fa-d31f-45e0-9eab-6e460a02d1f1"
+ }
+ },
"level": "4", "location": "southcentralus",
- "uri": "https://myinstancename.api.scus.digitaltwins.azure.net/digitaltwins/factory-58d81613-2e54-4faa-a930-d980e6e2a884?api-version=2020-10-31"
+ "uri": "https://myinstancename.api.scus.digitaltwins.azure.net/digitaltwins/factory-58d81613-2e54-4faa-a930-d980e6e2a884?api-version=2020-10-31",
+ "properties": {},
+ "traceContext": {
+ "traceId": "95ff77cfb300b04f80d83e64d13831e7",
+ "spanId": "b630da57026dd046",
+ "parentId": "9f0de6dadae85945",
+ "traceFlags": "01",
+ "tracestate": "k1=v1,k2=v2"
+ }
} ```
Below are example JSON bodies for these types of logs.
"resultType": "Success", "resultSignature": "201", "resultDescription": "",
- "durationMs": "935",
+ "durationMs": "80",
"callerIpAddress": "13.68.244.*", "correlationId": "9dcb71ea-bb6f-46f2-ab70-78b80db76882",
+ "identity": {
+ "claims": {
+ "appId": "872cd9fa-d31f-45e0-9eab-6e460a02d1f1"
+ }
+ },
"level": "4", "location": "southcentralus", "uri": "https://myinstancename.api.scus.digitaltwins.azure.net/Models?api-version=2020-10-31",
+ "properties": {},
+ "traceContext": {
+ "traceId": "95ff77cfb300b04f80d83e64d13831e7",
+ "spanId": "b630da57026dd046",
+ "parentId": "9f0de6dadae85945",
+ "traceFlags": "01",
+ "tracestate": "k1=v1,k2=v2"
+ }
} ```
Below are example JSON bodies for these types of logs.
"resultType": "Success", "resultSignature": "200", "resultDescription": "",
- "durationMs": "255",
+ "durationMs": "314",
"callerIpAddress": "13.68.244.*", "correlationId": "1ee2b6e9-3af4-4873-8c7c-1a698b9ac334",
+ "identity": {
+ "claims": {
+ "appId": "872cd9fa-d31f-45e0-9eab-6e460a02d1f1"
+ }
+ },
"level": "4", "location": "southcentralus", "uri": "https://myinstancename.api.scus.digitaltwins.azure.net/query?api-version=2020-10-31",
+ "properties": {},
+ "traceContext": {
+ "traceId": "95ff77cfb300b04f80d83e64d13831e7",
+ "spanId": "b630da57026dd046",
+ "parentId": "9f0de6dadae85945",
+ "traceFlags": "01",
+ "tracestate": "k1=v1,k2=v2"
+ }
} ```
+#### ADTEventRoutesOperation
+
+Here is an example JSON body for an `ADTEventRoutesOperation` that is **not** of `Microsoft.DigitalTwins/eventroutes/action` type (for more information about that schema, see the next section, [*Egress log schemas*](#egress-log-schemas)).
+
+```json
+ {
+ "time": "2020-10-30T22:18:38.0708705Z",
+ "resourceId": "/SUBSCRIPTIONS/BBED119E-28B8-454D-B25E-C990C9430C8F/RESOURCEGROUPS/MYRESOURCEGROUP/PROVIDERS/MICROSOFT.DIGITALTWINS/DIGITALTWINSINSTANCES/MYINSTANCENAME",
+ "operationName": "Microsoft.DigitalTwins/eventroutes/write",
+ "operationVersion": "2020-10-31",
+ "category": "EventRoutesOperation",
+ "resultType": "Success",
+ "resultSignature": "204",
+ "resultDescription": "",
+ "durationMs": 42,
+ "callerIpAddress": "212.100.32.*",
+ "correlationId": "7f73ab45-14c0-491f-a834-0827dbbf7f8e",
+ "identity": {
+ "claims": {
+ "appId": "872cd9fa-d31f-45e0-9eab-6e460a02d1f1"
+ }
+ },
+ "level": "4",
+ "location": "southcentralus",
+ "uri": "https://myinstancename.api.scus.digitaltwins.azure.net/EventRoutes/egressRouteForEventHub?api-version=2020-10-31",
+ "properties": {},
+ "traceContext": {
+ "traceId": "95ff77cfb300b04f80d83e64d13831e7",
+ "spanId": "b630da57026dd046",
+ "parentId": "9f0de6dadae85945",
+ "traceFlags": "01",
+ "tracestate": "k1=v1,k2=v2"
+ }
+ },
+```
+ ### Egress log schemas
-This is the schema for `ADTEventRoutesOperation` logs. These contain details pertaining to exceptions and the API operations around egress endpoints connected to an Azure Digital Twins instance.
+This is the schema for `ADTEventRoutesOperation` logs specific to the `Microsoft.DigitalTwins/eventroutes/action` operation name. These contain details pertaining to exceptions and the API operations around egress endpoints connected to an Azure Digital Twins instance.
| Field name | Data type | Description |
|--|--|--|
This is the schema for `ADTEventRoutesOperation` logs. These contain details per
| `OperationName` | String | The type of action being performed during the event |
| `Category` | String | The type of resource being emitted |
| `ResultDescription` | String | Additional details about the event |
-| `Level` | String | The logging severity of the event |
+| `CorrelationId` | Guid | Customer provided unique identifier for the event |
+| `ApplicationId` | Guid | Application ID used in bearer authorization |
+| `Level` | Int | The logging severity of the event |
| `Location` | String | The region where the event took place |
+| `TraceId` | String | `TraceId`, as part of [W3C's Trace Context](https://www.w3.org/TR/trace-context/). The ID of the whole trace used to uniquely identify a distributed trace across systems. |
+| `SpanId` | String | `SpanId` as part of [W3C's Trace Context](https://www.w3.org/TR/trace-context/). The ID of this request in the trace. |
+| `ParentId` | String | `ParentId` as part of [W3C's Trace Context](https://www.w3.org/TR/trace-context/). A request without a parent ID is the root of the trace. |
+| `TraceFlags` | String | `TraceFlags` as part of [W3C's Trace Context](https://www.w3.org/TR/trace-context/). Controls tracing flags such as sampling, trace level, etc. |
+| `TraceState` | String | `TraceState` as part of [W3C's Trace Context](https://www.w3.org/TR/trace-context/). Additional vendor-specific trace identification information to span across different distributed tracing systems. |
| `EndpointName` | String | The name of egress endpoint created in Azure Digital Twins |

Below are example JSON bodies for these types of logs.
-#### ADTEventRoutesOperation
+#### ADTEventRoutesOperation for Microsoft.DigitalTwins/eventroutes/action
+
+Here is an example JSON body for an `ADTEventRoutesOperation` that is of the `Microsoft.DigitalTwins/eventroutes/action` type.
```json { "time": "2020-11-05T22:18:38.0708705Z", "resourceId": "/SUBSCRIPTIONS/BBED119E-28B8-454D-B25E-C990C9430C8F/RESOURCEGROUPS/MYRESOURCEGROUP/PROVIDERS/MICROSOFT.DIGITALTWINS/DIGITALTWINSINSTANCES/MYINSTANCENAME", "operationName": "Microsoft.DigitalTwins/eventroutes/action",
+ "operationVersion": "",
"category": "EventRoutesOperation",
- "resultDescription": "Unable to send EventGrid message to [my-event-grid.westus-1.eventgrid.azure.net] for event Id [f6f45831-55d0-408b-8366-058e81ca6089].",
+ "resultType": "",
+ "resultSignature": "",
+ "resultDescription": "Unable to send EventHub message to [myPath] for event Id [f6f45831-55d0-408b-8366-058e81ca6089].",
+ "durationMs": -1,
+ "callerIpAddress": "",
"correlationId": "7f73ab45-14c0-491f-a834-0827dbbf7f8e",
- "level": "3",
+ "identity": {
+ "claims": {
+ "appId": "872cd9fa-d31f-45e0-9eab-6e460a02d1f1"
+ }
+ },
+ "level": "4",
"location": "southcentralus",
+ "uri": "",
"properties": {
- "endpointName": "endpointEventGridInvalidKey"
+ "endpointName": "myEventHub"
+ },
+ "traceContext": {
+ "traceId": "95ff77cfb300b04f80d83e64d13831e7",
+ "spanId": "b630da57026dd046",
+ "parentId": "9f0de6dadae85945",
+ "traceFlags": "01",
+ "tracestate": "k1=v1,k2=v2"
}
-}
+},
```

## View and query logs
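As a rough sketch of querying these logs after they reach a Log Analytics workspace: the workspace GUID is a placeholder, and the query assumes the logs land in the `AzureDiagnostics` table with columns that mirror the field names listed above.

```bash
# Illustrative only: pull the most recent event-route operations, including egress
# failures, from a Log Analytics workspace (requires a recent Azure CLI).
az monitor log-analytics query \
  --workspace "<workspace-guid>" \
  --analytics-query "AzureDiagnostics
    | where Category == 'ADTEventRoutesOperation'
    | project TimeGenerated, OperationName, ResultDescription
    | order by TimeGenerated desc" \
  --timespan P1D
```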
dms Tutorial Mongodb Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-mongodb-cosmos-db.md
Title: "Tutorial: Migrate MongoDB offline to Azure Cosmos DB API for MongoDB"
-description: Learn to migrate from MongoDB on-premises to Azure Cosmos DB API for MongoDB offline by using Azure Database Migration Service.
+description: Migrate from MongoDB on-premises to Azure Cosmos DB API for MongoDB offline, by using Azure Database Migration Service.
Last updated 02/03/2021
-# Tutorial: Migrate MongoDB to Azure Cosmos DB's API for MongoDB offline using DMS
+# Tutorial: Migrate MongoDB to Azure Cosmos DB API for MongoDB offline
-You can use Azure Database Migration Service to perform an offline (one-time) migration of databases from an on-premises or cloud instance of MongoDB to Azure Cosmos DB's API for MongoDB.
+Use Azure Database Migration Service to perform an offline, one-time migration of databases from an on-premises or cloud instance of MongoDB to the Azure Cosmos DB API for MongoDB.
In this tutorial, you learn how to: > [!div class="checklist"]
In this tutorial, you learn how to:
> * Run the migration. > * Monitor the migration.
-In this tutorial, you migrate a dataset in MongoDB hosted in an Azure Virtual Machine to Azure Cosmos DB's API for MongoDB by using Azure Database Migration Service. If you don't have a MongoDB source set up already, see the article [Install and configure MongoDB on a Windows VM in Azure](/previous-versions/azure/virtual-machines/windows/install-mongodb).
+In this tutorial, you migrate a dataset in MongoDB that is hosted in an Azure virtual machine. By using Azure Database Migration Service, you migrate the dataset to the Azure Cosmos DB API for MongoDB. If you don't have a MongoDB source set up already, see [Install and configure MongoDB on a Windows VM in Azure](/previous-versions/azure/virtual-machines/windows/install-mongodb).
## Prerequisites To complete this tutorial, you need to:
-* [Complete the pre-migration](../cosmos-db/mongodb-pre-migration.md) steps such as estimating throughput, choosing a partition key, and the indexing policy.
-* [Create an Azure Cosmos DB's API for MongoDB account](https://ms.portal.azure.com/#create/Microsoft.DocumentDB).
-* Create a Microsoft Azure Virtual Network for Azure Database Migration Service by using Azure Resource Manager deployment model, which provides site-to-site connectivity to your on-premises source servers by using either [ExpressRoute](../expressroute/expressroute-introduction.md) or [VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md). For more information about creating a virtual network, see the [Virtual Network Documentation](../virtual-network/index.yml), and especially the quickstart articles with step-by-step details.
+* [Complete the pre-migration](../cosmos-db/mongodb-pre-migration.md) steps, such as estimating throughput and choosing a partition key.
+* [Create an account for the Azure Cosmos DB API for MongoDB](https://ms.portal.azure.com/#create/Microsoft.DocumentDB).
+* Create a Microsoft Azure Virtual Network for Azure Database Migration Service by using Azure Resource Manager. This deployment model provides site-to-site connectivity to your on-premises source servers by using either [Azure ExpressRoute](../expressroute/expressroute-introduction.md) or [VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md). For more information about creating a virtual network, see the [Azure Virtual Network documentation](../virtual-network/index.yml), especially the "quickstart" articles with step-by-step details.
> [!NOTE] > During virtual network setup, if you use ExpressRoute with network peering to Microsoft, add the following service [endpoints](../virtual-network/virtual-network-service-endpoints-overview.md) to the subnet in which the service will be provisioned: >
- > * Target database endpoint (for example, SQL endpoint, Cosmos DB endpoint, and so on)
+ > * Target database endpoint (for example, SQL endpoint or Azure Cosmos DB endpoint)
> * Storage endpoint > * Service bus endpoint > > This configuration is necessary because Azure Database Migration Service lacks internet connectivity.
-* Ensure that your virtual network Network Security Group (NSG) rules don't block the following communication ports: 53, 443, 445, 9354, and 10000-20000. For more detail on virtual network NSG traffic filtering, see the article [Filter network traffic with network security groups](../virtual-network/virtual-network-vnet-plan-design-arm.md).
+* Ensure that your network security group (NSG) rules for your virtual network don't block the following communication ports: 53, 443, 445, 9354, and 10000-20000. For more information, see [Filter network traffic with network security groups](../virtual-network/virtual-network-vnet-plan-design-arm.md).
* Open your Windows firewall to allow Azure Database Migration Service to access the source MongoDB server, which by default is TCP port 27017.
-* When using a firewall appliance in front of your source database(s), you may need to add firewall rules to allow Azure Database Migration Service to access the source database(s) for migration.
+* When you're using a firewall appliance in front of your source database, you might need to add firewall rules to allow Azure Database Migration Service to access the source database for migration.
-## Configure Azure Cosmos DB Server Side Retries for efficient migration
+## Configure the Server Side Retry feature
-Customers migrating from MongoDB to Azure Cosmos DB benefit from resource governance capabilities, which guarantee the ability to fully utilize your provisioned RU/s of throughput. Azure Cosmos DB may throttle a given Data Migration Service request in the course of migration if that request exceeds the container provisioned RU/s; then that request needs to be retried. Data Migration Service is capable of performing retries, however the round-trip time involved in the network hop between Data Migration Service and Azure Cosmos DB impacts the overall response time of that request. Improving response time for throttled requests can shorten the total time needed for migration. The *Server Side Retry* feature of Azure Cosmos DB allows the service to intercept throttle error codes and retry with much lower round-trip time, dramatically improving request response times.
+You can benefit from resource governance capabilities if you migrate from MongoDB to Azure Cosmos DB. With these capabilities, you can make full use of your provisioned request units (RU/s) of throughput. Azure Cosmos DB might throttle a particular Database Migration Service request in the course of migration, if that request exceeds the container-provisioned RU/s. Then that request needs to be retried.
-You can find the Server Side Retry capability in the *Features* blade of the Azure Cosmos DB portal
+Database Migration Service is capable of performing retries. It's important to understand that the round-trip time involved in the network hop between Database Migration Service and Azure Cosmos DB affects the overall response time of that request. Improving response time for throttled requests can shorten the total time needed for migration.
-![MongoDB SSR feature](media/tutorial-mongodb-to-cosmosdb/mongo-server-side-retry-feature.png)
+The Server Side Retry feature of Azure Cosmos DB allows the service to intercept throttle error codes and retry with a much lower round-trip time, dramatically improving request response times.
-And if it is *Disabled*, then we recommend you enable it as shown below
+To use Server Side Retry, in the Azure Cosmos DB portal, select **Features** > **Server Side Retry**.
-![MongoDB SSR enable](media/tutorial-mongodb-to-cosmosdb/mongo-server-side-retry-enable.png)
+![Screenshot that shows where to find the Server Side Retry feature.](media/tutorial-mongodb-to-cosmosdb/mongo-server-side-retry-feature.png)
-## Register the Microsoft.DataMigration resource provider
+If the feature is disabled, select **Enable**.
+
+![Screenshot that shows how to enable Server Side Retry.](media/tutorial-mongodb-to-cosmosdb/mongo-server-side-retry-enable.png)
+
+## Register the resource provider
1. Sign in to the Azure portal, select **All services**, and then select **Subscriptions**.
- ![Show portal subscriptions](media/tutorial-mongodb-to-cosmosdb/portal-select-subscription1.png)
+ ![Screenshot that shows portal subscriptions.](media/tutorial-mongodb-to-cosmosdb/portal-select-subscription1.png)
-2. Select the subscription in which you want to create the instance of the Azure Database Migration Service, and then select **Resource providers**.
+2. Select the subscription in which you want to create the instance of Azure Database Migration Service, and then select **Resource providers**.
- ![Show resource providers](media/tutorial-mongodb-to-cosmosdb/portal-select-resource-provider.png)
+ ![Screenshot that shows resource providers.](media/tutorial-mongodb-to-cosmosdb/portal-select-resource-provider.png)
3. Search for migration, and then to the right of **Microsoft.DataMigration**, select **Register**.
- ![Register resource provider](media/tutorial-mongodb-to-cosmosdb/portal-register-resource-provider.png)
+ ![Screenshot that show how to register the resource provider.](media/tutorial-mongodb-to-cosmosdb/portal-register-resource-provider.png)
## Create an instance 1. In the Azure portal, select + **Create a resource**, search for Azure Database Migration Service, and then select **Azure Database Migration Service** from the drop-down list.
- ![Azure Marketplace](media/tutorial-mongodb-to-cosmosdb/portal-marketplace.png)
+ ![Screenshot that shows Azure Marketplace.](media/tutorial-mongodb-to-cosmosdb/portal-marketplace.png)
2. On the **Azure Database Migration Service** screen, select **Create**.
- ![Create Azure Database Migration Service instance](media/tutorial-mongodb-to-cosmosdb/dms-create1.png)
+ ![Screenshot that shows how to create an instance of Azure Database Migration Service.](media/tutorial-mongodb-to-cosmosdb/dms-create1.png)
-3. On the **Create Migration Service** screen, specify a name for the service, the subscription, and a new or existing resource group.
+3. On **Create Migration Service**, specify a name for the service, the subscription, and a new or existing resource group.
4. Select the location in which you want to create the instance of Azure Database Migration Service.
And if it is *Disabled*, then we recommend you enable it as shown below
The virtual network provides Azure Database Migration Service with access to the source MongoDB instance and the target Azure Cosmos DB account.
- For more information about how to create a virtual network in the Azure portal, see the article [Create a virtual network using the Azure portal](../virtual-network/quick-create-portal.md).
+ For more information about how to create a virtual network in the Azure portal, see [Create a virtual network by using the Azure portal](../virtual-network/quick-create-portal.md).
6. Select a pricing tier. For more information on costs and pricing tiers, see the [pricing page](https://aka.ms/dms-pricing).
- ![Configure Azure Database Migration Service instance settings](media/tutorial-mongodb-to-cosmosdb/dms-settings2.png)
+ ![Screenshot that shows configuration settings for the instance of Azure Database Migration Service.](media/tutorial-mongodb-to-cosmosdb/dms-settings2.png)
7. Select **Create** to create the service. ## Create a migration project
-After the service is created, locate it within the Azure portal, open it, and then create a new migration project.
+After you create the service, locate it within the Azure portal, and open it. Then create a new migration project.
1. In the Azure portal, select **All services**, search for Azure Database Migration Service, and then select **Azure Database Migration Services**.
- ![Locate all instances of Azure Database Migration Service](media/tutorial-mongodb-to-cosmosdb/dms-search.png)
+ ![Screenshot that shows how to locate all instances of Azure Database Migration Service.](media/tutorial-mongodb-to-cosmosdb/dms-search.png)
2. On the **Azure Database Migration Services** screen, search for the name of Azure Database Migration Service instance that you created, and then select the instance.
-3. Select + **New Migration Project**.
+3. Select **+ New Migration Project**.
-4. On the **New migration project** screen, specify a name for the project, in the **Source server type** text box, select **MongoDB**, in the **Target server type** text box, select **CosmosDB (MongoDB API)**, and then for **Choose type of activity**, select **Offline data migration**.
+4. On **New migration project**, specify a name for the project, and in the **Source server type** text box, select **MongoDB**. In the **Target server type** text box, select **CosmosDB (MongoDB API)**, and then for **Choose type of activity**, select **Offline data migration**.
- ![Create Database Migration Service project](media/tutorial-mongodb-to-cosmosdb/dms-create-project.png)
+ ![Screenshot that shows project options.](media/tutorial-mongodb-to-cosmosdb/dms-create-project.png)
5. Select **Create and run activity** to create the project and run the migration activity.
After the service is created, locate it within the Azure portal, open it, and th
1. On the **Source details** screen, specify the connection details for the source MongoDB server. > [!IMPORTANT]
- > Azure Database Migration Service does not support Azure Cosmos DB as a source.
+ > Azure Database Migration Service doesn't support Azure Cosmos DB as a source.
There are three modes to connect to a source:
- * **Standard mode**, which accepts a fully qualified domain name or an IP address, Port number, and connection credentials.
- * **Connection string mode**, which accepts a MongoDB Connection string as described in the article [Connection String URI Format](https://docs.mongodb.com/manual/reference/connection-string/).
- * **Data from Azure storage**, which accepts a blob container SAS URL. Select **Blob contains BSON dumps** if the blob container has BSON dumps produced by the MongoDB [bsondump tool](https://docs.mongodb.com/manual/reference/program/bsondump/), and de-select it if the container contains JSON files.
+ * **Standard mode**, which accepts a fully qualified domain name or an IP address, port number, and connection credentials.
+ * **Connection string mode**, which accepts a MongoDB connection string as described in [Connection String URI Format](https://docs.mongodb.com/manual/reference/connection-string/).
+ * **Data from Azure storage**, which accepts a blob container SAS URL. Select **Blob contains BSON dumps** if the blob container has BSON dumps produced by the MongoDB [bsondump tool](https://docs.mongodb.com/manual/reference/program/bsondump/). Don't select this option if the container contains JSON files.
- If you select this option, be sure that the storage account connection string appears in the format:
+ If you select this option, be sure that the storage account connection string appears in the following format:
``` https://blobnameurl/container?SASKEY ```
- This blob container SAS connection string can be found in Azure Storage explorer. Creating the SAS for the concerned container will provide you the URL in above requested format.
+ You can find this blob container SAS connection string in Azure Storage Explorer. Creating the SAS for the container provides you with the URL in the requested format, as shown in the sketch below.
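+ For example, here's a minimal Azure CLI sketch for producing that URL. The storage account name (`mystorageaccount`), container name (`mongodumps`), expiry date, and account key are placeholders, not values from this tutorial:
+
+ ```azurecli
+ # Generate a read/list SAS token for the container that holds the dumps.
+ az storage container generate-sas \
+     --account-name mystorageaccount \
+     --name mongodumps \
+     --account-key "<storage-account-key>" \
+     --permissions rl \
+     --expiry 2021-06-30T00:00Z \
+     --output tsv
+
+ # Combine the container URL with the token that the command returns:
+ # https://mystorageaccount.blob.core.windows.net/mongodumps?<SAS-token-from-output>
+ ```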
- Also, based on the type dump information in Azure Storage, keep the following detail in mind.
+ Also, based on the type of dump information in Azure Storage, keep the following in mind:
- * For BSON dumps, the data within the blob container must be in bsondump format, such that data files are placed into folders named after the containing databases in the format collection.bson. Metadata files (if any) should be named using the format *collection*.metadata.json.
+ * For BSON dumps, the data within the blob container must be in the bsondump format. Place data files into folders named after the containing databases in the format *collection.bson*. Name any metadata files by using the format *collection.metadata.json*.
- * For JSON dumps, the files in the blob container must be placed into folders named after the containing databases. Within each database folder, data files must be placed in a subfolder called "data" and named using the format *collection*.json. Metadata files (if any) must be placed in a subfolder called "metadata" and named using the same format, *collection*.json. The metadata files must be in the same format as produced by the MongoDB bsondump tool.
+ * For JSON dumps, the files in the blob container must be placed into folders named after the containing databases. Within each database folder, data files must be placed in a subfolder called *data* and named by using the format *collection.json*. Place any metadata files in a subfolder called *metadata*, and name them by using the same format, *collection.json*. The metadata files must be in the same format as produced by the MongoDB bsondump tool.
> [!IMPORTANT]
- > It is discouraged to use a self-signed certificate on the mongo server. However, if one is used, please connect to the server using **connection string mode** and ensure that your connection string has ΓÇ£ΓÇ¥
+ > We don't recommend that you use a self-signed certificate on the MongoDB server. If you must use one, connect to the server by using the connection string mode, and ensure that your connection string has quotation marks ("").
> >``` >&sslVerifyCertificate=false >```
- You can also use the IP Address for situations in which DNS name resolution isn't possible.
+ You can also use the IP address for situations in which DNS name resolution isn't possible.
- ![Specify source details](media/tutorial-mongodb-to-cosmosdb/dms-specify-source.png)
+ ![Screenshot that shows specifying source details.](media/tutorial-mongodb-to-cosmosdb/dms-specify-source.png)
2. Select **Save**. ## Specify target details
-1. On the **Migration target details** screen, specify the connection details for the target Azure Cosmos DB account, which is the pre-provisioned Azure Cosmos DB's API for MongoDB account to which you're migrating your MongoDB data.
+1. On the **Migration target details** screen, specify the connection details for the target Azure Cosmos DB account. This account is the pre-provisioned Azure Cosmos DB API for MongoDB account to which you're migrating your MongoDB data.
- ![Specify target details](media/tutorial-mongodb-to-cosmosdb/dms-specify-target.png)
+ ![Screenshot that shows specifying target details.](media/tutorial-mongodb-to-cosmosdb/dms-specify-target.png)
2. Select **Save**.
After the service is created, locate it within the Azure portal, open it, and th
If the target database contains the same database name as the source database, Azure Database Migration Service selects the target database by default.
- If the string **Create** appears next to the database name, it indicates that Azure Database Migration Service didn't find the target database, and the service will create the database for you.
+ If **Create** appears next to the database name, it indicates that Azure Database Migration Service didn't find the target database, and the service will create the database for you.
- At this point in the migration, you can [provision throughput](../cosmos-db/set-throughput.md). In Cosmos DB, you can provision throughput either at the database-level or individually for each collection. Throughput is measured in [Request Units](../cosmos-db/request-units.md) (RUs). Learn more about [Azure Cosmos DB pricing](https://azure.microsoft.com/pricing/details/cosmos-db/).
+ At this point in the migration, you can [provision throughput](../cosmos-db/set-throughput.md). In Azure Cosmos DB, you can provision throughput either at the database level or individually for each collection. Throughput is measured in [request units](../cosmos-db/request-units.md). Learn more about [Azure Cosmos DB pricing](https://azure.microsoft.com/pricing/details/cosmos-db/).
- ![Map to target databases](media/tutorial-mongodb-to-cosmosdb/dms-map-target-databases.png)
+ ![Screenshot that shows mapping to target databases.](media/tutorial-mongodb-to-cosmosdb/dms-map-target-databases.png)
2. Select **Save**. 3. On the **Collection setting** screen, expand the collections listing, and then review the list of collections that will be migrated.
- Azure Database Migration Service auto selects all the collections that exist on the source MongoDB instance that don't exist on the target Azure Cosmos DB account. If you want to remigrate collections that already include data, you need to explicitly select the collections on this blade.
+ Azure Database Migration Service automatically selects all the collections that exist on the source MongoDB instance that don't exist on the target Azure Cosmos DB account. If you want to remigrate collections that already include data, you need to explicitly select the collections on this pane.
- You can specify the amount of RUs that you want the collections to use. Azure Database Migration Service suggests smart defaults based on the collection size.
+ You can specify the number of RUs that you want the collections to use. Azure Database Migration Service suggests smart defaults based on the collection size.
> [!NOTE]
- > Perform the database migration and collection in parallel using multiple instances of Azure Database Migration Service, if necessary, to speed up the run.
+ > Perform the database and collection migration in parallel. If necessary, you can use multiple instances of Azure Database Migration Service to speed up the run.
- You can also specify a shard key to take advantage of [partitioning in Azure Cosmos DB](../cosmos-db/partitioning-overview.md) for optimal scalability. Be sure to review the [best practices for selecting a shard/partition key](../cosmos-db/partitioning-overview.md#choose-partitionkey).
+ You can also specify a shard key to take advantage of [partitioning in Azure Cosmos DB](../cosmos-db/partitioning-overview.md) for optimal scalability. Review the [best practices for selecting a shard/partition key](../cosmos-db/partitioning-overview.md#choose-partitionkey).
- ![Select collections tables](media/tutorial-mongodb-to-cosmosdb/dms-collection-setting.png)
+ ![Screenshot that shows selecting collections tables.](media/tutorial-mongodb-to-cosmosdb/dms-collection-setting.png)
4. Select **Save**. 5. On the **Migration summary** screen, in the **Activity name** text box, specify a name for the migration activity.
- ![Migration summary](media/tutorial-mongodb-to-cosmosdb/dms-migration-summary.png)
+ ![Screenshot that shows the migration summary.](media/tutorial-mongodb-to-cosmosdb/dms-migration-summary.png)
## Run the migration
-* Select **Run migration**.
-
- The migration activity window appears, and the **Status** of the activity is **Not started**.
+Select **Run migration**. The migration activity window appears, and the status of the activity is **Not started**.
- ![Activity status](media/tutorial-mongodb-to-cosmosdb/dms-activity-status.png)
+![Screenshot that shows the activity status.](media/tutorial-mongodb-to-cosmosdb/dms-activity-status.png)
## Monitor the migration
-* On the migration activity screen, select **Refresh** to update the display until the **Status** of the migration shows as **Completed**.
+On the migration activity screen, select **Refresh** to update the display until the status of the migration shows as **Completed**.
- > [!NOTE]
- > You can select the Activity to get details of database- and collection-level migration metrics.
+> [!NOTE]
+> You can select the activity to get details of database- and collection-level migration metrics.
- ![Activity status completed](media/tutorial-mongodb-to-cosmosdb/dms-activity-completed.png)
+![Screenshot that shows the completed activity status.](media/tutorial-mongodb-to-cosmosdb/dms-activity-completed.png)
-## Verify data in Cosmos DB
+## Verify data in Azure Cosmos DB
-* After the migration completes, you can check your Azure Cosmos DB account to verify that all the collections were migrated successfully.
+After the migration finishes, you can check your Azure Cosmos DB account to verify that all the collections were migrated successfully.
- ![Screenshot that shows where to check your Azure Cosmos DB account to verify that all the collections were migrated successfully.](media/tutorial-mongodb-to-cosmosdb/dms-cosmosdb-data-explorer.png)
+![Screenshot that shows where to check your Azure Cosmos DB account to verify that all the collections were migrated successfully.](media/tutorial-mongodb-to-cosmosdb/dms-cosmosdb-data-explorer.png)
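+As a quick cross-check, you can also list the migrated collections with the Azure CLI. This is a sketch; the resource group, account, and database names are placeholders:
+
+```azurecli
+# List the collections in the target Azure Cosmos DB API for MongoDB database.
+az cosmosdb mongodb collection list \
+    --resource-group myresourcegroup \
+    --account-name mycosmosaccount \
+    --database-name mydatabase \
+    --output table
+```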
## Post-migration optimization
-After you migrate the data stored in MongoDB database to Azure Cosmos DBΓÇÖs API for MongoDB, you can connect to Azure Cosmos DB and manage the data. You can also perform other post-migration optimization steps such as optimizing the indexing policy, update the default consistency level, or configure global distribution for your Azure Cosmos DB account. For more information, see the [Post-migration optimization](../cosmos-db/mongodb-post-migration.md) article.
+After you migrate the data stored in a MongoDB database to the Azure Cosmos DB API for MongoDB, you can connect to Azure Cosmos DB and manage the data. You can also perform other post-migration optimization steps. These might include optimizing the indexing policy, updating the default consistency level, or configuring global distribution for your Azure Cosmos DB account. For more information, see [Post-migration optimization](../cosmos-db/mongodb-post-migration.md).
-## Additional resources
+## Next steps
+
+Review migration guidance for additional scenarios in the [Azure Database Migration Guide](https://datamigration.microsoft.com/).
-* [Cosmos DB service information](https://azure.microsoft.com/services/cosmos-db/)
-## Next steps
-* Review migration guidance for additional scenarios in the Microsoft [Database Migration Guide](https://datamigration.microsoft.com/).
dns Private Dns Import Export https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dns/private-dns-import-export.md
+
+ Title: Import and export a domain zone file for Azure private DNS - Azure CLI
+
+description: Learn how to import and export a DNS zone file to Azure private DNS by using Azure CLI
+++ Last updated : 03/16/2021++++
+# Import and export a private DNS zone file for Azure private DNS
+
+This article walks you through how to import and export DNS zone files for Azure private DNS by using the Azure CLI.
+
+## Introduction to DNS zone migration
+
+A DNS zone file is a text file that contains details of every Domain Name System (DNS) record in the zone. It follows a standard format, making it suitable for transferring DNS records between DNS systems. Using a zone file is a quick, reliable, and convenient way to transfer a DNS zone into or out of Azure DNS.
+
+Azure private DNS supports importing and exporting zone files by using the Azure command-line interface (CLI). Zone file import is **not** currently supported via Azure PowerShell or the Azure portal.
+
+The Azure CLI is a cross-platform command-line tool used for managing Azure services. It is available for the Windows, Mac, and Linux platforms from the [Azure downloads page](https://azure.microsoft.com/downloads/). Cross-platform support is important for importing and exporting zone files, because the most common name server software, [BIND](https://www.isc.org/downloads/bind/), typically runs on Linux.
+
+## Obtain your existing DNS zone file
+
+Before you import a DNS zone file into Azure DNS, you need to obtain a copy of the zone file. The source of this file depends on where the DNS zone is currently hosted.
+
+* If your DNS zone is hosted by a partner service (such as a domain registrar, dedicated DNS hosting provider, or alternative cloud provider), that service should provide the ability to download the DNS zone file.
+* If your DNS zone is hosted on Windows DNS, the default folder for the zone files is **%systemroot%\system32\dns**. The full path to each zone file also shows on the **General** tab of the DNS console. An example export command follows this list.
+* If your DNS zone is hosted by using BIND, the location of the zone file for each zone is specified in the BIND configuration file **named.conf**.
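+For example, on a Windows DNS server you might export a zone to a file with the following sketch. The zone name is a placeholder, and the exported file is written to the **%systemroot%\system32\dns** folder:
+
+```
+rem Export the contoso.com zone from the local Windows DNS server to contoso.com.txt
+dnscmd . /zoneexport contoso.com contoso.com.txt
+```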
+
+## Import a DNS zone file into Azure private DNS
+
+Importing a zone file creates a new zone in Azure private DNS if one does not already exist. If the zone already exists, the record sets in the zone file must be merged with the existing record sets.
+
+### Merge behavior
+
+* By default, existing and new record sets are merged. Identical records within a merged record set are de-duplicated.
+* When record sets are merged, the time to live (TTL) of preexisting record sets is used.
+* Start of Authority (SOA) parameters (except `host`) are always taken from the imported zone file. Similarly, for the name server record set at the zone apex, the TTL is always taken from the imported zone file.
+* An imported CNAME record does not replace an existing CNAME record with the same name.
+* When a conflict arises between a CNAME record and another record of the same name but different type (regardless of which is existing or new), the existing record is retained.
+
+### Additional information about importing
+
+The following notes provide additional technical details about the zone import process.
+
+* The `$TTL` directive is optional, and it is supported. When no `$TTL` directive is given, records without an explicit TTL are imported with a default TTL of 3600 seconds. When two records in the same record set specify different TTLs, the lower value is used.
+* The `$ORIGIN` directive is optional, and it is supported. When no `$ORIGIN` is set, the default value used is the zone name as specified on the command line (plus the terminating ".").
+* The `$INCLUDE` and `$GENERATE` directives are not supported.
+* These record types are supported: A, AAAA, CAA, CNAME, MX, NS, SOA, SRV, and TXT.
+* The SOA record is created automatically by Azure DNS when a zone is created. When you import a zone file, all SOA parameters are taken from the zone file *except* the `host` parameter. This parameter uses the value provided by Azure DNS. This is because this parameter must refer to the primary name server provided by Azure DNS.
+* The name server record set at the zone apex is also created automatically by Azure DNS when the zone is created. Only the TTL of this record set is imported. These records contain the name server names provided by Azure DNS. The record data is not overwritten by the values contained in the imported zone file.
+* During Public Preview, Azure DNS supports only single-string TXT records. Multistring TXT records are concatenated and truncated to 255 characters.
+
+### CLI format and values
+
+The format of the Azure CLI command to import a DNS zone is:
+
+```azurecli
+az network private-dns zone import -g <resource group> -n <zone name> -f <zone file name>
+```
+
+Values:
+
+* `<resource group>` is the name of the resource group for the zone in Azure DNS.
+* `<zone name>` is the name of the zone.
+* `<zone file name>` is the path/name of the zone file to be imported.
+
+If a zone with this name does not exist in the resource group, it is created for you. If the zone already exists, the imported record sets are merged with existing record sets.
+
+### Import a zone file
+
+To import a zone file for the zone **contoso.com**, use the following steps:
+
+1. If you don't have one already, you need to create a Resource Manager resource group.
+
+ ```azurecli
+ az group create --resource-group myresourcegroup -l westeurope
+ ```
+
+2. To import the zone **contoso.com** from the file **contoso.com.txt** into a new DNS zone in the resource group **myresourcegroup**, you will run the command `az network private-dns zone import`.<BR>This command loads the zone file and parses it. The command executes a series of commands on the Azure DNS service to create the zone and all the record sets in the zone. The command reports progress in the console window, along with any errors or warnings. Because record sets are created in series, it may take a few minutes to import a large zone file.
+
+ ```azurecli
+ az network private-dns zone import -g myresourcegroup -n contoso.com -f contoso.com.txt
+ ```
+
+### Verify the zone
+
+To verify the DNS zone after you import the file, you can use any one of the following methods:
+
+* You can list the records by using the following Azure CLI command:
+
+ ```azurecli
+ az network private-dns record-set list -g myresourcegroup -z contoso.com
+ ```
+
+* You can use `nslookup` to verify name resolution for the records. Because this is a private zone, run the query from a virtual machine in a virtual network that's linked to the zone, as shown in the sketch after this list.
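+  Here's a minimal sketch, assuming the zone contains a `www` A record and you run the command from a VM in a linked virtual network:
+
+  ```
+  nslookup www.contoso.com
+  ```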
+
+## Export a DNS zone file from Azure DNS
+
+The format of the Azure CLI command to export a DNS zone is:
+
+```azurecli
+az network private-dns zone export -g <resource group> -n <zone name> -f <zone file name>
+```
+
+Values:
+
+* `<resource group>` is the name of the resource group for the zone in Azure DNS.
+* `<zone name>` is the name of the zone.
+* `<zone file name>` is the path/name of the zone file to be exported.
+
+As with the zone import, you first need to sign in, choose your subscription, and configure the Azure CLI to use Resource Manager mode.
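+For example, a minimal sign-in and subscription selection looks like the following sketch (the subscription name is a placeholder):
+
+```azurecli
+az login
+az account set --subscription "<subscription name or ID>"
+```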
+
+### To export a zone file
+
+To export the existing Azure DNS zone **contoso.com** in resource group **myresourcegroup** to the file **contoso.com.txt** (in the current folder), run `az network private-dns zone export`. This command calls the Azure DNS service to enumerate record sets in the zone and export the results to a BIND-compatible zone file.
+
+```azurecli
+az network private-dns zone export -g myresourcegroup -n contoso.com -f contoso.com.txt
+```
+
+## Next steps
+
+* Learn how to [manage record sets and records](./private-dns-getstarted-cli.md) in your DNS zone.
event-hubs Event Hubs Java Get Started Send https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-java-get-started-send.md
Title: Send or receive events from Azure Event Hubs using Java (latest) description: This article provides a walkthrough of creating a Java application that sends/receives events to/from Azure Event Hubs using the latest azure-messaging-eventhubs package. Previously updated : 06/23/2020 Last updated : 04/05/2021
The Java client library for Event Hubs is available in the [Maven Central Reposi
<dependency> <groupId>com.azure</groupId> <artifactId>azure-messaging-eventhubs</artifactId>
- <version>5.0.1</version>
+ <version>5.6.0</version>
</dependency> ```
+> [!NOTE]
+> Update the version to the latest version published to the Maven repository.
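+One way to check whether a newer version is available from your project folder is the Maven versions plugin. This is a sketch; it assumes the `versions-maven-plugin` can be resolved from Maven Central:
+
+```cmd
+mvn versions:display-dependency-updates
+```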
+ ### Write code to send messages to the event hub For the following sample, first create a new Maven project for a console/shell application in your favorite Java development environment. Add a class named `Sender`, and add the following code to the class:
+> [!IMPORTANT]
+> Update `<Event Hubs namespace connection string>` with the connection string to your Event Hubs namespace. Update `<Event hub name>` with the name of your event hub in the namespace.
+ ```java import com.azure.messaging.eventhubs.*;
-import static java.nio.charset.StandardCharsets.UTF_8;
+import java.util.Arrays;
+import java.util.List;
public class Sender {
- public static void main(String[] args) {
+ private static final String connectionString = "<Event Hubs namespace connection string>";
+ private static final String eventHubName = "<Event hub name>";
+
+ public static void main(String[] args) {
+ publishEvents();
} } ```-
-### Connection string and event hub
-This code uses the connection string to the Event Hubs namespace and the name of the event hub to build an Event Hubs client.
-
-```java
-String connectionString = "<CONNECTION STRING to EVENT HUBS NAMESPACE>";
-String eventHubName = "<EVENT HUB NAME>";
-```
-
-### Create an Event Hubs Producer client
-This code creates a producer client object that's used to produce/send events to the event hub.
+### Add code to publish events to the event hub
+Add a method named `publishEvents` to the `Sender` class as shown below.
```java
-EventHubProducerClient producer = new EventHubClientBuilder()
- .connectionString(connectionString, eventHubName)
- .buildProducerClient();
-```
-
-### Prepare a batch of events
-This code prepares a batch of events.
-
-```java
-EventDataBatch batch = producer.createBatch();
-batch.tryAdd(new EventData("First event"));
-batch.tryAdd(new EventData("Second event"));
-batch.tryAdd(new EventData("Third event"));
-batch.tryAdd(new EventData("Fourth event"));
-batch.tryAdd(new EventData("Fifth event"));
-```
-
-### Send the batch of events to the event hub
-This code sends the batch of events you prepared in the previous step to the event hub. The following code blocks on the send operation.
-
-```java
-producer.send(batch);
-```
-
-### Close and cleanup
-This code closes the producer.
-
-```java
-producer.close();
-```
-### Complete code to send events
-Here is the complete code to send events to the event hub.
-
-```java
-import com.azure.messaging.eventhubs.*;
-
-public class Sender {
- public static void main(String[] args) {
- final String connectionString = "EVENT HUBS NAMESPACE CONNECTION STRING";
- final String eventHubName = "EVENT HUB NAME";
-
- // create a producer using the namespace connection string and event hub name
+ /**
+ * Code sample for publishing events.
+ * @throws IllegalArgumentException if the event data is bigger than max batch size.
+ */
+ public static void publishEvents() {
+ // create a producer client
EventHubProducerClient producer = new EventHubClientBuilder() .connectionString(connectionString, eventHubName) .buildProducerClient();
- // prepare a batch of events to send to the event hub
- EventDataBatch batch = producer.createBatch();
- batch.tryAdd(new EventData("First event"));
- batch.tryAdd(new EventData("Second event"));
- batch.tryAdd(new EventData("Third event"));
- batch.tryAdd(new EventData("Fourth event"));
- batch.tryAdd(new EventData("Fifth event"));
-
- // send the batch of events to the event hub
- producer.send(batch);
-
- // close the producer
+ // sample events in an array
+ List<EventData> allEvents = Arrays.asList(new EventData("Foo"), new EventData("Bar"));
+
+ // create a batch
+ EventDataBatch eventDataBatch = producer.createBatch();
+
+ for (EventData eventData : allEvents) {
+ // try to add the event from the array to the batch
+ if (!eventDataBatch.tryAdd(eventData)) {
+ // if the batch is full, send it and then create a new batch
+ producer.send(eventDataBatch);
+ eventDataBatch = producer.createBatch();
+
+ // Try to add that event that couldn't fit before.
+ if (!eventDataBatch.tryAdd(eventData)) {
+ throw new IllegalArgumentException("Event is too large for an empty batch. Max size: "
+ + eventDataBatch.getMaxSizeInBytes());
+ }
+ }
+ }
+ // send the last batch of remaining events
+ if (eventDataBatch.getCount() > 0) {
+ producer.send(eventDataBatch);
+ }
producer.close(); }
-}
``` Build the program, and ensure that there are no errors. You'll run this program after you run the receiver program.
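+One way to build the sender, and later run it after the receiver is started, is shown in the following sketch. It assumes a standard Maven layout and that the `exec-maven-plugin` is available for the `exec:java` goal:
+
+```cmd
+mvn clean package
+mvn exec:java -Dexec.mainClass="Sender"
+```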
Add the following dependencies in the pom.xml file.
<dependency> <groupId>com.azure</groupId> <artifactId>azure-messaging-eventhubs</artifactId>
- <version>5.1.1</version>
+ <version>5.6.0</version>
</dependency> <dependency> <groupId>com.azure</groupId> <artifactId>azure-messaging-eventhubs-checkpointstore-blob</artifactId>
- <version>1.5.0</version>
+ <version>1.5.1</version>
</dependency> </dependencies> ```
Add the following dependencies in the pom.xml file.
import com.azure.storage.blob.BlobContainerAsyncClient; import com.azure.storage.blob.BlobContainerClientBuilder; import java.util.function.Consumer;
- import java.util.concurrent.TimeUnit;
``` 2. Create a class named `Receiver`, and add the following string variables to the class. Replace the placeholders with the correct values. +
+ > [!IMPORTANT]
+ > Replace the placeholders with the correct values.
+ > - `<Event Hubs namespace connection string>` with the connection string to your Event Hubs namespace.
+ > - `<Event hub name>` with the name of your event hub in the namespace.
+ > - `<Storage connection string>` with the connection string to your Azure storage account.
+ > - `<Storage container name>` with the name of your container in your Azure blob storage.
+ ```java
- private static final String EH_NAMESPACE_CONNECTION_STRING = "<EVENT HUBS NAMESPACE CONNECTION STRING>";
- private static final String eventHubName = "<EVENT HUB NAME>";
- private static final String STORAGE_CONNECTION_STRING = "<AZURE STORAGE CONNECTION STRING>";
- private static final String STORAGE_CONTAINER_NAME = "<AZURE STORAGE CONTAINER NAME>";
+ private static final String connectionString = "<Event Hubs namespace connection string>";
+ private static final String eventHubName = "<Event hub name>";
+ private static final String storageConnectionString = "<Storage connection string>";
+ private static final String storageContainerName = "<Storage container name>";
```
-3. Add the following `main` method to the class.
+1. Add the following `main` method to the class.
```java public static void main(String[] args) throws Exception { // Create a blob container client that you use later to build an event processor client to receive and process events BlobContainerAsyncClient blobContainerAsyncClient = new BlobContainerClientBuilder()
- .connectionString(STORAGE_CONNECTION_STRING)
- .containerName(STORAGE_CONTAINER_NAME)
+ .connectionString(storageConnectionString)
+ .containerName(storageContainerName)
.buildAsyncClient(); // Create a builder object that you will use later to build an event processor client to receive and process events and errors. EventProcessorClientBuilder eventProcessorClientBuilder = new EventProcessorClientBuilder()
- .connectionString(EH_NAMESPACE_CONNECTION_STRING, eventHubName)
+ .connectionString(connectionString, eventHubName)
.consumerGroup(EventHubClientBuilder.DEFAULT_CONSUMER_GROUP_NAME) .processEvent(PARTITION_PROCESSOR) .processError(ERROR_HANDLER)
Add the following dependencies in the pom.xml file.
errorContext.getThrowable()); }; ```
-3. The complete code should look like:
-
- ```java
-
- import com.azure.messaging.eventhubs.EventHubClientBuilder;
- import com.azure.messaging.eventhubs.EventProcessorClient;
- import com.azure.messaging.eventhubs.EventProcessorClientBuilder;
- import com.azure.messaging.eventhubs.checkpointstore.blob.BlobCheckpointStore;
- import com.azure.messaging.eventhubs.models.ErrorContext;
- import com.azure.messaging.eventhubs.models.EventContext;
- import com.azure.storage.blob.BlobContainerAsyncClient;
- import com.azure.storage.blob.BlobContainerClientBuilder;
- import java.util.function.Consumer;
- import java.util.concurrent.TimeUnit;
-
- public class Receiver {
-
- private static final String EH_NAMESPACE_CONNECTION_STRING = "<EVENT HUBS NAMESPACE CONNECTION STRING>";
- private static final String eventHubName = "<EVENT HUB NAME>";
- private static final String STORAGE_CONNECTION_STRING = "<AZURE STORAGE CONNECTION STRING>";
- private static final String STORAGE_CONTAINER_NAME = "<AZURE STORAGE CONTAINER NAME>";
-
- public static final Consumer<EventContext> PARTITION_PROCESSOR = eventContext -> {
- System.out.printf("Processing event from partition %s with sequence number %d with body: %s %n",
- eventContext.getPartitionContext().getPartitionId(), eventContext.getEventData().getSequenceNumber(), eventContext.getEventData().getBodyAsString());
-
- if (eventContext.getEventData().getSequenceNumber() % 10 == 0) {
- eventContext.updateCheckpoint();
- }
- };
-
- public static final Consumer<ErrorContext> ERROR_HANDLER = errorContext -> {
- System.out.printf("Error occurred in partition processor for partition %s, %s.%n",
- errorContext.getPartitionContext().getPartitionId(),
- errorContext.getThrowable());
- };
-
- public static void main(String[] args) throws Exception {
- BlobContainerAsyncClient blobContainerAsyncClient = new BlobContainerClientBuilder()
- .connectionString(STORAGE_CONNECTION_STRING)
- .containerName(STORAGE_CONTAINER_NAME)
- .buildAsyncClient();
-
- EventProcessorClientBuilder eventProcessorClientBuilder = new EventProcessorClientBuilder()
- .connectionString(EH_NAMESPACE_CONNECTION_STRING, eventHubName)
- .consumerGroup(EventHubClientBuilder.DEFAULT_CONSUMER_GROUP_NAME)
- .processEvent(PARTITION_PROCESSOR)
- .processError(ERROR_HANDLER)
- .checkpointStore(new BlobCheckpointStore(blobContainerAsyncClient));
-
- EventProcessorClient eventProcessorClient = eventProcessorClientBuilder.buildEventProcessorClient();
-
- System.out.println("Starting event processor");
- eventProcessorClient.start();
-
- System.out.println("Press enter to stop.");
- System.in.read();
-
- System.out.println("Stopping event processor");
- eventProcessorClient.stop();
- System.out.println("Event processor stopped.");
-
- System.out.println("Exiting process");
- }
-
- }
- ```
3. Build the program, and ensure that there are no errors. ## Run the applications 1. Run the **receiver** application first. 1. Then, run the **sender** application. 1. In the **receiver** application window, confirm that you see the events that were published by the sender application.+
+ ```cmd
+ Starting event processor
+ Press enter to stop.
+ Processing event from partition 0 with sequence number 331 with body: Foo
+ Processing event from partition 0 with sequence number 332 with body: Bar
+ ```
1. Press **ENTER** in the receiver application window to stop the application.
+ ```cmd
+ Starting event processor
+ Press enter to stop.
+ Processing event from partition 0 with sequence number 331 with body: Foo
+ Processing event from partition 0 with sequence number 332 with body: Bar
+
+ Stopping event processor
+ Event processor stopped.
+ Exiting process
+ ```
+ ## Next steps See the following samples on GitHub:
expressroute Expressroute Network Insights https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-network-insights.md
Previously updated : 03/23/2021 Last updated : 04/05/2021
This article explains how Network Insights can help you view your ExpressRoute metrics and configurations all in one place. Through Network Insights, you can view topological maps and health dashboards containing important ExpressRoute information without needing to complete any extra setup. ## Visualize functional dependencies
-To view this solution, navigate to the *Azure Monitor* page, select *Networks*, and then select the *ExpressRoute Circuits* card. Then, select the topology button for the circuit you would like to view.
+1. Navigate to the *Azure Monitor* page, then select *Networks*.
-The functional dependency view provides a clear picture of your ExpressRoute setup, outlining the relationship between different ExpressRoute components (peerings, connections, gateways).
+ :::image type="content" source="./media/expressroute-network-insights/monitor-page.png" alt-text="Screenshot of the Monitor landing page.":::
+1. Select the *ExpressRoute Circuits* card.
-Hover over any component in the topology map to view configuration information. For example, hover over an ExpressRoute peering component to view details such as circuit bandwidth and Global Reach enablement.
+1. Then, select the topology button for the circuit you would like to view.
+ :::image type="content" source="./media/expressroute-network-insights/monitor-landing-page.png" alt-text="Screenshot of ExpressRoute monitor landing page." lightbox="./media/expressroute-network-insights/monitor-landing-page-expanded.png":::
+
+1. The functional dependency view provides a clear picture of your ExpressRoute setup, outlining the relationship between different ExpressRoute components (peerings, connections, gateways).
+
+ :::image type="content" source="./media/expressroute-network-insights/topology-view.png" alt-text="Screenshot of topology view for network insights." lightbox="./media/expressroute-network-insights/topology-view-expanded.png":::
+
+1. Hover over any component in the topology map to view configuration information. For example, hover over an ExpressRoute peering component to view details such as circuit bandwidth and Global Reach enablement.
+
+ :::image type="content" source="./media/expressroute-network-insights/topology-hovered.png" alt-text="Screenshot of hovering over topology view resources." lightbox="./media/expressroute-network-insights/topology-hovered-expanded.png":::
## View a detailed and pre-loaded metrics dashboard
firewall Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/overview.md
Previously updated : 03/10/2021 Last updated : 04/05/2021 # Customer intent: As an administrator, I want to evaluate Azure Firewall so I can determine if I want to use it.
Azure Firewall has the following known issues:
|Issue |Description |Mitigation | ||||
-|If you update a rule from IP address to IP Group or vice-versa using the portal, both types are saved, but only one is presented on the portal.|This issue happens with Classic rules.<br><br>When you use the portal to update a NAT rule source type from IP address to IP Group or vice-versa, it saves both types in the backend but presents only the newly updated type.<br><br>The same issue exists when you update a Network or Application rule destination type from IP address to IP Group type or vice-versa.|A portal fix is targeted for March, 2021.<br><br>In the meantime, use Azure PowerShell, Azure CLI, or API to modify a rule from IP address to IP Group or vice versa.|
|Network filtering rules for non-TCP/UDP protocols (for example ICMP) don't work for Internet bound traffic|Network filtering rules for non-TCP/UDP protocols don't work with SNAT to your public IP address. Non-TCP/UDP protocols are supported between spoke subnets and VNets.|Azure Firewall uses the Standard Load Balancer, [which doesn't support SNAT for IP protocols today](../load-balancer/load-balancer-overview.md). We're exploring options to support this scenario in a future release.| |Missing PowerShell and CLI support for ICMP|Azure PowerShell and CLI don't support ICMP as a valid protocol in network rules.|It's still possible to use ICMP as a protocol via the portal and the REST API. We're working to add ICMP in PowerShell and CLI soon.| |FQDN tags require a protocol: port to be set|Application rules with FQDN tags require port: protocol definition.|You can use **https** as the port: protocol value. We're working to make this field optional when FQDN tags are used.|
Azure Firewall has the following known issues:
|Start/Stop doesnΓÇÖt work with a firewall configured in forced-tunnel mode|Start/stop doesnΓÇÖt work with Azure firewall configured in forced-tunnel mode. Attempting to start Azure Firewall with forced tunneling configured results in the following error:<br><br>*Set-AzFirewall: AzureFirewall FW-xx management IP configuration cannot be added to an existing firewall. Redeploy with a management IP configuration if you want to use forced tunneling support.<br>StatusCode: 400<br>ReasonPhrase: Bad Request*|Under investigation.<br><br>As a workaround, you can delete the existing firewall and create a new one with the same parameters.| |Can't add firewall policy tags using the portal|Azure Firewall Policy has a patch support limitation that prevents you from adding a tag using the Azure portal. The following error is generated: *Could not save the tags for the resource*.|A fix is being investigated. Or, you can use the Azure PowerShell cmdlet `Set-AzFirewallPolicy` to update tags.| |IPv6 not yet supported|If you add an IPv6 address to a rule, the firewall fails.|Use only IPv4 addresses. IPv6 support is under investigation.|
+|Updating multiple IP Groups fails with conflict error.|When you update two or more IPGroups attached to the same firewall, one of the resources goes into a failed state.|This is a known issue/limitation. <br><br>When you update an IPGroup, it triggers an update on all firewalls that the IPGroup is attached to. If an update to a second IPGroup is started while the firewall is still in the *Updating* state, then the IPGroup update fails.<br><br>To avoid the failure, IPGroups attached to the same firewall must be updated one at a time. Allow enough time between updates to allow the firewall to get out of the *Updating* state.|
## Next steps
governance Azure Security Benchmark https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/azure-security-benchmark.md
Title: Azure Security Benchmark blueprint sample overview description: Overview of the Azure Security Benchmark blueprint sample. This blueprint sample helps customers assess specific controls. Previously updated : 01/27/2021 Last updated : 04/02/2021 # Azure Security Benchmark blueprint sample
governance Control Mapping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/dod-impact-level-4/control-mapping.md
Title: DoD Impact Level 4 blueprint sample controls description: Control mapping of the DoD Impact Level 4 blueprint sample. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 01/08/2021 Last updated : 04/02/2021 # Control mapping of the DoD Impact Level 4 blueprint sample
governance Deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/dod-impact-level-4/deploy.md
Title: DoD Impact Level 4 blueprint sample description: Deploy steps for the DoD Impact Level 4 blueprint sample including blueprint artifact parameter details. Previously updated : 01/08/2021 Last updated : 04/02/2021 # Deploy the DoD Impact Level 4 blueprint sample
governance Index https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/dod-impact-level-4/index.md
Title: DoD Impact Level 4 blueprint sample overview description: Overview of the DoD Impact Level 4 sample. This blueprint sample helps customers assess specific DoD Impact Level 4 controls. Previously updated : 01/08/2021 Last updated : 04/02/2021 # Overview of the DoD Impact Level 4 blueprint sample
governance Control Mapping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/dod-impact-level-5/control-mapping.md
Title: DoD Impact Level 5 blueprint sample controls description: Control mapping of the DoD Impact Level 5 blueprint sample. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 01/08/2021 Last updated : 04/02/2021 # Control mapping of the DoD Impact Level 5 blueprint sample
governance Deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/dod-impact-level-5/deploy.md
Title: DoD Impact Level 5 blueprint sample description: Deploy steps for the DoD Impact Level 5 blueprint sample including blueprint artifact parameter details. Previously updated : 01/08/2021 Last updated : 04/02/2021 # Deploy the DoD Impact Level 5 blueprint sample
governance Index https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/dod-impact-level-5/index.md
Title: DoD Impact Level 5 blueprint sample overview description: Overview of the DoD Impact Level 5 sample. This blueprint sample helps customers assess specific DoD Impact Level 5 controls. Previously updated : 01/08/2021 Last updated : 04/02/2021 # Overview of the DoD Impact Level 5 blueprint sample
governance Control Mapping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/fedramp-h/control-mapping.md
Title: FedRAMP High blueprint sample controls description: Control mapping of the FedRAMP High blueprint sample. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 01/08/2021 Last updated : 04/02/2021 # Control mapping of the FedRAMP High blueprint sample
governance Deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/fedramp-h/deploy.md
Title: Deploy FedRAMP High blueprint sample description: Deploy steps for the FedRAMP High blueprint sample including blueprint artifact parameter details. Previously updated : 01/08/2021 Last updated : 04/02/2021 # Deploy the FedRAMP High blueprint sample
governance Index https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/fedramp-h/index.md
Title: FedRAMP High blueprint sample overview description: Overview of the FedRAMP High blueprint sample. This blueprint sample helps customers assess specific FedRAMP High controls. Previously updated : 01/08/2021 Last updated : 04/02/2021 # Overview of the FedRAMP High blueprint sample
governance Control Mapping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/fedramp-m/control-mapping.md
Title: FedRAMP Moderate blueprint sample controls description: Control mapping of the FedRAMP Moderate blueprint sample. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 01/08/2021 Last updated : 04/02/2021 # Control mapping of the FedRAMP Moderate blueprint sample
governance Deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/fedramp-m/deploy.md
Title: Deploy FedRAMP Moderate blueprint sample description: Deploy steps for the FedRAMP Moderate blueprint sample including blueprint artifact parameter details. Previously updated : 01/08/2021 Last updated : 04/02/2021 # Deploy the FedRAMP Moderate blueprint sample
governance Index https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/fedramp-m/index.md
Title: FedRAMP Moderate blueprint sample overview description: Overview of the FedRAMP Moderate blueprint sample. This blueprint sample helps customers assess specific FedRAMP Moderate controls. Previously updated : 01/08/2021 Last updated : 04/02/2021 # Overview of the FedRAMP Moderate blueprint sample
governance Hipaa Hitrust 9 2 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/hipaa-hitrust-9-2.md
Title: HIPAA HITRUST 9.2 blueprint sample overview description: Overview of the HIPAA HITRUST 9.2 blueprint sample. This blueprint sample helps customers assess specific HIPAA HITRUST 9.2 controls. Previously updated : 01/27/2021 Last updated : 04/02/2021 # HIPAA HITRUST 9.2 blueprint sample
governance Control Mapping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/irs-1075/control-mapping.md
Title: IRS 1075 blueprint sample controls description: Control mapping of the IRS 1075 blueprint sample. Each control is mapped to one or more Azure Policy definitions that assists with assessment. Previously updated : 01/08/2021 Last updated : 04/02/2021 # Control mapping of the IRS 1075 blueprint sample
governance Deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/irs-1075/deploy.md
Title: Deploy IRS 1075 blueprint sample description: Deploy steps for the IRS 1075 (Rev.11-2016) blueprint sample including blueprint artifact parameter details. Previously updated : 01/08/2021 Last updated : 04/02/2021 # Deploy the IRS 1075 blueprint sample
governance Index https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/irs-1075/index.md
Title: IRS 1075 blueprint sample overview description: Overview of the IRS 1075 blueprint sample. This blueprint sample helps customers assess specific IRS 1075 controls. Previously updated : 01/08/2021 Last updated : 04/02/2021 # Overview of the IRS 1075 blueprint sample
governance Control Mapping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/ism-protected/control-mapping.md
Title: Australian Government ISM PROTECTED blueprint sample controls description: Control mapping of the Australian Government ISM PROTECTED blueprint sample. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 01/21/2021 Last updated : 04/02/2021 # Control mapping of the Australian Government ISM PROTECTED blueprint sample
governance Deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/ism-protected/deploy.md
Title: Deploy Australian Government ISM PROTECTED blueprint sample description: Deploy steps for the Australian Government ISM PROTECTED blueprint sample including blueprint artifact parameter details. Previously updated : 01/21/2021 Last updated : 04/02/2021 # Deploy the Australian Government ISM PROTECTED blueprint sample
governance Index https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/ism-protected/index.md
Title: Australian Government ISM PROTECTED blueprint sample overview description: Overview of the Australian Government ISM PROTECTED blueprint sample. This blueprint sample helps customers assess specific ISM PROTECTED controls. Previously updated : 01/21/2021 Last updated : 04/02/2021 # Overview of the Australian Government ISM PROTECTED blueprint sample
governance Control Mapping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/media/control-mapping.md
Title: Media blueprint sample controls description: Control mapping of the Media blueprint samples. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 01/08/2021 Last updated : 04/02/2021 # Control mapping of the Media blueprint sample
governance Deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/media/deploy.md
Title: Deploy Media blueprint sample description: Deploy steps for the Media blueprint sample including blueprint artifact parameter details. Previously updated : 01/08/2021 Last updated : 04/02/2021 # Deploy the Media blueprint sample
governance Index https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/media/index.md
Title: Media blueprint sample overview description: Overview of the Media blueprint sample. This blueprint sample helps customers assess specific Media controls. Previously updated : 01/08/2021 Last updated : 04/02/2021 # Overview of the Media blueprint sample
governance Nist Sp 800 171 R2 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/nist-sp-800-171-r2.md
Title: NIST SP 800-171 R2 blueprint sample overview description: Overview of the NIST SP 800-171 R2 blueprint sample. This blueprint sample helps customers assess specific NIST SP 800-171 R2 requirements or controls. Previously updated : 01/27/2021 Last updated : 04/02/2021 # NIST SP 800-171 R2 blueprint sample
governance Nist Sp 800 53 R4 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/nist-sp-800-53-r4.md
Title: NIST SP 800-53 R4 blueprint sample overview description: Overview of the NIST SP 800-53 R4 blueprint sample. This blueprint sample helps customers assess specific NIST SP 800-53 R4 controls. Previously updated : 01/27/2021 Last updated : 04/02/2021 # NIST SP 800-53 R4 blueprint sample
governance Control Mapping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/pci-dss-3.2.1/control-mapping.md
Title: PCI-DSS v3.2.1 blueprint sample controls description: Control mapping of the Payment Card Industry Data Security Standard v3.2.1 blueprint sample to Azure Policy and Azure RBAC. Previously updated : 01/08/2021 Last updated : 04/02/2021 # Control mapping of the PCI-DSS v3.2.1 blueprint sample
governance Deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/pci-dss-3.2.1/deploy.md
Title: Deploy PCI-DSS v3.2.1 blueprint sample description: Deploy steps for the Payment Card Industry Data Security Standard v3.2.1 blueprint sample including blueprint artifact parameter details. Previously updated : 01/08/2021 Last updated : 04/02/2021 # Deploy the PCI-DSS v3.2.1 blueprint sample
governance Index https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/pci-dss-3.2.1/index.md
Title: PCI-DSS v3.2.1 blueprint sample overview description: Overview of the Payment Card Industry Data Security Standard v3.2.1 blueprint sample. This blueprint sample helps customers assess specific controls. Previously updated : 01/08/2021 Last updated : 04/02/2021 # Overview of the PCI-DSS v3.2.1 blueprint sample
governance Control Mapping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/swift-2020/control-mapping.md
Title: SWIFT CSP-CSCF v2020 blueprint sample controls description: Control mapping of the SWIFT CSP-CSCF v2020 blueprint sample. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 01/08/2021 Last updated : 04/02/2021 # Control mapping of the SWIFT CSP-CSCF v2020 blueprint sample
governance Deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/swift-2020/deploy.md
Title: Deploy SWIFT CSP-CSCF v2020 blueprint sample description: Deploy steps for the SWIFT CSP-CSCF v2020 blueprint sample including blueprint artifact parameter details. Previously updated : 01/08/2021 Last updated : 04/02/2021 # Deploy the SWIFT CSP-CSCF v2020 blueprint sample
governance Index https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/swift-2020/index.md
Title: SWIFT CSP-CSCF v2020 blueprint sample overview description: Overview of the SWIFT CSP-CSCF v2020 blueprint sample. This blueprint sample helps customers assess specific SWIFT CSP-CSCF controls. Previously updated : 01/08/2021 Last updated : 04/02/2021 # Overview of the SWIFT CSP-CSCF v2020 blueprint sample
governance Guest Configuration Baseline Linux https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/guest-configuration-baseline-linux.md
Title: Reference - Azure Policy Guest Configuration baseline for Linux description: Details of the Linux baseline on Azure implemented through Azure Policy Guest Configuration. Previously updated : 03/12/2021 Last updated : 04/05/2021
governance Guest Configuration Baseline Windows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/guest-configuration-baseline-windows.md
Title: Reference - Azure Policy Guest Configuration baseline for Windows description: Details of the Windows baseline on Azure implemented through Azure Policy Guest Configuration. Previously updated : 03/11/2021 Last updated : 04/05/2021
hdinsight Hdinsight Restrict Outbound Traffic https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-restrict-outbound-traffic.md
A summary of the steps to lock down egress from your existing HDInsight with Azu
1. Create a subnet. 1. Create a firewall.
-1. Add application rules to the firewall
+1. Add application rules to the firewall.
1. Add network rules to the firewall. 1. Create a routing table.
Create an application rule collection that allows the cluster to send and receiv
| | | | | | | Rule_2 | * | https:443 | login.windows.net | Allows Windows login activity | | Rule_3 | * | https:443 | login.microsoftonline.com | Allows Windows login activity |
- | Rule_4 | * | https:443,http:80 | storage_account_name.blob.core.windows.net | Replace `storage_account_name` with your actual storage account name. To use ONLY https connections, make sure ["secure transfer required"](../storage/common/storage-require-secure-transfer.md) is enabled on the storage account. If you are using Private endpoint to access storage accounts, this step is not needed and storage traffic is not forwarded to the firewall.|
+ | Rule_4 | * | https:443 | storage_account_name.blob.core.windows.net | Replace `storage_account_name` with your actual storage account name. Make sure ["secure transfer required"](../storage/common/storage-require-secure-transfer.md) is enabled on the storage account. If you are using a private endpoint to access storage accounts, this step is not needed and storage traffic is not forwarded to the firewall.|
:::image type="content" source="./media/hdinsight-restrict-outbound-traffic/hdinsight-restrict-outbound-traffic-add-app-rule-collection-details.png" alt-text="Title: Enter application rule collection details":::
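+ If you prefer the Azure CLI over the portal for this step, the following sketch adds one of the rules from the table. It assumes the `azure-firewall` CLI extension is installed; the firewall name, resource group, collection name, and priority are placeholders:
+
+ ```azurecli
+ # Add an Allow rule for login.windows.net to a new application rule collection.
+ az network firewall application-rule create \
+     --resource-group Test-FW-RG \
+     --firewall-name Test-FW01 \
+     --collection-name FwAppRules \
+     --name Rule_2 \
+     --priority 200 \
+     --action Allow \
+     --source-addresses '*' \
+     --protocols Https=443 \
+     --target-fqdns login.windows.net
+ ```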
Create an application rule collection that allows the cluster to send and receiv
### Configure the firewall with network rules
-Create the network rules to correctly configure your HDInsight cluster.
+Create the network rules to correctly configure your HDInsight cluster.
1. Continuing from the prior step, navigate to **Network rule collection** > **+ Add network rule collection**.
Create the network rules to correctly configure your HDInsight cluster.
| Name | Protocol | Source Addresses | Service Tags | Destination Ports | Notes | | | | | | | |
- | Rule_5 | TCP | * | SQL | 1433 | If you are using the default sql servers provided by HDInsight, configure a network rule in the Service Tags section for SQL that will allow you to log and audit SQL traffic. Unless you configured Service Endpoints for SQL Server on the HDInsight subnet, which will bypass the firewall. If you are using custom SQL server for Ambari, Oozie, Ranger and Hive metastores then you only need to allow the traffic to your own custom SQL Servers.|
+ | Rule_5 | TCP | * | SQL | 1433, 11000-11999 | If you are using the default SQL servers provided by HDInsight, configure a network rule in the Service Tags section for SQL that will allow you to log and audit SQL traffic, unless you configured service endpoints for SQL Server on the HDInsight subnet, which bypass the firewall. If you are using a custom SQL server for the Ambari, Oozie, Ranger, and Hive metastores, you only need to allow the traffic to your own custom SQL servers. Refer to [Azure SQL Database and Azure Synapse Analytics connectivity architecture](../azure-sql/database/connectivity-architecture.md) to see why the 11000-11999 port range is also needed in addition to 1433. |
| Rule_6 | TCP | * | Azure Monitor | * | (optional) Customers who plan to use auto scale feature should add this rule. | :::image type="content" source="./media/hdinsight-restrict-outbound-traffic/hdinsight-restrict-outbound-traffic-add-network-rule-collection.png" alt-text="Title: Enter application rule collection"::: 1. Select **Add**.
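+ Similarly, a CLI sketch for the SQL network rule might look like the following. The firewall name, resource group, collection name, and priority are placeholders, and the `azure-firewall` CLI extension is assumed:
+
+ ```azurecli
+ # Allow outbound TCP 1433 and 11000-11999 to the Sql service tag.
+ az network firewall network-rule create \
+     --resource-group Test-FW-RG \
+     --firewall-name Test-FW01 \
+     --collection-name FwNetworkRules \
+     --name Rule_5 \
+     --priority 200 \
+     --action Allow \
+     --protocols TCP \
+     --source-addresses '*' \
+     --destination-addresses Sql \
+     --destination-ports 1433 11000-11999
+ ```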
-### Create and configure a route table
+### Create and configure a route table
Create a route table with the following entries:
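For a scripted version of this step, the commands below sketch the shape of it; the actual route entries come from the table in the full article, so the `1.2.3.4/32` prefix is a stand-in rather than a real HDInsight management address, and all resource names are placeholders.

```bash
# Create the route table.
az network route-table create \
    --resource-group MyResourceGroup \
    --name HDInsightRouteTable

# Example entry in the documented pattern: route an HDInsight management IP
# directly to the Internet so that traffic bypasses the firewall.
az network route-table route create \
    --resource-group MyResourceGroup \
    --route-table-name HDInsightRouteTable \
    --name HDInsightManagementIP1 \
    --address-prefix 1.2.3.4/32 \
    --next-hop-type Internet

# Associate the route table with the HDInsight subnet.
az network vnet subnet update \
    --resource-group MyResourceGroup \
    --vnet-name MyVNet \
    --name MySubnet \
    --route-table HDInsightRouteTable
```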
hdinsight Troubleshoot Sqoop https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/troubleshoot-sqoop.md
+
+ Title: Sqoop import/export command fails for some users in ESP clusters - Azure HDInsight
+description: 'Apache Sqoop import/export command fails with "Import Failed: java.io.IOException: The ownership on the staging directory /user/yourusername/.staging is not as expected" error for some users in Azure HDInsight ESP cluster'
++ Last updated : 04/01/2021++
+# Scenario: Sqoop import/export command fails for usernames greater than 20 characters in Azure HDInsight ESP clusters
+
+This article describes a known issue and workaround when using Azure HDInsight ESP (Enterprise Security Pack) enabled clusters with an ADLS Gen2 (ABFS) storage account.
+
+## Issue
+
+For some users, the Sqoop import/export command fails with the following error:
+
+```
+ERROR tool.ImportTool: Import failed: java.io.IOException:
+The ownership on the staging directory /user/yourlongdomainuserna/.staging is not as expected.
+It is owned by yourlongdomainusername.
+The directory must be owned by the submitter yourlongdomainuserna or yourlongdomainuserna@AADDS.CONTOSO.COM
+```
+
+In the example above, `/user/yourlongdomainuserna/.staging` shows the username `yourlongdomainusername` truncated to 20 characters.
+
+## Cause
+
+The length of the username exceeds 20 characters.
+
+Refer to [How objects and credentials are synchronized in an Azure Active Directory Domain Services managed domain](../active-directory-domain-services/synchronization.md) for further details.
+
+## Workaround
+
+Use a username that is less than or equal to 20 characters.
+
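As a quick illustration of the limit (not part of the official guidance), a shell check like the following flags accounts that will be truncated before you run Sqoop; the username value is a placeholder.

```bash
# Illustrative only: warn when a domain username will be truncated to 20 characters.
username="yourlongdomainusername"
if [ "${#username}" -gt 20 ]; then
  echo "Warning: '${username}' is ${#username} characters and will be truncated to '${username:0:20}'."
fi
```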
+## Next steps
+
healthcare-apis Iot Fhir Portal Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/iot-fhir-portal-quickstart.md
Previously updated : 11/13/2020 Last updated : 04/05/2021
Deploy the [Continuous patient monitoring application template](../../iot-centra
Once you've deployed your IoT Central application, your two out-of-the-box simulated devices will start generating telemetry. For this tutorial, we'll ingest the telemetry from *Smart Vitals Patch* simulator into FHIR via the Azure IoT Connector for FHIR. To export your IoT data to the Azure IoT Connector for FHIR, we'll want to [set up a continuous data export within IoT Central](../../iot-central/core/howto-export-data.md). We'll first need to create a connection to the destination, and then we'll create a data export job to continuously run:
+> [!NOTE]
+> For this section, select **Data export** rather than **Data export (legacy)** in the IoT Central app settings.
+
+[![IoT Central Data Export Settings](media/quickstart-iot-fhir-portal/iot-central-data-export-dashboard.png)](media/quickstart-iot-fhir-portal/iot-central-data-export-dashboard.png#lightbox)
+ Create a new destination:
- Go to the **Destinations** tab and create a new destination.
- Start by giving your destination a unique name.
healthcare-apis Iot Mapping Templates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/iot-mapping-templates.md
Previously updated : 08/03/2020 Last updated : 04/05/2021
The JsonPathContentTemplate allows matching on and extracting values from an Eve
{ "typeName": "bloodpressure", "typeMatchExpression": "$..[?(@systolic && @diastolic)]",
- "deviceIdExpression": "$.deviceid",
+ "deviceIdExpression": "$.deviceId",
"timestampExpression": "$.endDate", "values": [ {
iot-develop Quickstart Devkit Mxchip Az3166 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-develop/quickstart-devkit-mxchip-az3166.md
Last updated 03/17/2021
**Applies to**: [Embedded device development](about-iot-develop.md#embedded-device-development)<br> **Total completion time**: 30 minutes
-[![Browse code](media/common/browse-code-github.svg)](https://github.com/azure-rtos/getting-started/tree/master/MXChip/AZ3166)
+[![Browse code](media/common/browse-code.svg)](https://github.com/azure-rtos/getting-started/tree/master/MXChip/AZ3166)
In this tutorial you use Azure RTOS to connect an MXCHIP AZ3166 IoT DevKit (hereafter, MXCHIP DevKit) to Azure IoT. The article is part of the series [Get started with Azure IoT embedded device development](quickstart-device-development.md). The series introduces device developers to Azure RTOS, and shows how to connect several device evaluation kits to Azure IoT.
iot-hub Tutorial X509 Certificates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/tutorial-x509-certificates.md
The format for a private key store defined by [RFC 5208](https://tools.ietf.org/
A complex format that can store and protect a key and the entire certificate chain. It is commonly used with a .pfx extension. PKCS#12 is synonymous with the PFX format.
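As an illustration of the PFX format described above (file names are placeholders, not part of the tutorial), OpenSSL can bundle a certificate, its private key, and the CA chain into a single PKCS#12 file:

```bash
# Produces device-cert.pfx; you are prompted for an export password.
openssl pkcs12 -export \
    -in device-cert.pem \
    -inkey device-key.pem \
    -certfile ca-chain.pem \
    -out device-cert.pfx
```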
+## For more information
+
+For more information, see the following topics:
+
+* [The layman's guide to X.509 certificate jargon](https://techcommunity.microsoft.com/t5/internet-of-things/the-layman-s-guide-to-x-509-certificate-jargon/ba-p/2203540)
+* [Conceptual understanding of X.509 CA certificates in the IoT industry](https://docs.microsoft.com/azure/iot-hub/iot-hub-x509ca-concept)
+
## Next steps

If you want to generate test certificates that you can use to authenticate devices to your IoT Hub, see the following topics:
iot-hub Tutorial X509 Openssl https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/tutorial-x509-openssl.md
extendedKeyUsage = clientAuth,serverAuth
keyUsage = critical,keyCertSign,cRLSign subjectKeyIdentifier = hash
+[client_ext]
+authorityKeyIdentifier = keyid:always
+basicConstraints = critical,CA:false
+extendedKeyUsage = clientAuth
+keyUsage = critical,digitalSignature
+subjectKeyIdentifier = hash
+
```
## Step 3 - Create a root CA
You now have both a root CA certificate and a subordinate CA certificate. You ca
1. Select **Generate Verification Code**. For more information, see [Prove Possession of a CA certificate](tutorial-x509-prove-possession.md).
-1. Copy the verification code to the clipboard. You must set the verification code as the certificate subject. For example, if the verification code is BB0C656E69AF75E3FB3C8D922C1760C58C1DA5B05AAA9D0A, add that as the subject of your certificate as shown in the next step.
+1. Copy the verification code to the clipboard. You must set the verification code as the certificate subject. For example, if the verification code is BB0C656E69AF75E3FB3C8D922C1760C58C1DA5B05AAA9D0A, add that as the subject of your certificate as shown in step 9.
1. Generate a private key.

   ```bash
- $ openssl req -new -key pop.key -out pop.csr
+ $ openssl genpkey -out pop.key -algorithm RSA -pkeyopt rsa_keygen_bits:2048
+ ```
+9. Generate a certificate signing request (CSR) from the private key. Add the verification code as the subject of your certificate.
+
+ ```bash
+ openssl req -new -key pop.key -out pop.csr
+
-- Country Name (2 letter code) [XX]:. State or Province Name (full name) []:.
You now have both a root CA certificate and a subordinate CA certificate. You ca
```
-9. Create a certificate using the root CA configuration file and the CSR.
+10. Create a certificate using the root CA configuration file and the CSR for the proof of possession certificate.
```bash
openssl ca -config rootca.conf -in pop.csr -out pop.crt -extensions client_ext
```
-10. Select the new certificate in the **Certificate Details** view
+11. Select the new certificate in the **Certificate Details** view. To find the PEM file, navigate to the certs folder.
-11. After the certificate uploads, select **Verify**. The CA certificate status should change to **Verified**.
+12. After the certificate uploads, select **Verify**. The CA certificate status should change to **Verified**.
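For reference, the proof-of-possession commands from the steps above condense into the sequence below. This is a sketch only: it passes the verification code as the CSR subject with `-subj`, which assumes the signing policy in `rootca.conf` requires nothing beyond the common name.

```bash
openssl genpkey -out pop.key -algorithm RSA -pkeyopt rsa_keygen_bits:2048
openssl req -new -key pop.key -out pop.csr \
    -subj "/CN=BB0C656E69AF75E3FB3C8D922C1760C58C1DA5B05AAA9D0A"   # your verification code
openssl ca -config rootca.conf -in pop.csr -out pop.crt -extensions client_ext
```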
## Step 8 - Create a device in your IoT Hub
iot-hub Tutorial X509 Prove Possession https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/tutorial-x509-prove-possession.md
After you upload your root certification authority (CA) certificate or subordina
* If you are using the Bash script supplied by Microsoft, run `./certGen.sh create_verification_certificate "75B86466DA34D2B04C0C4C9557A119687ADAE7D4732BDDB3"` to create a certificate named `verification-code.cert.pem`. For more information, see [Using Microsoft-supplied Scripts](tutorial-x509-scripts.md).
- * If you are using OpenSSL to generate your certificates, you must first generate a private key and a certificate signing request (CSR):
+ * If you are using OpenSSL to generate your certificates, you must first generate a private key and then a certificate signing request (CSR):
```bash
+ $ openssl genpkey -out pop.key -algorithm RSA -pkeyopt rsa_keygen_bits:2048
+ $ openssl req -new -key pop.key -out pop.csr --
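    # Non-interactive alternative (a sketch): pass the verification code as the subject
    # instead of answering the prompts; the value below is the example code shown earlier.
    $ openssl req -new -key pop.key -out pop.csr -subj "/CN=75B86466DA34D2B04C0C4C9557A119687ADAE7D4732BDDB3"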
iot-hub Tutorial X509 Scripts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/tutorial-x509-scripts.md
# Tutorial: Using Microsoft-supplied scripts to create test certificates
-Microsoft provides PowerShell and Bash scripts to help you understand how to create your own X.509 certificates and authenticate them to an IoT Hub. The scripts are located in [GitHub](https://github.com/Azure/azure-iot-sdk-c/tree/master/tools/CACertificates). They are provided for demonstration purposes only. Certificates created by them must not be used for production. The certificates contain hard-coded passwords ("1234") and expire after 30 days. For a production environment, you'll need to use your own best practices for certificate creation and lifetime management.
+Microsoft provides PowerShell and Bash scripts to help you understand how to create your own X.509 certificates and authenticate them to an IoT Hub. The scripts are located in a GitHub [repository](https://github.com/Azure/azure-iot-sdk-c/tree/master/tools/CACertificates). They are provided for demonstration purposes only. Certificates created by them must not be used for production. The certificates contain hard-coded passwords ("1234") and expire after 30 days. For a production environment, you'll need to use your own best practices for certificate creation and lifetime management.
## PowerShell scripts
Microsoft provides PowerShell and Bash scripts to help you understand how to cre
Get OpenSSL for Windows. See <https://www.openssl.org/docs/faq.html#MISC4> for places to download it or <https://www.openssl.org/source/> to build from source. Then run the preliminary scripts:
-1. Copy the scripts from [GitHub](https://github.com/Azure/azure-iot-sdk-c/tree/master/tools/CACertificates) into the local directory in which you want to work. All files will be created as children of this directory.
+1. Copy the scripts from this GitHub [repository](https://github.com/Azure/azure-iot-sdk-c/tree/master/tools/CACertificates) into the local directory in which you want to work. All files will be created as children of this directory.
1. Start PowerShell as an administrator.
load-balancer Load Balancer Tcp Reset https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/load-balancer-tcp-reset.md
TCP keep-alive works for scenarios where battery life isn't a constraint. It isn
## Limitations
- TCP reset only sent during TCP connection in ESTABLISHED state.
-- TCP reset is not sent for internal Load Balancers with HA ports configured.
- TCP idle timeout does not affect load balancing rules on UDP protocol.
## Next steps
- Learn about [Standard Load Balancer](./load-balancer-overview.md).
- Learn about [outbound rules](./load-balancer-outbound-connections.md#outboundrules).
-- [Configure TCP RST on Idle Timeout](load-balancer-tcp-idle-timeout.md)
+- [Configure TCP RST on Idle Timeout](load-balancer-tcp-idle-timeout.md)
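If you manage the rule from the CLI rather than the portal, enabling TCP reset together with the idle timeout looks roughly like the following; the resource names are placeholders and 15 minutes is only an example value.

```bash
# Placeholder names; enable TCP reset on idle timeout for an existing load-balancing rule.
az network lb rule update \
    --resource-group MyResourceGroup \
    --lb-name MyLoadBalancer \
    --name MyHTTPRule \
    --idle-timeout 15 \
    --enable-tcp-reset true
```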
media-services Azure Media Player Accessibility https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/azure-media-player/azure-media-player-accessibility.md
Previously updated : 04/20/2020 Last updated : 04/05/2021 # Accessibility #
media-services Azure Media Player Api Methods https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/azure-media-player/azure-media-player-api-methods.md
Previously updated : 04/20/2020 Last updated : 04/05/2021
media-services Azure Media Player Changelog https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/azure-media-player/azure-media-player-changelog.md
- Title: Azure Media Player Changelog
-description: Azure Media Player changelog.
---- Previously updated : 09/23/2020-
-# Changelog
-
-## 2.3.6 (Official Update September 21 2020)
-
-### Features 2.3.6
-
-Added audio-only support for the azureHtml5JS tech (DASH)
-Support late start of live transcription
-Support language change in live transcription
-
-### Bug Fixes 2.3.6
-
-When using "playsinline" in HLS playbacks on Apple devices, clicking on the "LIVE" button causes the video to restart
-The AMP poster image sometimes causes an exception
-The volume button was missing when playing HLS FairPlay
-[Accessibility] Tooltips not defined for buttons when the keyboard is used
-[Accessibility] The luminosity ratio is less than 1.3:1 for the progress bar
-[Accessibility] The keyboard focus sometimes does not return to the video quality button
-[Accessibility] Controls are not visible on the Video screen, preventing the Narrator from finding them
-
-### Changes 2.3.6
-
-Return meaningful KeyDelivery errors to calling applications
-
-## 2.3.5 (Official Update June 1 2020)
-
-### Bug Fixes 2.3.5
--- [Accessibility] Esc key listener in Options pane is attached to document-- [Accessibility] Prevent the player UI from disappearing if the control bar or the options menu contains focus-- Control bar shows incorrect wall clock time when Wall Clock Time Display Settings is enabled-
-### Changes 2.3.5
--- Added error message for error code 0x00400005 and documented it-
-## 2.3.4 (Official Update March 4 2020)
-
-### Bug Fixes 2.3.4
--- Can't set PlayReady overrideLicenseAcquistionUrl-- Unable to play some content with discontinuities-- [Accessibility] ID attribute value for screen reader alert must be unique-- [Accessibility] While navigating Closed Captions settings dialog box, focus moves out of dialog box-
-### Changes 2.3.4
--- Log Content-Length after a successful download to help analyze decryption errors 2.3.3 (Official Update November 12 2019)-
-### Features 2.3.4
--- Added support for displaying the wall clock time of a video as an overlay, and in the control bar-
-### Bug Fixes 2.3.4
--- Audio track switch works but outputs error on IE11 and Windows7 'Object doesn't support property or method 'enabled''-- Audio track switch fails when buffer is fully loaded-- Audio track switch fails when user pauses video and switches between audio tracks very rapidly-- [Accessibility] Tooltips not defined for Video Control under Video Player-- Missing volume buttons on Html5 depending on when 'loadstart' is received-- [Accessibility] No way to set the alt text for the poster image-- [Accessibility] Application focus lost after selecting 'Done' in captions settings dialog box-- [Accessibility] Incorrect ARIA attributes are defined for 'video' under 'segments preview'-
-### Changes 2.3.4
--- Removed empty caption label/track when playing HLS on iOS and macOS Safari-- Reduced the number of 412s for IMSC1 captions-- Output warning in the console for 10 consecutive empty IMSC1 caption responses to help live debugging-
-## 2.3.2 (Official Update October 9 2019)
-
-### Features 2.3.2
---Added PlayReady support for DASH playback for Chromium Edge browser-
-### Bug Fixes 2.3.2
--- The current playback speed is not visually shown in the playback speed menu unless the user manually sets it-- [Accessibility] 'Settings' pane is not getting collapsed with 'Esc' key-- [Accessibility] AMP shortcut key 'M' doesn't work when Narrator is on-
-### Changes 2.3.2
--- For browsers that do not support E-AC3 audio codec, E-AC3 audio tracks are hidden in the audio track menu-- For browsers that do support E-AC3 audio codec, an E-AC3 audio track is selected by default-- For browsers that do not support audio codec switching, audio tracks with a different codec from the selected track are hidden in the audio track menu-
-## 2.3.1 (Official Update August 12 2019)
-
-### Features 2.3.1
--- Signal an event when emsg boxes are received in DASH playback--Added support to show ec-3 audio tracks in the audio menu on browsers that support ec-3 and allow switching audio track from aac to ec3 and vice versa only on the Chromimum-based Edge browser-
-### Bug Fixes 2.3.1
--- The audio track menu is corrupted after removing ec-3 tracks-- The current time can be great than the video duration-- Setting the playback speed via initialSpeed doesn't work-- Sometimes after a seek, the player seems stuck-- On Edge and IE on a touch screen, after zooming into a page, pressing or hovering over the seekbar does not accurately get the correct segment of the video-- [Accessibility] Aria label for Play/Pause is not descriptive for video player
-Map live segment not found error for flashSS to the correct amp error
-- [Accessibility] Aria roles used for Play/Pause must conform to valid values (.vjs-text-track-display)-- [Accessibility] Certain ARIA roles must be contained by particular parents-- [Accessibility] There is no tooltip defined for play/pause button of the video player
-IMSC1 captions can disappear after seeking within the current video/audio buffer
-
-### Changes 2.3.1
--- Upon getting a segmentDecryptError and the player is already on the live edge, the player now refreshes the manifest instead of trying the next segment-- Added more logging for diagnosis-- Updated documentation to include FairPlay support for iOS Safari-- Added an example for the "srclang" of IMSC1 option-- Added padding, textPadding, boxShadow overrides for text tracks.-- Added an errorcode (0x0020025B) to differentiate that segment download failed due to no internet rather than just throwing 0x00200259-
-## 2.3.0 (Official Release April 30 2019)
-
-### Features 2.3.0
--- Added support for IMSC1 captions for DASH-- Added support for video-only assets for DASH-- Added API presentationTimeOffsetInSec-
-### Bug Fixes 2.3.0
--- AMP LowLatency heuristics profile interferes with iOS video playback
"mute" and "unmute" for some languages have wrong translations
-- The aria-valuenow value of the progress bar slider is sometimes incorrect-- The aria role value of the text track display is incorrect-
-### Changes 2.3.0
--- Logs now include the size of downloaded media fragments-- Removed IE 9 and IE 10 support-- Updated CEA708 sample to show left-align captions-- Include MediaError.message in logs for playback failures-
-## 2.2.4 (Official Update February 22 2019) ##
-
-### Bug Fixes 2.2.4 ###
--- [Bug Fix][AMP][Accessibility] Removed a reachable phantom tab when the error screen appears-- [Bug Fix][AMP] Fixed the hotkey 'M' for IE11 and Edge-- [Bug Fix][AMP] Fixed an exception for CEA708 captions-- [Bug Fix][AMP] Fixed a video freeze issue for the Edge browser-
-### Changes 2.2.4 ###
--- [Change][AMP] When a fragment decryption error happens, the player retries current and various fragments to recover the playback-- [Change][AMP] Made AMP more tolerant of overlapping video or audio fragments-
-## 2.2.3 (Official Update January 9 2019) ##
-
-### Features 2.2.3 ###
--- [Feature][HLS] Added the audio track menu for Safari HLS playback-
-### Bug Fixes 2.2.3 ###
--- [Bug Fix][AMP][Accessibility] During live broadcast playbacks, the "live" button cannot be selected by using the keyboard-- [Bug Fix][AMP] Fixed false positives 0x0400003 errors due to failed MSE test-- [Bug Fix][AMP] Fixed an issue where the video could freeze when starting a live stream-
-### Changes 2.2.3 ###
--- [Change][AMP] Added more information in the log to enable better diagnostics-- [Change][AMP] When more than one bitrate is available at the same screen resolution, all the bitrates are available for selection-
-## 2.2.2 (Official Update) ##
-
-### Bug Fixes 2.2.2 ###
--- [Bug Fix][AMP] When the player encounters a transient network outage, it stops playback immediately-- [Bug Fix][AMP][Accessibility] The error dialog is not accessible by keyboard-- [Bug Fix][AMP] Infinite spinner displayed when playing audio only asset instead of unsupported error-
-### Changes 2.2.2 ###
--- [Change][AMP] added localized strings for advertisement UI-
-## 2.2.1 (Official Update) ##
-
-### Features 2.2.1 ###
--- [Feature][CMAF] Added support for HLS CMAF-
-### Bug Fixes ###
--- [Bug Fix][AMP] uncleared timers in retry logic yielding playback errors-- [Bug Fix][AMP][Firefox] ended event is not fired on Firefox and Chrome when stopped the live program-- [Bug Fix][AMP] Controls displayed after setsource, even when controls are set to false in player options-
-### Changes ###
--- [Change][Live captioning] Changed API name for CEA captions from 608 to 708. For more information, see [CEA708 Captions Settings](/javascript/api/azuremediaplayer/amp.player.cea708captionssettings)-->-
-## 2.2.0 (Official Release) ##
-
-### Features 2.2.0 ###
--- [Feature][Azurehtml5JS][Flash][LiveCaptions]CEA 708 captioning support in Azurehtml5JS and FlashSS tech for clear and AES content.-
-### Bug Fixes 2.2.0 ###
--- [Bug Fix]Flash version detection not working in Chrome/Edge-
-### Changes 2.2.0 ###
--- [Change][AMP][Heuristics]Changed heuristic profile name from QuickStartive to LowLatency-- [Change][Flash]Change in Flash player for version detection to enable playback of AES content with the new Adobe Flash update.-
-## 2.1.9 (Official Hotfix) ##
-
-### Bug Fixes 2.1.9 ###
--- [Bug Fix][Live] Exception occurring when live streams transition to video on demand/live archives-
-### Changes 2.1.9 ###
--- [Change][Flash][AES] Modified Flash tech logic to not use sharedbytearrays for AES decryption as Adobe has blocked the usage as of Flash 30. Please note, playback will only work once Adobe deploys a new version of Flash due to a bug in v30. Please see [known issues](azure-media-player-known-issues.md) for more details-
-## 2.1.8 (Official Update) ##
-
-### Bug Fixes 2.1.8 ###
--- [Bug fix] Spinner occasionally doesn't show post seek and pre- play-- [Bug Fix] Player doesn't start muted when muted option enabled-- [Bug Fix] Volume slider is displayed when controls are set to false-- [Bug Fix] Playback occasionally repeating when user skips to the live edge-- [Bug Fix][Firefox] Player occasionally throws JavaScript exception on load-- [Bug Fix][Accessibility]Play/ Pause/Volume button lose focus outline when selected using keyboard controls-- [Bug Fix] Fixed memory leakage on player is disposed-- [Bug Fix] Calling src() after player errors out doesn't reset the source-- [Bug Fix][Live] AMP is in constant loading state when user clicks on the Live button after broadcast has ended-- [Bug Fix][Chrome] Player hangs and playback fails when browser minimized to background.-
-### Changes 2.1.8 ###
--- [Change]Updated 0x0600001 error to display when AES content is played back with Flash 30 as it's not supported at this time. Please see [known issues](azure-media-player-known-issues.md) for more details-- [Change] Added additional retries for live scenarios when manifest requests 404 or returns empty manifests.-
-## 2.1.7 (Official Update) ##
-
-### Features 2.1.7 ###
--- [Feature][AzureHtml5JS] Added configuration option to flush stale data in the media source buffer-
-### Bug Fixes 2.1.7 ###
--- [Bug Fix][Accessibility][Screen Reader] Removed the blank header the player included when title is not set-- [Bug Fix][UWA] AMP throws exception when playback in Universal Windows App-- [Bug Fix][OSX] setActiveTextTrack() not working in Safari on OSx-- [Bug Fix][Live] Clicking to the live edge after disposing and re initializing player yields exception-- [Bug Fix][Skin] Current time truncated for certain assets-- [Bug Fix][DRM] fix included to support playback in browsers that support multiple CENC DRM-
-### Changes 2.1.7 ###
--- [Change][Samples][Accessibility]Added language tag to all samples-
-## 2.1.6 (Official Update) ##
-
-### Bug Fixes 2.1.6 ###
--- [Bug Fix]AMP displaying incorrect duration for specific asset-- [Bug Fix][FairPlay-HLS] Fairplay errors not propagating to UI-- [Bug Fix]Custom Heuristic properties being ignored in AMP 2.1.5.-
-### Changes 2.1.6 ###
--- [Change][FairPlayDRM] Removed the timeout for both Cert request and license request for FairPlay in order to keep parity with PlayReady and Widevine implementations-- [Change] Misc Heuristic improvements to combat blurry content-
-### Features 2.1.6 ###
--- [Feature] Added support mpd-time-cmaf format-
-## 2.1.5 (Official Hotfix) ##
-
-### Bug Fixes 2.1.5 ###
--- [Bug Fix][Captions] VTT styling not rendered correctly by player-- [Bug Fix][Accessibility]Live button has no aria label-
-## 2.1.4 (Official Update) ##
-
-### Bug Fixes 2.1.4 ###
--- [Bug Fix][Accessibility][Focus]Users cannot tab to focus on custom buttons added to the right of the full screen button in the control bar-- [Bug Fix][IE11][Volume bar]Tabbing to volume pop-up makes the entire video screen flash in IE11 while in full-screen mode-- [Bug Fix][Skin|Flush] Space displayed between control bar and volume bar pop-up-- [Bug Fix][AMP][Captions]Old embedded tracks are not cleared when source is changed on an existing player-- [Bug Fix][Accessibility][Narrator]Screen Reader reads volume control incorrectly-- [Bug Fix][FlashSS]Play Event occasionally doesn't fire from Flash tech-- [Bug Fix][AMP][Focus] Play/pause requires two clicks when player has focus and is in full screen mode-- [Bug Fix][AMP][Skin]Incorrect duration being displayed on progress bar for a specific asset-- [Bug Fix][Ads][Ad Butler] VAST parser doesn't handle VAST file that does not have progress event-- [Bug Fix][SDN][AMP 2.1.1] Fixed issue for Hive SDN plugin support-- [Bug Fix][Accessibility]Narrator reads "Midnight Mute Button" when user has focus of volume button-
-### Changes 2.1.4 ###
--- [Change][Accessibility][Assistive Technology] Buttons now have aria-live property to improve experience with assistive technology-- [Change][Accessibility][Volume button|Narrator]Improved accessibility of volume button by modifying the tabbing functionality and the slider behavior. These changes make it easier for keyboard users to modify the player's volume-- [Change]Increased inactivity context menu timeout from 3 to 5 seconds-- [Change][Accessibility][Luminosity] Improved luminosity contrast ratio on dropdown menus in captions settings-
-## 2.1.3 (Official Update) ##
-
-### Bug Fixes 2.1.3 ###
--- [Bug Fix][Plugins|Title Overlay] Title Overlay plugin throws JS exceptions with AMP v2.X+-- [Bug Fix]Source Set event is sent to JavaScript console even when logging is turned off-- [Bug Fix][Skin] Player time tips are rendered outside context of the player when hovering over either end duration bar-- [Bug Fix][Accessibility][Screen Reader] Narrator reads "Region Landmark" or "Video Player Region Landmark" when viewer has focus on player-- [Bug Fix][AMP] Cannot disable player outline via CSS-- [Bug Fix][Accessibility]Cannot tab to focus on entire player when user is in full-screen mode-- [Bug Fix][Skin][Live]Skin not responsive to localized LIVE text in Japanese-- [Bug Fix][Skin]Duration and current time get cut off when stream > 60 min
- -[Bug Fix][iPhone|Live]player shows text for current time/duration in control bar
-- [Bug Fix][AMP] Calling player heuristics APIs yields JavaScript exceptions-- [Bug Fix][Native Html5|iOS] Videotag property "playsinline" not propagating to player-- [Bug Fix][iOS|iframe]Player cannot enter fullscreen on iPhone if player is loaded in an iframe-- [Bug Fix][AMP][Heuristics]AMP always operates with hybrid profile regardless of player options-- [Bug Fix][AMP|Win8.1]throws when hosted in Win8.1 app with a webview-
-### Changes 2.1.3 ###
--- [Change][AMP] Added CDN endpoint information in FragmentDownloadComplete event-- [Change][AMP][Live] Improved and optimized live streaming latency-
-## 2.1.2 (Official Hotfix) ##
-
-### Bug Fixes 2.1.2 ####
--- [Bug Fix][Accessibility][Windows Narrator]Narrator reads "Progress midnight" when user has context of progress bar and current time is 0:00-- [Bug Fix][Skin]logo size is hard-coded in JavaScript code-- [Accessibility][HotKeys] Hotkeys not enabled when player is clicked.-
-### Changes 2.1.2 ####
--- [Change][Logging]Log manifest URL when player fails to load manifest-
-### Features 2.1.2 ###
--- [Change][Performance][Optimization] Improved player load and start-up times-
-## 2.1.1 (Official Update) ##
-
-### Bug Fixes 2.1.1 ####
--- [Bug Fix][iOS]Setting Autoplay to false yields infinite spinner in Safari for iOS-- [Bug Fix] Seeking to a time greater than content duration yields infinite spinner-- [Bug Fix] Hotkeys require multiple keyboard tabs to get context of the player to work-- [Bug Fix] Video freezes for a few seconds after resizing the player in certain assets-- [Bug Fix] Infinite spinner(after seek completes) when user does multiple seeks quickly-- [Bug Fix] Control bar is not hidden during inactivity-- [Bug Fix] Opening a webapp that hosts AMP can cause the webpage to be loaded twice-- [Bug Fix] Infinite while playing content certain assets via Flash Tech-- [Bug Fix] More Options menu not being displayed with 3rd party plugins-- [Bug Fix][Skin|Tube][Live] Two live icons are displayed when player is at the live edge of a program-- [Bug Fix][Skin]Logo cannot be disabled-- [Bug Fix][DD+ Content] Continuous spinner shows up for the assets containing Dolby Digital audio track-- [Bug Fix] Latest AMP freezes when switching audio language tracks during livestream-- [Bug Fix] fixed background disappearance for spinner-- [Bug Fix]Infinite spinner in AES flash token static samples bug fixes-
-### Changes 2.1.1 ####
--- [Change] Added Error Code for Widevine Https requirement: as of Chrome v58, widevine content must be loaded/played back via the `https://` protocol otherwise playback will fail.-- [Change] Added aria label for loading spinner so assistive technology can narrate "video loading" when content is loading -
-## 2.1.0 (Official Release) ##
-
-### Features 2.1.0 ###
--- [Feature][AzureHtml5JS]VOD Ad Support for pre- mid- post-rolls-- [Feature][Beta][AzureHtml5JS] Live Ad support for pre- mid- post-rolls-- [Feature] Added new skin option - AMP-flush-- [Feature] Added improved aria labels for better integration with screen readers/assistive technology-- [Feature][Skin] Skin now shows all icons and buttons clearly in high contrast mode-
-### Bug Fixes 2.1.0 ###
--- [Bug Fix] Number of accessibility and UI fixes-- [Bug Fix] AMP not loading correctly in IE9-
-### Changes 2.1.0 ###
--- [Change] Restructured DOM elements in player to accommodate ads work-- [Change] Switched from CSS to SCSS for skin development-- [Change][Samples]Added sample for VOD ads-- [Change][Samples]Added sample for playback speed-- [Change][Samples]Added sample for Flush Skin-
-## 2.0.0 (Beta Release) ##
--- [Change]updated to VJS5-- [feature] Added new fluid API for player responsiveness fluid-- [Feature] Playback speed-- [Change] Switched from CSS to SCSS for skin-
-## 1.8.3 (Official Hotfix Update) ##
-
-### Bug Fixes 1.8.3 ###
--- [Bug Fix][AzureHtml5JS] Certain assets with negative DTS won't playback in Chrome-
-## 1.8.2 (Official Hotfix Update) ##
-
-### Bug Fixes 1.8.2 ###
--- [Bug Fix][AzureHtml5JS] Higher audio bitrates won't play back via AzureHtml5JS-
-## 1.8.1 (Official Update) ##
-
-### Bug Fixes 1.8.1 ###
--- [Bug Fix][iOS] Captions/subtitles not showing up in native player-- [Bug Fix][AMP] CDN-backed streaming URLs appended with authentication tokens not playing-- [Bug Fix][FairPlay] FairPlay Error code missing Tech ID (Bits [31-28] of the ErrorCode) see Error Codes for more details-- [Bug Fix][Safari][PlayReady] PlayReady content in Safari yielding infinite spinner-
-### Changes 1.8.1 ###
--- [Change][Html5]Change native Html5 tech verbose logs to contain events from VideoTag-
-## 1.8.0 (Official Update) ##
-
-### Features 1.8.0 ###
--- [Features][DRM] Added FairPlay Support (see [Protected Content](azure-media-player-protected-content.md) for more info)-
-### Bug Fixes 1.8.0 ###
--- [Bug Fix][AMP] User seek doesn't trigger a wait event when network is throttled-- [Bug Fix][FlashSS] Selecting quality in flash tech throws exception-- [Bug Fix][AMP] Dynamically selecting quality does show in context menu-- [Bug Fix][Skin] It's difficult to select the last menu item of context menus-
-### Changes 1.8.0 ###
--- [Change] Updated player to current Chrome EME requirements-- [Change] Default techOrder changed to accommodate new tech- html5FairPlayHLS (see [Protected Content](azure-media-player-protected-content.md) for more info)-- [Change][AzureHtml5JS] Enabled MPEG-Dash playback in Safari-- [Change][Samples] Changed Multi-DRM samples to accommodate FairPlay-
-## 1.7.4 (Official Hotfix Update) ##
-
-### Bug Fixes 1.7.4 ###
--- [Bug Fix][Chrome] Blue outline appears around seek handle when user has context of player-- [Bug Fix][IE9] JavaScript exception thrown when player loaded in IE9-
-## 1.7.3 (Official Hotfix Update) ##
-
-### Bug Fixes 1.7.3 ###
--- [Bug Fix][AzureHtml5JS] Player timing out in constrained networks-
-### Changes 1.7.3 ###
--- [Change] Enabling Webcrypto on Edge for decrypting AES content-- [Change] Optimizing AMP heuristics to account for cached chunks-- [Change][AzureHtml5JS] Optimize heuristic by reduce bandwidth estimation latency-
-## 1.7.2 (Official Hotfix Update) ##
-
-### Features 1.7.2 ###
-<!API needs onboarding. Removed link to API until remedied.>
-- [Feature][AzureHtml5JS|Firefox] Enable Widevine playback with EME for Firefox 47+-- [Feature] Add event for player disposing
-<!-- ([disposing](https://docsupdatetracker.net/index.html#static-amp.eventname.disposing)) -->
-
-### Bug Fixes 1.7.2 ###
--- [Bug Fix] Encoded Akamai CDN URL query parameters not correctly decoded-- [Bug Fix] Exception being thrown on manifestPlayableWindowLength()-- [Bug Fix] Viewer cannot always click play on the video after the video has ended to rewatch-- [Bug Fix] Responsive sizing not conforming to rapid window size changes-- [Bug Fix][Edge|IE] Responsive sizing not taking into effect on page load for width=x, height=auto-- [Bug Fix][Android|Chrome] Chrome asking permissions to playback DRM content when content is not encrypted-- [Bug Fix][Accessibility][Edge] Keyboard controls do not correctly select context menu items-- [Bug Fix][Accessibility] Missing displayed border in high contrast mode-- [Bug Fix][FlashSS] Mouse up event listener not removed after player dispose causes exception-- [Bug Fix][FlashSS] Issue parsing manifest URL with encoded spaces-- [Bug Fix][iOS] Type error when evaluating tech.featuresVolumeControl-
-### Changes 1.7.2 ###
--- [Change][DRM] Moved DRM checks after set source to only check when content is encrypted-- [Change][AES] Removed undefined body of type/plain from Key delivery request-- [Change][Accessibility] Windows narrator now reads "Media Player" when context is on player instead of properties-
-## 1.7.1 (Official Hotfix Update) ##
-
-### Features 1.7.1 ###
--- [Feature] Added option for Hybrid Heuristic profile (this profile is set by default)-
-### Bug Fixes 1.7.1 ###
--- [Bug Fix] Responsive design doesn't work as per HTML5 standard (width=100%, height=auto)-- [Bug Fix] Percentage values for width and height not behaving as expected in v1.7.0-
-## 1.7.0 (Official Update) ##
-
-### Features 1.7.0 ###
-<!API needs onboarding. Removed link until remedied.>
-- [Feature][AzureHtml5JS][FlashSS] Added currentMediaTime() to get the encoder media time of the current time in seconds-- [Feature][FlashSS] Implemented download telemetry APIs with videoBufferData() and audioBufferData()<!-- (see [BufferData](https://docsupdatetracker.net/index.html#amp.bufferdata) for more details) -->-- [Feature][FlashSS] Added 'downloadbitratechanged' event-- [Feature] Loading time improved compared to older versions of player-- [Feature] Errors are logged to JavaScript console-
-### Bug Fixes 1.7.0 ###
--- [Bug Fix] Encoded poster URL with query string parameters not displaying in player-- [Bug Fix] Exception thrown when no tech loaded and API amp.Player.poster() is called-- [Bug Fix] Exception thrown when functions try to access player after disposed-- [Bug Fix][Accessibility] Missing outline on focus on progress bar seek head-- [Bug Fix][Accessibility] Context menus have a shadow in high contrast mode-- [Bug Fix][iOS] native player WebVTT captions playback not working-- [Bug Fix][AzureHtml5JS] Error 0x0100002 should be shown when playing HTTP stream on HTTPS site that instead yields infinite spinner as a result of mixed content-- [Bug Fix][AzureHtml5JS] Missing end segment causing looping health check error displaying a perceived infinite buffering state-- [Bug Fix][AzureHtml5JS] Incorrect audio track name in menu when useManifestForLabel=false and three letter language codes are used-- [Bug Fix][AzureHtml5JS|Chrome] Perceived infinite buffer state at the end of content caused by floating point imprecision in duration with JavaScript in Chrome-- [Bug Fix][FlashSS] Non-fatal intermittent error momentarily displayed when flash player created-- [Bug Fix][FlashSS] Playback failing when video and audio streams use different timescales due to rounding imprecision failing with "Fragment url (...) is failed to generate FLVTags"-- [Bug Fix][FlashSS] Issues parsing manifest urls with encoded spaces-- [Bug Fix][FlashSS] Missing check to determine if Flash player version >= 11.4 that causes an error in playback instead of falling back to the next tech in the techOrder-- [Bug Fix][FlashSS][AES] Issues accepting AES tokens with underscores in it-- [Bug Fix][SilverlightSS|OSX] "//" prefixing a manifest instead of the protocol (HTTP or HTTPS) is recognized as a local file yielding infinite spinner-
-### Changes 1.7.0 ###
--- [Change][FlashSS] Merged SWF Scripts ("MSAdaptiveStreamingPlugin-osmf2.0.swf" and "StrobeMediaPlayback.2.0.swf") into a single SWF called "StrobeMediaPlayback.2.0.swf"-- [Change][FlashSS] Updated error code propagation to get more precise error codes (ex. 404s now result in 0x30200194 instead of generic error 0x30200000)-
-## 1.6.3 (Official Hotfix Update) ##
-
-### Bug Fixes 1.6.3 ###
--- [Bug Fix] JavaScript runtime exception when the hotkeys event handler is executed after the disposing of the player-- [Bug Fix][Android][AzureHtml5JS] No playback on mobile device using cellular network-- [Bug Fix] Updated Forge to run as web worker to free up UI-
-## 1.6.2 (Official Hotfix Update) ##
-
-### Features 1.6.2 ###
--- [Feature] Added additional languages for localization (see documentation for more details)-
-### Bug Fixes 1.6.2 ###
--- [Bug Fix][IE9-10] Clicking on areas around the player minimized browser window due to IE9/IE10 bug that minimizes on window.blur()-- [Bug Fix][FlashSS] Not accepting AES tokens with underscores-
-## 1.6.1 (Official Hotfix Update) ##
-
-### Bug Fixes 1.6.1 ###
--- [Bug Fix][FlashSS|Edge,IE][SilverlightSS|IE] Can't get focus on other UI elements for inputs or other in IE/Edge-- [Bug Fix] AES playback failing when forge undefined-- [Bug Fix][Android][AzureHtml5JS|Chrome] Continuous spinner not playing back content when in health check loop-- [Bug Fix][IE9] console.log() not supported by IE 9 causing exception-
-## 1.6.0 (Official Update) ##
-
-### Features 1.6.0 ###
--- [Feature] 33% size reduction of azuremediaplayer.min.js-- [Feature][AzureHtml5JS|Edge][Untested] Support for DD+ audio streams in Edge (no codec switching after initial choice). App must select correct audio stream at this time.-- [Feature] Hot key controls (see docs for more details)-- [Feature] Progress time tip hover for time accurate seeking-- [Feature] Allow for async detection of plugins if setupDone method exists in plugin-
-### Bug Fixes 1.6.0 ###
--- [Bug Fix] Memory log not flushing on getMemoryLog(true)-- [Bug Fix] Bitrate selection box resets on mouse move causing issue selecting lower bitrates through mouse control-- [Bug Fix] Mac Office in app crashes when performing DRM check-- [Bug Fix] CSS classes are easily accidentally overwritten-- [Bug Fix][Chrome] Update identification from user-agent string browser is Edge-- [Bug Fix][AzureHtml5JS] Captions button not showing up in tool bar in Edge(Win10) or Chrome(Mac)-- [Bug Fix][Android][AzureHtml5JS|Chrome] InvalidStateError exception on endOfStream() call on short videos-- [Bug Fix][Firefox] Removal of DRM warning caused by Firefox when checking browser capabilities-- [Bug Fix][Html5] Subtitle/Captions not shown with progressive mp4 content-- [Bug Fix][FlashSS] Messages with matching timestamps were logged in reverse order-- [Bug Fix][Accessibility][Chrome|Firefox] Tab and select controls automatically select first menu item-- [Bug Fix][Accessibility] Tab to control volume button-
-### Changes 1.6.0 ###
--- [Change] Use AES decryption time on quality level selection-- [Change] Update URL rewriter to use HLS v4 before HLS v3 for multi-audio streams-- [Change] Set nativeControlsForTouch to false as default (must be false to work correctly)-
-## 1.5.0 (Official Update) ##
-
-### Features 1.5.0 ###
--- [Feature] Enhancements for general web security (prevention of injection, XSS, etc.)-- [Feature] SDN plugin integration hooks for sourceset event and options.sdn-- [Feature] Robustness handling of 5XX and 4XX errors during playback-
-### Bug Fixes 1.5.0 ###
--- [Bug Fix] Update CSS minification to use HTML entity font codes for buttons instead of Unicode-- [Bug Fix] [AzureHtml5JS] Multi-DRM content always selecting the first element's token from protectionInfo causing second DRM to fail-- [Bug Fix] [AzureHtml5JS] Seeking never completes when seeking in an area with missing segments.-- [Bug Fix] [AzureHtml5JS|Edge] Enable prefixed EME in Edge update for PlayReady playback-- [Bug Fix] [AzureHtml5JS|Firefox] Update EME check to allow Firefox v42+ (with MSE) to fallback to Silverlight for protected content-- [Bug Fix] [FlashSS] Update error.message from number to detailed string-
-### Changes 1.5.0 ###
--- [Change] Posters currently only work as absolute URLs.-
-## 1.4.0 (Official Update) ##
-
-### Features 1.4.0 ###
--- [Feature] [AzureHtml5JS|Chrome] Simple Widevine DRM support-- [Feature] [AzureHtml5JS] Robustness handling of 404/412 errors during playback-
-### Bug Fixes 1.4.0 ###
--- [Bug Fix] [FlashSS] Enhancement for parameter validation-
-## 1.3.0 (Official Update) ##
-
-### Features 1.3.0 ###
--- [Feature] [AzureHtml5JS] [FlashSS] Audio switching of the same codec Multi-Audio content-
-### Bug Fixes 1.3.0 ###
--- [Bug Fix] [AzureHtml5JS|Chrome] Intermittent infinite spinner-- [Bug Fix] [AzureHtml5JS|IE][Windows Phone] Exception causing Windows Phone to have playback issues-- [Bug Fix] [FlashSS] Autoplay set to false fails for additional instances-- [Bug Fix] UI menu sizing issues-
-## 1.2.0 (Official Update) ##
-
-### Features 1.2.0 ###
--- [Feature] [AzureHtml5JS|Firefox] Support when MSE is enabled-- [Feature] No longer require app to provide paths for fallback tech binaries (swf, xap). Path is relative to the Azure Media Player script.-
-### Bug Fixes 1.2.0 ###
--- [Bug Fix] [AzureHtml5JS|Chrome] Player drifts behind live edge when player in the background-- [Bug Fix] [AzureHtml5JS|Edge] Full screen not working-- [Bug Fix] [AzureHtml5JS] Logging wasn't enabled properly when set in options-- [Bug Fix] [Flash] Both "buffering" and buffering icon show during waiting event-- [Bug Fix] Allow playback to continue if initial bandwidth request fails-- [Bug Fix] Player fails to load when initialized with undefined options-- [Bug Fix] When attempting to dispose the player after it is already disposed, a vdata exception occurs-- [Bug Fix] Quality bar icons mapped incorrectly-
-## 1.1.1 (Official Hotfix Update) ##
-
-### Bug Fixes 1.1.1 ###
--- [Bug Fix] Older IE full screen issue-- [Bug Fix] Plugins no longer overwritten-
-## 1.1.0 (Official Update) ##
-
-### Features 1.1.0 ###
--- [Feature] Update UI Localization strings-
-### Bug Fixes 1.1.0 ###
--- [Bug Fix] Big Play Button does not have enough contrast-- [Bug Fix] Visual tab focus indicator-- [Bug Fix] Select Bitrate menu now using correct resolution information-- [Bug Fix] More options menu now dynamically sized-- [Bug Fix] Various UI issues-
-## 1.0.0 (Official Release) ##
-
-### Features 1.0.0 ###
--- [Feature] Basic accessibility testing for tab control, focus control, screen reader, high contrast UI-- [Feature] Updated UI-- [Feature] Dev logging-- [Feature] API for dynamically setting captions/subtitles tracks-- [Feature] Basic localization features-- [Feature] Error code consolidation across techs-- [Feature] New error code for when plugins (like Flash or Silverlight) aren't installed-- [Feature] [AzureHtml5JS] Implemented basic diagnostic events-
-### Bug Fixes 1.0.0 ###
-<!What is that actually supposed to say?>
-- [Bug Fix] [AzureHtml5JS] Live playback freezing on MPD updates when there are small imprecisions in the timestamp-- [Bug Fix] [AzureHtml5JS] Mitigated several Live playback issues-- [Bug Fix] [AzureHtml5JS] Flush buffers when window size heuristics is on and go to a higher resolution screen-- [Bug Fix] [AzureHtml5JS] Chrome now properly shows ended event. Linked to previous known issue of *Chrome will not properly send ended event when using AzureHtml5JS. There is an issue in the underlying browser.*-- [Bug Fix] [AzureHtml5JS] Disabled Safari for this tech in order to address *Playback issue with OSX Yosemite with AzureHtml5JS tech. There are MSE implementation issues. Temporary Mitigation: force flashSS, silverlightSS as tech order for these user agents*-- [Bug Fix] [FlashSS] loadstart fired after error occurred-
-## 0.2.0 (Beta) ##
-
-### Features 0.2.0 ###
--- [Feature] Completed testing for PlayReady and AES for on demand and live - see compatibility matrix-- [Feature] Handling Discontinuities-- [Feature] Support for timestamps greater than 2^53-- [Feature] URL query parameter persists to the manifest request-- [Feature] [Untested] Support for `QuickStart` and `HighQuality` heuristics profiles-- [Feature] [Untested] Exposing video stream information for bitrates, width and height on AzureHtml5JS and FlashSS-- [Feature] [Untested] Select Bitrate on AzureHtml5JS and FlashSS (see API documentation)-
-### Bug Fixes 0.2.0 ###
--- [Bug Fix] large play button now viewable on WP8.1-- [Bug Fix] fixed multiple live playback issues-- [Bug Fix] unmute button now works on the UI-- [Bug Fix] updated UI loading experience for autoplay mode-- [Bug Fix] AMD loader issue and define method conflicts-- [Bug Fix] WP 8.1 Cordova App loading issue-- [Bug Fix] Protected content queries platform/tech supported ProtectionType to select the appropriate tech for playback. Fixes previous known issue of '_PlayReady content on Chrome (desktop) / Safari 8 (on OSX Yosemite) currently does not fallback to Silverlight player_'-- [Bug Fix] uncaught exception on WinServer 2012 R2 due to Media Foundation not installed on that machine by default. Attempt to use HTML video tag APIs, that are not implemented, thus throwing an error. Current mitigation is to catch that error and return false instead of throwing the error.-- [Bug Fix] always get the init segment after seek or http failure to prevent glitches during playback-- [Bug Fix] turn off tracking simulated progress and timeupdates when Error has occurred.-- [Bug Fix] remove right click menu-- [Bug Fix] [AzureHtml5JS] error message not being displayed when invalid token set for PlayReady content-- [Bug Fix] [AzureHtml5JS] going fullscreen during live playback wasn't taking window size heuristics into account-- [Bug Fix] [FlashSS] Removed Strobe Media Player displayed messages so that only Azure Media Player messages are shown-- [Bug Fix] [SilverlightSS] not getting 'seeked' event when we seek beyond duration or less than 0-
-## 0.1.0 (Beta Release) ##
-
-Initial Pre-Release
-
-## Next steps ##
--- [Azure Media Player Quickstart](azure-media-player-quickstart.md)
media-services Azure Media Player Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/azure-media-player/azure-media-player-error-codes.md
Previously updated : 04/20/2020 Last updated : 04/05/2021 # Error codes #
media-services Azure Media Player Feature List https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/azure-media-player/azure-media-player-feature-list.md
Previously updated : 04/20/2020 Last updated : 04/05/2021 # Feature list #
media-services Azure Media Player Full Setup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/azure-media-player/azure-media-player-full-setup.md
Previously updated : 04/20/2020 Last updated : 04/05/2021
Azure Media Player is easy to set up. It only takes a few moments to get basic p
## Step 1: Include the JavaScript and CSS files in the head of your page ##
-With Azure Media Player, you can access the scripts from the CDN hosted version. It's often recommended now to put JavaScript before the end body tag `<body>` instead of the `<head>`, but Azure Meia Player includes an 'HTML5 Shiv', which needs to be in the head for older IE versions to respect the video tag as a valid element.
+With Azure Media Player, you can access the scripts from the CDN hosted version. It's often recommended now to put JavaScript before the end body tag `<body>` instead of the `<head>`, but Azure Media Player includes an 'HTML5 Shiv', which needs to be in the head for older IE versions to respect the video tag as a valid element.
> [!NOTE] > If you're already using an HTML5 shiv like [Modernizr](https://modernizr.com/) you can include the Azure Media Player JavaScript anywhere. However make sure your version of Modernizr includes the shiv for video.
With Azure Media Player, you can access the scripts from the CDN hosted version.
``` > [!IMPORTANT]
-> You should **NOT** use the `latest` version in production, as this is subject to change on demand. Replace `latest` with a version of Azure Media Player. For example, replace `latest` with `2.1.1`. Azure Media Player versions can be queried from [here](azure-media-player-changelog.md).
+> You should **NOT** use the `latest` version in production, as this is subject to change on demand. Replace `latest` with a version of Azure Media Player. For example, replace `latest` with `2.1.1`. Azure Media Player versions can be queried from [here](https://amp.azure.net/libs/amp/latest/docs/changelog.html).
> [!NOTE] > Since the `1.2.0` release, it is no longer required to include the location to the fallback techs (it will automatically pick up the location from the relative path of the azuremediaplayer.min.js file). You can modify the location of the fallback techs by adding the following script in the `<head>` after the above scripts.
media-services Azure Media Player Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/azure-media-player/azure-media-player-known-issues.md
Previously updated : 05/11/2020 Last updated : 04/05/2021 # Known Issues #
media-services Azure Media Player Localization https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/azure-media-player/azure-media-player-localization.md
Previously updated : 04/20/2020 Last updated : 04/05/2021 # Azure Media Player localization #
media-services Azure Media Player Options https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/azure-media-player/azure-media-player-options.md
Previously updated : 04/20/2020 Last updated : 04/05/2021 # Options #
By setting this option to true video element will take full width of the parent
`<video ... data-setup='{"playbackSpeed": {"enabled": true}}'>` -
-Other properties of the `playbackSpeed` setting are given by [PlaybackSpeedOptions](/javascript/api/azuremediaplayer/amp.player.playbackspeedoptions) object.
+Other properties of the `playbackSpeed` setting are given by `PlaybackSpeedOptions` object.
Example of setting playback speed options in JavaScript:
media-services Azure Media Player Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/azure-media-player/azure-media-player-overview.md
Previously updated : 04/20/2020 Last updated : 04/05/2021 # Azure Media Player overview #
media-services Azure Media Player Playback Technology https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/azure-media-player/azure-media-player-playback-technology.md
Previously updated : 04/20/2020 Last updated : 04/05/2021
Playback Technology refers to the specific browser or plugin technology used to
- **azureHtml5JS**: utilizes MSE and EME standards in conjunction with the video element for plugin-less based playback of DASH content with support for AES-128 bit envelope encrypted content or DRM common encrypted content (via PlayReady and Widevine when the browser supports it) from Azure Media Services
- **flashSS**: utilizes flash player technology to play back Smooth content with support for AES-128 bit envelope decryption from Azure Media Services - requires Flash version of 11.4 or higher
- **html5FairPlayHLS**: utilizes Safari specific in browser-based playback technology via HLS with the video element. This tech is required to play back FairPlay protected content from Azure Media Services and was added to the techOrder as of 10/19/16-
+- **SilverlightSS**: utilizes Silverlight technology to play back Smooth content with support for PlayReady protected content from Azure Media Services.
- **html5**: utilizes in browser-based playback technology with the video element. When on an Apple iOS or Android device, this tech allows playback of HLS streams with some basic support for AES-128 bit envelope encryption or DRM content (via FairPlay when the browser supports it).
## Tech Order ##
Given the recommended tech order with streaming content from Azure Media Service
| Browser | OS | Expected Tech (Clear) | Expected Tech (AES) | Expected Tech (DRM) |
|-|-|-|-|-|
| EdgeIE 11 | Windows 10, Windows 8.1, Windows Phone 10<sup>1</sup> | azureHtml5JS | azureHtml5JS | azureHtml5JS (PlayReady) |
-| IE 11 | Windows 7, Windows Vista<sup>1</sup> | flashSS | flashSS | silverlightSS (PlayReady) |
+| IE 11 | Windows 7, Windows Vista<sup>1</sup> | flashSS | flashSS | SilverlightSS (PlayReady) |
| IE 11 | Windows Phone 8.1 | azureHtml5JS | azureHtml5JS | not supported |
| Edge | Xbox One<sup>1</sup> (Nov 2015 update) | azureHtml5JS | azureHtml5JS | not supported |
| Chrome 37+ | Windows 10, Windows 8.1, macOS X Yosemite<sup>1</sup> | azureHtml5JS | azureHtml5JS | azureHtml5JS (Widevine) |
| Firefox 47+ | Windows 10, Windows 8.1, macOS X Yosemite+<sup>1</sup> | azureHtml5JS | azureHtml5JS | azureHtml5JS (Widevine) |
-| Firefox 42-46 | Windows 10, Windows 8.1, macOS X Yosemite+<sup>1</sup> | azureHtml5JS | azureHtml5JS | silverlightSS (PlayReady) |
-| Firefox 35-41 | Windows 10, Windows 8.1 | flashSS | flashSS | silverlightSS (PlayReady) |
+| Firefox 42-46 | Windows 10, Windows 8.1, macOS X Yosemite+<sup>1</sup> | azureHtml5JS | azureHtml5JS | SilverlightSS (PlayReady) |
+| Firefox 35-41 | Windows 10, Windows 8.1 | flashSS | flashSS | SilverlightSS (PlayReady) |
| Safari | iOS 6+ | html5 | html5 (no token)<sup>3</sup> | not supported |
| Safari 8+ | OS X Yosemite+ | azureHtml5JS | azureHtml5JS | html5FairPlayHLS (FairPlay) |
-| Safari 6 | OS X Mountain Lion<sup>1</sup> | flashSS | flashSS | silverlightSS (PlayReady) |
+| Safari 6 | OS X Mountain Lion<sup>1</sup> | flashSS | flashSS | SilverlightSS (PlayReady) |
| Chrome 37+ | Android 4.4.4+<sup>2</sup> | azureHtml5JS | azureHtml5JS | azureHtml5JS (Widevine) |
| Chrome 37+ | Android 4.0<sup>2</sup> | html5 | html5 (no token)<sup>3</sup> | not supported |
| Firefox 42+ | Android 5.0+<sup>2</sup> | azureHtml5JS | azureHtml5JS | not supported |
media-services Azure Media Player Plugin Gallery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/azure-media-player/azure-media-player-plugin-gallery.md
Previously updated : 04/20/2020 Last updated : 04/05/2021 # Azure Media Player Plugin Gallery #
media-services Azure Media Player Protected Content https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/azure-media-player/azure-media-player-protected-content.md
Previously updated : 04/20/2020 Last updated : 04/05/2021
media-services Azure Media Player Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/azure-media-player/azure-media-player-quickstart.md
Previously updated : 04/20/2020 Last updated : 04/05/2021 # Azure Media Player quickstart
Azure Media Player is easy to set up. It only takes a few minutes to get basic p
``` > [!IMPORTANT]
-> You should **NOT** use the `latest` version in production, as this is subject to change on demand. Replace `latest` with a version of Azure Media Player; for example replace `latest` with `1.0.0`. Azure Media Player versions can be queried from [here](azure-media-player-changelog.md).
+> You should **NOT** use the `latest` version in production, as this is subject to change on demand. Replace `latest` with a version of Azure Media Player; for example replace `latest` with `1.0.0`. Azure Media Player versions can be queried from [here](https://amp.azure.net/libs/amp/latest/docs/changelog.html).
## Use the video element
media-services Azure Media Player Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/azure-media-player/azure-media-player-release-notes.md
- Title: Azure Media Player Release Notes
-description: Release notes for Azure Media Player
---- Previously updated : 04/05/2020-
-# Release notes
-
-Below is a list of known issues associated with this release. Also, a list of tested and unsupported features is provided below to help during development.
-
-[Feature List](azure-media-player-feature-list.md)
-
-[Known Issue List](azure-media-player-known-issues.md)
-
-[Changelog](azure-media-player-changelog.md "Changelog")
-
-<!-- Typescript definitions were moved to the samples repository.>-->
-[TypeScript Definitions (d.ts)](https://github.com/Azure-Samples/azure-media-player-samples "TypeScript Definitions" )
-
-## Next steps
--- [Azure Media Player Quickstart](azure-media-player-quickstart.md)
media-services Azure Media Player Url Rewriter https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/azure-media-player/azure-media-player-url-rewriter.md
Previously updated : 04/20/2020 Last updated : 04/05/2021
media-services Azure Media Player Writing Plugins https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/azure-media-player/azure-media-player-writing-plugins.md
Previously updated : 04/20/2020 Last updated : 04/05/2021
media-services Demos https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/azure-media-player/demos.md
Previously updated : 04/24/2020 Last updated : 04/05/2021
migrate Migrate Support Matrix Physical https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/migrate-support-matrix-physical.md
To assess physical servers, you create a project, and add the Azure Migrate: Dis
**Permissions:** - For Windows servers, use a domain account for domain-joined servers, and a local account for servers that are not domain-joined. The user account should be added to these groups: Remote Management Users, Performance Monitor Users, and Performance Log Users.
+ > [!Note]
+ > For Windows Server 2008 and 2008 R2, ensure that WMF 3.0 is installed on the servers and the domain/local account used to access the servers is added to these groups: Performance Monitor Users, Performance Log Users and WinRMRemoteWMIUsers.
- For Linux servers, you need a root account on the Linux servers that you want to discover. Alternately, you can set a non-root account with the required capabilities using the following commands: **Command** | **Purpose**
migrate Tutorial Discover Physical https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/tutorial-discover-physical.md
If you just created a free Azure account, you're the owner of your subscription.
Set up an account that the appliance can use to access the physical servers. - For **Windows servers**, use a domain account for domain-joined servers, and a local account for servers that are not domain-joined. The user account should be added to these groups: Remote Management Users, Performance Monitor Users, and Performance Log Users.
+ > [!Note]
+ > For Windows Server 2008 and 2008 R2, ensure that WMF 3.0 is installed on the servers and the domain/local account used to access the servers is added to these groups: Performance Monitor Users, Performance Log Users and WinRMRemoteWMIUsers.
+ - For **Linux servers**, you need a root account on the Linux servers that you want to discover. Alternately, you can set a non-root account with the required capabilities using the following commands: **Command** | **Purpose**
mysql Concepts Migrate Import Export https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/concepts-migrate-import-export.md
Create an empty database on the Azure Database for MySQL server by using MySQL W
To get connected, do the following:
-1. In the Azure portal, look for the connection information on the **Overview** pane of your Azure database for MySQL.
+1. In the Azure portal, look for the connection information on the **Overview** pane of your Azure Database for MySQL.
:::image type="content" source="./media/concepts-migrate-import-export/1_server-overview-name-login.png" alt-text="Screenshot of the Azure Database for MySQL server connection information in the Azure portal."::: 1. Add the connection information to MySQL Workbench.
- :::image type="content" source="./media/concepts-migrate-import-export/2_setup-new-connection.png" alt-text="MySQL Workbench connection string":::
+ :::image type="content" source="./media/concepts-migrate-import-export/2_setup-new-connection.png" alt-text="Screenshot of the MySQL Workbench connection string.":::
## Determine when to use import and export techniques
To get connected, do the following:
In the following scenarios, use MySQL tools to import and export databases into your MySQL database. For other tools, go to the "Migration Methods" section (page 22) of the [MySQL to Azure Database migration guide](https://github.com/Azure/azure-mysql/blob/master/MigrationGuide/MySQL%20Migration%20Guide_v1.1.pdf). -- When you need to selectively choose a few tables to import from an existing MySQL database into your Azure MySQL database, it's best to use the import and export technique. By doing so, you can omit any unneeded tables from the migration to save time and resources. For example, use the `--include-tables` or `--exclude-tables` switch with [mysqlpump](https://dev.mysql.com/doc/refman/5.7/en/mysqlpump.html#option_mysqlpump_include-tables) and the `--tables` switch with [mysqldump](https://dev.mysql.com/doc/refman/5.7/en/mysqldump.html#option_mysqldump_tables).
+- When you need to selectively choose a few tables to import from an existing MySQL database into your Azure MySQL database, it's best to use the import and export technique. By doing so, you can omit any unneeded tables from the migration to save time and resources. For example, use the `--include-tables` or `--exclude-tables` switch with [mysqlpump](https://dev.mysql.com/doc/refman/5.7/en/mysqlpump.html#option_mysqlpump_include-tables), and the `--tables` switch with [mysqldump](https://dev.mysql.com/doc/refman/5.7/en/mysqldump.html#option_mysqldump_tables).
- When you're moving database objects other than tables, explicitly create those objects. Include constraints (primary key, foreign key, and indexes), views, functions, procedures, triggers, and any other database objects that you want to migrate. - When you're migrating data from external data sources other than a MySQL database, create flat files and import them by using [mysqlimport](https://dev.mysql.com/doc/refman/5.7/en/mysqlimport.html). > [!Important]
-> Both Single Server and Flexible Server support *only the InnoDB storage engine*. Make sure that all tables in the database use the InnoDB storage engine when you're loading data into your Azure database for MySQL.
+> Both Single Server and Flexible Server support only the InnoDB storage engine. Make sure that all tables in the database use the InnoDB storage engine when you're loading data into your Azure Database for MySQL.
>
-> If your source database uses another storage engine, convert to the InnoDB engine prior before you migrate the database. For example, if you have a WordPress or web app that uses the MyISAM engine, first convert the tables by migrating the data into InnoDB tables. Use the clause `ENGINE=INNODB` to set the engine for creating a table, and then transfer the data into the compatible table before the migration.
+> If your source database uses another storage engine, convert to the InnoDB engine before you migrate the database. For example, if you have a WordPress or web app that uses the MyISAM engine, first convert the tables by migrating the data into InnoDB tables. Use the clause `ENGINE=INNODB` to set the engine for creating a table, and then transfer the data into the compatible table before the migration.
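For example, a minimal sketch (using the same placeholder table names as the snippet that follows) creates the InnoDB target table before the data is copied:

```sql
-- Create an empty copy of the MyISAM table, then convert it to InnoDB (placeholder names)
CREATE TABLE innodb_table LIKE myisam_table;
ALTER TABLE innodb_table ENGINE=InnoDB;
```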
```sql INSERT INTO innodb_table SELECT * FROM myisam_table ORDER BY primary_key_columns
You can use the **Data Export** pane to export your MySQL data.
1. Select the database objects to export, and configure the related options. 1. Select **Refresh** to load the current objects.
-1. Optionally, select **Advanced Options** at the upper right to refine the export operation. For example, add table locks, use replace instead of insert statements, and quote identifiers with backtick characters.
+1. Optionally, select **Advanced Options** at the upper right to refine the export operation. For example, add table locks, use `replace` instead of `insert` statements, and quote identifiers with backtick characters.
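    For illustration, with those options the exported statements might look like the following sketch (table and column names are placeholders):

    ```sql
    -- Illustrative export output: a table lock, REPLACE instead of INSERT, and backtick-quoted identifiers
    LOCK TABLES `customers` WRITE;
    REPLACE INTO `customers` (`id`, `city`) VALUES (1, 'Seattle');
    UNLOCK TABLES;
    ```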
1. Select **Start Export** to begin the export process.
network-watcher Traffic Analytics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/network-watcher/traffic-analytics.md
You can use traffic analytics for NSGs in any of the following supported regions
East US 2 East US 2 EUAP France Central
- Germany West Central
+ Germany West Central
Japan East Japan West Korea Central
The Log Analytics workspace must exist in the following regions:
:::column span=""::: East US 2 East US 2 EUAP
- France Central
- Germany West Central
+ France Central
Japan East Korea Central North Central US
postgresql Howto Migrate Using Dump And Restore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/howto-migrate-using-dump-and-restore.md
Title: Dump and restore - Azure Database for PostgreSQL - Single Server
-description: Describes how to extract a PostgreSQL database into a dump file and restore from a file created by pg_dump in Azure Database for PostgreSQL - Single Server.
+description: You can extract a PostgreSQL database into a dump file. Then, you can restore from a file created by pg_dump in Azure Database for PostgreSQL Single Server.
Last updated 09/22/2020
-# Migrate your PostgreSQL database using dump and restore
+# Migrate your PostgreSQL database by using dump and restore
[!INCLUDE[applies-to-postgres-single-flexible-server](includes/applies-to-postgres-single-flexible-server.md)]
-You can use [pg_dump](https://www.postgresql.org/docs/current/static/app-pgdump.html) to extract a PostgreSQL database into a dump file and [pg_restore](https://www.postgresql.org/docs/current/static/app-pgrestore.html) to restore the PostgreSQL database from an archive file created by pg_dump.
+You can use [pg_dump](https://www.postgresql.org/docs/current/static/app-pgdump.html) to extract a PostgreSQL database into a dump file. Then use [pg_restore](https://www.postgresql.org/docs/current/static/app-pgrestore.html) to restore the PostgreSQL database from an archive file created by `pg_dump`.
## Prerequisites+ To step through this how-to guide, you need:-- An [Azure Database for PostgreSQL server](quickstart-create-server-database-portal.md) with firewall rules to allow access and database under it.-- [pg_dump](https://www.postgresql.org/docs/current/static/app-pgdump.html) and [pg_restore](https://www.postgresql.org/docs/current/static/app-pgrestore.html) command-line utilities installed
+- An [Azure Database for PostgreSQL server](quickstart-create-server-database-portal.md), including firewall rules to allow access.
+- [pg_dump](https://www.postgresql.org/docs/current/static/app-pgdump.html) and [pg_restore](https://www.postgresql.org/docs/current/static/app-pgrestore.html) command-line utilities installed.
-Follow these steps to dump and restore your PostgreSQL database:
+## Create a dump file that contains the data to be loaded
-## Create a dump file using pg_dump that contains the data to be loaded
To back up an existing PostgreSQL database on-premises or in a VM, run the following command:+ ```bash pg_dump -Fc -v --host=<host> --username=<name> --dbname=<database name> -f <database>.dump ```
-For example, if you have a local server and a database called **testdb** in it
+For example, if you have a local server and a database called **testdb** in it, run:
+ ```bash pg_dump -Fc -v --host=localhost --username=masterlogin --dbname=testdb -f testdb.dump ```
+## Restore the data into the target database
+
+After you've created the target database, you can use the `pg_restore` command and the `--dbname` parameter to restore the data into the target database from the dump file.
-## Restore the data into the target Azure Database for PostgreSQL using pg_restore
-After you've created the target database, you can use the pg_restore command and the -d, --dbname parameter to restore the data into the target database from the dump file.
```bash pg_restore -v --no-owner --host=<server name> --port=<port> --username=<user-name> --dbname=<target database name> <database>.dump ```
-Including the --no-owner parameter causes all objects created during the restore to be owned by the user specified with --username. For more information, see the official PostgreSQL documentation on [pg_restore](https://www.postgresql.org/docs/9.6/static/app-pgrestore.html).
+Including the `--no-owner` parameter causes all objects created during the restore to be owned by the user specified with `--username`. For more information, see the [PostgreSQL documentation](https://www.postgresql.org/docs/9.6/static/app-pgrestore.html).
> [!NOTE]
-> If your PostgreSQL server requires TLS/SSL connections (on by default in Azure Database for PostgreSQL servers), set an environment variable `PGSSLMODE=require` so that the pg_restore tool connects with TLS. Without TLS, the error may read `FATAL: SSL connection is required. Please specify SSL options and retry.`
->
-> In the Windows command line, run the command `SET PGSSLMODE=require` before running the pg_restore command. In Linux or Bash run the command `export PGSSLMODE=require` before running the pg_restore command.
+> On Azure Database for PostgreSQL servers, TLS/SSL connections are on by default. If your PostgreSQL server requires TLS/SSL connections, set the environment variable `PGSSLMODE=require` so that the pg_restore tool connects with TLS. Without TLS, the error might read: "FATAL: SSL connection is required. Please specify SSL options and retry." In the Windows command line, run `SET PGSSLMODE=require` before running the `pg_restore` command. In Linux or Bash, run `export PGSSLMODE=require` before running the `pg_restore` command.
>
-In this example, restore the data from the dump file **testdb.dump** into the database **mypgsqldb** on target server **mydemoserver.postgres.database.azure.com**.
+In this example, restore the data from the dump file **testdb.dump** into the database **mypgsqldb**, on target server **mydemoserver.postgres.database.azure.com**.
-Here is an example for how to use this **pg_restore** for **Single Server**:
+Here's an example for how to use this `pg_restore` for Single Server:
```bash pg_restore -v --no-owner --host=mydemoserver.postgres.database.azure.com --port=5432 --username=mylogin@mydemoserver --dbname=mypgsqldb testdb.dump ```
-Here is an example for how to use this **pg_restore** for **Flexible Server**:
+
+Here's an example for how to use this `pg_restore` for Flexible Server:
```bash pg_restore -v --no-owner --host=mydemoserver.postgres.database.azure.com --port=5432 --username=mylogin --dbname=mypgsqldb testdb.dump ```-
-## Optimizing the migration process
+## Optimize the migration process
-One way to migrate your existing PostgreSQL database to Azure Database for PostgreSQL service is to back up the database on the source and restore it in Azure. To minimize the time required to complete the migration, consider using the following parameters with the backup and restore commands.
+One way to migrate your existing PostgreSQL database to Azure Database for PostgreSQL is to back up the database on the source and restore it in Azure. To minimize the time required to complete the migration, consider using the following parameters with the backup and restore commands.
> [!NOTE]
-> For detailed syntax information, see the articles [pg_dump](https://www.postgresql.org/docs/current/static/app-pgdump.html) and [pg_restore](https://www.postgresql.org/docs/current/static/app-pgrestore.html).
+> For detailed syntax information, see [pg_dump](https://www.postgresql.org/docs/current/static/app-pgdump.html) and [pg_restore](https://www.postgresql.org/docs/current/static/app-pgrestore.html).
> ### For the backup-- Take the backup with the -Fc switch so that you can perform the restore in parallel to speed it up. For example:
- ```bash
- pg_dump -h my-source-server-name -U source-server-username -Fc -d source-databasename -f Z:\Data\Backups\my-database-backup.dump
- ```
+Take the backup with the `-Fc` switch, so that you can perform the restore in parallel to speed it up. For example:
+
+```bash
+pg_dump -h my-source-server-name -U source-server-username -Fc -d source-databasename -f Z:\Data\Backups\my-database-backup.dump
+```
### For the restore-- We suggest that you move the backup file to an Azure VM in the same region as the Azure Database for PostgreSQL server you are migrating to, and do the pg_restore from that VM to reduce network latency. We also recommend that the VM is created with [accelerated networking](../virtual-network/create-vm-accelerated-networking-powershell.md) enabled. -- It should be already done by default, but open the dump file to verify that the create index statements are after the insert of the data. If it isn't the case, move the create index statements after the data is inserted.
+- Move the backup file to an Azure VM in the same region as the Azure Database for PostgreSQL server you are migrating to. Perform the `pg_restore` from that VM to reduce network latency. Create the VM with [accelerated networking](../virtual-network/create-vm-accelerated-networking-powershell.md) enabled.
-- Restore with the switches -Fc and -j *#* to parallelize the restore. *#* is the number of cores on the target server. You can also try with *#* set to twice the number of cores of the target server to see the impact. For example:
+- Open the dump file to verify that the `CREATE INDEX` statements come after the data is inserted. If they don't, move the `CREATE INDEX` statements after the data is inserted. This should already be the case by default, but it's a good idea to confirm.
-Here is an example for how to use this **pg_restore** for **Single Server**:
-```bash
- pg_restore -h my-target-server.postgres.database.azure.com -U azure-postgres-username@my-target-server -Fc -j 4 -d my-target-databasename Z:\Data\Backups\my-database-backup.dump
-```
-Here is an example for how to use this **pg_restore** for **Flexible Server**:
-```bash
- pg_restore -h my-target-server.postgres.database.azure.com -U azure-postgres-username@my-target-server -Fc -j 4 -d my-target-databasename Z:\Data\Backups\my-database-backup.dump
- ```
+- Restore with the switches `-Fc` and `-j` (with a number) to parallelize the restore. The number you specify is the number of cores on the target server. You can also try setting it to twice the number of cores of the target server to see the impact.
-- You can also edit the dump file by adding the command *set synchronous_commit = off;* at the beginning and the command *set synchronous_commit = on;* at the end. Not turning it on at the end, before the apps change the data, may result in subsequent loss of data.
+ Here's an example for how to use this `pg_restore` for Single Server:
+
+ ```bash
+ pg_restore -h my-target-server.postgres.database.azure.com -U azure-postgres-username@my-target-server -Fc -j 4 -d my-target-databasename Z:\Data\Backups\my-database-backup.dump
+ ```
+
+ Here's an example for how to use this `pg_restore` for Flexible Server:
+
+ ```bash
+ pg_restore -h my-target-server.postgres.database.azure.com -U azure-postgres-username@my-target-server -Fc -j 4 -d my-target-databasename Z:\Data\Backups\my-database-backup.dump
+ ```
+
+- You can also edit the dump file by adding the command `set synchronous_commit = off;` at the beginning, and the command `set synchronous_commit = on;` at the end. Not turning it on at the end, before the apps change the data, might result in subsequent loss of data.
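  A minimal sketch of the edited dump file (this assumes a plain-text SQL dump; the custom-format archive produced by `-Fc` can't be edited directly):

  ```sql
  set synchronous_commit = off;
  -- ... original statements from the dump file ...
  set synchronous_commit = on;
  ```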
- On the target Azure Database for PostgreSQL server, consider doing the following before the restore:
- - Turn off query performance tracking, since these statistics are not needed during the migration. You can do this by setting pg_stat_statements.track, pg_qs.query_capture_mode, and pgms_wait_sampling.query_capture_mode to NONE.
+
+ - Turn off query performance tracking. These statistics aren't needed during the migration. You can do this by setting `pg_stat_statements.track`, `pg_qs.query_capture_mode`, and `pgms_wait_sampling.query_capture_mode` to `NONE`.
- - Use a high compute and high memory sku, like 32 vCore Memory Optimized, to speed up the migration. You can easily scale back down to your preferred sku after the restore is complete. The higher the sku, the more parallelism you can achieve by increasing the corresponding `-j` parameter in the pg_restore command.
+ - Use a high compute and high memory SKU, like 32 vCore Memory Optimized, to speed up the migration. You can easily scale back down to your preferred SKU after the restore is complete. The higher the SKU, the more parallelism you can achieve by increasing the corresponding `-j` parameter in the `pg_restore` command.
- - More IOPS on the target server could improve the restore performance. You can provision more IOPS by increasing the server's storage size. This setting is not reversible, but consider whether a higher IOPS would benefit your actual workload in the future.
+ - More IOPS on the target server might improve the restore performance. You can provision more IOPS by increasing the server's storage size. This setting isn't reversible, but consider whether a higher IOPS would benefit your actual workload in the future.
Remember to test and validate these commands in a test environment before you use them in production. ## Next steps-- To migrate a PostgreSQL database using export and import, see [Migrate your PostgreSQL database using export and import](howto-migrate-using-export-and-import.md).+
+- To migrate a PostgreSQL database by using export and import, see [Migrate your PostgreSQL database using export and import](howto-migrate-using-export-and-import.md).
- For more information about migrating databases to Azure Database for PostgreSQL, see the [Database Migration Guide](https://aka.ms/datamigration).++
purview Create Purview Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/create-purview-dotnet.md
+
+ Title: Create Purview Account using .NET SDK
+description: Create an Azure Purview Account using .NET SDK.
+++
+ms.devlang: dotnet
+ Last updated : 4/2/2021++
+# Quickstart: Create a Purview Account using .NET SDK
+
+This quickstart describes how to use .NET SDK to create an Azure Purview account
+
+> [!IMPORTANT]
+> Azure Purview is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+## Prerequisites
+
+* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+* Your own [Azure Active Directory tenant](../active-directory/fundamentals/active-directory-access-create-new-tenant.md).
+
+* Your account must have permission to create resources in the subscription.
+
+* If you have an **Azure Policy** assignment that blocks all applications from creating a **Storage account** and an **EventHub namespace**, you need to make a policy exception by using a tag, which can be entered during the process of creating a Purview account. This is because each Purview account that's created also needs a managed resource group, and within that resource group, a storage account and an EventHub namespace. For more information, see [Create Catalog Portal](create-catalog-portal.md).
+
+### Visual Studio
+
+The walkthrough in this article uses Visual Studio 2019. The procedures for Visual Studio 2013, 2015, or 2017 differ slightly.
+
+### Azure .NET SDK
+
+Download and install [Azure .NET SDK](https://azure.microsoft.com/downloads/) on your machine.
+
+## Create an application in Azure Active Directory
+
+From the sections in *How to: Use the portal to create an Azure AD application and service principal that can access resources*, follow the instructions to do these tasks:
+
+1. In [Create an Azure Active Directory application](../active-directory/develop/howto-create-service-principal-portal.md#register-an-application-with-azure-ad-and-create-a-service-principal), create an application that represents the .NET application you are creating in this tutorial. For the sign-on URL, you can provide a dummy URL as shown in the article (`https://contoso.org/exampleapp`).
+2. In [Get values for signing in](../active-directory/develop/howto-create-service-principal-portal.md#get-tenant-and-app-id-values-for-signing-in), get the **application ID** and **tenant ID**, and note down these values that you use later in this tutorial.
+3. In [Certificates and secrets](../active-directory/develop/howto-create-service-principal-portal.md#authentication-two-options), get the **authentication key**, and note down this value that you use later in this tutorial.
+4. In [Assign the application to a role](../active-directory/develop/howto-create-service-principal-portal.md#assign-a-role-to-the-application), assign the application to the **Contributor** role at the subscription level so that the application can create data factories in the subscription.
+
+## Create a Visual Studio project
+
+Next, create a C# .NET console application in Visual Studio:
+
+1. Launch **Visual Studio**.
+2. In the Start window, select **Create a new project** > **Console App (.NET Framework)**. .NET version 4.5.2 or above is required.
+3. In **Project name**, enter a name for your project, such as **PurviewQuickStart**.
+4. Select **Create** to create the project.
+
+## Install NuGet packages
+
+1. Select **Tools** > **NuGet Package Manager** > **Package Manager Console**.
+2. In the **Package Manager Console** pane, run the following commands to install packages. For more information, see the [Microsoft.Azure.Management.Purview NuGet package](https://www.nuget.org/packages/Microsoft.Azure.Management.Purview/).
+
+ ```powershell
+ Install-Package Microsoft.Azure.Management.Purview
+ Install-Package Microsoft.Azure.Management.ResourceManager -IncludePrerelease
+ Install-Package Microsoft.IdentityModel.Clients.ActiveDirectory
+ ```
+
+## Create a Purview client
+
+1. Open **Program.cs**, include the following statements to add references to namespaces.
+
+ ```csharp
+ using System;
+ using System.Collections.Generic;
+ using System.Linq;
+ using Microsoft.Rest;
+ using Microsoft.Rest.Serialization;
+ using Microsoft.Azure.Management.ResourceManager;
+ using Microsoft.Azure.Management.Purview;
+ using Microsoft.Azure.Management.Purview.Models;
+ using Microsoft.IdentityModel.Clients.ActiveDirectory;
+ ```
+
+2. Add the following code to the **Main** method that sets the variables. Replace the placeholders with your own values. For a list of Azure regions in which Purview is currently available, search on **Azure Purview** and select the regions that interest you on the following page: [Products available by region](https://azure.microsoft.com/global-infrastructure/services/).
+
+ ```csharp
+ // Set variables
+ string tenantID = "<your tenant ID>";
+ string applicationId = "<your application ID>";
+ string authenticationKey = "<your authentication key for the application>";
+    string subscriptionId = "<your subscription ID where the Purview account will be created>";
+    string resourceGroup = "<your resource group where the Purview account will be created>";
+ string region = "<the location of your resource group>";
+ string purviewAccountName =
+ "<specify the name of purview account to create. It must be globally unique.>";
+ ```
+
+3. Add the following code to the **Main** method that creates an instance of **PurviewManagementClient** class. You use this object to create a Purview Account.
+
+ ```csharp
+ // Authenticate and create a purview management client
+ var context = new AuthenticationContext("https://login.windows.net/" + tenantID);
+ ClientCredential cc = new ClientCredential(applicationId, authenticationKey);
+ AuthenticationResult result = context.AcquireTokenAsync(
+ "https://management.azure.com/", cc).Result;
+ ServiceClientCredentials cred = new TokenCredentials(result.AccessToken);
+ var client = new PurviewManagementClient(cred)
+ {
+ SubscriptionId = subscriptionId
+ };
+ ```
+
+## Create a Purview Account
+
+Add the following code to the **Main** method that creates a **Purview Account**.
+
+```csharp
+ // Create a purview Account
+ Console.WriteLine("Creating Purview Account " + purviewAccountName + "...");
+ Account account = new Account()
+ {
+ Location = region,
+ Identity = new Identity(type: "SystemAssigned"),
+ Sku = new AccountSku(name: "Standard", capacity: 4)
+ };
+ try
+ {
+ client.Accounts.CreateOrUpdate(resourceGroup, purviewAccountName, account);
+ Console.WriteLine(client.Accounts.Get(resourceGroup, purviewAccountName).ProvisioningState);
+ }
+ catch (ErrorResponseModelException purviewException)
+ {
+ Console.WriteLine(purviewException.StackTrace);
+ }
+ Console.WriteLine(
+ SafeJsonConvert.SerializeObject(account, client.SerializationSettings));
+ while (client.Accounts.Get(resourceGroup, purviewAccountName).ProvisioningState ==
+ "PendingCreation")
+ {
+ System.Threading.Thread.Sleep(1000);
+ }
+ Console.WriteLine("\nPress any key to exit...");
+ Console.ReadKey();
+```
+
+## Run the code
+
+Build and start the application, then verify the execution.
+
+The console prints the progress of creating Purview Account.
+
+### Sample output
+
+```console
+Creating Purview Account testpurview...
+Succeeded
+{
+ "sku": {
+ "capacity": 4,
+ "name": "Standard"
+ },
+ "identity": {
+ "type": "SystemAssigned"
+ },
+ "location": "southcentralus"
+}
+
+Press any key to exit...
+```
+
+## Verify the output
+
+Go to the **Purview accounts** page in the [Azure portal](https://portal.azure.com) and verify the account created using the above code.
+
+## Delete Purview account
+
+To programmatically delete a Purview Account, add the following lines of code to the program:
+
+```csharp
+ Console.WriteLine("Deleting the Purview Account");
+ client.Accounts.Delete(resourceGroup, purviewAccountName);
+```
+
+## Check if Purview account name is available
+
+To check the availability of a Purview account name, use the following code:
+
+```csharp
+ CheckNameAvailabilityRequest checkNameAvailabilityRequest = new CheckNameAvailabilityRequest()
+ {
+ Name = purviewAccountName,
+ Type = "Microsoft.Purview/accounts"
+ };
+ Console.WriteLine("Check Purview account name");
+ Console.WriteLine(client.Accounts.CheckNameAvailability(checkNameAvailabilityRequest).NameAvailable);
+```
+
+The above code will print 'True' if the name is available and 'False' if the name is not available.
++
+## Next steps
+
+The code in this tutorial creates a Purview account, deletes it, and checks whether a Purview account name is available. You can now download the .NET SDK and learn about other resource provider actions that you can perform for a Purview account.
+
+Advance to the next article to learn how to allow users to access your Azure Purview Account.
+
+> [!div class="nextstepaction"]
+> [Add users to your Azure Purview Account](catalog-permissions.md)
purview Create Purview Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/create-purview-python.md
+
+ Title: 'Quickstart: Create a Purview Account using Python'
+description: Create an Azure Purview Account using Python.
++++
+ms.devlang: python
+ Last updated : 04/02/2021++
+# Quickstart: Create a Purview Account using Python
+
+In this quickstart, you create a Purview account using Python.
+
+## Prerequisites
+
+* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+* Your own [Azure Active Directory tenant](../active-directory/fundamentals/active-directory-access-create-new-tenant.md).
+
+* Your account must have permission to create resources in the subscription.
+
+* If you have an **Azure Policy** assignment that blocks all applications from creating a **Storage account** and an **EventHub namespace**, you need to make a policy exception by using a tag, which can be entered during the process of creating a Purview account. This is because each Purview account that's created also needs a managed resource group, and within that resource group, a storage account and an EventHub namespace. For more information, see [Create Catalog Portal](create-catalog-portal.md).
++
+## Install the Python package
+
+1. Open a terminal or command prompt with administrator privileges. 
+2. First, install the Python package for Azure management resources:
+
+    ```console
+ pip install azure-mgmt-resource
+ ```
+3. To install the Python package for Purview, run the following command:
+
+    ```console
+ pip install azure-mgmt-purview
+ ```
+
+ The [Python SDK for Purview](https://github.com/Azure/azure-sdk-for-python) supports Python 2.7, 3.3, 3.4, 3.5, 3.6 and 3.7.
+
+4. To install the Python package for Azure Identity authentication, run the following command:
+
+    ```console
+ pip install azure-identity
+ ```
+ > [!NOTE]
+ > The "azure-identity" package might have conflicts with "azure-cli" on some common dependencies. If you meet any authentication issue, remove "azure-cli" and its dependencies, or use a clean machine without installing "azure-cli" package to make it work.
+
+## Create a purview client
+
+1. Create a file named **purview.py**. Add the following statements to add references to namespaces.
+
+ ```python
+ from azure.identity import ClientSecretCredential
+ from azure.mgmt.resource import ResourceManagementClient
+ from azure.mgmt.purview import PurviewManagementClient
+ from azure.mgmt.purview.models import *
+ from datetime import datetime, timedelta
+ import time
+ ```
+
+2. Add the following code to the **main** method that creates an instance of the PurviewManagementClient class. You use this object to create a Purview account, delete a Purview account, check name availability, and perform other resource provider operations.
+
+ ```python
+ def main():
+
+ # Azure subscription ID
+ subscription_id = '<subscription ID>'
+
+ # This program creates this resource group. If it's an existing resource group, comment out the code that creates the resource group
+ rg_name = '<resource group>'
+
+ # The purview name. It must be globally unique.
+ purview_name = '<purview account name>'
+
+ # Location name, where Purview account must be created.
+ location = '<location name>'
+
+ # Specify your Active Directory client ID, client secret, and tenant ID
+ credentials = ClientSecretCredential(client_id='<service principal ID>', client_secret='<service principal key>', tenant_id='<tenant ID>')
+    resource_client = ResourceManagementClient(credentials, subscription_id)
+ purview_client = PurviewManagementClient(credentials, subscription_id)
+ ```
+
+## Create a purview account
+
+Add the following code to the **main** method that creates a **Purview account**. If your resource group already exists, comment out the first `create_or_update` statement.
+
+```python
+    # create the resource group
+    # comment out if the resource group already exists
+    rg_params = {'location': location}
+    resource_client.resource_groups.create_or_update(rg_name, rg_params)
+
+ #Create a purview
+ identity = Identity(type= "SystemAssigned")
+ sku = AccountSku(name= 'Standard', capacity= 4)
+ purview_resource = Account(identity=identity,sku=sku,location =location )
+
+ try:
+ pa = (purview_client.accounts.begin_create_or_update(rg_name, purview_name, purview_resource)).result()
+ print("location:", pa.location, " Purview Account Name: ", pa.name, " Id: " , pa.id ," tags: " , pa.tags)
+    except:
+        print("Error in submitting job to create account")
+        raise
+
+ while (getattr(pa,'provisioning_state')) != "Succeeded" :
+ pa = (purview_client.accounts.get(rg_name, purview_name))
+ print(getattr(pa,'provisioning_state'))
+        if getattr(pa,'provisioning_state') == "Failed" :
+ print("Error in creating Purview account")
+ break
+ time.sleep(30)
+
+```
+
+Now, add the following statement to invoke the **main** method when the program is run:
+
+```python
+# Start the main method
+main()
+```
+
+## Full script
+
+Here is the full Python code:
+
+```python
+
+ from azure.identity import ClientSecretCredential
+ from azure.mgmt.resource import ResourceManagementClient
+ from azure.mgmt.purview import PurviewManagementClient
+ from azure.mgmt.purview.models import *
+ from datetime import datetime, timedelta
+ import time
+
+ # Azure subscription ID
+ subscription_id = '<subscription ID>'
+
+ # This program creates this resource group. If it's an existing resource group, comment out the code that creates the resource group
+ rg_name = '<resource group>'
+
+ # The purview name. It must be globally unique.
+ purview_name = '<purview account name>'
+
+ # Specify your Active Directory client ID, client secret, and tenant ID
+ credentials = ClientSecretCredential(client_id='<service principal ID>', client_secret='<service principal key>', tenant_id='<tenant ID>')
+    resource_client = ResourceManagementClient(credentials, subscription_id)
+ purview_client = PurviewManagementClient(credentials, subscription_id)
+
+    # create the resource group
+    # comment out if the resource group already exists
+    rg_params = {'location': 'southcentralus'}
+    resource_client.resource_groups.create_or_update(rg_name, rg_params)
+
+ #Create a purview
+ identity = Identity(type= "SystemAssigned")
+ sku = AccountSku(name= 'Standard', capacity= 4)
+ purview_resource = Account(identity=identity,sku=sku,location ="southcentralus" )
+
+ try:
+ pa = (purview_client.accounts.begin_create_or_update(rg_name, purview_name, purview_resource)).result()
+ print("location:", pa.location, " Purview Account Name: ", purview_name, " Id: " , pa.id ," tags: " , pa.tags)
+    except:
+        print("Error in submitting job to create account")
+        raise
+
+ while (getattr(pa,'provisioning_state')) != "Succeeded" :
+ pa = (purview_client.accounts.get(rg_name, purview_name))
+ print(getattr(pa,'provisioning_state'))
+        if getattr(pa,'provisioning_state') == "Failed" :
+ print("Error in creating Purview account")
+ break
+ time.sleep(30)
+
+# Start the main method
+main()
+```
+
+## Run the code
+
+Build and start the application, then verify the execution.
+
+The console prints the progress of creating the Purview account. Wait until the provisioning state changes to **Succeeded**.
+
+Here is the sample output:
+
+```console
+location: southcentralus Purview Account Name: purviewpython7 Id: /subscriptions/8c2c7b23-848d-40fe-b817-690d79ad9dfd/resourceGroups/Demo_Catalog/providers/Microsoft.Purview/accounts/purviewpython7 tags: None
+Creating
+Creating
+Succeeded
+```
+
+## Verify the output
+
+Go to the **Purview accounts** page in the Azure portal and verify the account created using the above code.
+
+## Delete Purview Account
+
+To delete the Purview account, add the following code to the program:
+
+```python
+pa = purview_client.accounts.begin_delete(rg_name, purview_name).result()
+```
+
+## Next steps
+
+The code in this tutorial creates and deletes a Purview account. You can now download the Python SDK and learn about other resource provider actions that you can perform for a Purview account.
+
+Advance to the next article to learn how to allow users to access your Azure Purview Account.
+
+> [!div class="nextstepaction"]
+> [Add users to your Azure Purview Account](catalog-permissions.md)
purview Register Scan Amazon S3 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-amazon-s3.md
Previously updated : 04/04/2021 Last updated : 04/05/2021 # Customer intent: As a security officer, I need to understand how to use the Azure Purview connector for Amazon S3 service to set up, configure, and scan my Amazon S3 buckets.
The following table maps the regions where your data is stored to the region wher
Ensure that you've performed the following prerequisites before adding your Amazon S3 buckets as Purview data sources and scanning your S3 data. -- You need to be an Azure Purview Data Source Admin.--- When adding your buckets as Purview resources, you'll need the values of your [AWS ARN](#retrieve-your-new-role-arn), [bucket name](#retrieve-your-amazon-s3-bucket-name), and sometimes your [AWS account ID](#locate-your-aws-account-id).
+> [!div class="checklist"]
+> * You need to be an Azure Purview Data Source Admin.
+> * [Create a Purview account](#create-a-purview-account) if you don't yet have one
+> * [Create a Purview credential for your AWS bucket scan](#create-a-purview-credential-for-your-aws-bucket-scan)
+> * [Create a new AWS role for use with Purview](#create-a-new-aws-role-for-purview)
+> * [Configure scanning for encrypted Amazon S3 buckets](#configure-scanning-for-encrypted-amazon-s3-buckets), if relevant
+> * When adding your buckets as Purview resources, you'll need the values of your [AWS ARN](#retrieve-your-new-role-arn), [bucket name](#retrieve-your-amazon-s3-bucket-name), and sometimes your [AWS account ID](#locate-your-aws-account-id).
### Create a Purview account
For more information about Purview credentials, see the [Azure Purview public pr
![Select the ReadOnlyAccess policy for the new Amazon S3 scanning role.](./media/register-scan-amazon-s3/aws-permission-role-amazon-s3.png)
+ > [!IMPORTANT]
+ > The **AmazonS3ReadOnlyAccess** policy provides minimum permissions required for scanning your S3 buckets, and may include other permissions as well.
+ >
+ >To apply only the minimum permissions required for scanning your buckets, create a new policy with the permissions listed in [Minimum permissions for your AWS policy](#minimum-permissions-for-your-aws-policy), depending on whether you want to scan a single bucket or all the buckets in your account.
+ >
+ >Apply your new policy to the role instead of **AmazonS3ReadOnlyAccess.**
+ 1. In the **Add tags (optional)** area, you can optionally choose to create a meaningful tag for this new role. Useful tags enable you to organize, track, and control access for each role you create. Enter a new key and value for your tag as needed. When you're done, or if you want to skip this step, select **Next: Review** to review the role details and complete the role creation.
Use the other areas of Purview to find out details about the content in your dat
For more information, see the [Understand Insights in Azure Purview](concept-insights.md).
+## Minimum permissions for your AWS policy
+
+The default procedure for [creating an AWS role for Purview](#create-a-new-aws-role-for-purview) to use when scanning your S3 buckets uses the **AmazonS3ReadOnlyAccess** policy.
+
+The **AmazonS3ReadOnlyAccess** policy provides minimum permissions required for scanning your S3 buckets, and may include other permissions as well.
+
+To apply only the minimum permissions required for scanning your buckets, create a new policy with the permissions listed in the following sections, depending on whether you want to scan a single bucket or all the buckets in your account.
+
+Apply your new policy to the role instead of **AmazonS3ReadOnlyAccess.**
+
+### Individual buckets
+
+When scanning individual S3 buckets, minimum AWS permissions include:
+
+- `GetBucketLocation`
+- `GetBucketPublicAccessBlock`
+- `GetObject`
+- `ListBucket`
+
+Make sure to define your resource with the specific bucket name.
+For example:
+
+```json
+{
+"Version": "2012-10-17",
+"Statement": [
+ {
+ "Effect": "Allow",
+ "Action": [
+ "s3:GetBucketLocation",
+ "s3:GetBucketPublicAccessBlock",
+ "s3:GetObject",
+ "s3:ListBucket"
+ ],
+ "Resource": "arn:aws:s3:::<bucketname>"
+ },
+ {
+ "Effect": "Allow",
+ "Action": [
+ "s3:GetObject"
+ ],
+    "Resource": "arn:aws:s3:::<bucketname>/*"
+ }
+ ]
+}
+```
+
+### All buckets in your account
+
+When scanning all the buckets in your AWS account, minimum AWS permissions include:
+
+- `GetBucketLocation`
+- `GetBucketPublicAccessBlock`
+- `GetObject`
+- `ListAllMyBuckets`
- `ListBucket`
+
+Make sure to define your resource with a wildcard. For example:
+
+```json
+{
+"Version": "2012-10-17",
+"Statement": [
+ {
+ "Effect": "Allow",
+ "Action": [
+ "s3:GetBucketLocation",
+ "s3:GetBucketPublicAccessBlock",
+ "s3:GetObject",
+ "s3:ListAllMyBuckets",
+ "s3:ListBucket"
+ ],
+ "Resource": "*"
+ },
+ {
+ "Effect": "Allow",
+ "Action": [
+ "s3:GetObject"
+ ],
+ "Resource": "*"
+ }
+ ]
+}
+```
+ ## Next steps Learn more about Azure Purview Insight reports:
role-based-access-control Role Assignments Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/role-based-access-control/role-assignments-portal.md
If you need to assign administrator roles in Azure Active Directory, see [View a
The Add role assignment pane opens.
- ![Add role assignment pane](./media/shared/add-role-assignment.png)
+ ![Add role assignment page](../../includes/role-based-access-control/media/add-role-assignment-page.png)
## Step 3: Select the appropriate role
route-server Quickstart Configure Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/route-server/quickstart-configure-template.md
+
+ Title: 'Quickstart: Create an Azure Route Server by using an Azure Resource Manager template (ARM template)'
+description: This quickstart shows you how to create an Azure Route Server by using Azure Resource Manager template (ARM template).
+++++ Last updated : 04/05/2021+++
+# Quickstart: Create an Azure Route Server using an ARM template
+
+This quickstart describes how to use an Azure Resource Manager template (ARM Template) to deploy an Azure Route Server into a new or existing virtual network.
++
+If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
+
+[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2F101-route-server%2Fazuredeploy.json)
+
+## Prerequisites
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+## Review the template
+
+The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/101-route-server).
+
+In this quickstart, you'll deploy an Azure Route Server into a new or existing virtual network. A dedicated subnet named `RouteServerSubnet` will be created to host the Route Server. The Route Server will also be configured with the Peer ASN and Peer IP to establish a BGP peering.
++
+Multiple Azure resources have been defined in the template:
+
+* [**Microsoft.Network/virtualNetworks**](/azure/templates/microsoft.network/virtualNetworks)
+* [**Microsoft.Network/virtualNetworks/subnets**](/azure/templates/microsoft.network/virtualNetworks/subnets) (two subnets, one named `routeserversubnet`)
+* [**Microsoft.Network/virtualHubs**](/azure/templates/microsoft.network/virtualhubs) (Route Server deployment)
+* [**Microsoft.Network/virtualHubs/ipConfigurations**](/azure/templates/microsoft.network/virtualhubs/ipConfigurations)
+* [**Microsoft.Network/virtualHubs/bgpConnections**](/azure/templates/microsoft.network/virtualhubs/bgpConnections) (Peer ASN and Peer IP configuration)
++
+To find more related templates, see [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/?resourceType=Microsoft.Network&pageNumber=1&sort=Popular).
+
+## Deploy the template
+
+1. Select **Try it** from the following code block to open Azure Cloud Shell, and then follow the instructions to sign in to Azure.
+
+ ```azurepowershell-interactive
+ $projectName = Read-Host -Prompt "Enter a project name that is used for generating resource names"
+ $location = Read-Host -Prompt "Enter the location (i.e. centralus)"
+ $templateUri = "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/101-route-server/azuredeploy.json"
+
+ $resourceGroupName = "${projectName}rg"
+
+ New-AzResourceGroup -Name $resourceGroupName -Location "$location"
+ New-AzResourceGroupDeployment -ResourceGroupName $resourceGroupName -TemplateUri $templateUri
+
+ Read-Host -Prompt "Press [ENTER] to continue ..."
+ ```
+
+ Wait until you see the prompt from the console.
+
+1. Select **Copy** from the previous code block to copy the PowerShell script.
+
+1. Right-click the shell console pane and then select **Paste**.
+
+1. Enter the values.
+
+ The resource group name is the project name with **rg** appended.
+
+ It takes about 20 minutes to deploy the template. When completed, the output is similar to:
+
+ :::image type="content" source="./media/quickstart-configure-template/powershell-output.png" alt-text="Route Server Resource Manager template PowerShell deployment output.":::
+
+Azure PowerShell is used to deploy the template. In addition to Azure PowerShell, you can also use the Azure portal, Azure CLI, and REST API. To learn other deployment methods, see [Deploy templates](../azure-resource-manager/templates/deploy-portal.md).
+
+## Validate the deployment
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Select **Resource groups** from the left pane.
+
+1. Select the resource group that you created in the previous section. The default resource group name is the project name with **rg** appended.
+
+1. The resource group should contain only the virtual network:
+
+ :::image type="content" source="./media/quickstart-configure-template/resource-group.png" alt-text="Route Server deployment resource group with virtual network.":::
+
+1. Go to https://aka.ms/routeserver.
+
+1. Select the Route Server named **routeserver** to verify that the deployment was successful.
+
+ :::image type="content" source="./media/quickstart-configure-template/deployment.png" alt-text="Screenshot of Route Server overview page.":::
+
+## Clean up resources
+
+When you no longer need the resources that you created with the Route Server, delete the resource group. This removes the Route Server and all the related resources.
+
+To delete the resource group, call the `Remove-AzResourceGroup` cmdlet:
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name <your resource group name>
+```
+
+## Next steps
+
+In this quickstart, you created a:
+
+* Route Server
+* Virtual Network
+* Subnet
+
+After you create the Azure Route Server, continue to learn about how Azure Route Server interacts with ExpressRoute and VPN Gateways:
+
+> [!div class="nextstepaction"]
+> [Azure ExpressRoute and Azure VPN support](expressroute-vpn-support.md)
security-center Defender For Kubernetes Azure Arc https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/defender-for-kubernetes-azure-arc.md
+
+ Title: Protect hybrid and multicloud Kubernetes deployments with Azure Defender for Kubernetes
+description: Use Azure Defender for Kubernetes with your on-premises and multicloud Kubernetes clusters
++++ Last updated : 04/05/2021+++
+# Defend Azure Arc enabled Kubernetes clusters running in on-premises and multicloud environments
+
+To defend your on-premises clusters with the same threat detection capabilities offered today for Azure Kubernetes Service clusters, enable Azure Arc on the clusters and deploy the **Azure Defender for Kubernetes cluster extension**.
+
+You can also use the extension to protect Kubernetes clusters deployed on machines in other cloud providers, although not on their managed Kubernetes services.
+
+> [!TIP]
+> We've put some sample files to help with the installation process in [Installation examples on GitHub](https://aka.ms/kubernetes-extension-installation-examples).
+
+## Availability
+
+| Aspect | Details |
+|--||
+| Release state | **Preview** [!INCLUDE [Legalese](../../includes/security-center-preview-legal-text.md)]|
+| Required roles and permissions | [Security admin](../role-based-access-control/built-in-roles.md#security-admin) can dismiss alerts<br>[Security reader](../role-based-access-control/built-in-roles.md#security-reader) can view findings |
+| Pricing | Requires [Azure Defender for Kubernetes](defender-for-kubernetes-introduction.md) |
+| Supported Kubernetes distributions | [Azure Kubernetes Service on Azure Stack HCI](/azure-stack/aks-hci/overview)<br>[Kubernetes](https://kubernetes.io/docs/home/)<br> [AKS Engine](https://github.com/Azure/aks-engine)<br> [Red Hat OpenShift](https://www.openshift.com/learn/topics/kubernetes/) (version 4.6 or newer) |
+| Limitations | Azure Arc enabled Kubernetes and the Azure Defender extension **don't support** managed Kubernetes offerings like Google Kubernetes Engine and Elastic Kubernetes Service. [Azure Defender is natively available for Azure Kubernetes Service (AKS)](defender-for-kubernetes-introduction.md) and doesn't require connecting the cluster to Azure Arc. |
+| Environments and regions | Availability for this extension is the same as [Azure Arc enabled Kubernetes](../azure-arc/kubernetes/overview.md)|
+
+## Architecture overview
+
+For all Kubernetes clusters other than AKS, you'll need to connect your cluster to Azure Arc. Once connected, Azure Defender for Kubernetes can be deployed on [Azure Arc enabled Kubernetes](../azure-arc/kubernetes/overview.md) resources as a [cluster extension](../azure-arc/kubernetes/extensions.md).
+
+The extension components collect Kubernetes audit logs data from all control plane nodes in the cluster and send them to the Azure Defender for Kubernetes backend in the cloud for further analysis. The extension is registered with a Log Analytics workspace used as a data pipeline, but the audit log data isn't stored in the Log Analytics workspace.
+
+This diagram shows the interaction between Azure Defender for Kubernetes and the Azure Arc enabled Kubernetes cluster:
++
+## Prerequisites
+
+- Azure Defender for Kubernetes is [enabled on your subscription](enable-azure-defender.md)
+- Your Kubernetes cluster is [connected to Azure Arc](../azure-arc/kubernetes/quickstart-connect-cluster.md)
+- You've met the prerequisites listed under the [generic cluster extensions documentation](../azure-arc/kubernetes/extensions.md#prerequisites).
+
+## Deploy the Azure Defender extension
+
+You can deploy the Azure Defender extension using a range of methods. For detailed steps, select the relevant tab.
+
+### [**Azure portal**](#tab/k8s-deploy-asc)
+
+### Use the "Quick fix" option from the Security Center recommendation
+
+A dedicated recommendation in Azure Security Center provides:
+
+- **Visibility** about which of your clusters has the Defender for Kubernetes extension deployed
+- **A "Quick fix" option** to deploy it to those clusters without the extension
+
+1. From Azure Security Center's recommendations page, open the **Enable Azure Defender** security control.
+
+1. Use the filter to find the recommendation named **Azure Arc enabled Kubernetes clusters should have Azure Defender's extension installed**.
+
+ :::image type="content" source="media/defender-for-kubernetes-azure-arc/extension-recommendation.png" alt-text="Azure Security Center's recommendation for deploying the Azure Defender extension for Azure Arc enabled Kubernetes clusters." lightbox="media/defender-for-kubernetes-azure-arc/extension-recommendation.png":::
+
+ > [!TIP]
+    > Notice the Quick fix icon in the actions column.
+
+1. Select the recommendation to see the details of the healthy and unhealthy resources - clusters with and without the extension.
+
+1. From the unhealthy resources list, select a cluster and select **Remediate** to open the pane with the remediation options.
+
+1. Select the relevant Log Analytics workspace and select **Remediate x resource**.
+
+ :::image type="content" source="media/defender-for-kubernetes-azure-arc/security-center-deploy-extension.gif" alt-text="Deploy Azure Defender extension for Azure Arc with Security Center's quick fix option.":::
++
+### [**Azure CLI**](#tab/k8s-deploy-cli)
+
+### Use Azure CLI to deploy the Azure Defender extension
+
+1. Sign in to Azure:
+
+ ```azurecli
+ az login
+ az account set --subscription <your-subscription-id>
+ ```
+
+ > [!IMPORTANT]
+ > Ensure that you use the same subscription ID for ``<your-subscription-id>`` as the one that was used when connecting your cluster to Azure Arc.
+
+1. Run the following command to deploy the extension on top of your Azure Arc enabled Kubernetes cluster:
+
+ ```azurecli
+ az k8s-extension create --name microsoft.azuredefender.kubernetes --cluster-type connectedClusters --cluster-name <cluster-name> --resource-group <resource-group> --extension-type microsoft.azuredefender.kubernetes
+ ```
+
+    The following table describes the supported configuration settings for the Azure Defender extension type:
+
+ | Property | Description |
+ |-|-|
+ | logAnalyticsWorkspaceResourceID | **Optional**. Full resource ID of your own Log Analytics workspace.<br>When not provided, the default workspace of the region will be used.<br><br>To get the full resource ID, run the following command to display the list of workspaces in your subscriptions in the default JSON format:<br>```az resource list --resource-type Microsoft.OperationalInsights/workspaces -o json```<br><br>The Log Analytics workspace resource ID has the following syntax:<br>/subscriptions/{your-subscription-id}/resourceGroups/{your-resource-group}/providers/Microsoft.OperationalInsights/workspaces/{your-workspace-name}. <br>Learn more in [Log Analytics workspaces](../azure-monitor/logs/data-platform-logs.md#log-analytics-workspaces) |
+ | auditLogPath |**Optional**. The full path to the audit log files.<br>When not provided, the default path ``/var/log/kube-apiserver/audit.log`` will be used.<br>For AKS Engine, the standard path is ``/var/log/kubeaudit/audit.log`` |
+
+    The following command shows example usage of all the optional fields:
+
+ ```azurecli
+ az k8s-extension create --name microsoft.azuredefender.kubernetes --cluster-type connectedClusters --cluster-name <your-connected-cluster-name> --resource-group <your-rg> --extension-type microsoft.azuredefender.kubernetes --configuration-settings logAnalyticsWorkspaceResourceID=<log-analytics-workspace-resource-id> auditLogPath=<your-auditlog-path>
+ ```
+
+### [**Resource Manager**](#tab/k8s-deploy-resource-manager)
+
+### Use Azure Resource Manager to deploy the Azure Defender extension
+
+To use Azure Resource Manager to deploy the Azure Defender extension, you'll need a Log Analytics workspace on your subscription. Learn more in [Log Analytics workspaces](../azure-monitor/logs/data-platform-logs.md#log-analytics-workspaces).
+
+You can use the **azure-defender-extension-arm-template.json** Resource Manager template from Security Center's [installation examples](https://aka.ms/kubernetes-extension-installation-examples).
+
+> [!TIP]
+> If you're new to Resource Manager templates, start here: [What are Azure Resource Manager templates?](../azure-resource-manager/templates/overview.md)
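+
+As an illustration only, a minimal template for this deployment could look like the sketch below. It assumes a resource-group deployment, placeholder parameter values, and the same `2020-07-01-preview` API version used by the REST example in this article; the **azure-defender-extension-arm-template.json** file in the installation examples is the authoritative version.
+
+```json
+{
+  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+  "contentVersion": "1.0.0.0",
+  "parameters": {
+    "clusterName": { "type": "string" },
+    "workspaceId": { "type": "string" },
+    "workspaceKey": { "type": "securestring" }
+  },
+  "resources": [
+    {
+      "type": "Microsoft.KubernetesConfiguration/extensions",
+      "apiVersion": "2020-07-01-preview",
+      "name": "microsoft.azuredefender.kubernetes",
+      "scope": "[concat('Microsoft.Kubernetes/connectedClusters/', parameters('clusterName'))]",
+      "properties": {
+        "extensionType": "microsoft.azuredefender.kubernetes",
+        "configurationSettings": {
+          "logAnalytics.workspaceId": "[parameters('workspaceId')]"
+        },
+        "configurationProtectedSettings": {
+          "logAnalytics.key": "[parameters('workspaceKey')]"
+        }
+      }
+    }
+  ]
+}
+```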
+
+### [**REST API**](#tab/k8s-deploy-api)
+
+### Use REST API to deploy the Azure Defender extension
+
+To use the REST API to deploy the Azure Defender extension, you'll need a Log Analytics workspace on your subscription. Learn more in [Log Analytics workspaces](../azure-monitor/logs/data-platform-logs.md#log-analytics-workspaces).
+
+> [!TIP]
+> The simplest way to use the API to deploy the Azure Defender extension is with the supplied **Postman Collection JSON** example from Security Center's [installation examples](https://aka.ms/kubernetes-extension-installation-examples).
+- To modify the Postman Collection JSON, or to manually deploy the extension with the REST API, run the following PUT command:
+
+ ```rest
+ PUT https://management.azure.com/subscriptions/{{Subscription Id}}/resourcegroups/{{Resource Group}}/providers/Microsoft.Kubernetes/connectedClusters/{{Cluster Name}}/providers/Microsoft.KubernetesConfiguration/extensions/microsoft.azuredefender.kubernetes?api-version=2020-07-01-preview
+ ```
+
+ Where:
+
+ | Name | In | Required | Type | Description |
+ |--||-|--|-|
+ | Subscription ID | path | True | string | Your Azure Arc enabled Kubernetes resource's subscription ID |
+ | Resource Group | path | True | string | Name of the resource group containing your Azure Arc enabled Kubernetes resource |
+ | Cluster Name | path | True | string | Name of your Azure Arc enabled Kubernetes resource |
++
+ For **Authentication**, your header must have a Bearer token (as with other Azure APIs). To get a bearer token, run the following command:
+
+    ```azurecli
+    az account get-access-token --subscription <your-subscription-id>
+    ```
+
+    Use the following structure for the body of your message:
+
+    ```json
+    {
+        "properties": {
+            "extensionType": "microsoft.azuredefender.kubernetes",
+            "configurationSettings": {
+                "logAnalytics.workspaceId": "YOUR-WORKSPACE-ID"
+                // , "auditLogPath": "PATH/TO/AUDITLOG"
+            },
+            "configurationProtectedSettings": {
+                "logAnalytics.key": "YOUR-WORKSPACE-KEY"
+            }
+        }
+    }
+    ```
+
+    The following table describes the properties:
+
+ | Property | Description |
+ | -- | -- |
+ | logAnalytics.workspaceId | Workspace ID of the Log Analytics resource |
+ | logAnalytics.key | Key of the Log Analytics resource |
+ | auditLogPath | **Optional**. The full path to the audit log files. The default value is ``/var/log/kube-apiserver/audit.log`` |
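+
+    If you prefer working from a shell rather than Postman, the request can be submitted roughly as follows. This is a sketch only: it assumes a bash-like shell with `curl` available, that the JSON body above is saved locally as `body.json`, and that you substitute your own placeholder values.
+
+    ```console
+    # Assumption: Azure CLI is installed and you're signed in to the right subscription
+    TOKEN=$(az account get-access-token --subscription <your-subscription-id> --query accessToken -o tsv)
+    curl -X PUT \
+      "https://management.azure.com/subscriptions/<subscription-id>/resourcegroups/<resource-group>/providers/Microsoft.Kubernetes/connectedClusters/<cluster-name>/providers/Microsoft.KubernetesConfiguration/extensions/microsoft.azuredefender.kubernetes?api-version=2020-07-01-preview" \
+      -H "Authorization: Bearer $TOKEN" \
+      -H "Content-Type: application/json" \
+      -d @body.json
+    ```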
+++
+## Verify the deployment
+
+To verify that your cluster has the Azure Defender extension installed on it, follow the steps in one of the tabs below:
+
+### [**Azure portal - Security Center**](#tab/k8s-verify-asc)
+
+### Use Security Center recommendation to verify the status of your extension
+
+1. From Azure Security Center's recommendations page, open the **Enable Azure Defender** security control.
+
+1. Select the recommendation named **Azure Arc enabled Kubernetes clusters should have Azure Defender's extension installed**.
+
+ :::image type="content" source="media/defender-for-kubernetes-azure-arc/extension-recommendation.png" alt-text="Azure Security Center's recommendation for deploying the Azure Defender extension for Azure Arc enabled Kubernetes clusters." lightbox="media/defender-for-kubernetes-azure-arc/extension-recommendation.png":::
+
+1. Check that the cluster on which you deployed the extension is listed as **Healthy**.
++
+### [**Azure portal - Azure Arc**](#tab/k8s-verify-arc)
+
+### Use the Azure Arc pages to verify the status of your extension
+
+1. From the Azure portal, open **Azure Arc**.
+1. From the infrastructure list, select **Kubernetes clusters** and then select the specific cluster.
+1. Open the extensions page. The extensions on the cluster are listed. Check the **Install status** column to confirm that the Azure Defender extension was installed correctly.
+
+ :::image type="content" source="media/defender-for-kubernetes-azure-arc/extension-installed-clusters-page.png" alt-text="Azure Arc page for checking the status of all installed extensions on a Kubernetes cluster." lightbox="media/defender-for-kubernetes-azure-arc/extension-installed-clusters-page.png":::
+
+1. For more details, select the extension.
+
+ :::image type="content" source="media/defender-for-kubernetes-azure-arc/extension-details-page.png" alt-text="Full details of an Azure Arc extension on a Kubernetes cluster.":::
++
+### [**Azure CLI**](#tab/k8s-verify-cli)
+
+### Use Azure CLI to verify that the extension is deployed
+
+1. Run the following command on Azure CLI:
+
+ ```azurecli
+ az k8s-extension show --cluster-type connectedClusters --cluster-name <your-connected-cluster-name> --resource-group <your-rg> --name microsoft.azuredefender.kubernetes
+ ```
+
+1. In the response, look for "extensionType": "microsoft.azuredefender.kubernetes" and "installState": "Installed".
+
+ > [!NOTE]
+ > It might show "installState": "Pending" for the first few minutes.
+
+1. If the state shows **Installed**, run the following command on your machine with the `kubeconfig` file pointed to your cluster to check that a pod called "azuredefender-XXXXX" is in 'Running' state:
+
+ ```console
+ kubectl get pods -n azuredefender
+ ```
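+
+To check just the install state without reading the full JSON response, a JMESPath filter can be used. This is a sketch that assumes the response exposes an `installState` property as described in step 2:
+
+```azurecli
+az k8s-extension show --cluster-type connectedClusters --cluster-name <your-connected-cluster-name> --resource-group <your-rg> --name microsoft.azuredefender.kubernetes --query installState -o tsv
+```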
+
+### [**REST API**](#tab/k8s-verify-api)
+
+### Use the REST API to verify that the extension is deployed
+
+To confirm a successful deployment, or to validate the status of your extension at any time:
+
+1. Run the following GET command:
+
+ ```rest
+ GET https://management.azure.com/subscriptions/{{Subscription Id}}/resourcegroups/{{Resource Group}}/providers/Microsoft.Kubernetes/connectedClusters/{{Cluster Name}}/providers/Microsoft.KubernetesConfiguration/extensions/microsoft.azuredefender.kubernetes?api-version=2020-07-01-preview
+ ```
+
+1. In the response, look for "extensionType": "microsoft.azuredefender.kubernetes" and "installState": "Installed".
+
+ > [!TIP]
+ > It might show "installState": "Pending" for the first few minutes.
+
+1. If the state shows **Installed**, run the following command on your machine with the `kubeconfig` file pointed to your cluster to check that a pod called "azuredefender-XXXXX" is in 'Running' state:
+
+ ```console
+ kubectl get pods -n azuredefender
+ ```
++
+## Simulate security alerts from Azure Defender for Kubernetes
+
+A full list of supported alerts is available in the [reference table of all security alerts in Azure Security Center](alerts-reference.md#alerts-akscluster).
+
+1. To simulate an Azure Defender alert, run the following command:
+
+ ```console
+ kubectl get pods --namespace=asc-alerttest-662jfi039n
+ ```
+
+    The expected response is "No resources found".
+
+ Within 30 minutes, Azure Defender will detect this activity and trigger a security alert.
+
+1. In the Azure portal, open Azure Security Center's security alerts page and look for the alert on the relevant resource:
+
+ :::image type="content" source="media/defender-for-kubernetes-azure-arc/sample-kubernetes-security-alert.png" alt-text="Sample alert from Azure Defender for Kubernetes." lightbox="media/defender-for-kubernetes-azure-arc/sample-kubernetes-security-alert.png":::
+
+## Removing the Azure Defender extension
+
+You can remove the extension using the Azure portal, Azure CLI, or REST API, as explained in the tabs below.
+
+### [**Azure portal - Arc**](#tab/k8s-remove-arc)
+
+### Use Azure portal to remove the extension
+
+1. From the Azure portal, open Azure Arc.
+1. From the infrastructure list, select **Kubernetes clusters** and then select the specific cluster.
+1. Open the extensions page. The extensions on the cluster are listed.
+1. Select the extension and select **Uninstall**.
+
+ :::image type="content" source="media/defender-for-kubernetes-azure-arc/extension-uninstall-clusters-page.png" alt-text="Removing an extension from your Arc enabled Kubernetes cluster." lightbox="media/defender-for-kubernetes-azure-arc/extension-uninstall-clusters-page.png":::
+
+### [**Azure CLI**](#tab/k8s-remove-cli)
+
+### Use Azure CLI to remove the Azure Defender extension
+
+1. Remove the Azure Defender for Kubernetes Arc extension with the following commands:
+
+ ```azurecli
+ az login
+ az account set --subscription <subscription-id>
+ az k8s-extension delete --cluster-type connectedClusters --cluster-name <your-connected-cluster-name> --resource-group <your-rg> --name microsoft.azuredefender.kubernetes --yes
+ ```
+
+ Removing the extension may take a few minutes. We recommend you wait before you try to verify that it was successful.
+
+1. To verify that the extension was successfully removed, run the following commands:
+
+ ```azurecli
+ az k8s-extension show --cluster-type connectedClusters --cluster-name <your-connected-cluster-name> --resource-group <your-rg> --name microsoft.azuredefender.kubernetes
+ ```
+
+    The extension resource should be deleted from Azure Resource Manager without delay. After that, validate that there are no pods called "azuredefender-XXXXX" on the cluster by running the following command with the `kubeconfig` file pointed to your cluster:
+
+ ```console
+ kubectl get pods -n azuredefender
+ ```
+
+ It might take a few minutes for the pods to be deleted.
+
+### [**REST API**](#tab/k8s-remove-api)
+
+### Use REST API to remove the Azure Defender extension
+
+To remove the extension using the REST API, run the following DELETE command:
+
+```rest
+DELETE https://management.azure.com/subscriptions/{{Subscription Id}}/resourcegroups/{{Resource Group}}/providers/Microsoft.Kubernetes/connectedClusters/{{Cluster Name}}/providers/Microsoft.KubernetesConfiguration/extensions/microsoft.azuredefender.kubernetes?api-version=2020-07-01-preview
+```
+
+| Name | In | Required | Type | Description |
+|--||-|--|-|
+| Subscription ID | path | True | string | Your Arc enabled Kubernetes cluster's subscription ID |
+| Resource Group | path | True | string | Your Arc enabled Kubernetes cluster's resource group |
+| Cluster Name | path | True | string | Your Arc enabled Kubernetes cluster's name |
+
+For **Authentication**, your header must have a Bearer token (as with other Azure APIs). To get a bearer token, run the following command:
+
+```azurecli
+az account get-access-token --subscription <your-subscription-id>
+```
+
+The request may take several minutes to complete.
+++
+## Next steps
+
+This page explained how to deploy the Azure Defender extension for Azure Arc enabled Kubernetes clusters. Learn more about Azure Defender and Azure Security Center's container security features in the following pages:
+
+- [Container security in Security Center](container-security.md)
+- [Introduction to Azure Defender for Kubernetes](defender-for-kubernetes-introduction.md)
+- [Protect your Kubernetes workloads](kubernetes-workload-protections.md)
security-center Deploy Vulnerability Assessment Vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/deploy-vulnerability-assessment-vm.md
Title: Security Center's integrated vulnerability assessment solution for Azure and hybrid machines description: Install a vulnerability assessment solution on your Azure machines to get recommendations in Azure Security Center that can help you protect your Azure and virtual machines - Last updated 02/18/2021
The vulnerability scanner extension works as follows:
>[!IMPORTANT] > If the deployment fails on one or more machines, ensure the target machines can communicate with Qualys' cloud service by adding the following URLs to your allow lists (via port 443 - the default for HTTPS): >
- > - https://www.qualys.com/company/newsroom/news-releases/usa/2017-02-08-qualys-expands-global-cloud-platform-with-three-new-secure-operations-centers/ - Qualys' US data center
+ > - https://qagpublic.qg3.apps.qualys.com - Qualys' US data center
> - https://qagpublic.qg2.apps.qualys.eu - Qualys' European data center > > If your machine is in a European Azure region, its artifacts will be processed in Qualys' European data center. Artifacts for virtual machines located elsewhere are sent to the US data center.
The Azure Security Center vulnerability assessment extension (powered by Qualys)
During setup, Security Center checks to ensure that the machine can communicate with the following two Qualys data centers (via port 443 - the default for HTTPS): -- https://www.qualys.com/company/newsroom/news-releases/usa/2017-02-08-qualys-expands-global-cloud-platform-with-three-new-secure-operations-centers/ - Qualys' US data center-- https://www.qualys.com/company/newsroom/news-releases/usa/2017-02-08-qualys-expands-global-cloud-platform-with-three-new-secure-operations-centers/ - Qualys' European data center
+- https://qagpublic.qg3.apps.qualys.com - Qualys' US data center
+- https://qagpublic.qg2.apps.qualys.eu - Qualys' European data center
The extension doesn't currently accept any proxy configuration details.
security-center Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/release-notes-archive.md
Previously updated : 03/04/2021 Last updated : 04/04/2021
This page provides you with information about:
- Deprecated functionality
+## October 2020
+
+Updates in October include:
+- [Vulnerability assessment for on-premise and multi-cloud machines (preview)](#vulnerability-assessment-for-on-premise-and-multi-cloud-machines-preview)
+- [Azure Firewall recommendation added (preview)](#azure-firewall-recommendation-added-preview)
+- [Authorized IP ranges should be defined on Kubernetes Services recommendation updated with quick fix](#authorized-ip-ranges-should-be-defined-on-kubernetes-services-recommendation-updated-with-quick-fix)
+- [Regulatory compliance dashboard now includes option to remove standards](#regulatory-compliance-dashboard-now-includes-option-to-remove-standards)
+- [Microsoft.Security/securityStatuses table removed from Azure Resource Graph (ARG)](#microsoftsecuritysecuritystatuses-table-removed-from-azure-resource-graph-arg)
+
+### Vulnerability assessment for on-premise and multi-cloud machines (preview)
+
+[Azure Defender for servers](defender-for-servers-introduction.md)' integrated vulnerability assessment scanner (powered by Qualys) now scans Azure Arc enabled servers.
+
+When you've enabled Azure Arc on your non-Azure machines, Security Center will offer to deploy the integrated vulnerability scanner on them - manually and at-scale.
+
+With this update, you can unleash the power of **Azure Defender for servers** to consolidate your vulnerability management program across all of your Azure and non-Azure assets.
+
+Main capabilities:
+
+- Monitoring the VA (vulnerability assessment) scanner provisioning state on Azure Arc machines
+- Provisioning the integrated VA agent to unprotected Windows and Linux Azure Arc machines (manually and at-scale)
+- Receiving and analyzing detected vulnerabilities from deployed agents (manually and at-scale)
+- Unified experience for Azure VMs and Azure Arc machines
+
+[Learn more about deploying the integrated vulnerability scanner to your hybrid machines](deploy-vulnerability-assessment-vm.md#deploy-the-integrated-scanner-to-your-azure-and-hybrid-machines).
+
+[Learn more about Azure Arc enabled servers](../azure-arc/servers/index.yml).
++
+### Azure Firewall recommendation added (preview)
+
+A new recommendation has been added to protect all your virtual networks with Azure Firewall.
+
+The recommendation, **Virtual networks should be protected by Azure Firewall** advises you to restrict access to your virtual networks and prevent potential threats by using Azure Firewall.
+
+Learn more about [Azure Firewall](https://azure.microsoft.com/services/azure-firewall/).
++
+### Authorized IP ranges should be defined on Kubernetes Services recommendation updated with quick fix
+
+The recommendation **Authorized IP ranges should be defined on Kubernetes Services** now has a quick fix option.
+
+For more information about this recommendation and all other Security Center recommendations, see [Security recommendations - a reference guide](recommendations-reference.md).
+++
+### Regulatory compliance dashboard now includes option to remove standards
+
+Security Center's regulatory compliance dashboard provides insights into your compliance posture based on how you're meeting specific compliance controls and requirements.
+
+The dashboard includes a default set of regulatory standards. If any of the supplied standards isn't relevant to your organization, it's now a simple process to remove them from the UI for a subscription. Standards can be removed only at the *subscription* level; not the management group scope.
+
+Learn more in [Remove a standard from your dashboard](update-regulatory-compliance-packages.md#remove-a-standard-from-your-dashboard).
++
+### Microsoft.Security/securityStatuses table removed from Azure Resource Graph (ARG)
+
+Azure Resource Graph is a service in Azure that is designed to provide efficient resource exploration with the ability to query at scale across a given set of subscriptions so that you can effectively govern your environment.
+
+For Azure Security Center, you can use ARG and the [Kusto Query Language (KQL)](/azure/data-explorer/kusto/query/) to query a wide range of security posture data. For example:
+
+- Asset inventory utilizes ARG
+- We have documented a sample ARG query for how to [Identify accounts without multi-factor authentication (MFA) enabled](security-center-identity-access.md#identify-accounts-without-multi-factor-authentication-mfa-enabled)
+
+Within ARG, there are tables of data for you to use in your queries.
++
+> [!TIP]
+> The ARG documentation lists all the available tables in [Azure Resource Graph table and resource type reference](../governance/resource-graph/reference/supported-tables-resources.md).
+
+From this update, the **Microsoft.Security/securityStatuses** table has been removed. The securityStatuses API is still available.
+
+The Microsoft.Security/Assessments table can be used as a replacement for this data.
+
+The major difference between Microsoft.Security/securityStatuses and Microsoft.Security/Assessments is that while the first shows an aggregation of assessments, the second holds a single record for each.
+
+For example, Microsoft.Security/securityStatuses would return a result with an array of two policyAssessments:
+
+```
+{
+  id: "/subscriptions/449bcidd-3470-4804-ab56-2752595 felab/resourceGroups/mico-rg/providers/Microsoft.Network/virtualNetworks/mico-rg-vnet/providers/Microsoft.Security/securityStatuses/mico-rg-vnet",
+  name: "mico-rg-vnet",
+  type: "Microsoft.Security/securityStatuses",
+  properties: {
+    policyAssessments: [
+      {assessmentKey: "e3deicce-f4dd-3b34-e496-8b5381bazd7e", category: "Networking", policyName: "Azure DDOS Protection Standard should be enabled",...},
+      {assessmentKey: "sefac66a-1ec5-b063-a824-eb28671dc527", category: "Compute", policyName: "",...}
+    ],
+    securitystateByCategory: [{category: "Networking", securityState: "None"}, {category: "Compute",...}],
+    name: "GenericResourceHealthProperties",
+    type: "VirtualNetwork",
+    securitystate: "High"
+  }
+}
+```
+In contrast, Microsoft.Security/Assessments holds a record for each such policy assessment, as follows:
+
+```
+{
+  type: "Microsoft.Security/assessments",
+  id: "/subscriptions/449bc1dd-3470-4804-ab56-2752595f01ab/resourceGroups/mico-rg/providers/Microsoft.Network/virtualNetworks/mico-rg-vnet/providers/Microsoft.Security/assessments/e3delcce-f4dd-3b34-e496-8b5381ba2d70",
+  name: "e3deicce-f4dd-3b34-e496-8b5381ba2d70",
+  properties: {
+    resourceDetails: {Source: "Azure", Id: "/subscriptions/449bc1dd-3470-4804-ab56-2752595f01ab/resourceGroups/mico-rg/providers/Microsoft.Network/virtualNetworks/mico-rg-vnet"...},
+    displayName: "Azure DDOS Protection Standard should be enabled",
+    status: {code: "NotApplicable", cause: "VnetHasNOAppGateways", description: "There are no Application Gateway resources attached to this Virtual Network"...}
+  }
+}
+
+{
+  type: "Microsoft.Security/assessments",
+  id: "/subscriptions/449bc1dd-3470-4804-ab56-2752595f01ab/resourcegroups/mico-rg/providers/microsoft.network/virtualnetworks/mico-rg-vnet/providers/Microsoft.Security/assessments/80fac66a-1ec5-be63-a824-eb28671dc527",
+  name: "8efac66a-1ec5-be63-a824-eb28671dc527",
+  properties: {
+    resourceDetails: {Source: "Azure", Id: "/subscriptions/449bc1dd-3470-4804-ab56-2752595f01ab/resourcegroups/mico-rg/providers/microsoft.network/virtualnetworks/mico-rg-vnet"...},
+    displayName: "Audit diagnostic setting",
+    status: {code: "Unhealthy"}
+  }
+}
+```
+
+**Example of converting an existing ARG query using securityStatuses to now use the assessments table:**
+
+Query that references SecurityStatuses:
+
+```kusto
+SecurityResources
+| where type == 'microsoft.security/securitystatuses' and properties.type == 'virtualMachine'
+| where name in ({vmnames})
+| project name, resourceGroup, policyAssesments = properties.policyAssessments, resourceRegion = location, id, resourceDetails = properties.resourceDetails
+```
+
+Replacement query for the Assessments table:
+
+```kusto
+securityresources
+| where type == "microsoft.security/assessments" and id contains "virtualMachine"
+| extend resourceName = extract(@"(?i)/([^/]*)/providers/Microsoft.Security/assessments", 1, id)
+| extend source = tostring(properties.resourceDetails.Source)
+| extend resourceId = trim(" ", tolower(tostring(case(source =~ "azure", properties.resourceDetails.Id,
+source =~ "aws", properties.additionalData.AzureResourceId,
+source =~ "gcp", properties.additionalData.AzureResourceId,
+extract("^(.+)/providers/Microsoft.Security/assessments/.+$",1,id)))))
+| extend resourceGroup = tolower(tostring(split(resourceId, "/")[4]))
+| where resourceName in ({vmnames})
+| project resourceName, resourceGroup, resourceRegion = location, id, resourceDetails = properties.additionalData
+```
+
+Learn more at the following links:
+- [How to create queries with Azure Resource Graph Explorer](../governance/resource-graph/first-query-portal.md)
+- [Kusto Query Language (KQL)](/azure/data-explorer/kusto/query/)
+ ## September 2020
security-center Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/release-notes.md
Previously updated : 03/22/2021 Last updated : 04/05/2021
To learn about *planned* changes that are coming soon to Security Center, see [I
> [!TIP] > If you're looking for items older than six months, you'll find them in the [Archive for What's new in Azure Security Center](release-notes-archive.md).
+## April 2021
+
+Updates in April include:
+- [Four new recommendations related to guest configuration (preview)](#four-new-recommendations-related-to-guest-configuration-preview)
+- [11 Azure Defender alerts deprecated](#11-azure-defender-alerts-deprecated)
++
+### Four new recommendations related to guest configuration (preview)
+
+Azure's [Guest Configuration extension](../governance/policy/concepts/guest-configuration.md) reports to Security Center to help ensure your virtual machines' in-guest settings are hardened. The extension isn't required for Arc enabled servers because it's included in the Arc Connected Machine agent. The extension requires a system-assigned managed identity on the machine.
+
+We've added four new recommendations to Security Center to make the most of this extension.
+
+- Two recommendations prompt you to install the extension and its required system-assigned managed identity:
+ - **Guest Configuration extension should be installed on your machines**
+ - **Virtual machines' Guest Configuration extension should be deployed with system-assigned managed identity**
+
+- When the extension is installed and running, it'll begin auditing your machines, and you'll be prompted to harden settings such as operating system configuration and environment settings. These two recommendations will prompt you to harden your Windows and Linux machines as described:
+ - **Windows Defender Exploit Guard should be enabled on your machines**
+ - **Authentication to Linux machines should require SSH keys**
+
+Learn more in [Understand Azure Policy's Guest Configuration](../governance/policy/concepts/guest-configuration.md).
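+
+As a hedged illustration of the identity prerequisite (placeholder names, assuming Azure CLI and an Azure VM rather than an Arc enabled server), enabling a system-assigned managed identity on a machine can be as simple as:
+
+```azurecli
+# Assign a system-assigned managed identity to an existing VM
+az vm identity assign --resource-group <resource-group> --name <vm-name>
+```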
+++
+### 11 Azure Defender alerts deprecated
+
+The eleven Azure Defender alerts listed below have been deprecated.
+
+- New alerts will replace these two alerts and provide better coverage:
+
+ | AlertType | AlertDisplayName |
+ |--|--|
+ | ARM_MicroBurstDomainInfo | PREVIEW - MicroBurst toolkit "Get-AzureDomainInfo" function run detected |
+ | ARM_MicroBurstRunbook | PREVIEW - MicroBurst toolkit "Get-AzurePasswords" function run detected |
+ | | |
+
+- These nine alerts relate to an Azure Active Directory Identity Protection connector (IPC) that has already been deprecated:
+
+ | AlertType | AlertDisplayName |
+ ||-|
+ | UnfamiliarLocation | Unfamiliar sign-in properties |
+ | AnonymousLogin | Anonymous IP address |
+ | InfectedDeviceLogin | Malware linked IP address |
+ | ImpossibleTravel | Atypical travel |
+ | MaliciousIP | Malicious IP address |
+ | LeakedCredentials | Leaked credentials |
+ | PasswordSpray | Password Spray |
+ | LeakedCredentials | Azure AD threat intelligence |
+ | AADAI | Azure AD AI |
+ | | |
+
+ > [!TIP]
+    > These nine IPC alerts were never Security Center alerts. They're part of the Azure Active Directory (AAD) Identity Protection connector (IPC) that was sending them to Security Center. For the last two years, the only customers who've been seeing those alerts are organizations that configured the export (from the connector to ASC) in 2019 or earlier. AAD IPC has continued to show them in its own alert systems, and they've continued to be available in Azure Sentinel. The only change is that they're no longer appearing in Security Center.
+ ## March 2021
The **System updates should be installed on your machines** recommendation has b
You can now see whether or not your subscriptions have the default Security Center policy assigned, in the Security Center's **security policy** page of the Azure portal. -
-## October 2020
-
-Updates in October include:
-- [Vulnerability assessment for on-premise and multi-cloud machines (preview)](#vulnerability-assessment-for-on-premise-and-multi-cloud-machines-preview)-- [Azure Firewall recommendation added (preview)](#azure-firewall-recommendation-added-preview)-- [Authorized IP ranges should be defined on Kubernetes Services recommendation updated with quick fix](#authorized-ip-ranges-should-be-defined-on-kubernetes-services-recommendation-updated-with-quick-fix)-- [Regulatory compliance dashboard now includes option to remove standards](#regulatory-compliance-dashboard-now-includes-option-to-remove-standards)-- [Microsoft.Security/securityStatuses table removed from Azure Resource Graph (ARG)](#microsoftsecuritysecuritystatuses-table-removed-from-azure-resource-graph-arg)-
-### Vulnerability assessment for on-premise and multi-cloud machines (preview)
-
-[Azure Defender for servers](defender-for-servers-introduction.md)' integrated vulnerability assessment scanner (powered by Qualys) now scans Azure Arc enabled servers.
-
-When you've enabled Azure Arc on your non-Azure machines, Security Center will offer to deploy the integrated vulnerability scanner on them - manually and at-scale.
-
-With this update, you can unleash the power of **Azure Defender for servers** to consolidate your vulnerability management program across all of your Azure and non-Azure assets.
-
-Main capabilities:
--- Monitoring the VA (vulnerability assessment) scanner provisioning state on Azure Arc machines-- Provisioning the integrated VA agent to unprotected Windows and Linux Azure Arc machines (manually and at-scale)-- Receiving and analyzing detected vulnerabilities from deployed agents (manually and at-scale)-- Unified experience for Azure VMs and Azure Arc machines-
-[Learn more about deploying the integrated vulnerability scanner to your hybrid machines](deploy-vulnerability-assessment-vm.md#deploy-the-integrated-scanner-to-your-azure-and-hybrid-machines).
-
-[Learn more about Azure Arc enabled servers](../azure-arc/servers/index.yml).
--
-### Azure Firewall recommendation added (preview)
-
-A new recommendation has been added to protect all your virtual networks with Azure Firewall.
-
-The recommendation, **Virtual networks should be protected by Azure Firewall** advises you to restrict access to your virtual networks and prevent potential threats by using Azure Firewall.
-
-Learn more about [Azure Firewall](https://azure.microsoft.com/services/azure-firewall/).
--
-### Authorized IP ranges should be defined on Kubernetes Services recommendation updated with quick fix
-
-The recommendation **Authorized IP ranges should be defined on Kubernetes Services** now has a quick fix option.
-
-For more information about this recommendation and all other Security Center recommendations, see [Security recommendations - a reference guide](recommendations-reference.md).
---
-### Regulatory compliance dashboard now includes option to remove standards
-
-Security Center's regulatory compliance dashboard provides insights into your compliance posture based on how you're meeting specific compliance controls and requirements.
-
-The dashboard includes a default set of regulatory standards. If any of the supplied standards isn't relevant to your organization, it's now a simple process to remove them from the UI for a subscription. Standards can be removed only at the *subscription* level; not the management group scope.
-
-Learn more in [Remove a standard from your dashboard](update-regulatory-compliance-packages.md#remove-a-standard-from-your-dashboard).
--
-### Microsoft.Security/securityStatuses table removed from Azure Resource Graph (ARG)
-
-Azure Resource Graph is a service in Azure that is designed to provide efficient resource exploration with the ability to query at scale across a given set of subscriptions so that you can effectively govern your environment.
-
-For Azure Security Center, you can use ARG and the [Kusto Query Language (KQL)](/azure/data-explorer/kusto/query/) to query a wide range of security posture data. For example:
--- Asset inventory utilizes (ARG)-- We have documented a sample ARG query for how to [Identify accounts without multi-factor authentication (MFA) enabled](security-center-identity-access.md#identify-accounts-without-multi-factor-authentication-mfa-enabled)-
-Within ARG, there are tables of data for you to use in your queries.
--
-> [!TIP]
-> The ARG documentation lists all the available tables in [Azure Resource Graph table and resource type reference](../governance/resource-graph/reference/supported-tables-resources.md).
-
-From this update, the **Microsoft.Security/securityStatuses** table has been removed. The securityStatuses API is still available.
-
-Data replacement can be used by Microsoft.Security/Assessments table.
-
-The major difference between Microsoft.Security/securityStatuses and Microsoft.Security/Assessments is that while the first shows aggregation of assessments, the seconds holds a single record for each.
-
-For example, Microsoft.Security/securityStatuses would return a result with an array of two policyAssessments:
-
-```
-{
-id: "/subscriptions/449bcidd-3470-4804-ab56-2752595 felab/resourceGroups/mico-rg/providers/Microsoft.Network/virtualNetworks/mico-rg-vnet/providers/Microsoft.Security/securityStatuses/mico-rg-vnet",
-name: "mico-rg-vnet",
-type: "Microsoft.Security/securityStatuses",
-properties: {
- policyAssessments: [
- {assessmentKey: "e3deicce-f4dd-3b34-e496-8b5381bazd7e", category: "Networking", policyName: "Azure DDOS Protection Standard should be enabled",...},
- {assessmentKey: "sefac66a-1ec5-b063-a824-eb28671dc527", category: "Compute", policyName: "",...}
- ],
- securitystateByCategory: [{category: "Networking", securityState: "None" }, {category: "Compute",...],
- name: "GenericResourceHealthProperties",
- type: "VirtualNetwork",
- securitystate: "High"
-}
-```
-Whereas, Microsoft.Security/Assessments will hold a record for each such policy assessment as follows:
-
-```
-{
-type: "Microsoft.Security/assessments",
-id: "/subscriptions/449bc1dd-3470-4804-ab56-2752595f01ab/resourceGroups/mico-rg/providers/Microsoft. Network/virtualNetworks/mico-rg-vnet/providers/Microsoft.Security/assessments/e3delcce-f4dd-3b34-e496-8b5381ba2d70",
-name: "e3deicce-f4dd-3b34-e496-8b5381ba2d70",
-properties: {
- resourceDetails: {Source: "Azure", Id: "/subscriptions/449bc1dd-3470-4804-ab56-2752595f01ab/resourceGroups/mico-rg/providers/Microsoft.Network/virtualNetworks/mico-rg-vnet"...},
- displayName: "Azure DDOS Protection Standard should be enabled",
- status: (code: "NotApplicable", cause: "VnetHasNOAppGateways", description: "There are no Application Gateway resources attached to this Virtual Network"...}
-}
-
-{
-type: "Microsoft.Security/assessments",
-id: "/subscriptions/449bc1dd-3470-4804-ab56-2752595f01ab/resourcegroups/mico-rg/providers/microsoft.network/virtualnetworks/mico-rg-vnet/providers/Microsoft.Security/assessments/80fac66a-1ec5-be63-a824-eb28671dc527",
-name: "8efac66a-1ec5-be63-a824-eb28671dc527",
-properties: {
- resourceDetails: (Source: "Azure", Id: "/subscriptions/449bc1dd-3470-4804-ab56-2752595f01ab/resourcegroups/mico-rg/providers/microsoft.network/virtualnetworks/mico-rg-vnet"...),
- displayName: "Audit diagnostic setting",
- status: {code: "Unhealthy"}
-}
-```
-
-**Example of converting an existing ARG query using securityStatuses to now use the assessments table:**
-
-Query that references SecurityStatuses:
-
-```kusto
-SecurityResources
-| where type == 'microsoft.security/securitystatuses' and properties.type == 'virtualMachine'
-| where name in ({vmnames})
-| project name, resourceGroup, policyAssesments = properties.policyAssessments, resourceRegion = location, id, resourceDetails = properties.resourceDetails
-```
-
-Replacement query for the Assessments table:
-
-```kusto
-securityresources
-| where type == "microsoft.security/assessments" and id contains "virtualMachine"
-| extend resourceName = extract(@"(?i)/([^/]*)/providers/Microsoft.Security/assessments", 1, id)
-| extend source = tostring(properties.resourceDetails.Source)
-| extend resourceId = trim(" ", tolower(tostring(case(source =~ "azure", properties.resourceDetails.Id,
-source =~ "aws", properties.additionalData.AzureResourceId,
-source =~ "gcp", properties.additionalData.AzureResourceId,
-extract("^(.+)/providers/Microsoft.Security/assessments/.+$",1,id)))))
-| extend resourceGroup = tolower(tostring(split(resourceId, "/")[4]))
-| where resourceName in ({vmnames})
-| project resourceName, resourceGroup, resourceRegion = location, id, resourceDetails = properties.additionalData
-```
-
-Learn more at the following links:
-- [How to create queries with Azure Resource Graph Explorer](../governance/resource-graph/first-query-portal.md)-- [Kusto Query Language (KQL)](/azure/data-explorer/kusto/query/)
security-center Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/upcoming-changes.md
Previously updated : 03/18/2021 Last updated : 04/04/2021
If you're looking for the latest release notes, you'll find them in the [What's
| Planned change | Estimated date for change | |--||
-| [Two recommendations from "Apply system updates" security control being deprecated](#two-recommendations-from-apply-system-updates-security-control-being-deprecated) | March 2021 |
-| [Deprecation of 11 Azure Defender alerts](#deprecation-of-11-azure-defender-alerts) | March 2021 |
+| [Two recommendations from "Apply system updates" security control being deprecated](#two-recommendations-from-apply-system-updates-security-control-being-deprecated) | April 2021 |
| [21 recommendations moving between security controls](#21-recommendations-moving-between-security-controls) | April 2021 | | [Two further recommendations from "Apply system updates" security control being deprecated](#two-further-recommendations-from-apply-system-updates-security-control-being-deprecated) | April 2021 | | [Recommendations from AWS will be released for general availability (GA)](#recommendations-from-aws-will-be-released-for-general-availability-ga) | April 2021 |
If you're looking for the latest release notes, you'll find them in the [What's
### Two recommendations from "Apply system updates" security control being deprecated
-**Estimated date for change:** March 2021
+**Estimated date for change:** April 2021
-The following two recommendations are scheduled to be deprecated in February 2021:
+The following two recommendations are scheduled to be deprecated in April 2021:
- **Your machines should be restarted to apply system updates**. This might result in a slight impact on your secure score. - **Monitoring agent should be installed on your machines**. This recommendation relates to on-premises machines only and some of its logic will be transferred to another recommendation, **Log Analytics agent health issues should be resolved on your machines**. This might result in a slight impact on your secure score.
We recommend checking your continuous export and workflow automation configurati
Learn more about these recommendations in the [security recommendations reference page](recommendations-reference.md).
-### Deprecation of 11 Azure Defender alerts
-
-**Estimated date for change:** March 2021
-
-Next month, the eleven Azure Defender alerts listed below will be deprecated.
--- New alerts will replace these two alerts and provide better coverage:-
- | AlertType | AlertDisplayName |
- |--|--|
- | ARM_MicroBurstDomainInfo | PREVIEW - MicroBurst toolkit "Get-AzureDomainInfo" function run detected |
- | ARM_MicroBurstRunbook | PREVIEW - MicroBurst toolkit "Get-AzurePasswords" function run detected |
- | | |
--- These nine alerts relate to an Azure Active Directory Identity Protection connector that has already been deprecated:-
- | AlertType | AlertDisplayName |
- ||-|
- | UnfamiliarLocation | Unfamiliar sign-in properties |
- | AnonymousLogin | Anonymous IP address |
- | InfectedDeviceLogin | Malware linked IP address |
- | ImpossibleTravel | Atypical travel |
- | MaliciousIP | Malicious IP address |
- | LeakedCredentials | Leaked credentials |
- | PasswordSpray | Password Spray |
- | LeakedCredentials | Azure AD threat intelligence |
- | AADAI | Azure AD AI |
- | | |
-
---- ### 21 recommendations moving between security controls **Estimated date for change:** April 2021
security Customer Lockbox Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security/fundamentals/customer-lockbox-overview.md
Previously updated : 02/19/2021 Last updated : 04/05/2021 # Customer Lockbox for Microsoft Azure
Last updated 02/19/2021
> [!NOTE] > To use this feature, your organization must have an [Azure support plan](https://azure.microsoft.com/support/plans/) with a minimal level of **Developer**.
-Customer Lockbox for Microsoft Azure provides an interface for customers to review and approve or reject customer data access requests. It is used in cases where a Microsoft engineer needs to access customer data during a support request.
+Most operations, support, and troubleshooting performed by Microsoft personnel and sub-processors do not require access to customer data. In those rare circumstances where such access is required, Customer Lockbox for Microsoft Azure provides an interface for customers to review and approve or reject customer data access requests. It is used in cases where a Microsoft engineer needs to access customer data, whether in response to a customer-initiated support ticket or a problem identified by Microsoft.
This article covers how to enable Customer Lockbox and how Lockbox requests are initiated, tracked, and stored for later reviews and audits. <a name='supported-services-and-scenarios-in-general-availability'></a><a name='supported-services-and-scenarios-in-preview'></a>
-## Supported services and scenarios (General Availability)
+## Supported services and scenarios
-The following services are now generally available for Customer Lockbox:
+### General Availability
+The following services are generally available for Customer Lockbox:
- Azure API Management - Azure App Service
The following services are now generally available for Customer Lockbox:
- Azure Synapse Analytics - Virtual machines in Azure (covering remote desktop access, access to memory dumps, and managed disks)
+### Public Preview
+The following services are currently in preview for Customer Lockbox:
+
+- Azure Machine Learning
+- Azure Batch
+ ## Enable Customer Lockbox You can now enable Customer Lockbox from the [Administration module](https://aka.ms/customerlockbox/administration) in the Customer Lockbox blade.
The following steps outline a typical workflow for a Customer Lockbox request.
3. An Azure Support Engineer reviews the service request and determines the next steps to resolve the issue.
-4. If the support engineer can't troubleshoot the issue by using standard tools and telemetry, the next step is to request elevated permissions by using a Just-In-Time (JIT) access service. This request can be from the original support engineer or from a different engineer because the problem is escalated to the Azure DevOps team.
+4. If the support engineer can't troubleshoot the issue by using standard tools and service generated data, the next step is to request elevated permissions by using a Just-In-Time (JIT) access service. This request can be from the original support engineer or from a different engineer because the problem is escalated to the Azure DevOps team.
5. After the access request is submitted by the Azure Engineer, Just-In-Time service evaluates the request taking into account factors such as: - The scope of the resource
We've introduced a new baseline control ([3.13](../benchmarks/security-control-i
Customer Lockbox requests are not triggered in the following engineering support scenarios: -- A Microsoft engineer needs to do an activity that falls outside of standard operating procedures. For example, to recover or restore services in unexpected or unpredictable scenarios.-- A Microsoft engineer accesses the Azure platform as part of troubleshooting and inadvertently has access to customer data. For example, the Azure Network Team performs troubleshooting that results in a packet capture on a network device. In this scenario, if the customer encrypts the data while it is in transit then the engineer cannot read the data.
+- Emergency scenarios that fall outside of standard operating procedures. For example, a major service outage requires immediate attention to recover or restore services in an unexpected or unpredictable scenario. These "break glass" events are rare and, in most instances, do not require any access to customer data to resolve.
+- A Microsoft engineer accesses the Azure platform as part of troubleshooting and is inadvertently exposed to customer data. For example, the Azure Network Team performs troubleshooting that results in a packet capture on a network device. It is rare that such scenarios would result in access to meaningful quantities of customer data. Customers can further protect their data through the use of encryption in transit and at rest.
+
+Customer Lockbox requests are also not triggered by external legal demands for data. For details, see the discussion of [government requests for data](https://www.microsoft.com/trust-center/) on the Microsoft Trust Center.
## Next steps
security Customer Lockbox Security Baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security/fundamentals/customer-lockbox-security-baseline.md
Title: Azure security baseline for Customer Lockbox for Microsoft Azure
-description: Azure security baseline for Customer Lockbox for Microsoft Azure
+description: The Customer Lockbox for Microsoft Azure security baseline provides procedural guidance and resources for implementing the security recommendations specified in the Azure Security Benchmark.
Previously updated : 06/05/2020 Last updated : 03/31/2021
# Azure security baseline for Customer Lockbox for Microsoft Azure
-The Azure Security Baseline for Customer Lockbox for Microsoft Azure contains recommendations that will help you improve the security posture of your deployment.
+This security baseline applies guidance from the [Azure Security Benchmark version 1.0](../benchmarks/overview-v1.md) to Customer Lockbox. The Azure Security Benchmark provides recommendations on how you can secure your cloud solutions on Azure. The content is grouped by the **security controls** defined by the Azure Security Benchmark and the related guidance applicable to Customer Lockbox.
-The baseline for this service is drawn from the [Azure Security Benchmark version 1.0](../benchmarks/overview.md), which provides recommendations on how you can secure your cloud solutions on Azure with our best practices guidance.
+>[!NOTE]
+>**Controls** not applicable to Customer Lockbox, or for which the responsibility is Microsoft's, have been excluded. To see how Customer Lockbox completely maps to the Azure Security Benchmark, see the [full Customer Lockbox security baseline mapping file](https://github.com/MicrosoftDocs/SecurityBenchmarks/tree/master/Azure%20Offer%20Security%20Baselines).
-For more information, see the [Azure security baselines overview](../benchmarks/security-baselines-overview.md).
+## Logging and Monitoring
-## Network security
-
-*For more information, see [Security control: Network security](../benchmarks/security-control-network-security.md).*
-
-### 1.1: Protect resources using Network Security Groups or Azure Firewall on your Virtual Network
-
-**Guidance**: Not applicable; you cannot associate a virtual network, subnet, or network security group with Customer Lockbox.
-
-**Azure Security Center monitoring**: Not applicable
-
-**Responsibility**: Not applicable
-
-### 1.2: Monitor and log the configuration and traffic of Vnets, Subnets, and NICs
-
-**Guidance**: Not applicable; you cannot associate a virtual network, subnet, or network security group with Customer Lockbox.
-
-**Azure Security Center monitoring**: Not applicable
-
-**Responsibility**: Not applicable
-
-### 1.3: Protect critical web applications
-
-**Guidance**: Not applicable; this recommendation is intended for web applications running on Azure App Service or compute resources.
-
-**Azure Security Center monitoring**: Not applicable
-
-**Responsibility**: Not applicable
-
-### 1.4: Deny communications with known malicious IP addresses
-
-**Guidance**: Not applicable; you cannot associate a virtual network, subnet, or network security group with Customer Lockbox.
-
-**Azure Security Center monitoring**: Not applicable
-
-**Responsibility**: Not applicable
-
-### 1.5: Record network packets and flow logs
-
-**Guidance**: Not applicable; you cannot associate a virtual network, subnet, or network security group with Customer Lockbox.
-
-**Azure Security Center monitoring**: Not applicable
-
-**Responsibility**: Not applicable
-
-### 1.6: Deploy network based intrusion detection/intrusion prevention systems (IDS/IPS)
-
-**Guidance**: Not applicable; you cannot associate a virtual network, subnet, or network security group with Customer Lockbox.
-
-**Azure Security Center monitoring**: Not applicable
-
-**Responsibility**: Not applicable
-
-### 1.7: Manage traffic to web applications
-
-**Guidance**: Not applicable; this recommendation is intended for web applications running on Azure App Service or compute resources.
-
-**Azure Security Center monitoring**: Not applicable
-
-**Responsibility**: Not applicable
-
-### 1.8: Minimize complexity and administrative overhead of network security rules
-
-**Guidance**: Not applicable; you cannot associate a virtual network, subnet, or network security group with Customer Lockbox.
-
-**Azure Security Center monitoring**: Not applicable
-
-**Responsibility**: Not applicable
-
-### 1.9: Maintain standard security configurations for network devices
-
-**Guidance**: Not applicable; you cannot associate a virtual network, subnet, or network security group with Azure Customer Lockbox.
-
-**Azure Security Center monitoring**: Not applicable
-
-**Responsibility**: Not applicable
-
-### 1.10: Document traffic configuration rules
-
-**Guidance**: Not applicable; you cannot associate a virtual network, subnet, or network security group with Customer Lockbox.
-
-**Azure Security Center monitoring**: Not applicable
-
-**Responsibility**: Not applicable
-
-### 1.11: Use automated tools to monitor network resource configurations and detect changes
-
-**Guidance**: Not applicable; you cannot associate a virtual network, subnet, or network security group with Customer Lockbox. There are no network configurations to make or monitor.
-
-**Azure Security Center monitoring**: Not applicable
-
-**Responsibility**: Not applicable
-
-## Logging and monitoring
-
-*For more information, see [Security control: Logging and monitoring](../benchmarks/security-control-logging-monitoring.md).*
-
-### 2.1: Use approved time synchronization sources
-
-**Guidance**: Not applicable; Microsoft maintains the time source used for resources such as Customer Lockbox.
-
-**Azure Security Center monitoring**: Not applicable
-
-**Responsibility**: Not applicable
+*For more information, see the [Azure Security Benchmark: Logging and Monitoring](../benchmarks/security-control-logging-monitoring.md).*
### 2.2: Configure central security log management
For more information, see the [Azure security baselines overview](../benchmarks/
Onboard the activity logs generated by Customer Lockbox to Azure Sentinel or another SIEM to enable central log aggregation and management.
-* [Audit logs for Customer Lockbox](./customer-lockbox-overview.md#auditing-logs)
-
-* [How to onboard Azure Sentinel](../../sentinel/quickstart-onboard.md)
+- [Audit logs for Customer Lockbox](https://docs.microsoft.com/azure/security/fundamentals/customer-lockbox-overview#auditing-logs)
-**Azure Security Center monitoring**: Yes
+- [How to onboard Azure Sentinel](../../sentinel/quickstart-onboard.md)
**Responsibility**: Customer
+**Azure Security Center monitoring**: None
+ ### 2.3: Enable audit logging for Azure resources **Guidance**: Audit logs for Customer Lockbox are automatically enabled and maintained in Azure Activity Logs. You can view this data by streaming it from the Azure Activity log into a Log Analytic workspace where you can then perform research and analytics on it.
-* [Audit logs for Customer Lockbox](./customer-lockbox-overview.md#auditing-logs)
-
-* [How to onboard Azure Sentinel](../../sentinel/quickstart-onboard.md)
+- [Audit logs for Customer Lockbox](https://docs.microsoft.com/azure/security/fundamentals/customer-lockbox-overview#auditing-logs)
-**Azure Security Center monitoring**: Yes
+- [How to onboard Azure Sentinel](../../sentinel/quickstart-onboard.md)
**Responsibility**: Customer
-### 2.4: Collect security logs from operating systems
-
-**Guidance**: Not applicable; this recommendation is intended for compute resources.
-
-**Azure Security Center monitoring**: Not applicable
-
-**Responsibility**: Not applicable
+**Azure Security Center monitoring**: None
### 2.5: Configure security log storage retention **Guidance**: In Azure Monitor, set log retention period for Log Analytics workspaces associated with your Customer Lockbox according to your organization's compliance regulations.
-* [How to set log retention parameters](../../azure-monitor/logs/manage-cost-storage.md#change-the-data-retention-period)
-
-**Azure Security Center monitoring**: Not applicable
+- [How to set log retention parameters](https://docs.microsoft.com/azure/azure-monitor/logs/manage-cost-storage#change-the-data-retention-period)
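+
+As a hedged sketch of this setting (assuming the Azure CLI `monitor log-analytics workspace` command group and placeholder names; your required retention period depends on your compliance regulations), the retention can be updated like this:
+
+```azurecli
+# Set retention to 180 days on the workspace that receives your Customer Lockbox activity data
+az monitor log-analytics workspace update --resource-group <resource-group> --workspace-name <workspace-name> --retention-time 180
+```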
**Responsibility**: Customer
-### 2.6: Monitor and review Logs
+**Azure Security Center monitoring**: None
-**Guidance**: Audit logs for Customer Lockbox are automatically enabled and maintained in Azure Activity Logs. You can view this data by streaming it from the Azure Activity log into a Log Analytic workspace where you can then perform research and analytics on it. Analyze and monitor logs from your Customer Lockbox requests for anomalous behavior. Use the "Logs" section in your Azure Sentinel workspace to perform queries or create alerts based on your Customer Lockbox logs.
-
-* [Audit logs in Customer Lockbox](./customer-lockbox-overview.md#auditing-logs)
-
-**Azure Security Center monitoring**: Not applicable
-
-**Responsibility**: Customer
-
-### 2.7: Enable alerts for anomalous activity
+### 2.6: Monitor and review logs
**Guidance**: Audit logs for Customer Lockbox are automatically enabled and maintained in Azure Activity Logs. You can view this data by streaming it from the Azure Activity log into a Log Analytics workspace where you can then perform research and analytics on it. Analyze and monitor logs from your Customer Lockbox requests for anomalous behavior. Use the "Logs" section in your Azure Sentinel workspace to perform queries or create alerts based on your Customer Lockbox logs.
-* [Audit logs in Customer Lockbox](./customer-lockbox-overview.md#auditing-logs)
-
-* [How to alert on log analytics log data](../../azure-monitor/alerts/tutorial-response.md)
-
-**Azure Security Center monitoring**: Yes
+- [Audit logs in Customer Lockbox](https://docs.microsoft.com/azure/security/fundamentals/customer-lockbox-overview#auditing-logs)
**Responsibility**: Customer
-### 2.8: Centralize anti-malware logging
+**Azure Security Center monitoring**: None
-**Guidance**: Not applicable; Customer Lockbox does not process or produce anti-malware related logs.
+### 2.7: Enable alerts for anomalous activities
-**Azure Security Center monitoring**: Not applicable
-
-**Responsibility**: Not applicable
-
-### 2.9: Enable DNS query logging
-
-**Guidance**: Not applicable; Customer Lockbox does not process or produce DNS-related logs.
-
-**Azure Security Center monitoring**: Not applicable
-
-**Responsibility**: Not applicable
+**Guidance**: Audit logs for Customer Lockbox are automatically enabled and maintained in Azure Activity Logs. You can view this data by streaming it from the Azure Activity log into a Log Analytics workspace, where you can then perform research and analytics on it. Analyze and monitor logs from your Customer Lockbox requests for anomalous behavior. Use the "Logs" section in your Azure Sentinel workspace to perform queries or create alerts based on your Customer Lockbox logs.
-### 2.10: Enable command-line audit logging
+- [Audit logs in Customer Lockbox](https://docs.microsoft.com/azure/security/fundamentals/customer-lockbox-overview#auditing-logs)
-**Guidance**: Not applicable; this recommendation is intended for compute resources.
+- [How to alert on log analytics log data](../../azure-monitor/alerts/tutorial-response.md)
-**Azure Security Center monitoring**: Not applicable
+**Responsibility**: Customer
-**Responsibility**: Not applicable
+**Azure Security Center monitoring**: None
-## Identity and access control
+## Identity and Access Control
-*For more information, see [Security control: Identity and access control](../benchmarks/security-control-identity-access-control.md).*
+*For more information, see the [Azure Security Benchmark: Identity and Access Control](../benchmarks/security-control-identity-access-control.md).*
### 3.1: Maintain an inventory of administrative accounts
-**Guidance**: Maintain an inventory of the user accounts that have administrative access to your Customer Lockbox requests. You can use the Identity and Access control (IAM) pane in the Azure portal for your subscription to configure Azure role-based access control (Azure RBAC). The roles are applied to users, groups, service principals, and managed identities in Azure Active Directory.
+**Guidance**: Maintain an inventory of the user accounts that have administrative access to your Customer Lockbox requests. You can use the Identity and Access control (IAM) pane in the Azure portal for your subscription to configure Azure role-based access control (Azure RBAC). The roles are applied to users, groups, service principals, and managed identities in Azure Active Directory (Azure AD).
At the customer organization, the user who has the Owner role for the Azure subscription receives an email from Microsoft, to notify them about any pending access requests. For Customer Lockbox requests, this person is the designated approver.
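One way to keep such an inventory current, sketched below under stated assumptions, is to enumerate subscription-level role assignments with the `azure-mgmt-authorization` package; `<subscription-id>` is a placeholder, and the Owner role definition GUID shown is the well-known built-in ID, which you should confirm in your own tenant.

```python
# Illustrative sketch: list which principals hold the Owner role at subscription scope.
from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient

SUBSCRIPTION_ID = "<subscription-id>"
OWNER_ROLE_ID = "8e3af657-a8ff-443c-a75c-2fe8c4bcb635"  # assumed built-in "Owner" role ID

client = AuthorizationManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)
scope = f"/subscriptions/{SUBSCRIPTION_ID}"

for assignment in client.role_assignments.list_for_scope(scope):
    role_id = (assignment.role_definition_id or "").lower()
    if role_id.endswith(OWNER_ROLE_ID):
        print(assignment.principal_id, "->", assignment.scope)
```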
-* [Understand custom roles](../../role-based-access-control/custom-roles.md)
+- [Understand custom roles](../../role-based-access-control/custom-roles.md)
-* [How to configure Azure RBAC for workbooks](../../sentinel/quickstart-get-visibility.md)
+- [How to configure Azure RBAC for workbooks](../../sentinel/quickstart-get-visibility.md)
-* [Understand access request permissions in Customer Lockbox](./customer-lockbox-overview.md)
-
-**Azure Security Center monitoring**: Yes
+- [Understand access request permissions in Customer Lockbox](customer-lockbox-overview.md)
**Responsibility**: Customer
-### 3.2: Change default passwords where applicable
+**Azure Security Center monitoring**: None
-**Guidance**: Azure Active Directory does not have the concept of default passwords. Other Azure resources requiring a password forces a password to be created with complexity requirements and a minimum password length, which differs depending on the service. You are responsible for third-party applications and marketplace services that may use default passwords.
+### 3.2: Change default passwords where applicable
-**Azure Security Center monitoring**: Not applicable
+**Guidance**: Azure Active Directory (Azure AD) does not have the concept of default passwords. Other Azure resources requiring a password force a password to be created with complexity requirements and a minimum password length, which differs depending on the service. You are responsible for third-party applications and marketplace services that may use default passwords.
**Responsibility**: Customer
+**Azure Security Center monitoring**: None
+
### 3.3: Use dedicated administrative accounts

**Guidance**: Create standard operating procedures around the use of dedicated administrative accounts. Use Azure Security Center Identity and Access Management to monitor the number of administrative accounts. Additionally, to help you keep track of dedicated administrative accounts, you may use recommendations from Azure Security Center or built-in Azure policies, such as:
+
- There should be more than one owner assigned to your subscription
- Deprecated accounts with owner permissions should be removed from your subscription
- External accounts with owner permissions should be removed from your subscription
-* [How to use Azure Security Center to monitor identity and access (Preview)](../../security-center/security-center-identity-access.md)
-
-* [How to use Azure Policy](../../governance/policy/tutorials/create-and-manage.md)
+- [How to use Azure Security Center to monitor identity and access (Preview)](../../security-center/security-center-identity-access.md)
-**Azure Security Center monitoring**: Yes
+- [How to use Azure Policy](../../governance/policy/tutorials/create-and-manage.md)
**Responsibility**: Customer
-### 3.4: Use single sign-on (SSO) with Azure Active Directory
+**Azure Security Center monitoring**: None
-**Guidance**: Not applicable; access to Customer Lockbox is through the Azure portal and reserved for accounts with the tenant role of Owner. Single sign-on is not supported.
+### 3.4: Use Azure Active Directory single sign-on (SSO)
-**Azure Security Center monitoring**: Not applicable
+**Guidance**: Not applicable; access to Customer Lockbox is through the Azure portal and reserved for accounts with the tenant role of Owner. Single sign-on is not supported.
**Responsibility**: Customer
-### 3.5: Use multi-factor authentication for all Azure Active Directory based access
+**Azure Security Center monitoring**: None
-**Guidance**: Enable Azure Active Directory Multi-Factor Authentication and follow Azure Security Center Identity and Access Management recommendations.
+### 3.5: Use multi-factor authentication for all Azure Active Directory-based access
-* [How to enable MFA in Azure](../../active-directory/authentication/howto-mfa-getstarted.md)
+**Guidance**: Enable Azure AD Multi-Factor Authentication and follow Azure Security Center Identity and Access Management recommendations.
-* [How to monitor identity and access within Azure Security Center](../../security-center/security-center-identity-access.md)
+- [How to enable Azure AD Multi-Factor Authentication](../../active-directory/authentication/howto-mfa-getstarted.md)
-**Azure Security Center monitoring**: Yes
+- [How to monitor identity and access within Azure Security Center](../../security-center/security-center-identity-access.md)
**Responsibility**: Customer
-### 3.6: Use dedicated machines (Privileged Access Workstations) for all administrative tasks
+**Azure Security Center monitoring**: None
-**Guidance**: Use a Privileged Access Workstation (PAW) with Azure AD Multi-Factor Authentication (MFA) enabled to log into and configure your Customer Lockbox requests.
+### 3.6: Use secure, Azure-managed workstations for administrative tasks
-* [Privileged Access Workstations](https://4sysops.com/archives/understand-the-microsoft-privileged-access-workstation-paw-security-model/)
+**Guidance**: Use a Privileged Access Workstation (PAW) with Azure AD Multi-Factor Authentication enabled to log into and configure your Customer Lockbox requests.
-* [Planning a cloud-based Azure AD Multi-Factor Authentication deployment](../../active-directory/authentication/howto-mfa-getstarted.md)
+- [Privileged Access Workstations](https://4sysops.com/archives/understand-the-microsoft-privileged-access-workstation-paw-security-model/)
-**Azure Security Center monitoring**: Not applicable
+- [Planning a cloud-based Azure AD Multi-Factor Authentication deployment](../../active-directory/authentication/howto-mfa-getstarted.md)
**Responsibility**: Customer
-### 3.7: Log and alert on suspicious activity from administrative accounts
+**Azure Security Center monitoring**: None
-**Guidance**: Use Azure Active Directory Privileged Identity Management (PIM) for generation of logs and alerts when suspicious or unsafe activity occurs in the environment.
+### 3.7: Log and alert on suspicious activities from administrative accounts
-In addition, use Azure Active Directory risk detections to view alerts and reports on risky user behavior.
+**Guidance**: Use Azure Active Directory (Azure AD) Privileged Identity Management (PIM) for generation of logs and alerts when suspicious or unsafe activity occurs in the environment.
-* [How to deploy Privileged Identity Management (PIM)](../../active-directory/privileged-identity-management/pim-deployment-plan.md)
+In addition, use Azure AD risk detections to view alerts and reports on risky user behavior.
-* [Understand Azure AD risk detections](../../active-directory/identity-protection/overview-identity-protection.md)
+- [How to deploy Privileged Identity Management (PIM)](../../active-directory/privileged-identity-management/pim-deployment-plan.md)
-**Azure Security Center monitoring**: Yes
+- [Understand Azure AD risk detections](../../active-directory/identity-protection/overview-identity-protection.md)
**Responsibility**: Customer
+**Azure Security Center monitoring**: None
+
### 3.8: Manage Azure resources from only approved locations

**Guidance**: Use Conditional Access named locations to allow access to the Azure portal from only specific logical groupings of IP address ranges or countries/regions.
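Named locations can also be managed through Microsoft Graph. The Python sketch below is a non-authoritative example that creates an IP-based named location with the `requests` and `azure-identity` packages; the payload shape, the `203.0.113.0/24` example range, and the assumption that the calling identity has the `Policy.ReadWrite.ConditionalAccess` permission should all be checked against the Microsoft Graph documentation before use.

```python
# Illustrative sketch: create a trusted, IP-based Conditional Access named location.
import requests
from azure.identity import DefaultAzureCredential

token = DefaultAzureCredential().get_token("https://graph.microsoft.com/.default").token

payload = {
    "@odata.type": "#microsoft.graph.ipNamedLocation",
    "displayName": "Approved corporate egress range (example)",
    "isTrusted": True,
    "ipRanges": [
        {"@odata.type": "#microsoft.graph.iPv4CidrRange", "cidrAddress": "203.0.113.0/24"}
    ],
}

resp = requests.post(
    "https://graph.microsoft.com/v1.0/identity/conditionalAccess/namedLocations",
    headers={"Authorization": f"Bearer {token}"},
    json=payload,
)
resp.raise_for_status()
print("Created named location:", resp.json()["id"])
```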
-* [How to configure named locations in Azure](../../active-directory/reports-monitoring/quickstart-configure-named-locations.md)
-
-**Azure Security Center monitoring**: Not applicable
+- [How to configure named locations in Azure](../../active-directory/reports-monitoring/quickstart-configure-named-locations.md)
**Responsibility**: Customer
-### 3.9: Use Azure Active Directory
+**Azure Security Center monitoring**: None
-**Guidance**: Use Azure Active Directory as the central authentication and authorization system where applicable. Azure Active Directory protects data by using strong encryption for data at rest and in transit. Azure Active Directory also salts, hashes, and securely stores user credentials.
+### 3.9: Use Azure Active Directory
-* [How to create and configure an Azure Active Directory instance](../../active-directory/fundamentals/active-directory-access-create-new-tenant.md)
+**Guidance**: Use Azure Active Directory (Azure AD) as the central authentication and authorization system where applicable. Azure AD protects data by using strong encryption for data at rest and in transit. Azure AD also salts, hashes, and securely stores user credentials.
-**Azure Security Center monitoring**: Not applicable
+- [How to create and configure an Azure AD instance](../../active-directory/fundamentals/active-directory-access-create-new-tenant.md)
**Responsibility**: Customer
-### 3.10: Regularly review and reconcile user access
+**Azure Security Center monitoring**: None
-**Guidance**: Azure Active Directory provides logs to help you discover stale accounts. In addition, use Azure Active Directory access reviews to efficiently manage group memberships, access to enterprise applications, and role assignments. User access can be reviewed on a regular basis to make sure only the right users have continued access.
+### 3.10: Regularly review and reconcile user access
-* [Understand Azure Active Directory reporting](../../active-directory/reports-monitoring/index.yml)
+**Guidance**: Azure Active Directory (Azure AD) provides logs to help you discover stale accounts. In addition, use Azure AD access reviews to efficiently manage group memberships, access to enterprise applications, and role assignments. User access can be reviewed on a regular basis to make sure only the right users have continued access.
-* [How to use Azure Active Directory access reviews](../../active-directory/governance/access-reviews-overview.md)
+- [Understand Azure AD reporting](/azure/active-directory/reports-monitoring/)
-**Azure Security Center monitoring**: Yes
+- [How to use Azure AD access reviews](../../active-directory/governance/access-reviews-overview.md)
**Responsibility**: Customer
-### 3.11: Monitor attempts to access deactivated accounts
+**Azure Security Center monitoring**: None
-**Guidance**: Use Azure Active Directory as the central authentication and authorization system where applicable. Azure Active Directory protects data by using strong encryption for data at rest and in transit. Azure Active Directory also salts, hashes, and securely stores user credentials.
+### 3.11: Monitor attempts to access deactivated credentials
-You have access to Azure Active Directory sign-in activity, audit and risk event log sources, which allow you to integrate with Azure Sentinel or a third-party SIEM.
+**Guidance**: Use Azure Active Directory (Azure AD) as the central authentication and authorization system where applicable. Azure AD protects data by using strong encryption for data at rest and in transit. Azure AD also salts, hashes, and securely stores user credentials.
-You can streamline this process by creating diagnostic settings for Azure Active Directory user accounts and sending the audit logs and sign-in logs to a Log Analytics workspace. You can configure desired log alerts within Log Analytics.
+You have access to Azure AD sign-in activity, audit and risk event log sources, which allow you to integrate with Azure Sentinel or a third-party SIEM.
-* [How to integrate Azure activity logs into Azure Monitor](../../active-directory/reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md)
+You can streamline this process by creating diagnostic settings for Azure AD user accounts and sending the audit logs and sign-in logs to a Log Analytics workspace. You can configure desired log alerts within Log Analytics.
-* [How to on-board Azure Sentinel](../../sentinel/quickstart-onboard.md)
+- [How to integrate Azure activity logs into Azure Monitor](/azure/active-directory/reports-monitoring/howto-integrate-activity-logs-with-log-analytics)
-**Azure Security Center monitoring**: Not applicable
+- [How to on-board Azure Sentinel](../../sentinel/quickstart-onboard.md)
**Responsibility**: Customer
-### 3.12: Alert on account login behavior deviation
+**Azure Security Center monitoring**: None
-**Guidance**: For account login behavior deviation on the control plane (e.g. Azure portal), use Azure Active Directory identity protection and risk detection features to configure automated responses to detected suspicious actions related to user identities. You can also ingest data into Azure Sentinel for further investigation.
+### 3.12: Alert on account sign-in behavior deviation
-* [How to view Azure Active Directory risky sign-in](../../active-directory/identity-protection/overview-identity-protection.md)
+**Guidance**: For account login behavior deviation on the control plane (e.g. Azure portal), use Azure Active Directory (Azure AD) identity protection and risk detection features to configure automated responses to detected suspicious actions related to user identities. You can also ingest data into Azure Sentinel for further investigation.
-* [How to configure and enable identity protection risk policies](../../active-directory/identity-protection/howto-identity-protection-configure-risk-policies.md)
+- [How to view Azure AD risky sign-in](../../active-directory/identity-protection/overview-identity-protection.md)
-* [How to onboard Azure Sentinel](../../sentinel/quickstart-onboard.md)
+- [How to configure and enable identity protection risk policies](../../active-directory/identity-protection/howto-identity-protection-configure-risk-policies.md)
-**Azure Security Center monitoring**: Currently not available
+- [How to onboard Azure Sentinel](../../sentinel/quickstart-onboard.md)
**Responsibility**: Customer
-### 3.13: Provide Microsoft with access to relevant customer data during support scenarios
-
-**Guidance**: This recommendation is not applicable to Customer Lockbox.
-
-**Azure Security Center monitoring**: Not applicable
-
-**Responsibility**: Not applicable
-
-## Data protection
-
-*For more information, see [Security control: Data protection](../benchmarks/security-control-data-protection.md).*
-
-### 4.1: Maintain an inventory of sensitive Information
-
-**Guidance**: This recommendation is not applicable; tags are not supported for Customer Lockbox.
-
-**Azure Security Center monitoring**: Not applicable
-
-**Responsibility**: Not applicable
-
-### 4.2: Isolate systems storing or processing sensitive information
-
-**Guidance**: Not applicable; Customer Lockbox will be provisioned in the same subscription as the resources that you are granting access to. There is no public endpoint to protect or isolate. Customer Lockbox request access is granted to the user who holds the Owner role at the tenant level.
-
-* [Understand Customer Lockbox workflow](./customer-lockbox-overview.md)
+**Azure Security Center monitoring**: None
-**Azure Security Center monitoring**: Not applicable
+## Data Protection
-**Responsibility**: Not applicable
+*For more information, see the [Azure Security Benchmark: Data Protection](../benchmarks/security-control-data-protection.md).*
-### 4.3: Monitor and block unauthorized transfer of sensitive information
-
-**Guidance**: Microsoft manages the underlying infrastructure for Customer Lockbox and has implemented strict controls to prevent the loss or exposure of customer data.
-
-* [Understand customer data protection in Azure](./protection-customer-data.md)
-
-**Azure Security Center monitoring**: Not applicable
-
-**Responsibility**: Microsoft
-
-### 4.4: Encrypt all sensitive information in transit
-
-**Guidance**: By default, Microsoft uses the Transport Layer Security (TLS) protocol to protect data when it's traveling between the cloud services and customers. Microsoft datacenters negotiate a TLS connection with client systems that connect to Azure services. TLS provides strong authentication, message privacy, and integrity (enabling detection of message tampering, interception, and forgery), interoperability, algorithm flexibility, and ease of deployment and use.
-
-* [Understand encryption in transit with Azure](./encryption-overview.md#encryption-of-data-in-transit)
-
-**Azure Security Center monitoring**: Currently not available
-
-**Responsibility**: Microsoft
-
-### 4.5: Use an active discovery tool to identify sensitive data
-
-**Guidance**: Not applicable; Customer Lockbox itself does not hold any customer data.
-
-**Azure Security Center monitoring**: Not applicable
-
-**Responsibility**: Not applicable
-
-### 4.6: Use Azure RBAC to control access to resources
+### 4.6: Use Role-based access control to control access to resources
**Guidance**: Customer Lockbox request approval is granted to the user who holds the Owner role at the tenant level.
-* [Understand Customer Lockbox workflow](./customer-lockbox-overview.md)
-
-**Azure Security Center monitoring**: Not applicable
+- [Understand Customer Lockbox workflow](customer-lockbox-overview.md)
**Responsibility**: Customer
-### 4.7: Use host-based data loss prevention to enforce access control
-
-**Guidance**: Not applicable; this recommendation is intended for compute resources. Microsoft manages the underlying infrastructure for Customer Lockbox and has implemented strict controls to prevent the loss or exposure of customer data.
-
-* [Azure customer data protection](./protection-customer-data.md)
-
-**Azure Security Center monitoring**: Not applicable
-
-**Responsibility**: Not applicable
-
-### 4.8: Encrypt sensitive information at rest
-
-**Guidance**: Not applicable; Customer Lockbox itself does not hold customer data.
-
-**Azure Security Center monitoring**: Not applicable
-
-**Responsibility**: Not applicable
+**Azure Security Center monitoring**: None
### 4.9: Log and alert on changes to critical Azure resources

**Guidance**: Audit logs for Customer Lockbox are automatically enabled and maintained in Azure Activity Logs. Use the Azure Activity Log to monitor and detect changes to Azure Customer Lockbox resources. Create alerts within Azure Monitor that will trigger when changes to critical resources take place.
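Before wiring up alerts, it can help to confirm what the Activity Log actually records for Customer Lockbox. The sketch below, offered only as an example, pulls recent events with the `azure-mgmt-monitor` package; `<subscription-id>` is a placeholder and the `resourceProvider` filter value is an assumption to validate against the entries in your own log.

```python
# Illustrative sketch: list recent Activity Log events for the Customer Lockbox provider.
from datetime import datetime, timedelta, timezone

from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient

client = MonitorManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Activity Log queries require a time window; look back one week.
start = (datetime.now(timezone.utc) - timedelta(days=7)).strftime("%Y-%m-%dT%H:%M:%SZ")
odata_filter = (
    f"eventTimestamp ge '{start}' and resourceProvider eq 'Microsoft.CustomerLockbox'"
)

for event in client.activity_logs.list(filter=odata_filter):
    print(event.event_timestamp, event.operation_name.value, event.status.value)
```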
-* [How to enable auditing in Customer Lockbox](./customer-lockbox-overview.md)
-
-* [How to view and retrieve Azure Activity Log events](../../azure-monitor/essentials/activity-log.md#view-the-activity-log)
+- [How to enable auditing in Customer Lockbox](customer-lockbox-overview.md)
-* [How to create alerts in Azure Monitor](../../azure-monitor/alerts/alerts-activity-log.md)
+- [How to view and retrieve Azure Activity Log events](https://docs.microsoft.com/azure/azure-monitor/essentials/activity-log#view-the-activity-log)
-**Azure Security Center monitoring**: Yes
+- [How to create alerts in Azure Monitor](../../azure-monitor/alerts/alerts-activity-log.md)
**Responsibility**: Customer
-## Vulnerability management
-
-*For more information, see [Security control: Vulnerability management](../benchmarks/security-control-vulnerability-management.md).*
-
-### 5.1: Run automated vulnerability scanning tools
-
-**Guidance**: Not applicable; Microsoft performs vulnerability management on the underlying systems that support Customer Lockbox.
-
-**Azure Security Center monitoring**: Not applicable
-
-**Responsibility**: Microsoft
-
-### 5.2: Deploy automated operating system patch management solution
-
-**Guidance**: Not applicable; this recommendation is intended for compute resources.
-
-**Azure Security Center monitoring**: Not applicable
+**Azure Security Center monitoring**: None
-**Responsibility**: Not applicable
+## Inventory and Asset Management
-### 5.3: Deploy automated third-party software patch management solution
+*For more information, see the [Azure Security Benchmark: Inventory and Asset Management](../benchmarks/security-control-inventory-asset-management.md).*
-**Guidance**: Not applicable; this recommendation is intended for compute resources.
-
-**Azure Security Center monitoring**: Not applicable
-
-**Responsibility**: Not applicable
-
-### 5.4: Compare back-to-back vulnerability scans
-
-**Guidance**: Not applicable; Microsoft performs vulnerability management on the underlying systems that support Customer Lockbox.
-
-**Azure Security Center monitoring**: Not applicable
-
-**Responsibility**: Not applicable
-
-### 5.5: Use a risk-rating process to prioritize the remediation of discovered vulnerabilities
-
-**Guidance**: Not applicable; Microsoft performs vulnerability management on the underlying systems that support Customer Lockbox.
-
-**Azure Security Center monitoring**: Not applicable
-
-**Responsibility**: Not applicable
-
-## Inventory and asset management
-
-*For more information, see [Security control: Inventory and asset management](../benchmarks/security-control-inventory-asset-management.md).*
-
-### 6.1: Use Azure Asset Discovery
+### 6.1: Use automated asset discovery solution
**Guidance**: Use Azure Resource Graph to query/discover all resources (such as compute, storage, network, ports, and protocols) within your subscription(s). Ensure appropriate (read) permissions in your tenant and enumerate all Azure subscriptions as well as resources within your subscriptions. Although classic Azure resources may be discovered via Azure Resource Graph, it is highly recommended that you create and use Azure Resource Manager resources.
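As a sketch of this approach, the following Python example runs a Resource Graph query across a subscription with the `azure-mgmt-resourcegraph` package; `<subscription-id>` is a placeholder, and the identity used needs at least read access to the subscription.

```python
# Illustrative sketch: enumerate resources in a subscription with Azure Resource Graph.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resourcegraph import ResourceGraphClient
from azure.mgmt.resourcegraph.models import QueryRequest, QueryRequestOptions

client = ResourceGraphClient(DefaultAzureCredential())

request = QueryRequest(
    subscriptions=["<subscription-id>"],
    query="Resources | project name, type, location | order by type asc",
    options=QueryRequestOptions(result_format="objectArray"),
)

result = client.resources(request)
for item in result.data:  # each item is a dict with the projected columns
    print(item["type"], item["name"], item["location"])
```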
-* [How to create queries with Azure Resource Graph](../../governance/resource-graph/first-query-portal.md)
+- [How to create queries with Azure Resource Graph](../../governance/resource-graph/first-query-portal.md)
-* [How to view your Azure Subscriptions](/powershell/module/az.accounts/get-azsubscription)
+- [How to view your Azure Subscriptions](/powershell/module/az.accounts/get-azsubscription)
-* [Understand Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md)
-
-**Azure Security Center monitoring**: Not applicable
+- [Understand Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md)
**Responsibility**: Customer
-### 6.2: Maintain asset metadata
-
-**Guidance**: Tags are not supported for Customer Lockbox.
-
-**Azure Security Center monitoring**: Not applicable
-
-**Responsibility**: Not applicable
+**Azure Security Center monitoring**: None
### 6.3: Delete unauthorized Azure resources

**Guidance**: Use tagging, management groups, and separate subscriptions, where appropriate, to organize and track Azure resources. Reconcile inventory on a regular basis and ensure unauthorized resources are deleted from the subscription in a timely manner. In addition, use Azure Policy to put restrictions on the type of resources that can be created in customer subscription(s) using the following built-in policy definitions:
+
- Not allowed resource types
- Allowed resource types
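As a rough illustration only, the sketch below assigns such a built-in definition at subscription scope with the `azure-mgmt-resource` package; `<subscription-id>` and `<policy-definition-id>` are placeholders, and the `listOfResourceTypesNotAllowed` parameter name and example resource type are assumptions to verify against the built-in "Not allowed resource types" definition before use.

```python
# Illustrative sketch: assign a "not allowed resource types" style policy at subscription scope.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import PolicyClient

SUBSCRIPTION_ID = "<subscription-id>"
scope = f"/subscriptions/{SUBSCRIPTION_ID}"

client = PolicyClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

assignment = {
    "display_name": "Block unauthorized resource types (example)",
    "policy_definition_id": (
        "/providers/Microsoft.Authorization/policyDefinitions/<policy-definition-id>"
    ),
    "parameters": {
        # Parameter name assumed from the built-in definition; adjust to match it.
        "listOfResourceTypesNotAllowed": {
            "value": ["Microsoft.ClassicCompute/virtualMachines"]
        }
    },
}

result = client.policy_assignments.create(scope, "deny-unauthorized-types", assignment)
print(result.id)
```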
-* [How to create additional Azure subscriptions](../../cost-management-billing/manage/create-subscription.md)
+For more information, see the following references:
-* [How to create Management Groups](../../governance/management-groups/create-management-group-portal.md)
+- [How to create additional Azure subscriptions](../../cost-management-billing/manage/create-subscription.md)
-* [How to create and use tags](../../azure-resource-manager/management/tag-resources.md)
+- [How to create Management Groups](../../governance/management-groups/create-management-group-portal.md)
-**Azure Security Center monitoring**: Not applicable
+- [How to create and use tags](../../azure-resource-manager/management/tag-resources.md)
**Responsibility**: Customer
-### 6.4: Maintain an inventory of approved Azure resources and software titles
-
-**Guidance**: Not applicable; this recommendation is intended for compute resources.
-
-**Azure Security Center monitoring**: Not applicable
-
-**Responsibility**: Not applicable
+**Azure Security Center monitoring**: None
### 6.5: Monitor for unapproved Azure resources
-**Guidance**: Use Azure Policy to put restrictions on the type of resources that can be created in your subscription(s).
+**Guidance**: Use Azure Policy to put restrictions on the type of resources that can be created in your subscription(s).
Use Azure Resource Graph to query/discover resources within your subscription(s). Ensure that all Azure resources present in the environment are approved.
-* [How to configure and manage Azure Policy](../../governance/policy/tutorials/create-and-manage.md)
+- [How to configure and manage Azure Policy](../../governance/policy/tutorials/create-and-manage.md)
-* [How to create queries with Azure Resource Graph](../../governance/resource-graph/first-query-portal.md)
-
-**Azure Security Center monitoring**: Not applicable
+- [How to create queries with Azure Resource Graph](../../governance/resource-graph/first-query-portal.md)
**Responsibility**: Customer
-### 6.6: Monitor for unapproved software applications within compute resources
-
-**Guidance**: Not applicable; this recommendation is intended for compute resources.
-
-**Azure Security Center monitoring**: Not applicable
-
-**Responsibility**: Not applicable
-
-### 6.7: Remove unapproved Azure resources and software applications
-
-**Guidance**: Not applicable; this recommendation is intended for compute resources.
-
-**Azure Security Center monitoring**: Not applicable
-
-**Responsibility**: Not applicable
-
-### 6.8: Use only approved applications
-
-**Guidance**: Not applicable; this recommendation is intended for compute resources.
-
-**Azure Security Center monitoring**: Not applicable
-
-**Responsibility**: Not applicable
+**Azure Security Center monitoring**: None
### 6.9: Use only approved Azure services
Use Azure Resource Graph to query/discover resources within their subscription(s
- Not allowed resource types
- Allowed resource types
-* [How to configure and manage Azure Policy](../../governance/policy/tutorials/create-and-manage.md)
+For more information, see the following references:
-* [How to deny a specific resource type with Azure Policy](../../governance/policy/samples/index.md)
+- [How to configure and manage Azure Policy](../../governance/policy/tutorials/create-and-manage.md)
-**Azure Security Center monitoring**: Not applicable
+- [How to deny a specific resource type with Azure Policy](https://docs.microsoft.com/azure/governance/policy/samples/built-in-policies#general)
**Responsibility**: Customer
-### 6.10: Implement approved application list
+**Azure Security Center monitoring**: None
-**Guidance**: Not applicable; this recommendation is intended for compute resources.
-
-**Azure Security Center monitoring**: Not applicable
-
-**Responsibility**: Not applicable
-
-### 6.11: Limit users' ability to interact with AzureResources Manager via scripts
+### 6.11: Limit users' ability to interact with Azure Resource Manager
**Guidance**: Configure Azure Conditional Access to limit users' ability to interact with Azure Resource Manager by configuring "Block access" for the "Microsoft Azure Management" App.
-* [How to configure Conditional Access to block access to Azure Resource Manager](../../role-based-access-control/conditional-access-azure-management.md)
-
-**Azure Security Center monitoring**: Not applicable
+- [How to configure Conditional Access to block access to Azure Resource Manager](../../role-based-access-control/conditional-access-azure-management.md)
**Responsibility**: Customer
-### 6.12: Limit users' ability to execute scripts within compute resources
-
-**Guidance**: Not applicable; this recommendation is intended for compute resources.
-
-**Azure Security Center monitoring**: Not applicable
-
-**Responsibility**: Not applicable
-
-### 6.13: Physically or logically segregate high risk applications
-
-**Guidance**: Not applicable; this recommendation is intended for web applications running on Azure App Service or compute resources.
-
-**Azure Security Center monitoring**: Not applicable
-
-**Responsibility**: Not applicable
-
-## Secure configuration
-
-*For more information, see [Security control: Secure configuration](../benchmarks/security-control-secure-configuration.md).*
-
-### 7.1: Establish secure configurations for all Azure resources
-
-**Guidance**: Not applicable, Customer Lockbox does not have configurable security settings.
-
-**Azure Security Center monitoring**: Not applicable
-
-**Responsibility**: Not applicable
-
-### 7.2: Establish secure operating system configurations
-
-**Guidance**: Not applicable; this guideline is intended for compute resources.
-
-**Azure Security Center monitoring**: Not applicable
-
-**Responsibility**: Not applicable
-
-### 7.3: Maintain secure Azure resource configurations
-
-**Guidance**: Not applicable, Customer Lockbox does not have configurable security settings.
-
-**Azure Security Center monitoring**: Not applicable
-
-**Responsibility**: Not applicable
-
-### 7.4: Maintain secure operating system configurations
-
-**Guidance**: Not applicable; this guideline is intended for compute resources.
-
-**Azure Security Center monitoring**: Not applicable
-
-**Responsibility**: Not applicable
-
-### 7.5: Securely store configuration of Azure resources
-
-**Guidance**: Not applicable; Customer Lockbox does not have configurable security settings.
-
-**Azure Security Center monitoring**: Not applicable
-
-**Responsibility**: Not applicable
+**Azure Security Center monitoring**: None
-### 7.6: Securely store custom operating system images
+## Secure Configuration
-**Guidance**: Not applicable; this guideline is intended for compute resources.
-
-**Azure Security Center monitoring**: Not applicable
-
-**Responsibility**: Not applicable
-
-### 7.7: Deploy system configuration management tools
-
-**Guidance**: Not applicable; Customer Lockbox does not have configurable security settings.
-
-**Azure Security Center monitoring**: Not applicable
-
-**Responsibility**: Not applicable
-
-### 7.8: Deploy system configuration management tools for operating systems
-
-**Guidance**: Not applicable; this guideline is intended for compute resources.
-
-**Azure Security Center monitoring**: Not applicable
-
-**Responsibility**: Not applicable
-
-### 7.9: Implement automated configuration monitoring for Azure services
-
-**Guidance**: Not applicable; Customer Lockbox does not have configurable security settings.
-
-**Azure Security Center monitoring**: Not applicable
-
-**Responsibility**: Not applicable
-
-### 7.10: Implement automated configuration monitoring for operating systems
-
-**Guidance**: Not applicable; this guideline is intended for compute resources.
-
-**Azure Security Center monitoring**: Not applicable
-
-**Responsibility**: Not applicable
-
-### 7.11: Manage Azure secrets securely
-
-**Guidance**: Not applicable; access to Customer Lockbox requests are limited to the owner of the Azure subscription which houses the resource. There are no passwords, secrets, or keys required to access Customer Lockbox outside of logging in as the tenant owner.
-
-**Azure Security Center monitoring**: Not applicable
-
-**Responsibility**: Not applicable
-
-### 7.12: Manage identities securely and automatically
-
-**Guidance**: Not applicable; Customer Lockbox does not make use of managed identities.
-
-* [Azure services that support managed identities](../../active-directory/managed-identities-azure-resources/services-support-managed-identities.md)
-
-**Azure Security Center monitoring**: Not applicable
-
-**Responsibility**: Not applicable
+*For more information, see the [Azure Security Benchmark: Secure Configuration](../benchmarks/security-control-secure-configuration.md).*
### 7.13: Eliminate unintended credential exposure

**Guidance**: Implement Credential Scanner to identify credentials within code. Credential Scanner will also encourage moving discovered credentials to more secure locations such as Azure Key Vault.
-* [How to setup Credential Scanner](https://secdevtools.azurewebsites.net/helpcredscan.html)
-
-**Azure Security Center monitoring**: Not applicable
+- [How to setup Credential Scanner](https://secdevtools.azurewebsites.net/helpcredscan.html)
**Responsibility**: Customer
-## Malware defense
-
-*For more information, see [Security control: Malware defense](../benchmarks/security-control-malware-defense.md).*
+**Azure Security Center monitoring**: None
-### 8.1: Use centrally managed anti-malware software
+## Malware Defense
-**Guidance**: Not applicable; this guideline is intended for compute resources. Microsoft Antimalware is enabled on the underlying host that supports the Customer Lockbox solution.
-
-**Azure Security Center monitoring**: Not applicable
-
-**Responsibility**: Not applicable
+*For more information, see the [Azure Security Benchmark: Malware Defense](../benchmarks/security-control-malware-defense.md).*
### 8.2: Pre-scan files to be uploaded to non-compute Azure resources
Use Azure Resource Graph to query/discover resources within their subscription(s
It is your responsibility to pre-scan any content being uploaded to non-compute Azure resources. Microsoft cannot access customer data, and therefore cannot conduct anti-malware scans of customer content on your behalf.
-**Azure Security Center monitoring**: Not applicable
- **Responsibility**: Customer
-### 8.3: Ensure anti-malware software and signatures are updated
-
-**Guidance**: Not applicable; this recommendation is intended for compute resources. Microsoft Antimalware is enabled on the underlying host that supports Azure services, however it does not run on customer content.
+**Azure Security Center monitoring**: None
-**Azure Security Center monitoring**: Not applicable
+## Incident Response
-**Responsibility**: Not applicable
-
-## Data recovery
-
-*For more information, see [Security control: Data recovery](../benchmarks/security-control-data-recovery.md).*
-
-### 9.1: Ensure regular automated back ups
-
-**Guidance**: Not applicable; Customer Lockbox itself does not store customer data.
-
-**Azure Security Center monitoring**: Not applicable
-
-**Responsibility**: Not applicable
-
-### 9.2: Perform complete system backups and backup any customer managed keys
-
-**Guidance**: Not applicable; Customer Lockbox itself does not store customer data.
-
-**Azure Security Center monitoring**: Not applicable
-
-**Responsibility**: Not applicable
-
-### 9.3: Validate all backups including customer managed keys
-
-**Guidance**: Not applicable; Customer Lockbox itself does not store customer data.
-
-**Azure Security Center monitoring**: Not applicable
-
-**Responsibility**: Not applicable
-
-### 9.4: Ensure protection of backups and customer managed keys
-
-**Guidance**: Not applicable; Customer Lockbox itself does not store customer data, it also does not use keys or passwords for access.
-
-**Azure Security Center monitoring**: Not applicable
-
-**Responsibility**: Not applicable
-
-## Incident response
-
-*For more information, see [Security control: Incident response](../benchmarks/security-control-incident-response.md).*
+*For more information, see the [Azure Security Benchmark: Incident Response](../benchmarks/security-control-incident-response.md).*
### 10.1: Create an incident response guide

**Guidance**: Build out an incident response guide for your organization. Ensure that there are written incident response plans that define all roles of personnel as well as phases of incident handling/management from detection to post-incident review.
-* [How to configure Workflow Automations within Azure Security Center](../../security-center/security-center-planning-and-operations-guide.md)
+- [How to configure Workflow Automations within Azure Security Center](../../security-center/security-center-planning-and-operations-guide.md)
-* [Guidance on building your own security incident response process](https://msrc-blog.microsoft.com/2019/07/01/inside-the-msrc-building-your-own-security-incident-response-process/)
+- [Guidance on building your own security incident response process](https://msrc-blog.microsoft.com/2019/07/01/inside-the-msrc-building-your-own-security-incident-response-process/)
-* [Microsoft Security Response Center's Anatomy of an Incident](https://msrc-blog.microsoft.com/2019/07/01/inside-the-msrc-building-your-own-security-incident-response-process/)
+- [Microsoft Security Response Center's Anatomy of an Incident](https://msrc-blog.microsoft.com/2019/07/01/inside-the-msrc-building-your-own-security-incident-response-process/)
-* [Customer may also leverage NIST's Computer Security Incident Handling Guide to aid in the creation of their own incident response plan](https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-61r2.pdf)
-
-**Azure Security Center monitoring**: Not applicable
+- [Customer may also leverage NIST's Computer Security Incident Handling Guide to aid in the creation of their own incident response plan](https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-61r2.pdf)
**Responsibility**: Customer
+**Azure Security Center monitoring**: None
+
### 10.2: Create an incident scoring and prioritization procedure
-**Guidance**: Security Center assigns a severity to each alert to help you prioritize which alerts should be investigated first. The severity is based on how confident Security Center is in the finding or the analytic used to issue the alert as well as the confidence level that there was malicious intent behind the activity that led to the alert.
+**Guidance**: Security Center assigns a severity to each alert to help you prioritize which alerts should be investigated first. The severity is based on how confident Security Center is in the finding or the metric used to issue the alert as well as the confidence level that there was malicious intent behind the activity that led to the alert.
Additionally, clearly mark subscriptions (for example, production and non-production) and create a naming system to clearly identify and categorize Azure resources.
-**Azure Security Center monitoring**: Yes
- **Responsibility**: Customer
+**Azure Security Center monitoring**: None
+
### 10.3: Test security response procedures

**Guidance**: Conduct exercises to test your systems' incident response capabilities on a regular cadence. Identify weak points and gaps, and revise the plan as needed.
-* [Refer to NIST's publication: Guide to Test, Training, and Exercise Programs for IT Plans and Capabilities](https://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-84.pdf)
-
-**Azure Security Center monitoring**: Not applicable
+- [Refer to NIST's publication: Guide to Test, Training, and Exercise Programs for IT Plans and Capabilities](https://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-84.pdf)
**Responsibility**: Customer
+**Azure Security Center monitoring**: None
+
### 10.4: Provide security incident contact details and configure alert notifications for security incidents

**Guidance**: Security incident contact information will be used by Microsoft to contact you if the Microsoft Security Response Center (MSRC) discovers that the customer's data has been accessed by an unlawful or unauthorized party. Review incidents after the fact to ensure that issues are resolved.
-* [How to set the Azure Security Center Security Contact](../../security-center/security-center-provide-security-contact-details.md)
-
-**Azure Security Center monitoring**: Yes
+- [How to set the Azure Security Center Security Contact](../../security-center/security-center-provide-security-contact-details.md)
**Responsibility**: Customer
+**Azure Security Center monitoring**: None
+
### 10.5: Incorporate security alerts into your incident response system

**Guidance**: Export your Azure Security Center alerts and recommendations using the Continuous Export feature. Continuous Export allows you to export alerts and recommendations either manually or in an ongoing, continuous fashion. You may use the Azure Security Center data connector to stream the alerts to Azure Sentinel.
-* [How to configure continuous export](../../security-center/continuous-export.md)
+- [How to configure continuous export](../../security-center/continuous-export.md)
-* [How to stream alerts into Azure Sentinel](../../sentinel/connect-azure-security-center.md)
-
-**Azure Security Center monitoring**: Not applicable
+- [How to stream alerts into Azure Sentinel](../../sentinel/connect-azure-security-center.md)
**Responsibility**: Customer
+**Azure Security Center monitoring**: None
+
### 10.6: Automate the response to security alerts

**Guidance**: Use the Workflow Automation feature in Azure Security Center to automatically trigger responses via "Logic Apps" on security alerts and recommendations.
-* [How to configure Workflow Automation and Logic Apps](../../security-center/workflow-automation.md)
-
-**Azure Security Center monitoring**: Not applicable
+- [How to configure Workflow Automation and Logic Apps](../../security-center/workflow-automation.md)
**Responsibility**: Customer
-## Penetration tests and red team exercises
+**Azure Security Center monitoring**: None
-*For more information, see [Security control: Penetration tests and red team exercises](../benchmarks/security-control-penetration-tests-red-team-exercises.md).*
+## Penetration Tests and Red Team Exercises
-### 11.1: Conduct regular penetration testing of your Azure resources and ensure remediation of all critical security findings within 60 days
+*For more information, see the [Azure Security Benchmark: Penetration Tests and Red Team Exercises](../benchmarks/security-control-penetration-tests-red-team-exercises.md).*
-**Guidance**:
+### 11.1: Conduct regular penetration testing of your Azure resources and ensure remediation of all critical security findings
-* [Follow the Microsoft Rules of Engagement to ensure your Penetration Tests are not in violation of Microsoft policies](https://www.microsoft.com/msrc/pentest-rules-of-engagement?rtc=1)
+**Guidance**: Follow the Microsoft Cloud Penetration Testing Rules of Engagement to ensure your penetration tests are not in violation of Microsoft policies. Use Microsoft's strategy and execution of Red Teaming and live site penetration testing against Microsoft-managed cloud infrastructure, services, and applications.
-* [You can find more information on Microsoft's strategy and execution of Red Teaming and live site penetration testing against Microsoft-managed cloud infrastructure, services, and applications, here](https://gallery.technet.microsoft.com/Cloud-Red-Teaming-b837392e)
+- [Penetration Testing Rules of Engagement](https://www.microsoft.com/msrc/pentest-rules-of-engagement?rtc=1)
-**Azure Security Center monitoring**: Not applicable
+- [Microsoft Cloud Red Teaming](https://gallery.technet.microsoft.com/Cloud-Red-Teaming-b837392e)
**Responsibility**: Shared
+**Azure Security Center monitoring**: None
+
## Next steps

-- See the [Azure security benchmark](../benchmarks/overview.md)
-- Learn more about [Azure security baselines](../benchmarks/security-baselines-overview.md)
+- See the [Azure Security Benchmark V2 overview](/azure/security/benchmarks/overview)
+- Learn more about [Azure security baselines](/azure/security/benchmarks/security-baselines-overview)
service-fabric