Updates from: 02/01/2021 04:05:36
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/analytics-with-application-insights https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/analytics-with-application-insights.md
@@ -1,7 +1,7 @@
Title: Track user behavior with Application Insights
-description: Learn how to enable event logs in Application Insights from Azure AD B2C user journeys by using custom policies.
+description: Learn how to enable event logs in Application Insights from Azure AD B2C user journeys.
@@ -9,31 +9,44 @@
Previously updated : 04/05/2020 Last updated : 01/29/2021 -
+zone_pivot_groups: b2c-policy-type
# Track user behavior in Azure Active Directory B2C using Application Insights
-[!INCLUDE [active-directory-b2c-public-preview](../../includes/active-directory-b2c-public-preview.md)]
+[!INCLUDE [active-directory-b2c-choose-user-flow-or-custom-policy](../../includes/active-directory-b2c-choose-user-flow-or-custom-policy.md)]
+
+::: zone pivot="b2c-user-flow"
+
+[!INCLUDE [active-directory-b2c-limited-to-custom-policy](../../includes/active-directory-b2c-limited-to-custom-policy.md)]
+
+::: zone-end
+
+::: zone pivot="b2c-custom-policy"
-Azure Active Directory B2C (Azure AD B2C) supports sending event data directly to [Application Insights](../azure-monitor/app/app-insights-overview.md) by using the instrumentation key provided to Azure AD B2C. With an Application Insights technical profile, you can get detailed and customized event logs for your user journeys to:
+Azure Active Directory B2C (Azure AD B2C) supports sending event data directly to [Application Insights](../azure-monitor/app/app-insights-overview.md) by using the instrumentation key provided to Azure AD B2C. With an Application Insights technical profile, you can get detailed and customized event logs for your user journeys to:
* Gain insights on user behavior. * Troubleshoot your own policies in development or in production. * Measure performance. * Create notifications from Application Insights.
-## How it works
+## Overview
-The [Application Insights](application-insights-technical-profile.md) technical profile defines an event from Azure AD B2C. The profile specifies the name of the event, the claims that are recorded, and the instrumentation key. To post an event, the technical profile is then added as an orchestration step in a [user journey](userjourneys.md).
+To enable custom event logs, you add an Application Insights technical profile. In the technical profile, you define the Application Insights instrumentation key, event name, and the claims to record. To post an event, the technical profile is then added as an orchestration step in a [user journey](userjourneys.md).
-Application Insights can unify the events by using a correlation ID to record a user session. Application Insights makes the event and session available within seconds and presents many visualization, export, and analytical tools.
+When using Application Insights, consider the following:
+
+- There is a short delay, typically less than five minutes, before new logs are available in Application Insights.
+- Azure AD B2C lets you choose the claims to be recorded. Don't include claims that contain personal data.
+- To record a user session, events can be unified by using a correlation ID.
+- Call the Application Insights technical profile directly from a [user journey](userjourneys.md) or a [sub journey](subjourneys.md). Don't use the Application Insights technical profile as a [validation technical profile](validation-technical-profile.md).
## Prerequisites
-Complete the steps in [Get started with custom policies](custom-policy-get-started.md). You should have a working custom policy for sign-up and sign-in with local accounts.
+[!INCLUDE [active-directory-b2c-customization-prerequisites-custom-policy](../../includes/active-directory-b2c-customization-prerequisites-custom-policy.md)]
## Create an Application Insights resource
@@ -98,11 +111,11 @@ A claim provides a temporary storage of data during an Azure AD B2C policy execu
## Add new technical profiles
-Technical profiles can be considered functions in the Identity Experience Framework of Azure AD B2C. This table defines the technical profiles that are used to open a session and post events.
+Technical profiles can be considered functions in the custom policy. This table defines the technical profiles that are used to open a session and post events. The solution uses the [technical profile inclusion](technicalprofiles.md#include-technical-profile) approach, where one technical profile includes another technical profile to change settings or add new functionality.
| Technical Profile | Task |
| -- | -- |
-| AppInsights-Common | The common set of parameters to be included in all Azure Insights technical profiles. |
+| AppInsights-Common | The common technical profile with the shared configuration: the Application Insights instrumentation key, the collection of claims to record, and the developer mode. The technical profiles that follow include the common technical profile and add more claims, such as the event name. |
| AppInsights-SignInRequest | Records a `SignInRequest` event with a set of claims when a sign-in request has been received. |
| AppInsights-UserSignUp | Records a `UserSignUp` event when the user triggers the sign-up option in a sign-up/sign-in journey. |
| AppInsights-SignInComplete | Records a `SignInComplete` event on successful completion of an authentication, when a token has been sent to the relying party application. |
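A rough sketch of the *AppInsights-Common* shell, based on the protocol handler and metadata documented for the Application Insights technical profile (the instrumentation key is a placeholder; the full definition ships with the article's sample policy):

```xml
<TechnicalProfile Id="AppInsights-Common">
  <DisplayName>Application Insights</DisplayName>
  <Protocol Name="Proprietary" Handler="Web.TPEngine.Providers.Insights.AzureApplicationInsightsProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
  <Metadata>
    <!-- Placeholder value; use your own Application Insights instrumentation key -->
    <Item Key="InstrumentationKey">xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx</Item>
    <Item Key="DeveloperMode">false</Item>
    <Item Key="DisableTelemetry">false</Item>
  </Metadata>
  <InputClaims>
    <!-- Claims recorded with every event; see the input claims shown below -->
  </InputClaims>
</TechnicalProfile>
```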
@@ -125,6 +138,7 @@ Add the profiles to the *TrustFrameworkExtensions.xml* file from the starter pac
<InputClaims>
  <!-- Properties of an event are added through the syntax {property:NAME}, where NAME is property being added to the event. DefaultValue can be either a static value or a value that's resolved by one of the supported DefaultClaimResolvers. -->
  <InputClaim ClaimTypeReferenceId="EventTimestamp" PartnerClaimType="{property:EventTimestamp}" DefaultValue="{Context:DateTimeInUtc}" />
+ <InputClaim ClaimTypeReferenceId="tenantId" PartnerClaimType="{property:TenantId}" DefaultValue="{Policy:TrustFrameworkTenantId}" />
<InputClaim ClaimTypeReferenceId="PolicyId" PartnerClaimType="{property:Policy}" DefaultValue="{Policy:PolicyId}" /> <InputClaim ClaimTypeReferenceId="CorrelationId" PartnerClaimType="{property:CorrelationId}" DefaultValue="{Context:CorrelationId}" /> <InputClaim ClaimTypeReferenceId="Culture" PartnerClaimType="{property:Culture}" DefaultValue="{Culture:RFC5646}" />
@@ -151,6 +165,7 @@ Add the profiles to the *TrustFrameworkExtensions.xml* file from the starter pac
<InputClaim ClaimTypeReferenceId="EventType" PartnerClaimType="eventName" DefaultValue="SignInComplete" /> <InputClaim ClaimTypeReferenceId="federatedUser" PartnerClaimType="{property:FederatedUser}" DefaultValue="false" /> <InputClaim ClaimTypeReferenceId="parsedDomain" PartnerClaimType="{property:FederationPartner}" DefaultValue="Not Applicable" />
+ <InputClaim ClaimTypeReferenceId="identityProvider" PartnerClaimType="{property:IDP}" DefaultValue="Local" />
</InputClaims>
<IncludeTechnicalProfile ReferenceId="AppInsights-Common" />
</TechnicalProfile>
@@ -213,28 +228,97 @@ Immediately after the `SendClaims` orchestration step, call `AppInsights-SignInC
## Upload your file, run the policy, and view events
-Save and upload the *TrustFrameworkExtensions.xml* file. Then, call the relying party policy from your application or use **Run Now** in the Azure portal. In seconds, your events are available in Application Insights.
+Save and upload the *TrustFrameworkExtensions.xml* file. Then, call the relying party policy from your application or use **Run Now** in the Azure portal. Wait a minute or so, and your events will be available in Application Insights.
1. Open the **Application Insights** resource in your Azure Active Directory tenant.
-2. Select **Usage** > **Events**.
+2. Select **Usage**, then select **Events**.
3. Set **During** to **Last hour** and **By** to **3 minutes**. You might need to select **Refresh** to view results.

   ![Application Insights USAGE-Events blade](./media/analytics-with-application-insights/app-ins-graphic.png)
-## [Optional] Collect more data
+## Collect more data
+
+To fit your business needs, you may want to record more claims. To add a claim, first [define a claim](#define-claims), then add the claim to the input claims collection. Claims that you add to the *AppInsights-Common* technical profile appear in all of the events. Claims that you add to a specific technical profile appear only in that event. The input claim element contains the following attributes:
+
+- **ClaimTypeReferenceId** - The reference to a claim type.
+- **PartnerClaimType** - The name of the property that appears in Application Insights. Use the syntax `{property:NAME}`, where `NAME` is the property being added to the event.
+- **DefaultValue** - A predefined value to record, such as the event name. If the claim that is used in the user journey is empty, the default value is used instead. For example, the `identityProvider` claim is set by federation technical profiles, such as Facebook. If the claim is empty, the user signed in with a local account, so the default value is set to *Local*. You can also record a [claim resolver](claim-resolver-overview.md) with a contextual value, such as the application ID or the user's IP address.
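For example, the following input claims record an `app_session` query string parameter and the UI language by using claim resolvers as default values (the `app_session` and `language` claim types are assumed to be defined in the policy):

```xml
<InputClaim ClaimTypeReferenceId="app_session" PartnerClaimType="{property:app_session}" DefaultValue="{OAUTH-KV:app_session}" />
<InputClaim ClaimTypeReferenceId="language" PartnerClaimType="{property:language}" DefaultValue="{Culture:RFC5646}" />
```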
+
+### Manipulating claims
-Add claim types and events to your user journey to fit your needs. You can use [claim resolvers](claim-resolver-overview.md) or any string claim type, add the claims by adding an **Input Claim** element to the Application Insights event or to the AppInsights-Common technical profile.
+You can use [input claims transformations](custom-policy-trust-frameworks.md#manipulating-your-claims) to modify the input claims or generate new ones before sending to Application Insights. In the following example, the technical profile includes the *CheckIsAdmin* input claims transformation.
-- **ClaimTypeReferenceId** is the reference to a claim type.
-- **PartnerClaimType** is the name of the property that appears in Azure Insights. Use the syntax of `{property:NAME}`, where `NAME` is property being added to the event.
-- **DefaultValue** use any string value or the claim resolver.
+```xml
+<TechnicalProfile Id="AppInsights-SignInComplete">
+ <InputClaimsTransformations>
+ <InputClaimsTransformation ReferenceId="CheckIsAdmin" />
+ </InputClaimsTransformations>
+ <InputClaims>
+ <InputClaim ClaimTypeReferenceId="isAdmin" PartnerClaimType="{property:IsAdmin}" />
+ ...
+ </InputClaims>
+ <IncludeTechnicalProfile ReferenceId="AppInsights-Common" />
+</TechnicalProfile>
+```
+
+### Add events
+
+To add an event, create a new technical profile that includes the *AppInsights-Common* technical profile. Then add the new technical profile as an orchestration step to the [user journey](custom-policy-trust-frameworks.md#orchestration-steps). Use a [precondition](userjourneys.md#preconditions) to trigger the event only when desired. For example, report the event only when users run through MFA.
```xml
-<InputClaim ClaimTypeReferenceId="app_session" PartnerClaimType="{property:app_session}" DefaultValue="{OAUTH-KV:app_session}" />
-<InputClaim ClaimTypeReferenceId="loyalty_number" PartnerClaimType="{property:loyalty_number}" DefaultValue="{OAUTH-KV:loyalty_number}" />
-<InputClaim ClaimTypeReferenceId="language" PartnerClaimType="{property:language}" DefaultValue="{Culture:RFC5646}" />
+<TechnicalProfile Id="AppInsights-MFA-Completed">
+ <InputClaims>
+ <InputClaim ClaimTypeReferenceId="EventType" PartnerClaimType="eventName" DefaultValue="MFA-Completed" />
+ </InputClaims>
+ <IncludeTechnicalProfile ReferenceId="AppInsights-Common" />
+</TechnicalProfile>
+```
+
+Now that you have a technical profile, add the event to the user journey. Then renumber the steps sequentially without skipping any integers from 1 to N.
+
+```xml
+<OrchestrationStep Order="8" Type="ClaimsExchange">
+ <Precondition Type="ClaimsExist" ExecuteActionsIf="true">
+ <Value>isActiveMFASession</Value>
+ <Action>SkipThisOrchestrationStep</Action>
+ </Precondition>
+ </Preconditions>
+ <ClaimsExchanges>
+ <ClaimsExchange Id="TrackUserMfaCompleted" TechnicalProfileReferenceId="AppInsights-MFA-Completed" />
+ </ClaimsExchanges>
+</OrchestrationStep>
+```
+
+## Enable developer mode
+
+When using Application Insights to define events, you can indicate whether developer mode is enabled. Developer mode controls how events are buffered. In a development environment with minimal event volume, enabling developer mode results in events being sent immediately to Application Insights. The default value is `false`. Don't enable developer mode in production environments.
+
+To enable developer mode, in the *AppInsights-Common* technical profile, change the `DeveloperMode` metadata to `true`:
+
+```xml
+<TechnicalProfile Id="AppInsights-Common">
+ <Metadata>
+ ...
+ <Item Key="DeveloperMode">true</Item>
+ </Metadata>
+</TechnicalProfile>
+```
+
+## Disable telemetry
+
+To disable the Application Insights logs, in the *AppInsights-Common* technical profile, change the `DisableTelemetry` metadata to `true`:
+
+```xml
+<TechnicalProfile Id="AppInsights-Common">
+ <Metadata>
+ ...
+ <Item Key="DisableTelemetry">true</Item>
+ </Metadata>
+</TechnicalProfile>
```

## Next steps

-- Learn more about [Application Insights](application-insights-technical-profile.md) technical profile in the IEF reference.
+- Learn how to [create custom KPI dashboards using Azure Application Insights](../azure-monitor/learn/tutorial-app-dashboards.md).
+
+::: zone-end
\ No newline at end of file
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/application-insights-technical-profile https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/application-insights-technical-profile.md
@@ -1,84 +0,0 @@
- Title: Define an Application Insights technical profile in a custom policy-
-description: Define an Application Insights technical profile in a custom policy in Azure Active Directory B2C.
- Previously updated : 03/20/2020
-# Define an Application Insights technical profile in an Azure AD B2C custom policy
-
-[!INCLUDE [active-directory-b2c-advanced-audience-warning](../../includes/active-directory-b2c-advanced-audience-warning.md)]
-
-Azure Active Directory B2C (Azure AD B2C) supports sending event data directly to [Application Insights](../azure-monitor/app/app-insights-overview.md) by using the instrumentation key provided to Azure AD B2C. With an Application Insights technical profile, you can get detailed and customized event logs for your user journeys to:
-
-* Gain insights on user behavior.
-* Troubleshoot your own policies in development or in production.
-* Measure performance.
-* Create notifications from Application Insights.
--
-## Protocol
-
-The **Name** attribute of the **Protocol** element needs to be set to `Proprietary`. The **handler** attribute must contain the fully qualified name of the protocol handler assembly that is used by Azure AD B2C for Application Insights:
-`Web.TPEngine.Providers.AzureApplicationInsightsProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null`
-
-The following example shows the common Application Insights technical profile. Other Application Insights technical profiles include the AzureInsights-Common to leverage its configuration.
-
-```xml
-<TechnicalProfile Id="AzureInsights-Common">
- <DisplayName>Azure Insights Common</DisplayName>
- <Protocol Name="Proprietary" Handler="Web.TPEngine.Providers.Insights.AzureApplicationInsightsProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
-</TechnicalProfile>
-```
-
-## Input claims
-
-The **InputClaims** element contains a list of claims to send to Application Insights. You can also map the name of your claim to a name you prefer to appear in Application Insights. The following example shows how to send telemetries to Application Insights. Properties of an event are added through the syntax `{property:NAME}`, where NAME is property being added to the event. DefaultValue can be either a static value or a value that's resolved by one of the supported [claim resolvers](claim-resolver-overview.md).
-
-```xml
-<InputClaims>
- <InputClaim ClaimTypeReferenceId="PolicyId" PartnerClaimType="{property:Policy}" DefaultValue="{Policy:PolicyId}" />
- <InputClaim ClaimTypeReferenceId="CorrelationId" PartnerClaimType="{property:JourneyId}" DefaultValue="{Context:CorrelationId}" />
- <InputClaim ClaimTypeReferenceId="Culture" PartnerClaimType="{property:Culture}" DefaultValue="{Culture:RFC5646}" />
- <InputClaim ClaimTypeReferenceId="objectId" PartnerClaimType="{property:objectId}" />
-</InputClaims>
-```
-
-The **InputClaimsTransformations** element may contain a collection of **InputClaimsTransformation** elements that are used to modify the input claims or generate new ones before sending to Application Insights.
-
-## Persist claims
-
-The PersistedClaims element is not used.
-
-## Output claims
-
-The OutputClaims, and OutputClaimsTransformations elements are not used.
-
-## Cryptographic keys
-
-The CryptographicKeys element is not used.
--
-## Metadata
-
-| Attribute | Required | Description |
-| | -- | -- |
-| InstrumentationKey| Yes | The Application Insights [instrumentation key](../azure-monitor/app/create-new-resource.md#copy-the-instrumentation-key), which will be used for logging the events. |
-| DeveloperMode| No | A Boolean that indicates whether developer mode is enabled. Possible values: `true` or `false` (default). This metadata controls how events are buffered. In a development environment with minimal event volume, enabling developer mode results in events being sent immediately to Application Insights.|
-|DisableTelemetry |No |A Boolean that indicates whether telemetry should be enabled or not. Possible values: `true` or `false` (default).|
--
-## Next steps
-- [Create an Application Insights resource](../azure-monitor/app/create-new-resource.md)
-- Learn how to [track user behavior in Azure Active Directory B2C using Application Insights](analytics-with-application-insights.md)
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/azure-monitor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/azure-monitor.md
@@ -11,7 +11,7 @@
Previously updated : 11/12/2020 Last updated : 01/29/2021

# Monitor Azure AD B2C with Azure Monitor
@@ -28,6 +28,10 @@ You can route log events to:
In this article, you learn how to transfer the logs to an Azure Log Analytics workspace. Then you can create a dashboard or create alerts that are based on Azure AD B2C users' activities.
+> [!IMPORTANT]
+> When you plan to transfer Azure AD B2C logs to a different monitoring solution or repository, consider the following: Azure AD B2C logs contain personal data. Such data should be processed in a manner that ensures appropriate security of the personal data, including protection against unauthorized or unlawful processing, using appropriate technical or organizational measures.
+
## Deployment overview

Azure AD B2C leverages [Azure Active Directory monitoring](../active-directory/reports-monitoring/overview-monitoring.md). To enable *Diagnostic settings* in Azure Active Directory within your Azure AD B2C tenant, you use [Azure Lighthouse](../lighthouse/concepts/azure-delegated-resource-management.md) to [delegate a resource](../lighthouse/concepts/azure-delegated-resource-management.md), which allows your Azure AD B2C (the **Service Provider**) to manage an Azure AD (the **Customer**) resource. After you complete the steps in this article, you'll have access to the *azure-ad-b2c-monitor* resource group that contains the [Log Analytics workspace](../azure-monitor/learn/quick-create-workspace.md) in your **Azure AD B2C** portal. You'll also be able to transfer the logs from Azure AD B2C to your Log Analytics workspace.
@@ -316,4 +320,4 @@ Azure Monitor Logs are designed to scale and support collecting, indexing, and s
* For more information about adding and configuring diagnostic settings in Azure Monitor, see [Tutorial: Collect and analyze resource logs from an Azure resource](../azure-monitor/insights/monitor-azure-resource.md).
-* For information about streaming Azure AD logs to an event hub, see [Tutorial: Stream Azure Active Directory logs to an Azure event hub](../active-directory/reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md).
\ No newline at end of file
+* For information about streaming Azure AD logs to an event hub, see [Tutorial: Stream Azure Active Directory logs to an Azure event hub](../active-directory/reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md).
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/custom-policy-developer-notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/custom-policy-developer-notes.md
@@ -25,7 +25,7 @@ While most of the custom policy options available are now generally available, t
## Features that are generally available

- Author and upload custom authentication user journeys by using custom policies.
- - Describe user journeys step-by-step as exchanges between claims providers.
+ - Describe user journeys step by step as exchanges between claims providers.
- Define conditional branching in user journeys.
- Interoperate with REST API-enabled services in your custom authentication user journeys.
- Federate with identity providers that are compliant with the OpenIDConnect protocol.
@@ -33,14 +33,14 @@ While most of the custom policy options available are now generally available, t
## Responsibilities of custom policy feature-set developers
-Manual policy configuration grants lower-level access to the underlying platform of Azure AD B2C and results in the creation of a unique, trust framework. The many possible permutations of custom identity providers, trust relationships, integrations with external services, and step-by-step workflows require a methodical approach to design and configuration.
+Manual policy configuration grants lower-level access to the underlying platform of Azure AD B2C and results in the creation of a unique, trust framework. The many possible permutations of custom identity providers, trust relationships, integrations with external services, and step by step workflows require a methodical approach to design and configuration.
Developers consuming the custom policy feature set should adhere to the following guidelines:

- Become familiar with the configuration language of the custom policies and key/secrets management. For more information, see [TrustFrameworkPolicy](trustframeworkpolicy.md).
- Take ownership of scenarios and custom integrations. Document your work and inform your live site organization.
- Perform methodical scenario testing.
-- Follow software development and staging best practices with a minimum of one development and testing environment and one production environment.
+- Follow software development and staging best practices. A minimum of one development and testing environment is recommended.
- Stay informed about new developments from the identity providers and services you integrate with. For example, keep track of changes in secrets and of scheduled and unscheduled changes to the service.
- Set up active monitoring, and monitor the responsiveness of production environments. For more information about integrating with Application Insights, see [Azure Active Directory B2C: Collecting Logs](analytics-with-application-insights.md).
- Keep contact email addresses current in the Azure subscription, and stay responsive to the Microsoft live-site team emails.
@@ -54,7 +54,7 @@ Developers consuming the custom policy feature set should adhere to the followin
## Features by stage and known issues
-Custom policy/Identity Experience Framework capabilities are under constant and rapid development. The following table is an index of features and component availability.
+Custom policy capabilities are under constant development. The following table is an index of features and component availability.
### Protocols and authorization flows
@@ -140,7 +140,7 @@ Custom policy/Identity Experience Framework capabilities are under constant and
| Azure Portal-IEF UX | | | X | |
| Policy upload | | | X | |
| [Application Insights user journey logs](troubleshoot-with-application-insights.md) | | X | | Used for troubleshooting during development. |
-| [Application Insights event logs](application-insights-technical-profile.md) | | X | | Used to monitor user flows in production. |
+| [Application Insights event logs](analytics-with-application-insights.md) | | X | | Used to monitor user flows in production. |
## Next steps
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/technicalprofiles https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/technicalprofiles.md
@@ -18,17 +18,17 @@
[!INCLUDE [active-directory-b2c-advanced-audience-warning](../../includes/active-directory-b2c-advanced-audience-warning.md)]
-A technical profile provides a framework with a built-in mechanism to communicate with different type of parties using a custom policy in Azure Active Directory B2C (Azure AD B2C). Technical profiles are used to communicate with your Azure AD B2C tenant, to create a user, or read a user profile. A technical profile can be self-asserted to enable interaction with the user. For example, collect the user's credential to sign in and then render the sign-up page or password reset page.
+A technical profile provides a framework with a built-in mechanism to communicate with different types of parties. Technical profiles are used to communicate with your Azure AD B2C tenant, to create a user, or read a user profile. A technical profile can be self-asserted to enable interaction with the user. For example, collect the user's credential to sign in and then render the sign-up page or password reset page.
## Type of technical profiles

A technical profile enables these types of scenarios:

-- [Application Insights](application-insights-technical-profile.md) - Sending event data to [Application Insights](../azure-monitor/app/app-insights-overview.md).
+- [Application Insights](analytics-with-application-insights.md) - Sending event data to [Application Insights](../azure-monitor/app/app-insights-overview.md).
- [Azure Active Directory](active-directory-technical-profile.md) - Provides support for the Azure Active Directory B2C user management.
- [Azure AD Multi-Factor Authentication](multi-factor-auth-technical-profile.md) - provides support for verifying a phone number by using Azure AD Multi-Factor Authentication (MFA).
- [Claims transformation](claims-transformation-technical-profile.md) - Call output claims transformations to manipulate claims values, validate claims, or set default values for a set of output claims.
-- [ID token hint](id-token-hint.md) - Validates `id_token_hint` JWT token signature, the issuer name and the token audience and extracts the claim from the inbound token.
+- [ID token hint](id-token-hint.md) - Validates `id_token_hint` JWT token signature, the issuer name, and the token audience and extracts the claim from the inbound token.
- [JWT token issuer](jwt-issuer-technical-profile.md) - Emits a JWT token that is returned back to the relying party application.
- [OAuth1](oauth1-technical-profile.md) - Federation with any OAuth 1.0 protocol identity provider.
- [OAuth2](oauth2-technical-profile.md) - Federation with any OAuth 2.0 protocol identity provider.
@@ -43,7 +43,7 @@ A technical profile enables these types of scenarios:
## Technical profile flow
-All types of technical profiles share the same concept. You send input claims, run claims transformation, and communicate with the configured party, such as an identity provider, REST API, or Azure AD directory services. After the process is completed, the technical profile returns the output claims and may run output claims transformation. The following diagram shows how the transformations and mappings referenced in the technical profile are processed. Regardless of the party the technical profile interacts with, after any claims transformation is executed, the output claims from the technical profile are immediately stored in the claims bag.
+All types of technical profiles share the same concept. You start by reading the input claims and running the input claims transformations. Then you communicate with the configured party, such as an identity provider, REST API, or Azure AD directory services. After the process is completed, the technical profile returns the output claims and may run output claims transformations. The following diagram shows how the transformations and mappings referenced in the technical profile are processed. After the claims transformations are executed, the output claims are immediately stored in the claims bag, regardless of the party the technical profile interacts with.
![Diagram illustrating the technical profile flow](./media/technical-profiles/technical-profile-flow.png)
@@ -60,7 +60,7 @@ All types of technical profiles share the same concept. You send input claims, r
1. **Output claims transformations** - After the technical profile is completed, Azure AD B2C runs output [claims transformation](claimstransformations.md).
1. **Single sign-on (SSO) session management** - Persists technical profile's data to the session, using [SSO session management](custom-policy-reference-sso.md).
-A **TechnicalProfiles** element contains a set of technical profiles supported by the claim provider. Every claims provider must have one or more technical profiles that determine the endpoints and the protocols needed to communicate with the claims provider. A claims provider can have multiple technical profiles.
+A **TechnicalProfiles** element contains a set of technical profiles supported by the claims provider. Every claims provider must have at least one technical profile. The technical profile determines the endpoints and the protocols needed to communicate with the claims provider. A claims provider can have multiple technical profiles.
```xml
<ClaimsProvider>
@@ -92,14 +92,14 @@ The **TechnicalProfile** contains the following elements:
| DisplayName | 1:1 | The display name of the technical profile. |
| Description | 0:1 | The description of the technical profile. |
| Protocol | 1:1 | The protocol used for the communication with the other party. |
-| Metadata | 0:1 | A collection of key/value pairs that are utilized by the protocol for communicating with the endpoint in the course of a transaction. |
+| Metadata | 0:1 | A collection of key/value pairs that controls the behavior of the technical profile. |
| InputTokenFormat | 0:1 | The format of the input token. Possible values: `JSON`, `JWT`, `SAML11`, or `SAML2`. The `JWT` value represents a JSON Web Token as per IETF specification. The `SAML11` value represents a SAML 1.1 security token as per OASIS specification. The `SAML2` value represents a SAML 2.0 security token as per OASIS specification. |
| OutputTokenFormat | 0:1 | The format of the output token. Possible values: `JSON`, `JWT`, `SAML11`, or `SAML2`. |
| CryptographicKeys | 0:1 | A list of cryptographic keys that are used in the technical profile. |
| InputClaimsTransformations | 0:1 | A list of previously defined references to claims transformations that should be executed before any claims are sent to the claims provider or the relying party. |
| InputClaims | 0:1 | A list of the previously defined references to claim types that are taken as input in the technical profile. |
-| PersistedClaims | 0:1 | A list of the previously defined references to claim types that are persisted by the claims provider that relates to the technical profile. |
-| DisplayClaims | 0:1 | A list of the previously defined references to claim types that are presented by the claims provider that relates to the [self-asserted technical profile](self-asserted-technical-profile.md). The DisplayClaims feature is currently in **preview**. |
+| PersistedClaims | 0:1 | A list of the previously defined references to claim types that will be persisted by the technical profile. |
+| DisplayClaims | 0:1 | A list of the previously defined references to claim types that are presented by the [self-asserted technical profile](self-asserted-technical-profile.md). The DisplayClaims feature is currently in **preview**. |
| OutputClaims | 0:1 | A list of the previously defined references to claim types that are taken as output in the technical profile. |
| OutputClaimsTransformations | 0:1 | A list of previously defined references to claims transformations that should be executed after the claims are received from the claims provider. |
| ValidationTechnicalProfiles | 0:n | A list of references to other technical profiles that the technical profile uses for validation purposes. For more information, see [validation technical profile](validation-technical-profile.md)|
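As an illustrative sketch only (the identifier, content definition, and claim names below are placeholders rather than starter pack values), these elements fit together roughly as follows:

```xml
<TechnicalProfile Id="Example-SelfAsserted">
  <DisplayName>Example self-asserted technical profile</DisplayName>
  <Protocol Name="Proprietary" Handler="Web.TPEngine.Providers.SelfAssertedAttributeProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
  <Metadata>
    <!-- Placeholder content definition for the page rendered to the user -->
    <Item Key="ContentDefinitionReferenceId">api.selfasserted</Item>
  </Metadata>
  <InputClaims>
    <InputClaim ClaimTypeReferenceId="email" />
  </InputClaims>
  <OutputClaims>
    <OutputClaim ClaimTypeReferenceId="displayName" />
  </OutputClaims>
</TechnicalProfile>
```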
@@ -117,7 +117,7 @@ The **Protocol** specifies the protocol to be used for the communication with th
| Attribute | Required | Description |
| --------- | -------- | ----------- |
-| Handler | No | When the protocol name is set to `Proprietary`, specify the fully-qualified name of the assembly that is used by Azure AD B2C to determine the protocol handler. |
+| Handler | No | When the protocol name is set to `Proprietary`, specify the name of the assembly that is used by Azure AD B2C to determine the protocol handler. |
## Metadata
@@ -125,7 +125,7 @@ The **Metadata** element contains the relevant configuration options to a specif
| Element | Occurrences | Description |
| ------- | ----------- | ----------- |
-| Item | 0:n | The metadata that relates to the technical profile. Each type of technical profile has a different set of metadata items. See the technical profile types section, for more information. |
+| Item | 0:n | The metadata that relates to the technical profile. Each type of technical profile has a different set of metadata items. For more information, see the technical profile types section. |
### Item
@@ -169,7 +169,7 @@ The following example illustrates the use of metadata relevant to [REST API tech
## Cryptographic keys
-Azure AD B2C stores secrets and certificates in the form of [policy keys](policy-keys-overview.md) to establish trust with the services it integrates with. During the technical profile executing, Azure AD B2C retrieves the cryptographic keys from Azure AD B2C policy keys, and then uses the keys establish trust, encrypt or sign a token. These trusts consist of:
+To establish trust with the services it integrates with, Azure AD B2C stores secrets and certificates in the form of [policy keys](policy-keys-overview.md). During technical profile execution, Azure AD B2C retrieves the cryptographic keys from the Azure AD B2C policy keys, and then uses the keys to establish trust, or to encrypt or sign a token. These trusts consist of:
- Federation with [OAuth1](oauth1-technical-profile.md#cryptographic-keys), [OAuth2](oauth2-technical-profile.md#cryptographic-keys), and [SAML](saml-identity-provider-technical-profile.md#cryptographic-keys) identity providers
- Securing the connection with [REST API services](secure-rest-api.md)
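A minimal sketch of a **CryptographicKeys** collection, assuming a policy key named *B2C_1A_FacebookSecret* has already been created in the tenant:

```xml
<CryptographicKeys>
  <!-- References the policy key that holds the identity provider's client secret -->
  <Key Id="client_secret" StorageReferenceId="B2C_1A_FacebookSecret" />
</CryptographicKeys>
```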
@@ -194,7 +194,7 @@ The **Key** element contains the following attribute:
The **InputClaimsTransformations** element may contain a collection of input claims transformation elements that are used to modify input claims or generate new one.
-The output claims of a previous claims transformation in the claims transformation collection can be input claims of a subsequent input claims transformation, allowing you to have a sequence of claims transformation depending on each other.
+The output claims of a previous claims transformation in the claims transformation collection can be input claims of a subsequent input claims transformation, allowing you to have a sequence of claims transformations that depend on each other.
The **InputClaimsTransformations** element contains the following element:
@@ -247,13 +247,13 @@ The **InputClaim** element contains the following attributes:
| Attribute | Required | Description |
| --------- | -------- | ----------- |
-| ClaimTypeReferenceId | Yes | The identifier of a claim type already defined in the ClaimsSchema section in the policy file or parent policy file. |
+| ClaimTypeReferenceId | Yes | The identifier of a claim type. The claim is already defined in the claims schema section in the policy file, or parent policy file. |
| DefaultValue | No | A default value to use to create a claim if the claim indicated by ClaimTypeReferenceId does not exist so that the resulting claim can be used as an InputClaim by the technical profile. |
| PartnerClaimType | No | The identifier of the claim type of the external partner that the specified policy claim type maps to. If the PartnerClaimType attribute is not specified, then the specified policy claim type is mapped to the partner claim type of the same name. Use this property when your claim type name is different from the other party. For example, the first claim name is 'givenName', while the partner uses a claim named 'first_name'. |

## Display claims
-The **DisplayClaims** element contains a list of claims defined by [self-asserted technical profile](self-asserted-technical-profile.md) to be presented on the screen for collecting data from the user. In the display claims collection, you can include a reference to a [claim type](claimsschema.md), or a [DisplayControl](display-controls.md) that you've created.
+The **DisplayClaims** element contains a list of claims to be presented on the screen to collect data from the user. In the display claims collection, you can include a reference to a [claim type](claimsschema.md), or a [DisplayControl](display-controls.md) that you've created.
- A claim type is a reference to a claim to be displayed on the screen.
- To force the user to provide a value for a specific claim, set the **Required** attribute of the **DisplayClaim** element to `true`.
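A minimal sketch, assuming a *displayName* claim type and an *emailVerificationControl* display control are defined elsewhere in the policy:

```xml
<DisplayClaims>
  <!-- Reference to a display control defined in the policy -->
  <DisplayClaim DisplayControlReferenceId="emailVerificationControl" />
  <!-- Reference to a claim type; the user must provide a value -->
  <DisplayClaim ClaimTypeReferenceId="displayName" Required="true" />
</DisplayClaims>
```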
@@ -322,7 +322,7 @@ The **PersistedClaim** element contains the following attributes:
| Attribute | Required | Description |
| --------- | -------- | ----------- |
| ClaimTypeReferenceId | Yes | The identifier of a claim type already defined in the ClaimsSchema section in the policy file or parent policy file. |
-| DefaultValue | No | A default value to use to create a claim if the claim indicated by ClaimTypeReferenceId does not exist so that the resulting claim can be used as an InputClaim by the technical profile. |
+| DefaultValue | No | A default value to use to create a claim if the claim does not exist. |
| PartnerClaimType | No | The identifier of the claim type of the external partner that the specified policy claim type maps to. If the PartnerClaimType attribute is not specified, then the specified policy claim type is mapped to the partner claim type of the same name. Use this property when your claim type name is different from the other party. For example, the first claim name is 'givenName', while the partner uses a claim named 'first_name'. |

In the following example, the **AAD-UserWriteUsingLogonEmail** technical profile of the [starter pack](https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack/tree/master/SocialAndLocalAccounts), which creates a new local account, persists the following claims:
@@ -353,13 +353,13 @@ The **OutputClaim** element contains the following attributes:
| Attribute | Required | Description |
| --------- | -------- | ----------- |
| ClaimTypeReferenceId | Yes | The identifier of a claim type already defined in the ClaimsSchema section in the policy file or parent policy file. |
-| DefaultValue | No | A default value to use to create a claim if the claim indicated by ClaimTypeReferenceId does not exist so that the resulting claim can be used as an InputClaim by the technical profile. |
+| DefaultValue | No | A default value to use to create a claim if the claim does not exist. |
|AlwaysUseDefaultValue |No |Force the use of the default value. |
-| PartnerClaimType | No | The identifier of the claim type of the external partner that the specified policy claim type maps to. If the PartnerClaimType attribute is not specified, then the specified policy claim type is mapped to the partner claim type of the same name. Use this property when your claim type name is different from the other party. For example, the first claim name is 'givenName', while the partner uses a claim named 'first_name'. |
+| PartnerClaimType | No | The identifier of the claim type of the external partner that the specified policy claim type maps to. If the partner claim type attribute is not specified, the specified policy claim type is mapped to the partner claim type of the same name. Use this property when your claim type name is different from the other party. For example, the first claim name is 'givenName', while the partner uses a claim named 'first_name'. |
## Output claims transformations
-The **OutputClaimsTransformations** element may contain a collection of **OutputClaimsTransformation** elements that are used to modify the output claims or generate new ones. After execution, the output claims are put back in the claims bag. You can use those claims in the next orchestrations step.
+The **OutputClaimsTransformations** element may contain a collection of **OutputClaimsTransformation** elements. The output claims transformations are used to modify the output claims or generate new ones. After execution, the output claims are put back in the claims bag. You can use those claims in the next orchestration step.
The output claims of a previous claims transformation in the claims transformation collection can be input claims of a subsequent input claims transformation, allowing you to have a sequence of claims transformation depending on each other.
@@ -400,7 +400,7 @@ The following technical profile references the AssertAccountEnabledIsTrue claims
## Validation technical profiles
-A validation technical profile is used for validating some or all of the output claims of the referencing in a [self-asserted technical profile](self-asserted-technical-profile.md#validation-technical-profiles). A validation technical profile is an ordinary technical profile from any protocol, such as [Azure Active Directory](active-directory-technical-profile.md) or a [REST API](restful-technical-profile.md). The validation technical profile returns output claims, or returns error code. The error message is rendered to the user on screen, allowing the user to retry.
+A validation technical profile is used for validating output claims in a [self-asserted technical profile](self-asserted-technical-profile.md#validation-technical-profiles). A validation technical profile is an ordinary technical profile from any protocol, such as [Azure Active Directory](active-directory-technical-profile.md) or a [REST API](restful-technical-profile.md). The validation technical profile returns output claims, or returns an error code. The error message is rendered to the user on screen, allowing the user to retry.
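For example, a self-asserted sign-in technical profile might reference a validation technical profile like this (a sketch assuming a *login-NonInteractive* technical profile exists, as in the starter pack):

```xml
<ValidationTechnicalProfiles>
  <!-- Validates the collected credentials before the self-asserted technical profile completes -->
  <ValidationTechnicalProfile ReferenceId="login-NonInteractive" />
</ValidationTechnicalProfiles>
```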
The following diagram illustrates how Azure AD B2C uses a validation technical profile to validate the user credentials
@@ -430,7 +430,9 @@ The **SubjectNamingInfo** defines the subject name used in tokens in a [relying
## Include technical profile
-A technical profile can include another technical profile to change settings or add new functionality. The **IncludeTechnicalProfile** element is a reference to the common technical profile from which a technical profile is derived. To reduce redundancy and complexity of your policy elements, use inclusion when you have multiple technical profiles that share the core elements. Use a common technical profile with the common set of configuration, along with specific task technical profiles that include the common technical profile. For example, suppose you have a [REST API technical profile](restful-technical-profile.md) with a single endpoint where you need to send different set of claims for different scenarios. Create a common technical profile with the shared functionality, such as the REST API endpoint URI, metadata, authentication type, and cryptographic keys. Then create specific task technical profiles that include the common technical profile, add the input claims, output claims, or overwrite the REST API endpoint URI relevant to that technical profile.
+A technical profile can include another technical profile to change settings or add new functionality. The **IncludeTechnicalProfile** element is a reference to the common technical profile from which a technical profile is derived. To reduce redundancy and complexity of your policy elements, use inclusion when you have multiple technical profiles that share the core elements. Use a common technical profile with the common set of configuration, along with specific task technical profiles that include the common technical profile.
+
+Suppose you have a [REST API technical profile](restful-technical-profile.md) with a single endpoint where you need to send a different set of claims for different scenarios. Create a common technical profile with the shared functionality, such as the REST API endpoint URI, metadata, authentication type, and cryptographic keys. Create specific task technical profiles that include the common technical profile. Then add the input claims, output claims, or overwrite the REST API endpoint URI relevant to that technical profile.
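A sketch of that pattern, with hypothetical identifiers and placeholder endpoints (the metadata keys shown are the commonly used ones for REST API technical profiles):

```xml
<!-- Common technical profile with the shared REST API configuration -->
<TechnicalProfile Id="REST-API-Common">
  <DisplayName>REST API common</DisplayName>
  <Protocol Name="Proprietary" Handler="Web.TPEngine.Providers.RestfulProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
  <Metadata>
    <Item Key="ServiceUrl">https://contoso.example/api/default</Item>
    <Item Key="SendClaimsIn">Body</Item>
    <Item Key="AuthenticationType">None</Item>
  </Metadata>
</TechnicalProfile>

<!-- Task-specific technical profile that includes the common one, adds claims, and overwrites the endpoint -->
<TechnicalProfile Id="REST-API-SignUp">
  <DisplayName>REST API sign-up</DisplayName>
  <Metadata>
    <Item Key="ServiceUrl">https://contoso.example/api/signup</Item>
  </Metadata>
  <InputClaims>
    <InputClaim ClaimTypeReferenceId="email" />
  </InputClaims>
  <IncludeTechnicalProfile ReferenceId="REST-API-Common" />
</TechnicalProfile>
```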
The **IncludeTechnicalProfile** element contains the following attribute:
@@ -557,7 +559,10 @@ The [ClaimsProviderSelections](userjourneys.md#claimsproviderselection) in a use
- **OnItemExistenceInStringCollectionClaim**, execute only when an item exists in a string collection claim.
- **OnItemAbsenceInStringCollectionClaim** execute only when an item does not exist in a string collection claim.
-Using **OnClaimsExistence**, **OnItemExistenceInStringCollectionClaim** or **OnItemAbsenceInStringCollectionClaim**, requires you to provide the following metadata: **ClaimTypeOnWhichToEnable** specifies the claim's type that is to be evaluated, **ClaimValueOnWhichToEnable** specifies the value that is to be compared.
+Using **OnClaimsExistence**, **OnItemExistenceInStringCollectionClaim**, or **OnItemAbsenceInStringCollectionClaim**, requires you to provide the following metadata:
+
+- **ClaimTypeOnWhichToEnable** - specifies the claim's type that is to be evaluated.
+- **ClaimValueOnWhichToEnable** - specifies the value that is to be compared.
The following technical profile is executed only if the **identityProviders** string collection contains the value of `facebook.com`:
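A sketch of the relevant configuration (the technical profile identifier is hypothetical; the metadata keys are the ones listed above):

```xml
<TechnicalProfile Id="Example-TrackFacebookSignIn">
  <Metadata>
    <Item Key="ClaimTypeOnWhichToEnable">identityProviders</Item>
    <Item Key="ClaimValueOnWhichToEnable">facebook.com</Item>
  </Metadata>
</TechnicalProfile>
```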
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/authentication-flows-app-scenarios https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/authentication-flows-app-scenarios.md
@@ -39,8 +39,8 @@ The following sections describe the categories of applications.
Authentication scenarios involve two activities:

-- **Acquiring security tokens for a protected web API**: We recommend that you use [Microsoft-supported client libraries](reference-v2-libraries.md#microsoft-supported-client-libraries) to acquire tokens. In particular, we recommend the Microsoft Authentication Library (MSAL) family.
-- **Protecting a web API or a web app**: One challenge of protecting these resources is validating the security token. On some platforms, Microsoft offers [middleware libraries](reference-v2-libraries.md#microsoft-supported-server-middleware-libraries).
+- **Acquiring security tokens for a protected web API**: We recommend that you use the [Microsoft Authentication Library (MSAL)](reference-v2-libraries.md), developed and supported by Microsoft.
+- **Protecting a web API or a web app**: One challenge of protecting these resources is validating the security token. On some platforms, Microsoft offers [middleware libraries](reference-v2-libraries.md).
### With users or without users
@@ -62,7 +62,7 @@ Security tokens can be acquired by multiple types of applications. These applica
- Desktop apps that call web APIs on behalf of signed-in users
- Mobile apps
- Apps running on devices that don't have a browser, like those running on IoT
-
+
- **Confidential client applications**: Apps in this category include:
  - Web apps that call a web API
  - Web APIs that call a web API
@@ -92,7 +92,7 @@ Applications use the different authentication flows to sign in users and get tok
Many modern web apps are built as client-side single-page applications. These applications use JavaScript or a framework like Angular, Vue, and React. These applications run in a web browser.
-Single-page applications differ from traditional server-side web apps in terms of authentication characteristics. By using the Microsoft identity platform, single-page applications can sign in users and get tokens to access back-end services or web APIs. The Microsoft identity platform offers two grant types for JavaScript applications:
+Single-page applications differ from traditional server-side web apps in terms of authentication characteristics. By using the Microsoft identity platform, single-page applications can sign in users and get tokens to access back-end services or web APIs. The Microsoft identity platform offers two grant types for JavaScript applications:
| MSAL.js (2.x) | MSAL.js (1.x) |
|---|---|
@@ -157,7 +157,7 @@ For more information, see [Mobile app that calls web APIs](scenario-mobile-overv
### Protected web API
-You can use the Microsoft identity platform to secure web services like your app's RESTful web API. A protected web API is called through an access token. The token helps secure the API's data and authenticate incoming requests. The caller of a web API appends an access token in the authorization header of an HTTP request.
+You can use the Microsoft identity platform endpoint to secure web services like your app's RESTful web API. A protected web API is called through an access token. The token helps secure the API's data and authenticate incoming requests. The caller of a web API appends an access token in the authorization header of an HTTP request.
If you want to protect your ASP.NET or ASP.NET Core web API, you need to validate the access token. For this validation, you use the ASP.NET JWT middleware. The validation is done by the [IdentityModel extensions for .NET](https://github.com/AzureAD/azure-activedirectory-identitymodel-extensions-for-dotnet/wiki) library and not by MSAL.NET.
@@ -306,7 +306,7 @@ In the Windows column of the following table, each time .NET Core is mentioned,
| [Daemon app](scenario-daemon-overview.md) <br/> [![Daemon app](medi) | ![.NET Core](media/sample-v2-code/small_logo_NETcore.png)MSAL.NET ![MSAL Java](media/sample-v2-code/small_logo_java.png)<br/>MSAL Java<br/>![MSAL Python](media/sample-v2-code/small_logo_python.png)<br/>MSAL Python| ![.NET Core](media/sample-v2-code/small_logo_NETcore.png) MSAL.NET ![MSAL Java](media/sample-v2-code/small_logo_java.png)<br/>MSAL Java<br/>![MSAL Python](media/sample-v2-code/small_logo_python.png)<br/>MSAL Python| ![.NET Core](media/sample-v2-code/small_logo_NETcore.png)MSAL.NET ![MSAL Java](media/sample-v2-code/small_logo_java.png)<br/>MSAL Java<br/>![MSAL Python](media/sample-v2-code/small_logo_python.png)<br/>MSAL Python | [Web API that calls web APIs](scenario-web-api-call-api-overview.md) <br/><br/> [![Web API that calls web APIs](medi) | ![ASP.NET Core](media/sample-v2-code/small_logo_NETcore.png)<br/>ASP.NET Core + MSAL.NET ![MSAL Java](media/sample-v2-code/small_logo_java.png)<br/>MSAL Java<br/>![MSAL Python](media/sample-v2-code/small_logo_python.png)<br/>MSAL Python| ![.NET Core](media/sample-v2-code/small_logo_NETcore.png)<br/>ASP.NET Core + MSAL.NET ![MSAL Java](media/sample-v2-code/small_logo_java.png)<br/>MSAL Java<br/>![MSAL Python](media/sample-v2-code/small_logo_python.png)<br/>MSAL Python| ![.NET Core](media/sample-v2-code/small_logo_NETcore.png)<br/>ASP.NET Core + MSAL.NET ![MSAL Java](media/sample-v2-code/small_logo_java.png)<br/>MSAL Java<br/>![MSAL Python](media/sample-v2-code/small_logo_python.png)<br/>MSAL Python
-For more information, see [Microsoft-supported libraries by OS/language](reference-v2-libraries.md#microsoft-supported-libraries-by-os--language).
+For more information, see [Microsoft identity platform authentication libraries](reference-v2-libraries.md).
## Next steps
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/developer-support-help-options https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/developer-support-help-options.md
@@ -32,7 +32,7 @@ If you have a development-related question, you may be able to find the answer i
### Scoped search
-For faster results, scope your search to Microsoft Q&A, the documentation, and the code samples by using the following query in your favorite search engine:
+For faster results, scope your search to [Microsoft Q&A](https://docs.microsoft.com/answers/products/), the documentation, and the code samples by using the following query in your favorite search engine:
``` {Your Search Terms} (site:http://www.docs.microsoft.com/answers/products/ OR site:docs.microsoft.com OR site:github.com/azure-samples OR site:cloudidentity.com OR site:developer.microsoft.com/graph)
@@ -49,9 +49,9 @@ Where *{Your Search Terms}* correspond to your search keywords.
## Post a question to Microsoft Q&A
-Microsoft Q&A is the preferred channel for development-related questions. Here, members of the developer community and Microsoft team members are directly involved in helping you to solve your problems.
+[Microsoft Q&A](https://docs.microsoft.com/answers/products/) is the preferred channel for development-related questions. Here, members of the developer community and Microsoft team members are directly involved in helping you to solve your problems.
-If you can't find an answer to your question through search, submit a new question to Microsoft Q&A. Use one of the following tags when asking questions to help the community identify and answer your question more quickly:
+If you can't find an answer to your question through search, submit a new question to [Microsoft Q&A](https://docs.microsoft.com/answers/products/). Use one of the following tags when asking questions to help the community identify and answer your question more quickly:
|Component/area | Tags |
|---|---|
@@ -61,9 +61,9 @@ If you can't find an answer to your question through search, submit a new questi
| [Azure B2B](../external-identities/what-is-b2b.md) | [[azure-ad-b2b]](https://docs.microsoft.com/answers/topics/azure-ad-b2b.html) | | [Azure B2C](https://azure.microsoft.com/services/active-directory-b2c/) | [[azure-ad-b2c]](https://docs.microsoft.com/answers/topics/azure-ad-b2c.html) | | [Microsoft Graph API](https://developer.microsoft.com/graph/) | [[azure-ad-graph]](https://docs.microsoft.com/answers/topics/azure-ad-graph.html) |
-| Any other area related to authentication or authorization topics | [[azure-active-directory]](https://docs.microsoft.com/answers/topics/azure-ad-graph.html) |
+| Any other area related to authentication or authorization topics | [[azure-active-directory]](https://docs.microsoft.com/answers/topics/azure-active-directory.html) |
-The following posts from Microsoft Q&A contain tips on how to ask questions and how to add source code. Follow these guidelines to increase the chances for community members to assess and respond to your question quickly:
+The following posts from [Microsoft Q&A](https://docs.microsoft.com/answers/products/) contain tips on how to ask questions and how to add source code. Follow these guidelines to increase the chances for community members to assess and respond to your question quickly:
* [How do I ask a good question](https://docs.microsoft.com/answers/articles/24951/how-to-write-a-quality-question.html)
* [How to create a minimal, complete, and verifiable example](https://docs.microsoft.com/answers/articles/24907/how-to-write-a-quality-answer.html)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/identity-platform-integration-checklist https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/identity-platform-integration-checklist.md
@@ -65,7 +65,7 @@ Use the following checklist to ensure that your application is effectively integ
![checkbox](./medi)) to securely sign in users.
-![checkbox](./medi#compatible-client-libraries).<br/><br/>If you must hand code for the authentication protocols, you should follow a methodology such as [Microsoft SDL](https://www.microsoft.com/sdl/default.aspx). Pay close attention to the security considerations in the standards specifications for each protocol.
+![checkbox](./medi). If you must hand-code for the authentication protocols, you should follow the [Microsoft SDL](https://www.microsoft.com/sdl/default.aspx) or similar development methodology. Pay close attention to the security considerations in the standards specifications for each protocol.
![checkbox](./medi) apps.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/msal-migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-migration.md
@@ -69,7 +69,7 @@ __Q: How does MSAL work with AD FS?__
A: MSAL.NET supports certain scenarios to authenticate against AD FS 2019. If your app needs to acquire tokens directly from earlier version of AD FS, you should remain on ADAL. [Learn more](msal-net-adfs-support.md). __Q: How do I get help migrating my application?__
-A: See the [Migration guidance](#migration-guidance) section of this article. If, after reading the guide for your app's platform, you have additional questions, you can post on Microsoft Q&A with the tag `[azure-ad-adal-deprecation]` or open an issue in library's GitHub repository. See the [Languages and frameworks](msal-overview.md#languages-and-frameworks) section of the MSAL overview article for links to each library's repo.
+A: See the [Migration guidance](#migration-guidance) section of this article. If, after reading the guide for your app's platform, you have additional questions, you can post on [Microsoft Q&A](https://docs.microsoft.com/answers/topics/azure-ad-adal-deprecation.html) with the tag `[azure-ad-adal-deprecation]` or open an issue in the library's GitHub repository. See the [Languages and frameworks](msal-overview.md#languages-and-frameworks) section of the MSAL overview article for links to each library's repo.
## Next steps
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/reference-v2-libraries https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/reference-v2-libraries.md
@@ -1,146 +1,138 @@
Title: Microsoft identity platform authentication libraries
-description: Compatible client libraries and server middleware libraries, along with related library, source, and sample links, for the Microsoft identity platform.
+ Title: Microsoft identity platform authentication libraries | Azure
+description: List of client libraries and middleware compatible with the Microsoft identity platform. Use these libraries to add support for user sign-in (authentication) and protected web API access (authorization) to your applications.
-+ Previously updated : 07/25/2019- Last updated : 01/29/2021+
+# Customer intent: As a developer, I want to know whether there's a Microsoft Authentication Library (MSAL) available for
+# the language/framework I'm using to build my application, and whether the library is GA or in preview.
# Microsoft identity platform authentication libraries
-The [Microsoft identity platform ](../azuread-dev/azure-ad-endpoint-comparison.md) supports the industry-standard OAuth 2.0 and OpenID Connect 1.0 protocols. The Microsoft Authentication Library (MSAL) is designed to work with the Microsoft identity platform. You can also use open-source libraries that support OAuth 2.0 and OpenID Connect 1.0.
+The following tables show Microsoft authentication library support for several application types. They include links to library source code, where to get the package for your app's project, and whether the library supports user sign-in (authentication), access to protected web APIs (authorization), or both.
-We recommend that you use libraries written by protocol domain experts who follow a Security Development Lifecycle (SDL) methodology. Such methodologies include [the one that Microsoft follows][Microsoft-SDL]. If you hand code for the protocols, you should follow a methodology such as Microsoft SDL. Pay close attention to the security considerations in the standards specifications for each protocol.
+The Microsoft identity platform has been certified by the OpenID Foundation as a [certified OpenID provider](https://openid.net/certification/). If you prefer to use a library other than the Microsoft Authentication Library (MSAL) or another Microsoft-supported library, choose one with a [certified OpenID Connect implementation](https://openid.net/developers/certified/).
-> [!NOTE]
-> Are you looking for the Azure Active Directory Authentication Library (ADAL)? Check out the [ADAL library guide](../azuread-dev/active-directory-authentication-libraries.md).
+If you choose to hand-code your own protocol-level implementation of [OAuth 2.0 or OpenID Connect 1.0](active-directory-v2-protocols.md), pay close attention to the security considerations in each standard's specification and follow a Security Development Lifecycle (SDL) methodology like the [Microsoft SDL][Microsoft-SDL].
-## Types of libraries
+## Single-page application (SPA)
-The Microsoft identity platform works with two types of libraries:
+A single-page application runs entirely in the browser and fetches page data (HTML, CSS, and JavaScript) dynamically or at application load time. It can call web APIs to interact with back-end data sources.
-* **Client libraries**: Native clients and servers use client libraries to acquire access tokens for calling a resource such as Microsoft Graph.
-* **Server middleware libraries**: Web apps use server middleware libraries for user sign-in. Web APIs use server middleware libraries to validate tokens that are sent by native clients or by other servers.
+Because a SPA's code runs entirely in the browser, it's considered a *public client* that's unable to store secrets securely.
-## Library support
+| Language / framework | Project on<br/>GitHub | Package | Getting<br/>started | Sign in users | Access web APIs | Generally available (GA) *or*<br/>Public preview<sup>1</sup> |
+|-|--||:--:|:--:|::|::|
+| Angular | [MSAL Angular 2.0](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-angular) | [@azure/msal-angular](https://www.npmjs.com/package/@azure/msal-angular) | — | ![Library can request ID tokens for user sign-in.][y] | ![Library can request access tokens for protected web APIs.][y] | Public preview |
+| Angular | [MSAL Angular](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/msal-angular-v1/lib/msal-angular) | [@azure/msal-angular](https://www.npmjs.com/package/@azure/msal-angular) | [Tutorial](tutorial-v2-angular.md) | ![Library can request ID tokens for user sign-in.][y] | ![Library can request access tokens for protected web APIs.][y] | GA |
+| AngularJS | [MSAL AngularJS](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-angularjs) | [@azure/msal-angularjs](https://www.npmjs.com/package/@azure/msal-angularjs) | — | ![Library can request ID tokens for user sign-in.][y] | ![Library can request access tokens for protected web APIs.][y] | Public preview |
+| JavaScript | [MSAL.js 2.0](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-browser) | [@azure/msal-browser](https://www.npmjs.com/package/@azure/msal-browser) | [Tutorial](tutorial-v2-javascript-auth-code.md) | ![Library can request ID tokens for user sign-in.][y] | ![Library can request access tokens for protected web APIs.][y] | GA |
+| React | [MSAL React](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-react) | [@azure/msal-react](https://www.npmjs.com/package/@azure/msal-react) | — | ![Library can request ID tokens for user sign-in.][y] | ![Library can request access tokens for protected web APIs.][y] | Public preview |
+<!--
+| Vue | [Vue MSAL]( https://github.com/mvertopoulos/vue-msal) | [vue-msal]( https://www.npmjs.com/package/vue-msal) | ![X indicating no.][n] | ![Green check mark.][y] | ![Green check mark.][y] | -- |
+-->
-Libraries come in two support categories:
+<sup>1</sup> [Supplemental terms of use for Microsoft Azure Previews][preview-tos] apply to libraries in *Public preview*.
-* **Microsoft-supported**: Microsoft provides fixes for these libraries and has done SDL due diligence on these libraries.
-* **Compatible**: Microsoft has tested these libraries in basic scenarios and has confirmed that they work with the Microsoft identity platform. Microsoft doesn't provide fixes for these libraries and hasn't done a review of these libraries. Issues and feature requests should be directed to the library's open-source project.
+## Web application
-For a list of libraries that work with the Microsoft identity platform, see the following sections.
+A web application runs code on a server that generates and sends HTML, CSS, and JavaScript to a user's web browser to be rendered. The user's identity is maintained as a session between the user's browser (the front end) and the web server (the back end).
-## Microsoft-supported client libraries
+Because a web application's code runs on the web server, it's considered a *confidential client* that can store secrets securely.
-Use client authentication libraries to acquire a token for calling a protected web API.
+| Language / framework | Project on<br/>GitHub | Package | Getting<br/>started | Sign in users | Access web APIs | Generally available (GA) *or*<br/>Public preview<sup>1</sup> |
+|-|--||:-:|:--:|::|::|
+| .NET | [MSAL.NET](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet) | [Microsoft.Identity.Client](https://www.nuget.org/packages/Microsoft.Identity.Client) | — | ![Library cannot request ID tokens for user sign-in.][n] | ![Library can request access tokens for protected web APIs.][y] | GA |
+| ASP.NET Core | [ASP.NET Security](/aspnet/core/security/) | [Microsoft.AspNetCore.Authentication](https://www.nuget.org/packages/Microsoft.AspNetCore.Authentication/) | — | ![Library can request ID tokens for user sign-in.][y] | ![Library cannot request access tokens for protected web APIs.][n] | GA |
+| ASP.NET Core | [Microsoft.Identity.Web](https://github.com/AzureAD/microsoft-identity-web) | [Microsoft.Identity.Web](https://www.nuget.org/packages/Microsoft.Identity.Web) | — | ![Library can request ID tokens for user sign-in.][y] | ![Library can request access tokens for protected web APIs.][y] | GA |
+| Java | [MSAL4J](https://github.com/AzureAD/microsoft-authentication-library-for-java) | [msal4j](https://search.maven.org/artifact/com.microsoft.azure/msal4j) | [Quickstart](quickstart-v2-java-webapp.md) | ![Library can request ID tokens for user sign-in.][y] | ![Library can request access tokens for protected web APIs.][y] | GA |
+| Node.js | [MSAL Node.js](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-node) | [msal-node](https://www.npmjs.com/package/@azure/msal-node) | [Quickstart](quickstart-v2-nodejs-webapp-msal.md) | ![Library can request ID tokens for user sign-in.][y] | ![Library can request access tokens for protected web APIs.][y] | Public preview |
+| Node.js | [Azure AD Passport](https://github.com/AzureAD/passport-azure-ad) | [passport-azure-ad](https://www.npmjs.com/package/passport-azure-ad) | [Quickstart](quickstart-v2-nodejs-webapp.md) | ![Library can request ID tokens for user sign-in.][y] | ![Library cannot request access tokens for protected web APIs.][n] | GA |
+| Python | [MSAL Python](https://github.com/AzureAD/microsoft-authentication-library-for-python) | [msal](https://pypi.org/project/msal) | [Quickstart](quickstart-v2-python-webapp.md) | ![Library can request ID tokens for user sign-in.][y] | ![Library can request access tokens for protected web APIs.][y] | GA |
+<!--
+| Java | [ScribeJava](https://github.com/scribejava/scribejava) | [ScribeJava 3.2.0](https://github.com/scribejava/scribejava/releases/tag/scribejava-3.2.0) | ![X indicating no.][n] | ![X indicating no.][n] | ![Green check mark.][y] | -- |
+| Java | [Gluu oxAuth](https://github.com/GluuFederation/oxAuth) | [oxAuth 3.0.2](https://github.com/GluuFederation/oxAuth/releases/tag/3.0.2) | ![X indicating no.][n] | ![Green check mark.][y] | ![Green check mark.][y] | -- |
+| Node.js | [openid-client](https://github.com/panva/node-openid-client/) | [openid-client 2.4.5](https://github.com/panva/node-openid-client/releases/tag/v2.4.5) | ![X indicating no.][n] | ![Green check mark.][y] | ![Green check mark.][y] | -- |
+| PHP | [PHP League oauth2-client](https://github.com/thephpleague/oauth2-client) | [oauth2-client 1.4.2](https://github.com/thephpleague/oauth2-client/releases/tag/1.4.2) | ![X indicating no.][n] | ![X indicating no.][n] | ![Green check mark.][y] | -- |
+| Ruby | [OmniAuth](https://github.com/omniauth/omniauth) | [omniauth 1.3.1](https://github.com/omniauth/omniauth/releases/tag/v1.3.1)<br/>[omniauth-oauth2 1.4.0](https://github.com/intridea/omniauth-oauth2) | ![X indicating no.][n] | ![X indicating no.][n] | ![Green check mark.][y] | -- |
+-->
-| Platform | Library | Download | Source code | Sample | Reference | Conceptual doc | Roadmap |
-| | | | | | | | |
-| ![JavaScript](medi)| [Roadmap](https://github.com/AzureAD/microsoft-authentication-library-for-js/wiki#roadmap)
-![Angular](medi) | [Roadmap](https://github.com/AzureAD/microsoft-authentication-library-for-js/wiki#roadmap)
-| ![.NET Framework](medi) | [Roadmap](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/wiki#roadmap)
-| ![.NET Core icon](media/sample-v2-code/logo_NETCore.png) | Microsoft Identity Web |[NuGet](https://www.nuget.org/packages/Microsoft.Identity.Web) |[GitHub](https://github.com/AzureAD/microsoft-identity-web) | [Samples](https://aka.ms/ms-id-web/samples) | [Microsoft.Identity.Web](/dotnet/api/microsoft.identity.web?view=azure-dotnet-preview&preserve-view=true) |[Conceptual docs](https://aka.ms/ms-id-web/conceptual-doc) | [Roadmap](https://github.com/AzureAD/microsoft-identity-web/wiki#roadmap)
-| ![Python](media/sample-v2-code/logo_python.png) | MSAL Python | [PyPI](https://pypi.org/project/msal) | [GitHub](https://github.com/AzureAD/microsoft-authentication-library-for-python) | [Samples](https://github.com/AzureAD/microsoft-authentication-library-for-python/tree/dev/sample) | [ReadTheDocs](https://msal-python.rtfd.io/) | [Wiki](https://github.com/AzureAD/microsoft-authentication-library-for-python/wiki) | [Roadmap](https://github.com/AzureAD/microsoft-authentication-library-for-python/wiki/Roadmap)
-| ![Java](media/sample-v2-code/logo_java.png) | MSAL Java | [Maven](https://search.maven.org/artifact/com.microsoft.azure/msal4j) | [GitHub](https://github.com/AzureAD/microsoft-authentication-library-for-java) | [Samples](https://github.com/AzureAD/microsoft-authentication-library-for-java/tree/dev/src/samples) | [Reference](https://javadoc.io/doc/com.microsoft.azure/msal4j/latest/https://docsupdatetracker.net/index.html) | [Wiki](https://github.com/AzureAD/microsoft-authentication-library-for-java/wiki) | [Roadmap](https://github.com/AzureAD/microsoft-authentication-library-for-java/wiki)
-| iOS & macOS | MSAL iOS and macOS | [GitHub](https://github.com/AzureAD/microsoft-authentication-library-for-objc) |[GitHub](https://github.com/AzureAD/microsoft-authentication-library-for-objc) | [iOS app](https://github.com/Azure-Samples/ms-identity-mobile-apple-swift-objc), [macOS app](https://github.com/Azure-Samples/ms-identity-macOS-swift-objc) | [Reference](https://azuread.github.io/microsoft-authentication-library-for-objc/https://docsupdatetracker.net/index.html) | [Conceptual docs](msal-overview.md) | |
-|![Android / Java](medi) |[Roadmap](https://github.com/AzureAD/microsoft-authentication-library-for-android/wiki/Roadmap)
+<sup>1</sup> [Supplemental terms of use for Microsoft Azure Previews][preview-tos] apply to libraries in *Public preview*.
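To make the confidential-client pattern above concrete, here is a minimal sketch of the authorization code flow using MSAL Python, one of the web-app libraries listed in the table. The client ID, tenant ID, client secret, redirect URI, and scope are placeholder values for an app registered in Azure AD; a real web app would drive the redirect and code exchange from inside its web framework.

```python
# Minimal sketch: authorization code flow for a server-side web app
# (confidential client) with MSAL Python. Placeholder IDs and secrets.
import msal

app = msal.ConfidentialClientApplication(
    client_id="11111111-1111-1111-1111-111111111111",   # placeholder app (client) ID
    client_credential="your-client-secret",             # secret kept on the server
    authority="https://login.microsoftonline.com/your-tenant-id",
)

# Step 1: send the user's browser to this URL to sign in.
auth_url = app.get_authorization_request_url(
    scopes=["User.Read"],
    redirect_uri="https://localhost:5000/getAToken",
)
print("Redirect the user to:", auth_url)

# Step 2: after the redirect back, exchange the returned authorization code for tokens.
def redeem_code(code: str) -> dict:
    return app.acquire_token_by_authorization_code(
        code,
        scopes=["User.Read"],
        redirect_uri="https://localhost:5000/getAToken",
    )
```

Because the secret never leaves the server, this flow is only appropriate for confidential clients; the public-client sketches later in this section omit the secret entirely.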
-## Microsoft-supported server middleware libraries
+## Desktop application
-Use middleware libraries to help protect web applications and web APIs. Web apps or web APIs written with ASP.NET or ASP.NET Core use the middleware libraries.
+A desktop application is typically binary (compiled) code that surfaces a user interface and is intended to run on a user's desktop.
-| Platform | Library | Download | Source Code | Sample | Reference
-| | | | | | |
-| ![.NET](medi) |[ASP.NET API reference](/dotnet/api/?view=aspnetcore-2.0&preserve-view=true) |
-| ![.NET](medi) |[Reference](/dotnet/api/overview/azure/activedirectory/client?view=azure-dotnet&preserve-view=true) |
-| ![Node.js](media/sample-v2-code/logo_nodejs.png) | Azure AD Passport |[NPM](https://www.npmjs.com/package/passport-azure-ad) |[GitHub](https://github.com/AzureAD/passport-azure-ad) | [Web app](https://github.com/AzureADQuickStarts/AppModelv2-WebApp-OpenIDConnect-nodejs) | |
+Because a desktop application runs on the user's desktop, it's considered a *public client* that's unable to store secrets securely.
-## Microsoft-supported libraries by OS / language
+| Language / framework | Project on<br/>GitHub | Package | Getting<br/>started | Sign in users | Access web APIs | Generally available (GA) *or*<br/>Public preview<sup>1</sup> |
+|-|--||::|:--:|::|::|
+| Electron | [MSAL Node.js](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-node) | [@azure/msal-node](https://www.npmjs.com/package/@azure/msal-node) | — | ![Library can request ID tokens for user sign-in.][y] | ![Library can request access tokens for protected web APIs.][y] | Public preview |
+| Java | [MSAL4J](https://github.com/AzureAD/microsoft-authentication-library-for-java) | [msal4j](https://mvnrepository.com/artifact/com.microsoft.azure/msal4j) | — | ![Library can request ID tokens for user sign-in.][y] | ![Library can request access tokens for protected web APIs.][y] | GA |
+| macOS (Swift/Obj-C) | [MSAL for iOS and macOS](https://github.com/AzureAD/microsoft-authentication-library-for-objc) | [MSAL](https://cocoapods.org/pods/MSAL) | [Tutorial](tutorial-v2-ios.md) | ![Library can request ID tokens for user sign-in.][y] | ![Library can request access tokens for protected web APIs.][y] | GA |
+| UWP | [MSAL.NET](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet) | [Microsoft.Identity.Client](https://www.nuget.org/packages/Microsoft.Identity.Client) | [Tutorial](tutorial-v2-windows-uwp.md) | ![Library can request ID tokens for user sign-in.][y] | ![Library can request access tokens for protected web APIs.][y] | GA |
+| WPF | [MSAL.NET](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet) | [Microsoft.Identity.Client](https://www.nuget.org/packages/Microsoft.Identity.Client) | [Tutorial](tutorial-v2-windows-desktop.md) | ![Library can request ID tokens for user sign-in.][y] | ![Library can request access tokens for protected web APIs.][y] | GA |
+<!--
+| Java | Scribe | [Scribe Java](https://mvnrepository.com/artifact/org.scribe/scribe) | ![X indicating no.][n] | ![Green check mark.][y] | ![Green check mark.][y] | -- |
+| React Native | [React Native App Auth](https://github.com/FormidableLabs/react-native-app-auth/blob/main/docs/config-examples/azure-active-directory.md) | [react-native-app-auth](https://www.npmjs.com/package/react-native-app-auth) | ![X indicating no.][n] | ![Green check mark.][y] | ![Green check mark.][y] | -- |
+-->
-In term of supported operating systems vs languages, the mapping is the following:
+<sup>1</sup> [Supplemental terms of use for Microsoft Azure Previews][preview-tos] apply to libraries in *Public preview*.
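The following sketch illustrates the public-client pattern for a desktop app: an interactive sign-in with no client secret. It uses MSAL Python, which is not in the desktop table above but supports the same pattern and keeps these sketches in one language; the client ID and tenant are placeholders.

```python
# Minimal sketch: interactive sign-in from a desktop app (public client).
# No client secret is used because a public client can't protect one.
import msal

app = msal.PublicClientApplication(
    client_id="22222222-2222-2222-2222-222222222222",   # placeholder app (client) ID
    authority="https://login.microsoftonline.com/your-tenant-id",
)

# Opens the system browser for sign-in and returns a token response dict.
result = app.acquire_token_interactive(scopes=["User.Read"])

if "access_token" in result:
    print("Signed in; token expires in", result["expires_in"], "seconds")
else:
    print("Sign-in failed:", result.get("error"), result.get("error_description"))
```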
-| Platform | Windows | Linux | macOS | iOS | Android |
-|-||||||
-| ![JavaScript](media/sample-v2-code/logo_js.png) | MSAL.js | MSAL.js | MSAL.js | MSAL.js | MSAL.js |
-| <img alt="C#" src="../../cognitive-services/speech-service/media/index/logo_csharp.svg" width="64px" height="64px" /> | ASP.NET, ASP.NET Core, MSAL.Net (.NET FW, Core, UWP)| ASP.NET Core, MSAL.Net (.NET Core) | ASP.NET Core, MSAL.Net (macOS) | MSAL.Net (Xamarin.iOS) | MSAL.Net (Xamarin.Android)|
-| Swift <br> Objective-C | | | [MSAL for iOS and macOS](msal-overview.md) | [MSAL for iOS and macOS](msal-overview.md) | |
-| ![Java](media/sample-v2-code/logo_java.png) Java | msal4j | msal4j | msal4j | | MSAL Android |
-| ![Python](media/sample-v2-code/logo_python.png) Python | MSAL Python | MSAL Python | MSAL Python |
-| ![Node.js](media/sample-v2-code/logo_nodejs.png) Node.js | Passport.node | Passport.node | Passport.node |
+## Mobile application
-See also [Scenarios by supported platforms and languages](authentication-flows-app-scenarios.md#scenarios-and-supported-platforms-and-languages)
+A mobile application is typically binary (compiled) code that surfaces a user interface and is intended to run on a user's mobile device.
-## Compatible client libraries
+Because a mobile application runs on the user's mobile device, it's considered a *public client* that's unable to store secrets securely.
-| Platform | Library name | Tested version | Source code | Sample |
-|::|::|::|::|::|
-|![JavaScript](media/sample-v2-code/logo_js.png)|[Hello.js](https://adodson.com/hello.js/) | Version 1.13.5 |[Hello.js](https://github.com/MrSwitch/hello.js) |[SPA](https://github.com/Azure-Samples/active-directory-javascript-graphapi-v2) |
-|![Vue](media/sample-v2-code/logo_vue.png)|[Vue MSAL](https://github.com/mvertopoulos/vue-msal) | Version 3.0.3 |[vue-msal](https://github.com/mvertopoulos/vue-msal) | |
-| ![Java](media/sample-v2-code/logo_java.png) | [Scribe Java](https://github.com/scribejava/scribejava) | [Version 3.2.0](https://github.com/scribejava/scribejava/releases/tag/scribejava-3.2.0) | [ScribeJava](https://github.com/scribejava/scribejava/) | |
-| ![Java](media/sample-v2-code/logo_java.png) | [Gluu OpenID Connect library](https://github.com/GluuFederation/oxAuth) | [Version 3.0.2](https://github.com/GluuFederation/oxAuth/releases/tag/3.0.2) | [Gluu OpenID Connect library](https://github.com/GluuFederation/oxAuth) | |
-| ![Python](media/sample-v2-code/logo_python.png) | [Requests-OAuthlib](https://github.com/requests/requests-oauthlib) | [Version 1.2.0](https://github.com/requests/requests-oauthlib/releases/tag/v1.2.0) | [Requests-OAuthlib](https://github.com/requests/requests-oauthlib) | |
-| ![Node.js](media/sample-v2-code/logo_nodejs.png) | [openid-client](https://github.com/panva/node-openid-client) | [Version 2.4.5](https://github.com/panva/node-openid-client/releases/tag/v2.4.5) | [openid-client](https://github.com/panva/node-openid-client) | |
-| ![PHP](media/sample-v2-code/logo_php.png) | [The PHP League oauth2-client](https://github.com/thephpleague/oauth2-client) | [Version 1.4.2](https://github.com/thephpleague/oauth2-client/releases/tag/1.4.2) | [oauth2-client](https://github.com/thephpleague/oauth2-client/) | |
-| ![Ruby](media/sample-v2-code/logo_ruby.png) |[OmniAuth](https://github.com/omniauth/omniauth/wiki) |omniauth: 1.3.1<br />omniauth-oauth2: 1.4.0 |[OmniAuth](https://github.com/omniauth/omniauth)<br />[OmniAuth OAuth2](https://github.com/intridea/omniauth-oauth2) | |
-| iOS, macOS, & Android | [React Native App Auth](https://github.com/FormidableLabs/react-native-app-auth) | [Version 4.2.0](https://github.com/FormidableLabs/react-native-app-auth/releases/tag/v4.2.0) | [React Native App Auth](https://github.com/FormidableLabs/react-native-app-auth) | |
+| Platform | Project on<br/>GitHub | Package | Getting<br/>started | Sign in users | Access web APIs | Generally available (GA) *or*<br/>Public preview<sup>1</sup> |
+|-|||:--:|:--:|::|::|
+| Android (Java) | [MSAL Android](https://github.com/AzureAD/microsoft-authentication-library-for-android) | [MSAL](https://mvnrepository.com/artifact/com.microsoft.identity.client/msal) | [Quickstart](quickstart-v2-android.md) | ![Library can request ID tokens for user sign-in.][y] | ![Library can request access tokens for protected web APIs.][y] | GA |
+| Android (Kotlin) | [MSAL Android](https://github.com/AzureAD/microsoft-authentication-library-for-android) | [MSAL](https://mvnrepository.com/artifact/com.microsoft.identity.client/msal) | — | ![Library can request ID tokens for user sign-in.][y] | ![Library can request access tokens for protected web APIs.][y] | GA |
+| iOS (Swift/Obj-C) | [MSAL for iOS and macOS](https://github.com/AzureAD/microsoft-authentication-library-for-objc) | [MSAL](https://cocoapods.org/pods/MSAL) | [Tutorial](tutorial-v2-ios.md) | ![Library can request ID tokens for user sign-in.][y] | ![Library can request access tokens for protected web APIs.][y] | GA |
+| Xamarin (.NET) | [MSAL.NET](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet) | [Microsoft.Identity.Client](https://www.nuget.org/packages/Microsoft.Identity.Client) | — | ![Library can request ID tokens for user sign-in.][y] | ![Library can request access tokens for protected web APIs.][y] | GA |
+<!--
+| React Native |[React Native App Auth](https://github.com/FormidableLabs/react-native-app-auth/blob/main/docs/config-examples/azure-active-directory.md) | [react-native-app-auth](https://www.npmjs.com/package/react-native-app-auth) | ![X indicating no.][n] | ![Green check mark.][y] | ![Green check mark.][y] | -- |
+-->
-For any standards-compliant library, you can use the Microsoft identity platform. It's important to know where to go for support:
+<sup>1</sup> [Supplemental terms of use for Microsoft Azure Previews][preview-tos] apply to libraries in *Public preview*.
-* For issues and new feature requests in library code, contact the library owner.
-* For issues and new feature requests in the service-side protocol implementation, contact Microsoft.
-* [File a feature request](https://feedback.azure.com/forums/169401-azure-active-directory) for additional features you want to see in the protocol.
-* [Create a support request](../../azure-portal/supportability/how-to-create-azure-support-request.md) if you find an issue where the Microsoft identity platform isn't compliant with OAuth 2.0 or OpenID Connect 1.0.
+## Service / daemon
-## Related content
+Services and daemons are commonly used for server-to-server and other unattended (sometimes called *headless*) communication. Because there's no user at the keyboard to enter credentials or consent to resource access, these applications authenticate as themselves, not a user, when requesting authorized access to a web API's resources.
-For more information about the Microsoft identity platform, see the [Microsoft identity platform overview][AAD-App-Model-V2-Overview].
+A service or daemon that runs on a server is considered a *confidential client* that can store its secrets securely.
+
+| Language / framework | Project on<br/>GitHub | Package | Getting<br/>started | Sign in users | Access web APIs | Generally available (GA) *or*<br/>Public preview<sup>1</sup> |
+|-||-|::|:--:|::|::|
+| .NET | [MSAL.NET](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet) | [Microsoft.Identity.Client](https://www.nuget.org/packages/Microsoft.Identity.Client/) | [Quickstart](quickstart-v2-netcore-daemon.md) | ![Library cannot request ID tokens for user sign-in.][n] | ![Library can request access tokens for protected web APIs.][y] | GA |
+| Java | [MSAL4J](https://github.com/AzureAD/microsoft-authentication-library-for-java) | [msal4j](https://javadoc.io/doc/com.microsoft.azure/msal4j/latest/index.html) | — | ![Library cannot request ID tokens for user sign-in.][n] | ![Library can request access tokens for protected web APIs.][y] | GA |
+| Python | [MSAL Python](https://github.com/AzureAD/microsoft-authentication-library-for-python) | [msal-python](https://github.com/AzureAD/microsoft-authentication-library-for-python) | [Quickstart](quickstart-v2-python-daemon.md) | ![Library cannot request ID tokens for user sign-in.][n] | ![Library can request access tokens for protected web APIs.][y] | GA |
+<!--
+|PHP| [The PHP League oauth2-client](https://oauth2-client.thephpleague.com/usage/) | [League\OAuth2](https://oauth2-client.thephpleague.com/) | ![Green check mark.][n] | ![X indicating no.][n] | ![Green check mark.][y] | -- |
+-->
+
+<sup>1</sup> [Supplemental terms of use for Microsoft Azure Previews][preview-tos] apply to libraries in *Public preview*.
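As a sketch of the unattended pattern described above, the following uses the client credentials flow with MSAL Python (listed in the table). The app authenticates as itself with a client secret (a certificate works too) and requests an app-only token for Microsoft Graph; the IDs and secret are placeholders.

```python
# Minimal sketch: client credentials flow for a daemon/service (confidential client).
# The app signs in as itself; no user is involved.
import msal

app = msal.ConfidentialClientApplication(
    client_id="33333333-3333-3333-3333-333333333333",   # placeholder app (client) ID
    client_credential="your-client-secret",             # or a certificate
    authority="https://login.microsoftonline.com/your-tenant-id",
)

# ".default" requests the application permissions already granted to the app registration.
result = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])

if "access_token" in result:
    print("Got app-only access token for Microsoft Graph")
else:
    print("Token request failed:", result.get("error_description"))
```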
+
+## Next steps
+
+For more information about the Microsoft Authentication Library, see the [Overview of the Microsoft Authentication Library (MSAL)](msal-overview.md).
<!--Image references-->
+[y]: ./media/common/yes.png
+[n]: ./media/common/no.png
-<!--Reference style links -->
+<!--Reference-style links -->
[AAD-App-Model-V2-Overview]: v2-overview.md
-[ClientLib-NET-Lib]: https://www.nuget.org/packages/Microsoft.Identity.Client
-[ClientLib-NET-Repo]: https://github.com/AzureAD/microsoft-authentication-library-for-dotnet
-[ClientLib-NET-Sample]: ./tutorial-v2-windows-desktop.md
-[ClientLib-Node-Lib]: https://www.npmjs.com/package/passport-azure-ad
-[ClientLib-Node-Repo]: https://github.com/AzureAD/passport-azure-ad
-[ClientLib-Node-Sample]:/
-[ClientLib-Iosmac-Lib]:/
-[ClientLib-Iosmac-Repo]:/
-[ClientLib-Iosmac-Sample]:/
-[ClientLib-Android-Lib]:/
-[ClientLib-Android-Repo]:/
-[ClientLib-Android-Sample]:/
-[ClientLib-Js-Lib]:/
-[ClientLib-Js-Repo]:/
-[ClientLib-Js-Sample]:/
-
-[Microsoft-SDL]: https://www.microsoft.com/sdl/default.aspx
-[ServerLib-Net4-Owin-Oidc-Lib]: https://www.nuget.org/packages/Microsoft.Owin.Security.OpenIdConnect/
-[ServerLib-Net4-Owin-Oidc-Repo]: https://katanaproject.codeplex.com/
-[ServerLib-Net4-Owin-Oidc-Sample]: ./tutorial-v2-asp-webapp.md
-[ServerLib-Net4-Owin-Oauth-Lib]: https://www.nuget.org/packages/Microsoft.Owin.Security.OAuth/
-[ServerLib-Net4-Owin-Oauth-Repo]: https://katanaproject.codeplex.com/
-[ServerLib-Net4-Owin-Oauth-Sample]: https://azure.microsoft.com/documentation/articles/active-directory-v2-devquickstarts-dotnet-api/
-[ServerLib-Net-Jwt-Lib]: https://www.nuget.org/packages/System.IdentityModel.Tokens.Jwt
-[ServerLib-Net-Jwt-Repo]: https://github.com/AzureAD/azure-activedirectory-identitymodel-extensions-for-dotnet
-[ServerLib-Net-Jwt-Sample]:/
-[ServerLib-NetCore-Owin-Oidc-Lib]: https://www.nuget.org/packages/Microsoft.AspNetCore.Authentication.OpenIdConnect/
-[ServerLib-NetCore-Owin-Oidc-Repo]: https://github.com/aspnet/Security
-[ServerLib-NetCore-Owin-Oidc-Sample]: https://github.com/Azure-Samples/active-directory-dotnet-webapp-openidconnect-aspnetcore-v2
-[ServerLib-NetCore-Owin-Oauth-Lib]: https://www.nuget.org/packages/Microsoft.AspNetCore.Authentication.OAuth/
-[ServerLib-NetCore-Owin-Oauth-Repo]: https://github.com/aspnet/Security
-[ServerLib-NetCore-Owin-Oauth-Sample]:/
-[ServerLib-Node-Lib]: https://www.npmjs.com/package/passport-azure-ad
-[ServerLib-Node-Repo]: https://github.com/AzureAD/passport-azure-ad/
-[ServerLib-Node-Sample]: https://azure.microsoft.com/documentation/articles/active-directory-v2-devquickstarts-node-web/
+[Microsoft-SDL]: https://www.microsoft.com/securityengineering/sdl/
+[preview-tos]: https://azure.microsoft.com/support/legal/preview-supplemental-terms/
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/tutorial-v2-aspnet-daemon-web-app https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/tutorial-v2-aspnet-daemon-web-app.md
@@ -234,8 +234,8 @@ When no longer needed, delete the app object that you created in the [Register y
## Get help
-Use [Microsoft Q&A](https://docs.microsoft.com/answers/topics/azure-ad-msal.html) to get support from the community.
-Ask your questions on Microsoft Q&A first, and browse existing issues to see if someone has asked your question before.
+Use [Microsoft Q&A](https://docs.microsoft.com/answers/products/) to get support from the community.
+Ask your questions on [Microsoft Q&A](https://docs.microsoft.com/answers/products/) first, and browse existing issues to see if someone has asked your question before.
Make sure that your questions or comments are tagged with "azure-ad-adal-deprecation," "azure-ad-msal," and "dotnet-standard." If you find a bug in the sample, please raise the issue on [GitHub Issues](https://github.com/Azure-Samples/ms-identity-aspnet-daemon-webapp/issues).
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/v2-howto-get-appsource-certified https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/v2-howto-get-appsource-certified.md
@@ -102,7 +102,7 @@ For more information about the AppSource trial experience, see [this video](http
For Azure AD integration, we use [Microsoft Q&A](https://docs.microsoft.com/answers/topics/azure-active-directory.html) with the community to provide support.
-We highly recommend you ask your questions on Microsoft Q&A first and browse existing issues to see if someone has asked your question before. Make sure that your questions or comments are tagged with [`[azure-active-directory]`](https://docs.microsoft.com/answers/topics/azure-active-directory.html).
+We highly recommend you ask your questions on [Microsoft Q&A](https://docs.microsoft.com/answers/topics/azure-active-directory.html) first and browse existing issues to see if someone has asked your question before. Make sure that your questions or comments are tagged with [`[azure-active-directory]`](https://docs.microsoft.com/answers/topics/azure-active-directory.html).
Use the following comments section to provide feedback and help us refine and shape our content.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/manage-app-consent-policies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/manage-app-consent-policies.md
@@ -147,4 +147,4 @@ To learn more:
* [Permissions and consent in the Microsoft identity platform](../develop/v2-permissions-and-consent.md) To get help or find answers to your questions:
-* [Azure AD on StackOverflow](https://docs.microsoft.com/answers/topics/azure-active-directory.html)
\ No newline at end of file
+* [Azure AD on Microsoft Q&A](https://docs.microsoft.com/answers/products/)
\ No newline at end of file
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/github-ae-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/github-ae-tutorial.md
@@ -66,7 +66,7 @@ Follow these steps to enable Azure AD SSO in the Azure portal.
1. In the Azure portal, on the **GitHub AE** application integration page, find the **Manage** section and select **single sign-on**. 1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
@@ -91,11 +91,19 @@ Follow these steps to enable Azure AD SSO in the Azure portal.
![image](common/default-attributes.png)
-1. In addition to above, GitHub AE application expects few more attributes to be passed back in SAML response which are shown below. These attributes are also pre populated but you can review them as per your requirements.
-
- | Name | Source Attribute|
- | -- | |
- | administrator | true |
+1. Edit **User Attributes & Claims**.
+
+1. Click **Add new claim** and enter the name as **administrator** in the textbox.
+
+1. Expand **Claim conditions** and select **Members** from **User type**.
+
+1. Click **Select groups**, search for the group whose members should be administrators for GitHub AE, and select it to scope this claim to that group.
+
+1. Select **Attribute** for **Source** and enter **true** for the **Value**.
+
+1. Click **Save**.
+
+ ![manage claim](./media/github-ae-tutorial/administrator.png)
> [!NOTE] > For instructions on how to add a claim, follow the [link](https://docs.github.com/en/github-ae@latest/admin/authentication/configuring-authentication-and-provisioning-for-your-enterprise-using-azure-ad).
aks https://docs.microsoft.com/en-us/azure/aks/policy-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/policy-reference.md
@@ -1,7 +1,7 @@
Title: Built-in policy definitions for Azure Kubernetes Service description: Lists Azure Policy built-in policy definitions for Azure Kubernetes Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/25/2021 Last updated : 01/29/2021
api-management https://docs.microsoft.com/en-us/azure/api-management/api-management-policy-expressions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-policy-expressions.md
@@ -215,7 +215,7 @@ A variable named `context` is implicitly available in every policy [expression](
|<a id="ref-context-request-headers"></a>string context.Request.Headers.GetValueOrDefault(headerName: string, defaultValue: string)|headerName: string<br /><br /> defaultValue: string<br /><br /> Returns comma-separated request header values or `defaultValue` if the header is not found.| |<a id="ref-context-response"></a>context.Response|Body: [IMessageBody](#ref-imessagebody)<br /><br /> [Headers](#ref-context-response-headers): IReadOnlyDictionary<string, string[]><br /><br /> StatusCode: int<br /><br /> StatusReason: string| |<a id="ref-context-response-headers"></a>string context.Response.Headers.GetValueOrDefault(headerName: string, defaultValue: string)|headerName: string<br /><br /> defaultValue: string<br /><br /> Returns comma-separated response header values or `defaultValue` if the header is not found.|
-|<a id="ref-context-subscription"></a>context.Subscription|CreatedTime: DateTime<br /><br /> EndDate: DateTime?<br /><br /> Id: string<br /><br /> Key: string<br /><br /> Name: string<br /><br /> PrimaryKey: string<br /><br /> SecondaryKey: string<br /><br /> StartDate: DateTime?|
+|<a id="ref-context-subscription"></a>context.Subscription|CreatedDate: DateTime<br /><br /> EndDate: DateTime?<br /><br /> Id: string<br /><br /> Key: string<br /><br /> Name: string<br /><br /> PrimaryKey: string<br /><br /> SecondaryKey: string<br /><br /> StartDate: DateTime?|
|<a id="ref-context-user"></a>context.User|Email: string<br /><br /> FirstName: string<br /><br /> Groups: IEnumerable<[IGroup](#ref-igroup)\><br /><br /> Id: string<br /><br /> Identities: IEnumerable<[IUserIdentity](#ref-iuseridentity)\><br /><br /> LastName: string<br /><br /> Note: string<br /><br /> RegistrationDate: DateTime| |<a id="ref-iapi"></a>IApi|Id: string<br /><br /> Name: string<br /><br /> Path: string<br /><br /> Protocols: IEnumerable<string\><br /><br /> ServiceUrl: [IUrl](#ref-iurl)<br /><br /> SubscriptionKeyParameterNames: [ISubscriptionKeyParameterNames](#ref-isubscriptionkeyparameternames)| |<a id="ref-igroup"></a>IGroup|Id: string<br /><br /> Name: string|
@@ -248,4 +248,4 @@ For more information working with policies, see:
+ [Policies in API Management](api-management-howto-policies.md) + [Transform APIs](transform-api.md) + [Policy Reference](./api-management-policies.md) for a full list of policy statements and their settings
-+ [Policy samples](./policy-reference.md)
\ No newline at end of file++ [Policy samples](./policy-reference.md)
api-management https://docs.microsoft.com/en-us/azure/api-management/policy-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/policy-reference.md
@@ -1,7 +1,7 @@
Title: Built-in policy definitions for Azure API Management description: Lists Azure Policy built-in policy definitions for Azure API Management. These built-in policy definitions provide approaches to managing your Azure resources. Previously updated : 01/25/2021 Last updated : 01/29/2021
app-service https://docs.microsoft.com/en-us/azure/app-service/policy-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/policy-reference.md
@@ -1,7 +1,7 @@
Title: Built-in policy definitions for Azure App Service description: Lists Azure Policy built-in policy definitions for Azure App Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/25/2021 Last updated : 01/29/2021
automation https://docs.microsoft.com/en-us/azure/automation/automation-hrw-run-runbooks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-hrw-run-runbooks.md
@@ -3,7 +3,7 @@ Title: Run Azure Automation runbooks on a Hybrid Runbook Worker
description: This article describes how to run runbooks on machines in your local datacenter or other cloud provider with the Hybrid Runbook Worker. Previously updated : 10/06/2020 Last updated : 01/29/2021
@@ -90,6 +90,10 @@ Use the following procedure to specify a Run As account for a Hybrid Runbook Wor
As part of your automated build process for deploying resources in Azure, you might require access to on-premises systems to support a task or set of steps in your deployment sequence. To provide authentication against Azure using the Run As account, you must install the Run As account certificate.
+>[!NOTE]
>This PowerShell runbook currently does not run on Linux machines. It runs only on Windows machines.
+>
+ The following PowerShell runbook, called **Export-RunAsCertificateToHybridWorker**, exports the Run As certificate from your Azure Automation account. The runbook downloads and imports the certificate into the local machine certificate store on a Hybrid Runbook Worker that is connected to the same account. Once it completes that step, the runbook verifies that the worker can successfully authenticate to Azure using the Run As account. >[!NOTE]
automation https://docs.microsoft.com/en-us/azure/automation/policy-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/policy-reference.md
@@ -1,7 +1,7 @@
Title: Built-in policy definitions for Azure Automation description: Lists Azure Policy built-in policy definitions for Azure Automation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/25/2021 Last updated : 01/29/2021
azure-app-configuration https://docs.microsoft.com/en-us/azure/azure-app-configuration/policy-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/policy-reference.md
@@ -1,7 +1,7 @@
Title: Built-in policy definitions for Azure App Configuration description: Lists Azure Policy built-in policy definitions for Azure App Configuration. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/25/2021 Last updated : 01/29/2021
azure-app-configuration https://docs.microsoft.com/en-us/azure/azure-app-configuration/quickstart-resource-manager https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/quickstart-resource-manager.md
@@ -103,7 +103,7 @@ Write-Host "Press [ENTER] to continue..."
## Next steps
-To learn about creating other applications with Azure App Configuration, continue to the following article:
+To learn about adding feature flags and Key Vault references to an App Configuration store, see the following ARM template examples.
-> [!div class="nextstepaction"]
-> [Quickstart: Create an ASP.NET Core app with Azure App Configuration](quickstart-aspnet-core-app.md)
+- [101-app-configuration-store-ff](https://github.com/Azure/azure-quickstart-templates/tree/master/101-app-configuration-store-ff)
+- [101-app-configuration-store-keyvaultref](https://github.com/Azure/azure-quickstart-templates/tree/master/101-app-configuration-store-keyvaultref)
\ No newline at end of file
azure-arc https://docs.microsoft.com/en-us/azure/azure-arc/kubernetes/policy-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/policy-reference.md
@@ -1,7 +1,7 @@
Title: Built-in policy definitions for Azure Arc enabled Kubernetes description: Lists Azure Policy built-in policy definitions for Azure Arc enabled Kubernetes. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/25/2021 Last updated : 01/29/2021 #
azure-arc https://docs.microsoft.com/en-us/azure/azure-arc/servers/policy-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/policy-reference.md
@@ -1,7 +1,7 @@
Title: Built-in policy definitions for Azure Arc enabled servers description: Lists Azure Policy built-in policy definitions for Azure Arc enabled servers (preview). These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/25/2021 Last updated : 01/29/2021
azure-cache-for-redis https://docs.microsoft.com/en-us/azure/azure-cache-for-redis/policy-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/policy-reference.md
@@ -1,7 +1,7 @@
Title: Built-in policy definitions for Azure Cache for Redis description: Lists Azure Policy built-in policy definitions for Azure Cache for Redis. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/25/2021 Last updated : 01/29/2021
azure-government https://docs.microsoft.com/en-us/azure/azure-government/compliance/azure-services-in-fedramp-auditscope https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/compliance/azure-services-in-fedramp-auditscope.md
@@ -287,10 +287,10 @@ This article provides a detailed list of in-scope cloud services across Azure Pu
| [Dynamics 365 Forms Pro](/forms-pro/get-started) | :heavy_check_mark: | :heavy_check_mark: | | | :heavy_check_mark: | | [Dynamics 365 Customer Insights](/dynamics365/ai/customer-insights/overview) | :heavy_check_mark: | :heavy_check_mark: | | | :heavy_check_mark: | | [Dynamics 365 Customer Engagement (Common Data Service)](/dynamics365/customerengagement/on-premises/overview) | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: | :heavy_check_mark: |
-| [Dynamics 365 Customer Service](/dynamics365/customer-service/overview) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
-| [Dynamics 365 Field Service](/dynamics365/field-service/overview) | :heavy_check_mark: | :heavy_check_mark: | | | :heavy_check_mark: |
-| [Dynamics 365 Project Service Automation](/dynamics365/project-service/overview) | :heavy_check_mark: | :heavy_check_mark: | | | :heavy_check_mark: |
-| [Dynamics 365 Sales](/dynamics365/sales-enterprise/overview) | :heavy_check_mark: | :heavy_check_mark: | | | :heavy_check_mark: |
+| [Dynamics 365 Customer Service](/dynamics365/customer-service/overview) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
+| [Dynamics 365 Field Service](/dynamics365/field-service/overview) | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: | :heavy_check_mark: |
+| [Dynamics 365 Project Service Automation](/dynamics365/project-service/overview) | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: | :heavy_check_mark: |
+| [Dynamics 365 Sales](/dynamics365/sales-enterprise/overview) | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: | :heavy_check_mark: |
| [Event Hubs](https://azure.microsoft.com/services/event-hubs/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | [Export to Data Lake service](https://docs.microsoft.com/powerapps/maker/data-platform/export-to-data-lake) | :heavy_check_mark: | | | | :heavy_check_mark: | | [ExpressRoute](https://azure.microsoft.com/services/expressroute/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/app/ip-addresses https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/ip-addresses.md
@@ -239,11 +239,7 @@ Note: *.loganalytics.io domain is owned by the Log Analytics team.
## Action Group webhooks
-| Purpose | IP | Ports
-| | | |
-| Alerting | 13.66.60.119/32<br/>13.66.143.220/30<br/>13.66.202.14/32<br/>13.66.248.225/32<br/>13.66.249.211/32<br/>13.67.10.124/30<br/>13.69.109.132/30<br/>13.71.199.112/30<br/>13.77.53.216/30<br/>13.77.172.102/32<br/>13.77.183.209/32<br/>13.78.109.156/30<br/>13.84.49.247/32<br/>13.84.51.172/32<br/>13.84.52.58/32<br/>13.86.221.220/30<br/>13.106.38.142/32<br/>13.106.38.148/32<br/>13.106.54.3/32<br/>13.106.54.19/32<br/>13.106.57.181/32<br/>13.106.57.196/31<br/>20.38.149.132/30<br/>20.42.64.36/30<br/>20.43.121.124/30<br/>20.44.17.220/30<br/>20.45.123.236/30<br/>20.72.27.152/30<br/>20.150.172.228/30<br/>20.192.238.124/30<br/>20.193.202.4/30<br/>40.68.195.137/32<br/>40.68.201.58/32<br/>40.68.201.65/32<br/>40.68.201.206/32<br/>40.68.201.211/32<br/>40.68.204.18/32<br/>40.115.37.106/32<br/>40.121.219.215/32<br/>40.121.221.62/32<br/>40.121.222.201/32<br/>40.121.223.186/32<br/>51.104.9.100/30<br/>52.183.20.244/32<br/>52.183.31.0/32<br/>52.183.94.59/32<br/>52.184.145.166/32<br/>191.233.50.4/30<br/>191.233.207.64/26<br/>2603:1000:4:402::178/125<br/>2603:1000:104:402::178/125<br/>2603:1010:6:402::178/125<br/>2603:1010:101:402::178/125<br/>2603:1010:304:402::178/125<br/>2603:1010:404:402::178/125<br/>2603:1020:5:402::178/125<br/>2603:1020:206:402::178/125<br/>2603:1020:305:402::178/125<br/>2603:1020:405:402::178/125<br/>2603:1020:605:402::178/125<br/>2603:1020:705:402::178/125<br/>2603:1020:805:402::178/125<br/>2603:1020:905:402::178/125<br/>2603:1020:a04:402::178/125<br/>2603:1020:b04:402::178/125<br/>2603:1020:c04:402::178/125<br/>2603:1020:d04:402::178/125<br/>2603:1020:e04:402::178/125<br/>2603:1020:f04:402::178/125<br/>2603:1020:1004:800::f8/125<br/>2603:1020:1104:400::178/125<br/>2603:1030:f:400::978/125<br/>2603:1030:10:402::178/125<br/>2603:1030:104:402::178/125<br/>2603:1030:107:400::f0/125<br/>2603:1030:210:402::178/125<br/>2603:1030:40b:400::978/125<br/>2603:1030:40c:402::178/125<br/>2603:1030:504:802::f8/125<br/>2603:1030:608:402::178/125<br/>2603:1030:807:402::178/125<br/>2603:1030:a07:402::8f8/125<br/>2603:1030:b04:402::178/125<br/>2603:1030:c06:400::978/125<br/>2603:1030:f05:402::178/125<br/>2603:1030:1005:402::178/125<br/>2603:1040:5:402::178/125<br/>2603:1040:207:402::178/125<br/>2603:1040:407:402::178/125<br/>2603:1040:606:402::178/125<br/>2603:1040:806:402::178/125<br/>2603:1040:904:402::178/125<br/>2603:1040:a06:402::178/125<br/>2603:1040:b04:402::178/125<br/>2603:1040:c06:402::178/125<br/>2603:1040:d04:800::f8/125<br/>2603:1040:f05:402::178/125<br/>2603:1040:1104:400::178/125<br/>2603:1050:6:402::178/125<br/>2603:1050:403:400::1f8/125<br/> | 443 |
-
-To receive updates about changes to these IP addresses, we recommend you configure a Service Health alert, which monitors for Informational notifications about the Action Groups service.
+You can query the list of IP addresses used by Action Groups by running the [Get-AzNetworkServiceTag PowerShell command](https://docs.microsoft.com/powershell/module/az.network/Get-AzNetworkServiceTag).
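For readers who prefer Python over the PowerShell cmdlet mentioned above, here is a hedged sketch of the same lookup with the azure-mgmt-network SDK's service tag listing operation. The subscription ID and region are placeholders, and the assumption that the relevant tag is named `ActionGroup` should be verified against the current service tag list for your environment.

```python
# Sketch only (assumptions: azure-mgmt-network's service_tags.list operation and
# a service tag named "ActionGroup"); prints the Action Group IP prefixes for a region.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "your-subscription-id")

result = client.service_tags.list("eastus")        # region to query
for tag in result.values:
    if tag.name == "ActionGroup":                  # assumed service tag name
        for prefix in tag.properties.address_prefixes:
            print(prefix)
```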
### Action Groups Service Tag Managing changes to Source IP addresses can be quite time consuming. Using **Service Tags** eliminates the need to update your configuration. A service tag represents a group of IP address prefixes from a given Azure service. Microsoft manages the IP addresses and automatically updates the service tag as addresses change, eliminating the need to update network security rules for an Action Group.
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/samples/policy-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/samples/policy-reference.md
@@ -1,7 +1,7 @@
Title: Built-in policy definitions for Azure Monitor description: Lists Azure Policy built-in policy definitions for Azure Monitor. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/25/2021 Last updated : 01/29/2021
azure-netapp-files https://docs.microsoft.com/en-us/azure/azure-netapp-files/cross-region-replication-introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/cross-region-replication-introduction.md
@@ -13,8 +13,9 @@
na ms.devlang: na Previously updated : 01/21/2021 Last updated : 01/29/2021 + # Cross-region replication of Azure NetApp Files volumes
@@ -23,27 +24,32 @@ The Azure NetApp Files replication functionality provides data protection throug
> [!IMPORTANT] > The cross-region replication feature is currently in public preview. You need to submit a waitlist request for accessing the feature through the [Azure NetApp Files cross-region replication waitlist submission page](https://aka.ms/anfcrrpreviewsignup). Wait for an official confirmation email from the Azure NetApp Files team before using the cross-region replication feature.
-## Supported region pairs
+## <a name="supported-region-pairs"></a>Supported cross-region replication pairs
-Azure NetApp Files volume replication is currently available in the following fixed region pairs:
+Azure NetApp Files volume replication is supported between various [Azure regional pairs](/azure/best-practices-availability-paired-regions#azure-regional-pairs) and non-pairs. Azure NetApp Files volume replication is currently available between the following regions:
-* US West and US East
-* US West 2 and US East
-* US South Central and US Central
-* US South Central and US East
-* US South Central and US East 2
-* US East and US East 2
-* US East 2 and US Central
+### Azure regional pairs
+
+* East US and West US
+* East US 2 and Central US
* Australia East and Australia Southeast * Canada Central and Canada East
-* Central India and South India
+* South India and Central India
* Germany West Central and Germany North * Japan East and Japan West * North Europe and West Europe
-* Southeast Asia and Australia East
-* UK South and Germany West Central
* UK South and UK West
+### Azure regional non-pairs
+
+* West US 2 and East US
+* South Central US and Central US
+* South Central US and East US
+* South Central US and East US 2
+* East US and East US 2
+* Australia East and Southeast Asia
+* Germany West Central and UK South
+ ## Service-level objectives The Recovery Point Objective (RPO), or maximum tolerable data loss, is defined as twice the replication schedule. The actual RPO observed might vary based on factors such as the total dataset size along with the change rate, the percentage of data overwrites, and the replication bandwidth available for transfer.
azure-portal https://docs.microsoft.com/en-us/azure/azure-portal/policy-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-portal/policy-reference.md
@@ -1,7 +1,7 @@
Title: Built-in policy definitions for Azure portal description: Lists Azure Policy built-in policy definitions for Azure portal. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/25/2021 Last updated : 01/29/2021
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/custom-providers/policy-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/custom-providers/policy-reference.md
@@ -1,7 +1,7 @@
Title: Built-in policy definitions for Azure Custom Resource Providers description: Lists Azure Policy built-in policy definitions for Azure Custom Resource Providers. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/25/2021 Last updated : 01/29/2021
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/managed-applications/policy-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/managed-applications/policy-reference.md
@@ -1,7 +1,7 @@
Title: Built-in policy definitions for Azure Managed Applications description: Lists Azure Policy built-in policy definitions for Azure Managed Applications. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/25/2021 Last updated : 01/29/2021
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/policy-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/policy-reference.md
@@ -1,7 +1,7 @@
Title: Built-in policy definitions for Azure Resource Manager description: Lists Azure Policy built-in policy definitions for Azure Resource Manager. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/25/2021 Last updated : 01/29/2021
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/resource-name-rules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/resource-name-rules.md
@@ -2,7 +2,7 @@
Title: Resource naming restrictions description: Shows the rules and restrictions for naming Azure resources. Previously updated : 01/26/2021 Last updated : 01/27/2021 # Naming rules and restrictions for Azure resources
@@ -87,7 +87,7 @@ In the following tables, the term alphanumeric refers to:
> [!div class="mx-tableFixed"] > | Entity | Scope | Length | Valid Characters | > | | | | |
-> | automationAccounts | resource group | 6-50 | Alphanumerics and hyphens.<br><br>Start with letter, and end with alphanumeric. |
+> | automationAccounts | resource group & region <br>(See note below) | 6-50 | Alphanumerics and hyphens.<br><br>Start with letter, and end with alphanumeric. |
> | automationAccounts / certificates | automation account | 1-128 | Can't use:<br> `<>*%&:\?.+/` <br><br>Can't end with space. |
> | automationAccounts / connections | automation account | 1-128 | Can't use:<br> `<>*%&:\?.+/` <br><br>Can't end with space. |
> | automationAccounts / credentials | automation account | 1-128 | Can't use:<br> `<>*%&:\?.+/` <br><br>Can't end with space. |
@@ -97,6 +97,9 @@ In the following tables, the term alphanumeric refers to:
> | automationAccounts / watchers | automation account | 1-63 | Alphanumerics, underscores, and hyphens.<br><br>Start with letter. |
> | automationAccounts / webhooks | automation account | 1-128 | Can't use:<br> `<>*%&:\?.+/` <br><br>Can't end with space. |
+> [!NOTE]
+> Automation account names are unique per region and resource group. Names for deleted Automation accounts might not be immediately available.
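As an informal illustration of the automationAccounts naming rule in the table above (6-50 characters, alphanumerics and hyphens, start with a letter, end with an alphanumeric), here is a small Python sketch; the regular expression is my own reading of the table, not an official validator.

```python
import re

# 6-50 chars total: a leading letter, 4-48 letters/digits/hyphens, and a
# trailing letter or digit (derived from the naming table above).
AUTOMATION_ACCOUNT_NAME = re.compile(r"^[A-Za-z][A-Za-z0-9-]{4,48}[A-Za-z0-9]$")

for candidate in ["my-automation-01", "1badname", "ab", "ends-with-hyphen-"]:
    verdict = "valid" if AUTOMATION_ACCOUNT_NAME.fullmatch(candidate) else "invalid"
    print(f"{candidate}: {verdict}")
```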
+ ## Microsoft.Batch

> [!div class="mx-tableFixed"]
azure-signalr https://docs.microsoft.com/en-us/azure/azure-signalr/policy-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-signalr/policy-reference.md
@@ -1,7 +1,7 @@
Title: Built-in policy definitions for Azure SignalR description: Lists Azure Policy built-in policy definitions for Azure SignalR. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/25/2021 Last updated : 01/29/2021
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/database/policy-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/policy-reference.md
@@ -1,7 +1,7 @@
Title: Built-in policy definitions for Azure SQL Database description: Lists Azure Policy built-in policy definitions for Azure SQL Database and SQL Managed Instance. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/25/2021 Last updated : 01/29/2021
backup https://docs.microsoft.com/en-us/azure/backup/backup-support-matrix-iaas https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-support-matrix-iaas.md
@@ -145,6 +145,7 @@ Gen2 VMs | Supported <br> Azure Backup supports backup and restore of [Gen2 VMs]
Backup of Azure VMs with locks | Unsupported for unmanaged VMs. <br><br> Supported for managed VMs. [Spot VMs](../virtual-machines/spot-vms.md) | Unsupported. Azure Backup restores Spot VMs as regular Azure VMs. [Azure Dedicated Host](https://docs.microsoft.com/azure/virtual-machines/dedicated-hosts) | Supported
+Windows Storage Spaces configuration of standalone Azure VMs | Supported
## VM storage support
backup https://docs.microsoft.com/en-us/azure/backup/policy-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/policy-reference.md
@@ -1,7 +1,7 @@
Title: Built-in policy definitions for Azure Backup description: Lists Azure Policy built-in policy definitions for Azure Backup. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/25/2021 Last updated : 01/29/2021
batch https://docs.microsoft.com/en-us/azure/batch/policy-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/policy-reference.md
@@ -1,7 +1,7 @@
Title: Built-in policy definitions for Azure Batch description: Lists Azure Policy built-in policy definitions for Azure Batch. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/25/2021 Last updated : 01/29/2021
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Computer-vision/read-container-migration-guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/read-container-migration-guide.md
@@ -8,7 +8,7 @@
Previously updated : 10/23/2020 Last updated : 01/29/2021
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Content-Moderator/ecommerce-retail-catalog-moderation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Content-Moderator/ecommerce-retail-catalog-moderation.md
@@ -9,7 +9,7 @@
Previously updated : 10/23/2020 Last updated : 01/29/2021
@@ -18,7 +18,7 @@
# Tutorial: Moderate e-commerce product images with Azure Content Moderator
-In this tutorial, you'll learn how to use Azure Cognitive Services, including Content Moderator, to classify and moderate product images for an e-commerce scenario. You'll use Computer Vision and Custom Vision to apply tags (labels) to images, and then you will create a team review, which combines Content Moderator's machine-learning-based technologies with human review teams to provide an intelligent moderation system.
+In this tutorial, you'll learn how to use Azure Cognitive Services, including Content Moderator, to classify and moderate product images for an e-commerce scenario. You'll use Computer Vision and Custom Vision to apply tags (labels) to images, and then you'll create a team review, which combines Content Moderator's machine-learning-based technologies with human review teams to provide an intelligent moderation system.
This tutorial shows you how to:
@@ -53,7 +53,7 @@ Next, create custom tags in the Review tool (see the [Tags](./review-tool-user-g
## Create Visual Studio project 1. In Visual Studio, open the New Project dialog. Expand **Installed**, then **Visual C#**, then select **Console app (.NET Framework)**.
-1. Name the application **EcommerceModeration**, then click **OK**.
+1. Name the application **EcommerceModeration**, then select **OK**.
1. If you're adding this project to an existing solution, select this project as the single startup project. This tutorial highlights the code that is central to the project, but it won't cover every line of code. Copy the full contents of _Program.cs_ from the sample project ([Samples eCommerce Catalog Moderation](https://github.com/MicrosoftContentModerator/samples-eCommerceCatalogModeration)) into the _Program.cs_ file of your new project. Then, step through the following sections to learn about how the project works and how to use it yourself.
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Content-Moderator/facebook-post-moderation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Content-Moderator/facebook-post-moderation.md
@@ -9,7 +9,7 @@
Previously updated : 10/05/2020 Last updated : 01/29/2021 #Customer intent: As the moderator of a Facebook page, I want to use Azure's machine learning technology to automate and streamline the process of post moderation.
@@ -102,14 +102,14 @@ Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps:
![facebook developer page](images/facebook-developer-app.png) 1. Navigate to the [Facebook developer site](https://developers.facebook.com/)
- 1. Click on **My Apps**.
+ 1. Go to **My Apps**.
1. Add a New App.
- 1. name it something
+ 1. Provide a name
1. Select **Webhooks -> Set Up** 1. Select **Page** in the dropdown menu and select **Subscribe to this object** 1. Provide the **FBListener Url** as the Callback URL and the **Verify Token** you configured under the **Function App Settings** 1. Once subscribed, scroll down to feed and select **subscribe**.
- 1. Click on the **Test** button of the **feed** row to send a test message to your FBListener Azure Function, then hit the **Send to My Server** button. You should see the request being received on your FBListener.
+ 1. Select the **Test** button of the **feed** row to send a test message to your FBListener Azure Function, then hit the **Send to My Server** button. You should see the request being received on your FBListener.
1. Create a Facebook Page.
@@ -121,7 +121,7 @@ Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps:
1. Navigate to the [Graph API Explorer](https://developers.facebook.com/tools/explorer/). 1. Select **Application**. 1. Select **Page Access Token**, Send a **Get** request.
- 1. Click the **Page ID** in the response.
+ 1. Select the **Page ID** in the response.
1. Now append the **/subscribed_apps** to the URL and Send a **Get** (empty response) request. 1. Submit a **Post** request. You get the response as **success: true**.
@@ -156,7 +156,7 @@ The solution sends all images and text posted on your Facebook page to Content M
## Next steps
-In this tutorial, you set up a program to analyze product images for the purpose of tagging them by product type and allowing a review team to make informed decisions about content moderation. Next, learn more about the details of image moderation.
+In this tutorial, you set up a program to analyze product images, tag them by product type, and allow a review team to make informed decisions about content moderation. Next, learn more about the details of image moderation.
> [!div class="nextstepaction"] > [Image moderation](./image-moderation-api.md)\ No newline at end of file
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Content-Moderator/quick-start https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Content-Moderator/quick-start.md
@@ -9,7 +9,7 @@
Previously updated : 09/29/2020 Last updated : 01/29/2021 keywords: content moderator, content moderation
@@ -84,6 +84,6 @@ Or, continue with the next steps to get started using the Moderation APIs in you
## Next steps Learn how to use the Moderation APIs themselves in your app.-- Implement image moderation. Use the [API console](try-image-api.md) or follow a [client library or REST API quickstart](client-libraries.md) to scan images and detect potential adult and racy content by using tags, confidence scores, and other extracted information.-- Implement text moderation. Use the [API console](try-text-api.md) or follow a [client library or REST API quickstart](client-libraries.md) to scan text content for potential profanity, machine-assisted unwanted text classification (preview), and personal data.
+- Implement image moderation. Use the [API console](try-image-api.md) or follow a [quickstart](client-libraries.md) to scan images and detect potential adult and racy content by using tags, confidence scores, and other extracted information.
+- Implement text moderation. Use the [API console](try-text-api.md) or follow a [quickstart](client-libraries.md) to scan text content for potential profanity, personal data, and other unwanted text.
- Implement video moderation. Follow the [Video moderation how-to guide for C#](video-moderation-api.md) to scan videos and detect potential adult and racy content.
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Custom-Vision-Service/get-started-build-detector https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Custom-Vision-Service/get-started-build-detector.md
@@ -9,7 +9,7 @@
Previously updated : 09/30/2020 Last updated : 01/29/2021 keywords: image recognition, image recognition app, custom vision
@@ -64,9 +64,9 @@ In your web browser, navigate to the [Custom Vision web page](https://customvisi
## Upload and tag images
-In this section you will upload and manually tag images to help train the detector.
+In this section, you will upload and manually tag images to help train the detector.
-1. To add images, click the __Add images__ button and then select __Browse local files__. Select __Open__ to upload the images.
+1. To add images, select __Add images__ and then select __Browse local files__. Select __Open__ to upload the images.
![The add images control is shown in the upper left, and as a button at bottom center.](./media/get-started-build-detector/add-images.png)
@@ -74,7 +74,7 @@ In this section you will upload and manually tag images to help train the detect
![Images uploaded, in Untagged section](./media/get-started-build-detector/images-untagged.png)
-1. Click and drag a rectangle around the object in your image. Then, enter a new tag name with the **+** button, or select an existing tag from the drop-down list. It's very important to tag every instance of the object(s) you want to detect, because the detector uses the untagged background area as a negative example in training. When you're done tagging, click the arrow on the right to save your tags and move on to the next image.
+1. Click and drag a rectangle around the object in your image. Then, enter a new tag name with the **+** button, or select an existing tag from the drop-down list. It's important to tag every instance of the object(s) you want to detect, because the detector uses the untagged background area as a negative example in training. When you're done tagging, click the arrow on the right to save your tags and move on to the next image.
![Tagging an object with a rectangular selection](./media/get-started-build-detector/image-tagging.png)
@@ -110,7 +110,7 @@ The **Overlap Threshold** slider deals with how correct an object prediction mus
## Manage training iterations
-Each time you train your detector, you create a new _iteration_ with its own updated performance metrics. You can view all of your iterations in the left pane of the **Performance** tab. In the left pane you will also find the **Delete** button, which you can use to delete an iteration if it's obsolete. When you delete an iteration, you delete any images that are uniquely associated with it.
+Each time you train your detector, you create a new _iteration_ with its own updated performance metrics. You can view all of your iterations in the left pane of the **Performance** tab. In the left pane you'll also find the **Delete** button, which you can use to delete an iteration if it's obsolete. When you delete an iteration, you delete any images that are uniquely associated with it.
See [Use your model with the prediction API](./use-prediction-api.md) to learn how to access your trained models programmatically.
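For readers who want to call a trained detector from code right away, here is a hedged Python sketch based on the azure-cognitiveservices-vision-customvision package; the endpoint, key, project ID, published iteration name, and image path are placeholders, and the client and method names should be confirmed against the current SDK reference.

```python
from azure.cognitiveservices.vision.customvision.prediction import CustomVisionPredictionClient
from msrest.authentication import ApiKeyCredentials

# Placeholder values - replace with your own prediction resource details.
ENDPOINT = "https://<your-prediction-resource>.cognitiveservices.azure.com/"
PREDICTION_KEY = "<prediction-key>"
PROJECT_ID = "<project-id>"
PUBLISHED_NAME = "<published-iteration-name>"

credentials = ApiKeyCredentials(in_headers={"Prediction-key": PREDICTION_KEY})
predictor = CustomVisionPredictionClient(ENDPOINT, credentials)

# Send a local test image to the published object detection iteration.
with open("test-image.jpg", "rb") as image_file:
    results = predictor.detect_image(PROJECT_ID, PUBLISHED_NAME, image_file.read())

for prediction in results.predictions:
    box = prediction.bounding_box
    print(f"{prediction.tag_name}: {prediction.probability:.2%} "
          f"(left={box.left:.2f}, top={box.top:.2f}, width={box.width:.2f}, height={box.height:.2f})")
```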
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Custom-Vision-Service/getting-started-build-a-classifier https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Custom-Vision-Service/getting-started-build-a-classifier.md
@@ -9,7 +9,7 @@
Previously updated : 09/29/2020 Last updated : 01/29/2021 keywords: image recognition, image recognition app, custom vision
@@ -67,7 +67,7 @@ In your web browser, navigate to the [Custom Vision web page](https://customvisi
In this section, you'll upload and manually tag images to help train the classifier.
-1. To add images, click the __Add images__ button and then select __Browse local files__. Select __Open__ to move to tagging. Your tag selection will be applied to the entire group of images you've selected to upload, so it's easier to upload images in separate groups according to their desired tags. You can also change the tags for individual images after they have been uploaded.
+1. To add images, select __Add images__ and then select __Browse local files__. Select __Open__ to move to tagging. Your tag selection will be applied to the entire group of images you've selected to upload, so it's easier to upload images in separate groups according to their applied tags. You can also change the tags for individual images after they've been uploaded.
![The add images control is shown in the upper left, and as a button at bottom center.](./media/getting-started-build-a-classifier/add-images01.png)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/quickstarts/client-library https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/quickstarts/client-library.md
@@ -8,7 +8,7 @@
Previously updated : 09/21/2020 Last updated : 01/29/2021 zone_pivot_groups: programming-languages-set-formre
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/quickstarts/label-tool https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/quickstarts/label-tool.md
@@ -8,7 +8,7 @@
Previously updated : 09/30/2020 Last updated : 01/29/2021 keywords: document processing
@@ -198,7 +198,7 @@ Next, you'll create tags (labels) and apply them to the text elements that you w
1. Click **+** to create a new tag. 1. Enter the tag name. 1. Press Enter to save the tag.
-1. In the main editor, click to select words from the highlighted text elements. In the _v2.1 preview.2_ you can also click to select _Selection Marks_ like radio buttons and checkboxes as key value pairs. Form Recognizer will identify whether the selection mark is "selected" or "unselected" as the value.
+1. In the main editor, click to select words from the highlighted text elements. In the _v2.1 preview.2_ API, you can also click to select _Selection Marks_ like radio buttons and checkboxes as key value pairs. Form Recognizer will identify whether the selection mark is "selected" or "unselected" as the value.
1. Click on the tag you want to apply, or press the corresponding keyboard key. The number keys are assigned as hotkeys for the first 10 tags. You can reorder your tags using the up and down arrow icons in the tag editor pane.

> [!Tip]
> Keep the following tips in mind when you're labeling your forms.
@@ -287,7 +287,7 @@ This feature is currently available in v2.1. preview.
With Model Compose, you can compose up to 100 models to a single model ID. When you call Analyze with this composed model ID, Form Recognizer will first classify the form you submitted, matching it to the best matching model, and then return results for that model. This is useful when incoming forms may belong to one of several templates. To compose models in the sample labeling tool, click on the Model Compose (merging arrow) icon on the left. On the left, select the models you wish to compose together. Models with the arrows icon are already composed models.
-Click on the "Compose" button. In the pop up, name your new composed model and click "Compose". When the operation completes, your new composed model should appear in the list.
+Click on the "Compose" button. In the pop-up, name your new composed model and click "Compose". When the operation completes, your new composed model should appear in the list.
:::image type="content" source="../media/label-tool/model-compose.png" alt-text="Model compose UX view.":::
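If you prefer to compose models from code instead of the labeling tool UI, the following is a rough Python sketch using the azure-ai-formrecognizer client library; the endpoint, key, and model IDs are placeholders, and the method name and supported API version should be verified against the SDK documentation before relying on it.

```python
from azure.ai.formrecognizer import FormTrainingClient
from azure.core.credentials import AzureKeyCredential

# Placeholder endpoint, key, and model IDs - illustration only.
ENDPOINT = "https://<your-form-recognizer>.cognitiveservices.azure.com/"
KEY = "<form-recognizer-key>"
model_ids = ["<model-id-1>", "<model-id-2>"]

client = FormTrainingClient(ENDPOINT, AzureKeyCredential(KEY))

# Compose the labeled models into a single composed model ID.
poller = client.begin_create_composed_model(model_ids, model_name="my-composed-model")
composed_model = poller.result()

print(f"Composed model ID: {composed_model.model_id}")
for submodel in composed_model.submodels:
    print(f"  Submodel form type: {submodel.form_type}")
```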
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/plan-manage-costs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/plan-manage-costs.md
@@ -60,6 +60,32 @@ After you delete QnA Maker resources, the following resources might continue to
You can pay for Cognitive Services charges with your Azure Prepayment (previously called monetary commitment) credit. However, you can't use Azure Prepayment credit to pay for charges for third party products and services including those from the Azure Marketplace.
+## Monitor costs
+
+<!-- Note to Azure service writer: Modify the following as needed for your service. Replace example screenshots with ones taken for your service. If you need assistance capturing screenshots, ask banders for help. -->
+
+As you use Azure resources with Cognitive Services, you incur costs. Azure resource usage unit costs vary by time intervals (seconds, minutes, hours, and days) or by unit usage (bytes, megabytes, and so on). As soon as use of a Cognitive Service (or Cognitive Services) starts, costs are incurred and you can see the costs in [cost analysis](../cost-management/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+
+When you use cost analysis, you view Cognitive Services costs in graphs and tables for different time intervals. Some examples are by day, current and prior month, and year. You also view costs against budgets and forecasted costs. Switching to longer views over time can help you identify spending trends. And you see where overspending might have occurred. If you've created budgets, you can also easily see where they're exceeded.
+
+To view Cognitive Services costs in cost analysis:
+
+1. Sign in to the Azure portal.
+2. Open the scope in the Azure portal and select **Cost analysis** in the menu. For example, go to **Subscriptions**, select a subscription from the list, and then select **Cost analysis** in the menu. Select **Scope** to switch to a different scope in cost analysis.
+3. By default, costs for all services are shown in the first donut chart. Select the area in the chart labeled Cognitive Services.
+
+Actual monthly costs are shown when you initially open cost analysis. Here's an example showing all monthly usage costs.
+
+:::image type="content" source="./media/cost-management/all-costs.png" alt-text="Example showing accumulated costs for a subscription":::
+
+- To narrow costs for a single service, like Cognitive Services, select **Add filter** and then select **Service name**. Then, select **Cognitive Services**.
+
+Here's an example showing costs for just Cognitive Services.
+
+:::image type="content" source="./media/cost-management/cognitive-services-costs.png" alt-text="Example showing accumulated costs for Cognitive Services":::
+
+In the preceding example, you see the current cost for the service. Costs by Azure regions (locations) and Cognitive Services costs by resource group are also shown. From here, you can explore costs on your own.
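If you'd rather query these costs from a script than from the portal, the sketch below shows one possible approach against the Cost Management Query REST API; the subscription ID is a placeholder, and the request body shape and api-version are assumptions that should be checked against the Cost Management Query API reference.

```python
import requests
from azure.identity import DefaultAzureCredential

SUBSCRIPTION_ID = "<subscription-id>"  # placeholder
scope = f"/subscriptions/{SUBSCRIPTION_ID}"

# Acquire an ARM token for the signed-in identity.
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

# Month-to-date actual cost, filtered to the Cognitive Services service name.
query = {
    "type": "ActualCost",
    "timeframe": "MonthToDate",
    "dataset": {
        "granularity": "None",
        "aggregation": {"totalCost": {"name": "PreTaxCost", "function": "Sum"}},
        "filter": {
            "dimensions": {
                "name": "ServiceName",
                "operator": "In",
                "values": ["Cognitive Services"],
            }
        },
    },
}

response = requests.post(
    f"https://management.azure.com{scope}/providers/Microsoft.CostManagement/query",
    params={"api-version": "2019-11-01"},
    headers={"Authorization": f"Bearer {token}"},
    json=query,
)
response.raise_for_status()
print(response.json())
```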
+ ## Create budgets

You can create [budgets](../cost-management-billing/costs/tutorial-acm-create-budgets.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) to manage costs and create [alerts](../cost-management-billing/costs/cost-mgt-alerts-monitor-usage-spending.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) that automatically notify stakeholders of spending anomalies and overspending risks. Alerts are based on spending compared to budget and cost thresholds. Budgets and alerts are created for Azure subscriptions and resource groups, so they're useful as part of an overall cost monitoring strategy.
@@ -70,13 +96,6 @@ Budgets can be created with filters for specific resources or services in Azure
You can also [export your cost data](../cost-management-billing/costs/tutorial-export-acm-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) to a storage account. This is helpful when you or others need to do additional data analysis for costs. For example, finance teams can analyze the data using Excel or Power BI. You can export your costs on a daily, weekly, or monthly schedule and set a custom date range. Exporting cost data is the recommended way to retrieve cost datasets.
-<!--
-## Other ways to manage and reduce costs for Cognitive Services
-
-Work with Dean to complete this section in 2021.
->- ## Next steps - Learn [how to optimize your cloud investment with Azure Cost Management](../cost-management-billing/costs/cost-mgt-best-practices.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/policy-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/policy-reference.md
@@ -1,7 +1,7 @@
Title: Built-in policy definitions for Azure Cognitive Services description: Lists Azure Policy built-in policy definitions for Azure Cognitive Services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/25/2021 Last updated : 01/29/2021
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/text-analytics/concepts/model-versioning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/concepts/model-versioning.md
@@ -24,9 +24,9 @@ Use the table below to find which model versions are supported by each hosted en
| Endpoint | Supported Versions | latest version |
|--|--|--|
| `/sentiment` | `2019-10-01`, `2020-04-01` | `2020-04-01` |
-| `/languages` | `2019-10-01`, `2020-07-01`, `2020-09-01`, `2021-01-15` | `2021-01-15` |
+| `/languages` | `2019-10-01`, `2020-07-01`, `2020-09-01`, `2021-01-05` | `2021-01-05` |
| `/entities/linking` | `2019-10-01`, `2020-02-01` | `2020-02-01` |
-| `/entities/recognition/general` | `2019-10-01`, `2020-02-01`, `2020-04-01`,`2021-01-05` | `2021-01-05` |
+| `/entities/recognition/general` | `2019-10-01`, `2020-02-01`, `2020-04-01`, `2021-01-15` | `2021-01-15` |
| `/entities/recognition/pii` | `2019-10-01`, `2020-02-01`, `2020-04-01`, `2020-07-01` | `2020-07-01` |
| `/entities/health` | `2020-09-03` | `2020-09-03` |
| `/keyphrases` | `2019-10-01`, `2020-07-01` | `2020-07-01` |
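When you need to pin a request to one of the model versions in the table, you can pass it in the `model-version` query parameter. The following Python sketch shows the idea against the `/languages` endpoint; the endpoint and key are placeholders, and the exact API route should be checked against the Text Analytics reference for the API version you use.

```python
import requests

# Placeholders - use your own Text Analytics resource endpoint and key.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
KEY = "<text-analytics-key>"

url = f"{ENDPOINT}/text/analytics/v3.0/languages"
params = {"model-version": "2021-01-05"}  # pin language detection to a specific model
headers = {"Ocp-Apim-Subscription-Key": KEY}
body = {"documents": [{"id": "1", "text": "Ce document est rédigé en français."}]}

response = requests.post(url, params=params, headers=headers, json=body)
response.raise_for_status()

for doc in response.json()["documents"]:
    detected = doc["detectedLanguage"]
    print(doc["id"], detected["name"], detected["confidenceScore"])
```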
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/text-analytics/whats-new https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/whats-new.md
@@ -23,7 +23,12 @@ The Text Analytics API is updated on an ongoing basis. To stay up-to-date with r
* Expanded language support for [several general entity categories](named-entity-types.md).
* Improved AI quality of general entity categories for all supported v3 languages.
-* The `2021-01-05` model-version for [language detection](how-tos/text-analytics-how-to-language-detection.md), which provides additional [language support](language-support.md?tabs=language-detection).
+* The `2021-01-05` model version for [language detection](how-tos/text-analytics-how-to-language-detection.md), which provides additional [language support](language-support.md?tabs=language-detection).
+
+These model versions are currently unavailable in the East US region.
+
+> [!div class="nextstepaction"]
> [Learn more about the new NER model](https://azure.microsoft.com/updates/text-analytics-ner-improved-ai-quality)
## December 2020
container-registry https://docs.microsoft.com/en-us/azure/container-registry/policy-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/policy-reference.md
@@ -1,7 +1,7 @@
Title: Built-in policy definitions for Azure Container Registry description: Lists Azure Policy built-in policy definitions for Azure Container Registry. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/25/2021 Last updated : 01/29/2021
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/policy-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/policy-reference.md
@@ -1,7 +1,7 @@
Title: Built-in policy definitions for Azure Cosmos DB description: Lists Azure Policy built-in policy definitions for Azure Cosmos DB. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/25/2021 Last updated : 01/29/2021
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-azure-databricks-delta-lake https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-databricks-delta-lake.md
@@ -36,8 +36,8 @@ In general, Azure Data Factory supports Delta Lake with the following capabiliti
To use this Azure Databricks Delta Lake connector, you need to set up a cluster in Azure Databricks. -- To copy data to delta lake, Copy activity invokes Azure Databricks cluster to read data from an Azure Storage, which is either your original source or a staging area to where Data Factory firstly writes the source data via built-in staged copy. Learn more from [Delta lake as the source](#delta-lake-as-source).-- Similarly, to copy data from delta lake, Copy activity invokes Azure Databricks cluster to write data to an Azure Storage, which is either your original sink or a staging area from where Data Factory continues to write data to final sink via built-in staged copy. Learn more from [Delta lake as the sink](#delta-lake-as-sink).
+- To copy data to delta lake, Copy activity invokes Azure Databricks cluster to read data from an Azure Storage, which is either your original source or a staging area to where Data Factory firstly writes the source data via built-in staged copy. Learn more from [Delta lake as the sink](#delta-lake-as-sink).
+- Similarly, to copy data from delta lake, Copy activity invokes Azure Databricks cluster to write data to an Azure Storage, which is either your original sink or a staging area from where Data Factory continues to write data to final sink via built-in staged copy. Learn more from [Delta lake as the source](#delta-lake-as-source).
The Databricks cluster needs to have access to Azure Blob or Azure Data Lake Storage Gen2 account, both the storage container/file system used for source/sink/staging and the container/file system where you want to write the Delta Lake tables.
@@ -377,4 +377,4 @@ For more information about the properties, see [Lookup activity](control-flow-lo
## Next steps
-For a list of data stores supported as sources and sinks by Copy activity in Data Factory, see [supported data stores and formats](copy-activity-overview.md#supported-data-stores-and-formats).
\ No newline at end of file
+For a list of data stores supported as sources and sinks by Copy activity in Data Factory, see [supported data stores and formats](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory https://docs.microsoft.com/en-us/azure/data-factory/data-flow-troubleshoot-guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-troubleshoot-guide.md
@@ -121,11 +121,144 @@ If you are executing the data flow in a debug test execution from a debug pipeli
- **Causes**: In the Mapping data flow, currently, the multiline CSV source does not work with \r\n as the row delimiter. Sometimes extra lines at carriage returns break source values.
- **Recommendation**: Generate the file at the source with \n as the row delimiter rather than \r\n, or use Copy Activity to convert the CSV file from \r\n to \n as the row delimiter.
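As a small illustration of the recommendation above, the following Python sketch rewrites a CSV that uses \r\n row delimiters to use \n instead; the file names are placeholders.

```python
# Convert \r\n row delimiters to \n so the multiline CSV source can parse the file.
with open("input.csv", "rb") as src, open("output.csv", "wb") as dst:
    dst.write(src.read().replace(b"\r\n", b"\n"))
```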
-## General troubleshooting guidance
+### Error code: DF-Executor-SourceInvalidPayload
+- **Message**: Data preview, debug, and pipeline data flow execution failed because container does not exist
+- **Causes**: The dataset references a container that doesn't exist in the storage account.
+- **Recommendation**: Make sure that the container referenced in your dataset exists and is accessible.
++
+ ### Error code: DF-Executor-SystemImplicitCartesian
+- **Message**: Implicit cartesian product for INNER join is not supported, use CROSS JOIN instead. Columns used in join should create a unique key for rows.
+- **Causes**: Implicit cartesian product for INNER join between logical plans is not supported. The columns used in the join don't create a unique key for the rows.
+- **Recommendation**: For non-equality based joins, use CROSS JOIN.
++
+ ### Error code: DF-Executor-SystemInvalidJson
+- **Message**: JSON parsing error, unsupported encoding or multiline
+- **Causes**: Possible issues with the JSON file: unsupported encoding, corrupt bytes, or using JSON source as single document on many nested lines
+- **Recommendation**: Verify the JSON file's encoding is supported. On the Source transformation that is using a JSON dataset, expand 'JSON Settings' and turn on 'Single Document'.
++
+ ### Error code: DF-Executor-BroadcastTimeout
+- **Message**: Broadcast join timeout error, you can choose 'Off' of broadcast option in join/exists/lookup transformation to avoid this issue. If you intend to broadcast join option to improve performance then make sure broadcast stream can produce data within 60 secs in debug runs and 300 secs in job runs.
+- **Causes**: Broadcast has a default timeout of 60 secs in debug runs and 300 secs in job runs. On broadcast join, the stream chosen for broadcast seems too large to produce data within this limit. If a broadcast join is not used, the default broadcast done by dataflow can reach the same limit
+- **Recommendation**: Turn off the broadcast option or avoid broadcasting large data streams where the processing can take more than 60 secs. Choose a smaller stream to broadcast instead. Large SQL/DW tables and source files are typically bad candidates. In the absence of a broadcast join, use a larger cluster if the error occurs.
++
+ ### Error code: DF-Executor-Conversion
+- **Message**: Converting to a date or time failed due to an invalid character
+- **Causes**: Data is not in the expected format
+- **Recommendation**: Use the correct data type
++
+ ### Error code: DF-Executor-InvalidColumn
+- **Message**: Column name needs to be specified in the query, set an alias if using a SQL function
+- **Causes**: No column name was specified.
++
+ ### Error code: DF-Executor-DriverError
+- **Message**: INT96 is legacy timestamp type which is not supported by ADF Dataflow. Please consider upgrading the column type to the latest types.
+- **Causes**: It is a driver error.
+- **Recommendation**: INT96 is a legacy timestamp type that isn't supported by ADF Data Flow. Consider upgrading the column to a supported timestamp type.
++
+ ### Error code: DF-Executor-BlockCountExceedsLimitError
+- **Message**: The uncommitted block count cannot exceed the maximum limit of 100,000 blocks. Check blob configuration.
+- **Causes**: There can be a maximum of 100,000 uncommitted blocks in a blob.
+- **Recommendation**: Contact the Microsoft product team for more details about this issue.
+
+ ### Error code: DF-Executor-PartitionDirectoryError
+- **Message**: The specified source path has either multiple partitioned directories (for e.g. <Source Path>/<Partition Root Directory 1>/a=10/b=20, <Source Path>/<Partition Root Directory 2>/c=10/d=30) or partitioned directory with other file or non-partitioned directory (for e.g. <Source Path>/<Partition Root Directory 1>/a=10/b=20, <Source Path>/Directory 2/file1), remove partition root directory from source path and read it through separate source transformation.
+- **Causes**: Source path has either multiple partitioned directories or partitioned directory with other file or non-partitioned directory.
+- **Recommendation**: Remove partitioned root directory from source path and read it through separate source transformation.
+
+ ### Error code: DF-Executor-OutOfMemoryError
+- **Message**: Cluster ran into out of memory issue during execution, please retry using an integration runtime with bigger core count and/or memory optimized compute type
+- **Causes**: Cluster is running out of memory.
+- **Recommendation**: Debug clusters are meant for development purposes. Use data sampling and an appropriate compute type and size to run the payload. Refer to the [Data Flow Performance Guide](https://docs.microsoft.com/azure/data-factory/concepts-data-flow-performance) for tuning data flows for best performance.
++
+ ### Error code: DF-Executor-illegalArgument
+- **Message**: Please make sure that the access key in your Linked Service is correct.
+- **Causes**: Account Name or Access Key is incorrect.
+- **Recommendation**: Supply the correct account name or access key.
++
+ ### Error code: DF-Executor-InvalidType
+- **Message**: Please make sure that the type of parameter matches with type of value passed in. Passing float parameters from pipelines isn't currently supported.
+- **Causes**: Incompatible data types between declared type and actual parameter value
+- **Recommendation**: Supply the correct data types.
++
+ ### Error code: DF-Executor-ColumnUnavailable
+- **Message**: Column name used in expression is unavailable or invalid.
+- **Causes**: Invalid or unavailable column name is used in expressions.
+- **Recommendation**: Check column name(s) used in expressions.
++
+ ### Error code: DF-Executor-ParseError
+- **Message**: Expression cannot be parsed.
+- **Causes**: Expression has parsing errors due to formatting.
+- **Recommendation**: Check formatting in expression.
++
+ ### Error code: DF-Executor-OutOfDiskSpaceError
+- **Message**: Internal server error
+- **Causes**: Cluster is running out of disk space.
+- **Recommendation**: Retry the pipeline. If the problem persists, contact customer support.
++
+ ### Error code: DF-Executor-StoreIsNotDefined
+- **Message**: The store configuration is not defined. This error is potentially caused by invalid parameter assignment in the pipeline.
+- **Causes**: Undetermined
+- **Recommendation**: Please check parameter value assignment in the pipeline. Parameter expression may contain invalid characters.
++
+ ### Error code: DF-Excel-InvalidConfiguration
+- **Message**: Excel sheet name or index is required.
+- **Causes**: Undetermined
+- **Recommendation**: Please check parameter value and specify sheet name or index to read Excel data.
++
+ ### Error code: DF-Excel-InvalidConfiguration
+- **Message**: Excel sheet name and index cannot exist at the same time.
+- **Causes**: Undetermined
+- **Recommendation**: Please check parameter value and specify sheet name or index to read Excel data.
++
+ ### Error code: DF-Excel-InvalidConfiguration
+- **Message**: Invalid range is provided.
+- **Causes**: Undetermined
+- **Recommendation**: Please check parameter value and specify valid range by reference: [Excel properties](https://docs.microsoft.com/azure/data-factory/format-excel#dataset-properties).
++
+ ### Error code: DF-Excel-InvalidData
+- **Message**: Excel worksheet does not exist.
+- **Causes**: Undetermined
+- **Recommendation**: Please check parameter value and specify valid sheet name or index to read Excel data.
+
+ ### Error code: DF-Excel-InvalidData
+- **Message**: Reading excel files with different schema is not supported now.
+- **Causes**: Undetermined
+- **Recommendation**: Use Excel files that share the same schema.
++
+ ### Error code: DF-Excel-InvalidData
+- **Message**: Data type is not supported.
+- **Causes**: Undetermined
+- **Recommendation**: Use the right data types in the Excel file.
+
+ ### Error code: DF-Excel-InvalidConfiguration
+- **Message**: Invalid excel file is provided while only .xlsx and .xls are supported
+- **Causes**: Undetermined
+- **Recommendation**: Make sure Excel file extension is either .xlsx or .xls.
+
+## General troubleshooting guidance
1. Check the status of your dataset connections. In each Source and Sink transformation, visit the Linked Service for each dataset that you are using and test connections.
-1. Check the status of your file and table connections from the data flow designer. Switch on Debug and click on Data Preview on your Source transformations to ensure that you are able to access your data.
-1. If everything looks good from data preview, go into the Pipeline designer and put your data flow in a pipeline activity. Debug the pipeline for an end-to-end test.
+2. Check the status of your file and table connections from the data flow designer. Switch on Debug and click on Data Preview on your Source transformations to ensure that you are able to access your data.
+3. If everything looks good from data preview, go into the Pipeline designer and put your data flow in a pipeline activity. Debug the pipeline for an end-to-end test.
+ ## Next steps
data-lake-analytics https://docs.microsoft.com/en-us/azure/data-lake-analytics/policy-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-lake-analytics/policy-reference.md
@@ -1,7 +1,7 @@
Title: Built-in policy definitions for Azure Data Lake Analytics description: Lists Azure Policy built-in policy definitions for Azure Data Lake Analytics. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/25/2021 Last updated : 01/29/2021
data-lake-store https://docs.microsoft.com/en-us/azure/data-lake-store/policy-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-lake-store/policy-reference.md
@@ -1,7 +1,7 @@
Title: Built-in policy definitions for Azure Data Lake Storage Gen1 description: Lists Azure Policy built-in policy definitions for Azure Data Lake Storage Gen1. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/25/2021 Last updated : 01/29/2021
databox-online https://docs.microsoft.com/en-us/azure/databox-online/policy-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/policy-reference.md
@@ -1,7 +1,7 @@
Title: Built-in policy definitions for Azure Stack Edge description: Lists Azure Policy built-in policy definitions for Azure Stack Edge. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/25/2021 Last updated : 01/29/2021
databox https://docs.microsoft.com/en-us/azure/databox/policy-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox/policy-reference.md
@@ -1,7 +1,7 @@
Title: Built-in policy definitions for Azure Data Box description: Lists Azure Policy built-in policy definitions for Azure Data Box. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/25/2021 Last updated : 01/29/2021
defender-for-iot https://docs.microsoft.com/en-us/azure/defender-for-iot/references-work-with-defender-for-iot-apis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/references-work-with-defender-for-iot-apis.md
@@ -51,34 +51,32 @@ To generate a token:
This section describes the following sensor APIs: -- /api/v1/devices
+- [Retrieve device information - /api/v1/devices](#retrieve-device-informationapiv1devices)
-- /api/v1/devices/connections
+- [Retrieve device connection information - /api/v1/devices/connections](#retrieve-device-connection-informationapiv1devicesconnections)
-- /api/v1/devices/cves
+- [Retrieve information on CVEs - /api/v1/devices/cves](#retrieve-information-on-cvesapiv1devicescves)
-- /api/v1/alerts
+- [Retrieve alert information - /api/v1/alerts](#retrieve-alert-informationapiv1alerts)
-- /api/v1/events
+- [Retrieve timeline events - /api/v1/events](#retrieve-timeline-eventsapiv1events)
-- /api/v1/reports/vulnerabilities/devices
+- [Retrieve vulnerability information - /api/v1/reports/vulnerabilities/devices](#retrieve-vulnerability-informationapiv1reportsvulnerabilitiesdevices)
-- /api/v1/reports/vulnerabilities/security
+- [Retrieve security vulnerabilities - /api/v1/reports/vulnerabilities/security](#retrieve-security-vulnerabilitiesapiv1reportsvulnerabilitiessecurity)
-- /api/v1/reports/vulnerabilities/operational
+- [Retrieve operational vulnerabilities - /api/v1/reports/vulnerabilities/operational](#retrieve-operational-vulnerabilitiesapiv1reportsvulnerabilitiesoperational)
-- /api/external/authentication/validation
+- [Validate user credentials - /api/external/authentication/validation](#validate-user-credentialsapiexternalauthenticationvalidation)
-- /external/authentication/set_password
+- [Change password - /external/authentication/set_password](#change-passwordexternalauthenticationset_password)
-- /external/authentication/set_password_by_admin
+- [User password update by system admin - /external/authentication/set_password_by_admin](#user-password-update-by-system-adminexternalauthenticationset_password_by_admin)
-### Retrieve device information
+### Retrieve device information - /api/v1/devices
Use this API to request a list of all devices that a Defender for IoT sensor has detected.
-#### /api/v1/devices
- #### Method **GET**
@@ -274,11 +272,15 @@ Array of JSON objects that represent devices.
] ```
-### Retrieve device connection information
+#### Curl command
-Use this API to request a list of all the connections per device.
+| Type | APIs | Example |
+|--|--|--|
+| GET | curl -k -H "Authorization: <AUTH_TOKEN>" https://<IP_ADDRESS>/api/v1/devices | curl -k -H "Authorization: 1234b734a9244d54ab8d40aedddcabcd" https:<span>//127<span>.0.0.1/api/v1/devices?authorized=true |
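The same call can be made from a script. Here is a minimal Python sketch equivalent to the curl command above; the sensor address and token are placeholders, `verify=False` mirrors curl's `-k` for self-signed sensor certificates, and the printed field names are illustrative rather than a guaranteed schema.

```python
import requests

SENSOR_IP = "127.0.0.1"         # placeholder sensor address
AUTH_TOKEN = "<access-token>"   # access token generated on the sensor

response = requests.get(
    f"https://{SENSOR_IP}/api/v1/devices",
    params={"authorized": "true"},
    headers={"Authorization": AUTH_TOKEN},
    verify=False,  # mirrors curl -k for the sensor's self-signed certificate
)
response.raise_for_status()

# The API returns an array of device objects; print a few common fields if present.
for device in response.json():
    print(device.get("id"), device.get("name"), device.get("ipAddresses"))
```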
-#### /api/v1/devices/connections
+### Retrieve device connection information - /api/v1/devices/connections
+
+Use this API to request a list of all the connections per device.
#### Method
@@ -442,11 +444,17 @@ Array of JSON objects that represent device connections.
] ```
-### Retrieve information on CVEs
+#### Curl command
-Use this API to request a list of all known CVEs discovered on devices in the network.
+> [!div class="mx-tdBreakAll"]
+> | Type | APIs | Example |
+> |--|--|--|
+> | GET | curl -k -H "Authorization: <AUTH_TOKEN>" https://<IP_ADDRESS>/api/v1/devices/connections | curl -k -H "Authorization: 1234b734a9244d54ab8d40aedddcabcd" https:/<span>/127.0.0.1/api/v1/devices/connections |
+> | GET | curl -k -H "Authorization: <AUTH_TOKEN>" 'https://<IP_ADDRESS>/api/v1/devices/<deviceId>/connections?lastActiveInMinutes=&discoveredBefore=&discoveredAfter=' | curl -k -H "Authorization: 1234b734a9244d54ab8d40aedddcabcd" 'https:/<span>/127.0.0.1/api/v1/devices/2/connections?lastActiveInMinutes=20&discoveredBefore=1594550986000&discoveredAfter=1594550986000' |
+
+### Retrieve information on CVEs - /api/v1/devices/cves
-#### /api/v1/devices/cves
+Use this API to request a list of all known CVEs discovered on devices in the network.
#### Method
@@ -552,11 +560,16 @@ Array of JSON objects that represent CVEs identified on IP addresses.
] ```
-### Retrieve alert information
+#### Curl command
-Use this API to request a list of all the alerts that the Defender for IoT sensor has detected.
+| Type | APIs | Example |
+|--|--|--|
+| GET | curl -k -H "Authorization: <AUTH_TOKEN>" https://<IP_ADDRESS>/api/v1/devices/cves | curl -k -H "Authorization: 1234b734a9244d54ab8d40aedddcabcd" https:/<span>/127.0.0.1/api/v1/devices/cves |
+| GET | curl -k -H "Authorization: <AUTH_TOKEN>" https://<IP_ADDRESS>/api/v1/devices/<deviceIpAddress>/cves?top= | curl -k -H "Authorization: 1234b734a9244d54ab8d40aedddcabcd" https:/<span>/127.0.0.1/api/v1/devices/10.10.10.15/cves?top=50 |
+
+### Retrieve alert information - /api/v1/alerts
-#### /api/v1/alerts
+Use this API to request a list of all the alerts that the Defender for IoT sensor has detected.
#### Method
@@ -680,11 +693,16 @@ Array of JSON objects that represent alerts.
```
-### Retrieve timeline events
+#### Curl command
-Use this API to request a list of events reported to the event timeline.
+> [!div class="mx-tdBreakAll"]
+> | Type | APIs | Example |
+> |--|--|--|
+> | GET | curl -k -H "Authorization: <AUTH_TOKEN>" 'https://<IP_ADDRESS>/api/v1/alerts?state=&fromTime=&toTime=&type=' | curl -k -H "Authorization: 1234b734a9244d54ab8d40aedddcabcd" 'https:/<span>/127.0.0.1/api/v1/alerts?state=unhandled&fromTime=1594550986000&toTime=1594550986001&type=disconnections' |
-#### /api/v1/events
+### Retrieve timeline events - /api/v1/events
+
+Use this API to request a list of events reported to the event timeline.
#### Method
@@ -797,11 +815,15 @@ Array of JSON objects that represent alerts.
```
-### Retrieve vulnerability information
+#### Curl command
-Use this API to request vulnerability assessment results for each device.
+| Type | APIs | Example |
+|--|--|--|
+| GET | curl -k -H "Authorization: <AUTH_TOKEN>" 'https://<IP_ADDRESS>/api/v1/events?minutesTimeFrame=&type=' | curl -k -H "Authorization: 1234b734a9244d54ab8d40aedddcabcd" 'https:/<span>/127.0.0.1/api/v1/events?minutesTimeFrame=20&type=DEVICE_CONNECTION_CREATED' |
+
+### Retrieve vulnerability information - /api/v1/reports/vulnerabilities/devices
-#### /api/v1/reports/vulnerabilities/devices
+Use this API to request vulnerability assessment results for each device.
#### Method
@@ -1047,14 +1069,18 @@ The device object contains:
```
-### Retrieve security vulnerabilities
+#### Curl command
+
+| Type | APIs | Example |
+|--|--|--|
+| GET | curl -k -H "Authorization: <AUTH_TOKEN>" https://<IP_ADDRESS>/api/v1/reports/vulnerabilities/devices | curl -k -H "Authorization: 1234b734a9244d54ab8d40aedddcabcd" https:/<span>/127.0.0.1/api/v1/reports/vulnerabilities/devices |
+
+### Retrieve security vulnerabilities - /api/v1/reports/vulnerabilities/security
Use this API to request results of a general vulnerability assessment. This assessment provides insight into your system's security level. This assessment is based on general network and system information and not on a specific device evaluation.
-#### /api/v1/reports/vulnerabilities/security
- #### Method **GET**
@@ -1290,11 +1316,15 @@ JSON object that represents assessed results. Each key can be nullable. Otherwis
```
-### Retrieve operational vulnerabilities
+#### Curl command
-Use this API to request results of a general vulnerability assessment. This assessment provides insight into the operational status of your network. It's based on general network and system information and not on a specific device evaluation.
+| Type | APIs | Example |
+|--|--|--|
+| GET | curl -k -H "Authorization: <AUTH_TOKEN>" https://<IP_ADDRESS>/api/v1/reports/vulnerabilities/security | curl -k -H "Authorization: 1234b734a9244d54ab8d40aedddcabcd" https:/<span>/127.0.0.1/api/v1/reports/vulnerabilities/security |
-#### /api/v1/reports/vulnerabilities/operational
+### Retrieve operational vulnerabilities - /api/v1/reports/vulnerabilities/operational
+
+Use this API to request results of a general vulnerability assessment. This assessment provides insight into the operational status of your network. It's based on general network and system information and not on a specific device evaluation.
#### Method
@@ -1483,14 +1513,18 @@ JSON object that represents assessed results. Each key contains a JSON array of
```
-### Validate user credentials
+#### Curl command
+
+| Type | APIs | Example |
+|--|--|--|
+| GET | curl -k -H "Authorization: <AUTH_TOKEN>" https://<IP_ADDRESS>/api/v1/reports/vulnerabilities/operational | curl -k -H "Authorization: 1234b734a9244d54ab8d40aedddcabcd" https:/<span>/127.0.0.1/api/v1/reports/vulnerabilities/operational |
+
+### Validate user credentials - /api/external/authentication/validation
Use this API to validate a Defender for IoT username and password. All Defender for IoT user roles can work with the API. You don't need a Defender for IoT access token to use this API.
-#### /api/external/authentication/validation
- #### Method **POST**
@@ -1546,11 +1580,15 @@ response:
```
-### Change password
+#### Curl command
-Use this API to let users change their own passwords. All Defender for IoT user roles can work with the API. You don't need a Defender for IoT access token to use this API.
+| Type | APIs | Example |
+|--|--|--|
+| GET | curl -k -H "Authorization: <AUTH_TOKEN>" https://<IP_ADDRESS>/api/external/authentication/validation | curl -k -H "Authorization: 1234b734a9244d54ab8d40aedddcabcd" https:/<span>/127.0.0.1/api/external/authentication/validation |
+
+### Change password - /external/authentication/set_password
-#### /external/authentication/set_password
+Use this API to let users change their own passwords. All Defender for IoT user roles can work with the API. You don't need a Defender for IoT access token to use this API.
#### Method
@@ -1616,11 +1654,15 @@ response:
| **password** | String | No | | **new_password** | String | No |
-### User password update by system admin
+#### Curl command
-Use this API to let system administrators change passwords for specified users. Defender for IoT administrator user roles can work with the API. You don't need a Defender for IoT access token to use this API.
+| Type | APIs | Example |
+|--|--|--|
+| POST | curl -k -d '{"username": "<USER_NAME>","password": "<CURRENT_PASSWORD>","new_password": "<NEW_PASSWORD>"}' -H 'Content-Type: application/json' https://<IP_ADDRESS>/api/external/authentication/set_password | curl -k -d '{"username": "myUser","password": "1234@abcd","new_password": "abcd@1234"}' -H 'Content-Type: application/json' https:/<span>/127.0.0.1/api/external/authentication/set_password |
+
+### User password update by system admin - /external/authentication/set_password_by_admin
-#### /external/authentication/set_password_by_admin
+Use this API to let system administrators change passwords for specified users. Defender for IoT administrator user roles can work with the API. You don't need a Defender for IoT access token to use this API.
#### Method
@@ -1692,6 +1734,13 @@ response:
| **username** | String | No | | **new_password** | String | No |
+#### Curl command
+
+> [!div class="mx-tdBreakAll"]
+> | Type | APIs | Example |
+> |--|--|--|
+> | POST | curl -k -d '{"admin_username":"<ADMIN_USERNAME>","admin_password":"<ADMIN_PASSWORD>","username": "<USER_NAME>","new_password": "<NEW_PASSWORD>"}' -H 'Content-Type: application/json' https://<IP_ADDRESS>/api/external/authentication/set_password_by_admin | curl -k -d '{"admin_user":"adminUser","admin_password": "1234@abcd","username": "myUser","new_password": "abcd@1234"}' -H 'Content-Type: application/json' https:/<span>/127.0.0.1/api/external/authentication/set_password_by_admin |
+ ## On-premises management console API specifications This section describes the following on-premises management console APIs:
@@ -1721,24 +1770,18 @@ The APIs that you define here appear in the on-premises management console's **A
```
-#### Change password
+#### Change password - /external/authentication/set_password
Use this API to let users change their own passwords. All Defender for IoT user roles can work with the API. You don't need a Defender for IoT access token to use this API. -- **/external/authentication/set_password**-
-#### User password update by system admin
+#### User password update by system admin - /external/authentication/set_password_by_admin
Use this API to let system administrators change passwords for specific users. Defender for IoT admin user roles can work with the API. You don't need a Defender for IoT access token to use this API. -- **/external/authentication/set_password_by_admin**-
-### Retrieve device information
+### Retrieve device information - /external/v1/devices
This API requests a list of all devices detected by Defender for IoT sensors that are connected to an on-premises management console. -- **/external/v1/devices**- #### Method **GET**
@@ -1954,11 +1997,15 @@ Array of JSON objects that represent devices.
] ```
-### Retrieve alert information
+#### Curl command
-Use this API to retrieve all or filtered alerts from an on-premises management console.
+| Type | APIs | Example |
+|--|--|--|
+| GET | curl -k -H "Authorization: <AUTH_TOKEN>" 'https://<IP_ADDRESS>/external/v1/devices?siteId=&zoneId=&sensorId=&authorized=' | curl -k -H "Authorization: 1234b734a9244d54ab8d40aedddcabcd" 'https:/<span>/127.0.0.1/external/v1/devices?siteId=1&zoneId=2&sensorId=5&authorized=true' |
+
+### Retrieve alert information - /external/v1/alerts
-#### /external/v1/alerts
+Use this API to retrieve all or filtered alerts from an on-premises management console.
#### Method
@@ -2111,6 +2158,13 @@ Use this API to retrieve all or filtered alerts from an on-premises management c
] ```
+#### Curl command
+
+> [!div class="mx-tdBreakAll"]
+> | Type | APIs | Example |
+> |--|--|--|
> | GET | curl -k -H "Authorization: <AUTH_TOKEN>" 'https://<IP_ADDRESS>/external/v1/alerts?state=&zoneId=&fromTime=&toTime=&siteId=&sensor=' | curl -k -H "Authorization: 1234b734a9244d54ab8d40aedddcabcd" 'https:/<span>/127.0.0.1/external/v1/alerts?state=unhandled&zoneId=1&fromTime=0&toTime=1594551777000&siteId=1&sensor=1' |
+
### QRadar alerts

QRadar integration with Defender for IoT helps you identify the alerts generated by Defender for IoT and perform actions with these alerts. QRadar receives the data from Defender for IoT and then contacts the public API on-premises management console component.
@@ -2208,7 +2262,13 @@ Array of JSON objects that represent devices.
}
```
-### Alert exclusions (maintenance window)
+#### Curl command
+
+| Type | APIs | Example |
+|--|--|--|
+| PUT | curl -k -X PUT -d '{"action": "<ACTION>"}' -H "Authorization: <AUTH_TOKEN>" https://<IP_ADDRESS>/external/v1/alerts/<UUID> | curl -k -X PUT -d '{"action": "handle"}' -H "Authorization: 1234b734a9244d54ab8d40aedddcabcd" https:/<span>/127.0.0.1/external/v1/alerts/1-1594550943000 |
+
+### Alert exclusions (maintenance window) - /external/v1/maintenanceWindow
Define conditions under which alerts won't be sent. For example, define and update stop and start times, devices or subnets that should be excluded when triggering alerts, or Defender for IoT engines that should be excluded. For example, during a maintenance window, you might want to stop alert delivery of all alerts, except for malware alerts on critical devices.
@@ -2216,8 +2276,6 @@ The APIs that you define here appear in the on-premises management console's **A
:::image type="content" source="media/references-work-with-defender-for-iot-apis/alert-exclusion-window.png" alt-text="The Alert Exclusions window, showing a list of all the exclusion rules. ":::
-#### /external/v1/maintenanceWindow
-
#### Method

**POST**

#### Query parameters
@@ -2362,11 +2420,18 @@ Array of JSON objects that represent maintenance window operations.
| **ttl** | Numeric | - | yes |
| **operationType** | String | Values are "OPEN", "UPDATE", and "CLOSE" | no |
-### Authenticate user credentials
+#### Curl command
-Use this API to validate user credentials. All Defender for IoT user roles can work with the API. You don't need a Defender for IoT access token to use this API.
+| Type | APIs | Example |
+|--|--|--|
+| POST | curl -k -X POST -d '{"ticketId": "<TICKET_ID>","ttl": <TIME_TO_LIVE>,"engines": [<ENGINE1, ENGINE2...ENGINEn>],"sensorIds": [<SENSOR_ID1, SENSOR_ID2...SENSOR_IDn>],"subnets": [<SUBNET1, SUBNET2....SUBNETn>]}' -H "Authorization: <AUTH_TOKEN>" https://<IP_ADDRESS>/external/v1/maintenanceWindow | curl -k -X POST -d '{"ticketId": "a5fe99c-d914-4bda-9332-307384fe40bf","ttl": "20","engines": ["ANOMALY"],"sensorIds": ["5","3"],"subnets": ["10.0.0.3"]}' -H "Authorization: 1234b734a9244d54ab8d40aedddcabcd" https:/<span>/127.0.0.1/external/v1/maintenanceWindow |
+| PUT | curl -k -X PUT -d '{"ticketId": "<TICKET_ID>","ttl": "<TIME_TO_LIVE>"}' -H "Authorization: <AUTH_TOKEN>" https://<IP_ADDRESS>/external/v1/maintenanceWindow | curl -k -X PUT -d '{"ticketId": "a5fe99c-d914-4bda-9332-307384fe40bf","ttl": "20"}' -H "Authorization: 1234b734a9244d54ab8d40aedddcabcd" https:/<span>/127.0.0.1/external/v1/maintenanceWindow |
+| DELETE | curl -k -X DELETE -d '{"ticketId": "<TICKET_ID>"}' -H "Authorization: <AUTH_TOKEN>" https://<IP_ADDRESS>/external/v1/maintenanceWindow | curl -k -X DELETE -d '{"ticketId": "a5fe99c-d914-4bda-9332-307384fe40bf"}' -H "Authorization: 1234b734a9244d54ab8d40aedddcabcd" https:/<span>/127.0.0.1/external/v1/maintenanceWindow |
+| GET | curl -k -H "Authorization: <AUTH_TOKEN>" 'https://<IP_ADDRESS>/external/v1/maintenanceWindow?fromDate=&toDate=&ticketId=&tokenName=' | curl -k -H "Authorization: 1234b734a9244d54ab8d40aedddcabcd" 'https:/<span>/127.0.0.1/external/v1/maintenanceWindow?fromDate=2020-01-01&toDate=2020-07-14&ticketId=a5fe99c-d914-4bda-9332-307384fe40bf&tokenName=a' |
-#### /external/authentication/validation
+### Authenticate user credentials - /external/authentication/validation
+
+Use this API to validate user credentials. All Defender for IoT user roles can work with the API. You don't need a Defender for IoT access token to use this API.
#### Method
@@ -2421,11 +2486,15 @@ response:
}
```
-### Change password
+#### Curl command
-Use this API to let users change their own passwords. All Defender for IoT user roles can work with the API. You don't need a Defender for IoT access token to use this API.
+| Type | APIs | Example |
+|--|--|--|
+| POST | curl -k -d '{"username":"<USER_NAME>","password":"<PASSWORD>"}' 'https://<IP_ADDRESS>/external/authentication/validation' | curl -k -d '{"username":"myUser","password":"1234@abcd"}' 'https:/<span>/127.0.0.1/external/authentication/validation' |
-#### /external/authentication/set_password
+### Change password - /external/authentication/set_password
+
+Use this API to let users change their own passwords. All Defender for IoT user roles can work with the API. You don't need a Defender for IoT access token to use this API.
#### Method
@@ -2491,11 +2560,15 @@ response:
| **password** | String | No |
| **new_password** | String | No |
-### User password update by system admin
+#### Curl command
-Use this API to let system administrators change passwords for specified users. Defender for IoT admin user roles can work with the API. You don't need a Defender for IoT access token to use this API.
+| Type | APIs | Example |
+|--|--|--|
+| POST | curl -k -d '{"username": "<USER_NAME>","password": "<CURRENT_PASSWORD>","new_password": "<NEW_PASSWORD>"}' -H 'Content-Type: application/json' https://<IP_ADDRESS>/external/authentication/set_password | curl -k -d '{"username": "myUser","password": "1234@abcd","new_password": "abcd@1234"}' -H 'Content-Type: application/json' https:/<span>/127.0.0.1/external/authentication/set_password |
-#### /external/authentication/set_password_by_admin
+### User password update by system admin - /external/authentication/set_password_by_admin
+
+Use this API to let system administrators change passwords for specified users. Defender for IoT admin user roles can work with the API. You don't need a Defender for IoT access token to use this API.
#### Method
@@ -2567,6 +2640,15 @@ response:
| **username** | String | No |
| **new_password** | String | No |
-## See also
-[Investigate sensor detections in a device inventory](how-to-investigate-sensor-detections-in-a-device-inventory.md)
-[Investigate all enterprise sensor detections in a device inventory](how-to-investigate-all-enterprise-sensor-detections-in-a-device-inventory.md)
+#### Curl command
+
+> [!div class="mx-tdBreakAll"]
+> | Type | APIs | Example |
+> |--|--|--|
> | POST | curl -k -d '{"admin_username":"<ADMIN_USERNAME>","admin_password":"<ADMIN_PASSWORD>","username": "<USER_NAME>","new_password": "<NEW_PASSWORD>"}' -H 'Content-Type: application/json' https://<IP_ADDRESS>/external/authentication/set_password_by_admin | curl -k -d '{"admin_username":"adminUser","admin_password": "1234@abcd","username": "myUser","new_password": "abcd@1234"}' -H 'Content-Type: application/json' https:/<span>/127.0.0.1/external/authentication/set_password_by_admin |
+
+## Next steps
+
+- [Investigate sensor detections in a device inventory](how-to-investigate-sensor-detections-in-a-device-inventory.md)
+
+- [Investigate all enterprise sensor detections in a device inventory](how-to-investigate-all-enterprise-sensor-detections-in-a-device-inventory.md)
event-grid https://docs.microsoft.com/en-us/azure/event-grid/policy-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/policy-reference.md
@@ -1,7 +1,7 @@
Title: Built-in policy definitions for Azure Event Grid description: Lists Azure Policy built-in policy definitions for Azure Event Grid. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/25/2021 Last updated : 01/29/2021
event-hubs https://docs.microsoft.com/en-us/azure/event-hubs/policy-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/policy-reference.md
@@ -1,7 +1,7 @@
Title: Built-in policy definitions for Azure Event Hubs description: Lists Azure Policy built-in policy definitions for Azure Event Hubs. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/25/2021 Last updated : 01/29/2021
governance https://docs.microsoft.com/en-us/azure/governance/policy/assign-policy-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/assign-policy-portal.md
@@ -1,7 +1,7 @@
Title: "Quickstart: New policy assignment with portal" description: In this quickstart, you use Azure portal to create an Azure Policy assignment to identify non-compliant resources. Previously updated : 10/05/2020 Last updated : 01/29/2021 # Quickstart: Create a policy assignment to identify non-compliant resources
@@ -67,6 +67,17 @@ disks_ policy definition.
**Assigned by** will automatically fill based on who is logged in. This field is optional, so custom values can be entered.
+1. Leave policy enforcement _Enabled_. For more information, see
+ [Policy assignment - enforcement mode](./concepts/assignment-structure.md#enforcement-mode).
+
+1. Select **Next** at the bottom of the page or the **Parameters** tab at the top of the page to
+ move to the next segment of the assignment wizard.
+
+1. If the policy definition selected on the **Basics** tab included parameters, they are configured
+ on this tab. Since the _Audit VMs that do not use managed disks_ policy definition has no parameters, select
+ **Next** at the bottom of the page or the **Remediation** tab at the top of the page to move to
+ the next segment of the assignment wizard.
+ 1. Leave **Create a Managed Identity** unchecked. This box _must_ be checked when the policy or initiative includes a policy with either the [deployIfNotExists](./concepts/effects.md#deployifnotexists) or
@@ -75,7 +86,17 @@ disks_ policy definition.
[managed identities](../../active-directory/managed-identities-azure-resources/overview.md) and [how remediation security works](./how-to/remediate-resources.md#how-remediation-security-works).
-1. Select **Assign**.
+1. Select **Next** at the bottom of the page or the **Non-compliance messages** tab at the top of
+ the page to move to the next segment of the assignment wizard.
+
+1. Set the **Non-compliance message** to _Virtual machines should use a managed disk_. This custom
+ message is displayed when a resource is denied or for non-compliant resources during regular
+ evaluation.
+
+1. Select **Next** at the bottom of the page or the **Review + Create** tab at the top of the page
+ to move to the next segment of the assignment wizard.
+
+1. Review the selected options, then select **Create** at the bottom of the page.
You're now ready to identify non-compliant resources to understand the compliance state of your environment.
governance https://docs.microsoft.com/en-us/azure/governance/policy/assign-policy-rest-api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/assign-policy-rest-api.md
@@ -1,7 +1,7 @@
Title: "Quickstart: New policy assignment with REST API" description: In this quickstart, you use REST API to create an Azure Policy assignment to identify non-compliant resources. Previously updated : 10/14/2020 Last updated : 01/29/2021 # Quickstart: Create a policy assignment to identify non-compliant resources with REST API
@@ -51,6 +51,11 @@ Run the following command to create a policy assignment:
"displayName": "Audit VMs without managed disks Assignment", "description": "Shows all virtual machines not using managed disks", "policyDefinitionId": "/providers/Microsoft.Authorization/policyDefinitions/06a78e20-9358-41c9-923c-fb736d382a4d",
+ "nonComplianceMessages": [
+ {
+ "message": "Virtual machines should use a managed disk"
+ }
+ ]
} } ```
@@ -74,6 +79,9 @@ Request Body:
- **policyDefinitionId** - The ID of the policy definition that you're using to create the assignment. In this case, it's the ID of the policy definition _Audit VMs that do not use managed disks_.
+- **nonComplianceMessages** - Set the message seen when a resource is denied due to non-compliance
+ or evaluated to be non-compliant. For more information, see
+ [assignment non-compliance messages](./concepts/assignment-structure.md#non-compliance-messages).
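For illustration only (this sketch isn't part of the original quickstart), the complete REST call might look like the following curl command. The subscription ID and bearer token are placeholders, and the `api-version` value is an assumption; confirm it against the Azure Policy REST API reference.

```bash
# Sketch: create the policy assignment with a non-compliance message via the REST API.
# <SUBSCRIPTION_ID> and <ACCESS_TOKEN> are placeholders; api-version 2020-09-01 is an assumption.
curl -X PUT \
  -H "Authorization: Bearer <ACCESS_TOKEN>" \
  -H "Content-Type: application/json" \
  -d '{
    "properties": {
      "displayName": "Audit VMs without managed disks Assignment",
      "description": "Shows all virtual machines not using managed disks",
      "policyDefinitionId": "/providers/Microsoft.Authorization/policyDefinitions/06a78e20-9358-41c9-923c-fb736d382a4d",
      "nonComplianceMessages": [
        { "message": "Virtual machines should use a managed disk" }
      ]
    }
  }' \
  "https://management.azure.com/subscriptions/<SUBSCRIPTION_ID>/providers/Microsoft.Authorization/policyAssignments/audit-vm-manageddisks?api-version=2020-09-01"
```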
## Identify non-compliant resources
governance https://docs.microsoft.com/en-us/azure/governance/policy/concepts/assignment-structure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/concepts/assignment-structure.md
@@ -1,7 +1,7 @@
Title: Details of the policy assignment structure description: Describes the policy assignment definition used by Azure Policy to relate policy definitions and parameters to resources for evaluation. Previously updated : 09/22/2020 Last updated : 01/29/2021 # Azure Policy assignment structure
@@ -19,6 +19,7 @@ You use JSON to create a policy assignment. The policy assignment contains eleme
- enforcement mode - excluded scopes - policy definition
+- non-compliance messages
- parameters For example, the following JSON shows a policy assignment in _DoNotEnforce_ mode with dynamic parameters:
@@ -34,6 +35,11 @@ For example, the following JSON shows a policy assignment in _DoNotEnforce_ mode
"enforcementMode": "DoNotEnforce", "notScopes": [], "policyDefinitionId": "/subscriptions/{mySubscriptionID}/providers/Microsoft.Authorization/policyDefinitions/ResourceNaming",
+ "nonComplianceMessages": [
+ {
+ "message": "Resource names must start with 'DeptA' and end with '-LC'."
+ }
+ ],
"parameters": { "prefix": { "value": "DeptA"
@@ -93,6 +99,38 @@ This field must be the full path name of either a policy definition or an initia
`policyDefinitionId` is a string and not an array. It's recommended that if multiple policies are often assigned together, to use an [initiative](./initiative-definition-structure.md) instead.
+## Non-compliance messages
+
+To set a custom message that describes why a resource is non-compliant with the policy or initiative
+definition, set `nonComplianceMessages` in the assignment definition. This node is an array of
+`message` entries. This custom message is in addition to the default error message for
+non-compliance and is optional.
+
+```json
+"nonComplianceMessages": [
+ {
+ "message": "Default message"
+ }
+]
+```
+
+If the assignment is for an initiative, different messages can be configured for each policy
+definition in the initiative. The messages use the `policyDefinitionReferenceId` value configured in
+the initiative definition. For details, see
+[policy definition properties](./initiative-definition-structure.md#policy-definition-properties).
+
+```json
+"nonComplianceMessages": [
+ {
+ "message": "Default message"
+ },
+ {
+ "message": "Message for just this policy definition by reference ID",
+ "policyDefinitionReferenceId": "10420126870854049575"
+ }
+]
+```
+
## Parameters

This segment of the policy assignment provides the values for the parameters defined in the
governance https://docs.microsoft.com/en-us/azure/governance/policy/samples/built-in-initiatives https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/built-in-initiatives.md
@@ -1,7 +1,7 @@
Title: List of built-in policy initiatives description: List built-in policy initiatives for Azure Policy. Categories include Regulatory Compliance, Guest Configuration, and more. Previously updated : 01/25/2021 Last updated : 01/29/2021
governance https://docs.microsoft.com/en-us/azure/governance/policy/samples/built-in-policies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/built-in-policies.md
@@ -1,7 +1,7 @@
Title: List of built-in policy definitions description: List built-in policy definitions for Azure Policy. Categories include Tags, Regulatory Compliance, Key Vault, Kubernetes, Guest Configuration, and more. Previously updated : 01/25/2021 Last updated : 01/29/2021
@@ -35,6 +35,10 @@ side of the page. Otherwise, use <kbd>Ctrl</kbd>-<kbd>F</kbd> to use your browse
[!INCLUDE [azure-policy-reference-policies-app-service](../../../../includes/policy/reference/bycat/policies-app-service.md)]
+## Attestation
+
+[!INCLUDE [azure-policy-reference-policies-attestation](../../../../includes/policy/reference/bycat/policies-attestation.md)]
+ ## Automanage [!INCLUDE [azure-policy-reference-policies-automanage](../../../../includes/policy/reference/bycat/policies-automanage.md)]
@@ -59,9 +63,9 @@ side of the page. Otherwise, use <kbd>Ctrl</kbd>-<kbd>F</kbd> to use your browse
[!INCLUDE [azure-policy-reference-policies-batch](../../../../includes/policy/reference/bycat/policies-batch.md)]
-## Bot Services
+## Bot Service
-[!INCLUDE [azure-policy-reference-policies-bot-services](../../../../includes/policy/reference/bycat/policies-bot-services.md)]
+[!INCLUDE [azure-policy-reference-policies-bot-service](../../../../includes/policy/reference/bycat/policies-bot-service.md)]
## Cache
governance https://docs.microsoft.com/en-us/azure/governance/policy/tutorials/create-and-manage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/tutorials/create-and-manage.md
@@ -1,7 +1,7 @@
Title: "Tutorial: Build policies to enforce compliance" description: In this tutorial, you use policies to enforce standards, control costs, maintain security, and impose enterprise wide design principles. Previously updated : 10/05/2020 Last updated : 01/29/2021 # Tutorial: Create and manage policies to enforce compliance
@@ -95,6 +95,12 @@ resources missing the tag.
and [how remediation security works](../how-to/remediate-resources.md#how-remediation-security-works).
+1. Select the **Non-compliance messages** tab at the top of the wizard.
+
+1. Set the **Non-compliance message** to _This resource doesn't have the required tag_. This custom
+ message is displayed when a resource is denied or for non-compliant resources during regular
+ evaluation.
+
1. Select the **Review + create** tab at the top of the wizard.

1. Review your selections, then select **Create** at the bottom of the page.
hdinsight https://docs.microsoft.com/en-us/azure/hdinsight/policy-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/policy-reference.md
@@ -1,7 +1,7 @@
Title: Built-in policy definitions for Azure HDInsight description: Lists Azure Policy built-in policy definitions for Azure HDInsight. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/25/2021 Last updated : 01/29/2021
healthcare-apis https://docs.microsoft.com/en-us/azure/healthcare-apis/policy-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/policy-reference.md
@@ -1,7 +1,7 @@
Title: Built-in policy definitions for Azure API for FHIR description: Lists Azure Policy built-in policy definitions for Azure API for FHIR. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/25/2021 Last updated : 01/29/2021
iot-dps https://docs.microsoft.com/en-us/azure/iot-dps/quick-create-simulated-device-x509-python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-dps/quick-create-simulated-device-x509-python.md
@@ -3,7 +3,7 @@ Title: Quickstart - Provision simulated X.509 device to Azure IoT Hub using Pyth
description: Quickstart - Create and provision a simulated X.509 device using Python device SDK for IoT Hub Device Provisioning Service (DPS). This quickstart uses individual enrollments. Previously updated : 11/08/2019 Last updated : 01/29/2021
@@ -15,148 +15,192 @@
[!INCLUDE [iot-dps-selector-quick-create-simulated-device-x509](../../includes/iot-dps-selector-quick-create-simulated-device-x509.md)]
-In this quickstart, you create a simulated X.509 device on a Windows computer. You use device sample Python code to connect this simulated device with your IoT hub using an individual enrollment with the Device Provisioning Service (DPS).
+In this quickstart, you provision a development machine as a Python X.509 device. You use sample device code from the [Azure IoT Python SDK](https://github.com/Azure/azure-iot-sdk-python) to connect the device to your IoT hub. An individual enrollment is used with the Device Provisioning Service (DPS) in this example.
## Prerequisites

- Familiar with [provisioning](about-iot-dps.md#provisioning-process) concepts.
- Completion of [Set up IoT Hub Device Provisioning Service with the Azure portal](./quick-setup-auto-provision.md).
- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
-- [Visual Studio 2015+](https://visualstudio.microsoft.com/vs/) with Desktop development with C++.
-- [CMake build system](https://cmake.org/download/).
+- [Python 3.5.3 or later](https://www.python.org/downloads/)
- [Git](https://git-scm.com/download/).
-> [!IMPORTANT]
-> This article only applies to the deprecated V1 Python SDK. Device and service clients for the Iot Hub Device Provisioning Service are not yet available in V2. The team is currently hard at work to bring V2 to feature parity.
[!INCLUDE [IoT Device Provisioning Service basic](../../includes/iot-dps-basic.md)] ## Prepare the environment
-1. Make sure you have installed either [Visual Studio](https://visualstudio.microsoft.com/vs/) 2015 or later, with the 'Desktop development with C++' workload enabled for your Visual Studio installation.
+1. Make sure `git` is installed on your machine and is added to the environment variables accessible to the command window. See [Software Freedom Conservancy's Git client tools](https://git-scm.com/download/) for the latest version of `git` tools to install, which includes the **Git Bash**, the command-line app that you can use to interact with your local Git repository.
-2. Download and install the [CMake build system](https://cmake.org/download/).
-
-3. Make sure `git` is installed on your machine and is added to the environment variables accessible to the command window. See [Software Freedom Conservancy's Git client tools](https://git-scm.com/download/) for the latest version of `git` tools to install, which includes the **Git Bash**, the command-line app that you can use to interact with your local Git repository.
-
-4. Open a command prompt or Git Bash. Clone the GitHub repo for device simulation code sample.
+2. Open a Git Bash prompt. Clone the GitHub repo for [Azure IoT Python SDK](https://github.com/Azure/azure-iot-sdk-python).
   ```cmd/sh
   git clone https://github.com/Azure/azure-iot-sdk-python.git --recursive
   ```
-5. Create a folder in your local copy of this GitHub repo for CMake build process.
- ```cmd/sh
- cd azure-iot-sdk-python/c
- mkdir cmake
- cd cmake
+## Create a self-signed X.509 device certificate
+
+In this section, you will create a self-signed X.509 certificate. It is important to keep in mind the following points:
+
+* Self-signed certificates are for testing only, and should not be used in production.
+* The default expiration date for a self-signed certificate is one year.
+
+If you don't already have your device certificates to authenticate a device, you can create a self-signed certificate with OpenSSL for testing with this article. OpenSSL is included with the Git installation.
+
+1. Run the following command in the Git Bash prompt.
+
+ # [Windows](#tab/windows)
+
+ ```bash
+ winpty openssl req -outform PEM -x509 -sha256 -newkey rsa:4096 -keyout ./python-device.key.pem -out ./python-device.pem -days 365 -extensions usr_cert -subj "//CN=Python-device-01"
```
-6. Run the following command to create the Visual Studio solution for the provisioning client.
+ > [!IMPORTANT]
+ > The extra forward slash given for the subject name (`//CN=Python-device-01`) is only required to escape the string with Git on Windows platforms.
- ```cmd/sh
- cmake -Duse_prov_client:BOOL=ON ..
+ # [Linux](#tab/linux)
+
+ ```bash
+ openssl req -outform PEM -x509 -sha256 -newkey rsa:4096 -keyout ./python-device.key.pem -out ./python-device.pem -days 365 -extensions usr_cert -subj "/CN=Python-device-01"
```
+
+
+
+2. When asked to **Enter PEM pass phrase:**, use the pass phrase `1234` for testing with this article.
+3. When asked again **Verifying - Enter PEM pass phrase:**, use the pass phrase `1234` again.
-## Create a self-signed X.509 device certificate and individual enrollment entry
+A test certificate file (*python-device.pem*) and private key file (*python-device.key.pem*) are generated in the directory where you ran the `openssl` command.
-In this section you, will use a self-signed X.509 certificate. It is important to keep in mind the following points:
-* Self-signed certificates are for testing only, and should not be used in production.
-* The default expiration date for a self-signed certificate is one year.
+## Create an individual enrollment entry in DPS
-You will use sample code from the Azure IoT C SDK to create the certificate to be used with the individual enrollment entry for the simulated device.
The Azure IoT Device Provisioning Service supports two types of enrollments:

- [Enrollment groups](concepts-service.md#enrollment-group): Used to enroll multiple related devices.
- [Individual enrollments](concepts-service.md#individual-enrollment): Used to enroll a single device.
-This article demonstrates individual enrollments.
+This article demonstrates an individual enrollment for a single device to be provisioned with an IoT hub.
-1. Open the solution generated in the *cmake* folder named `azure_iot_sdks.sln`, and build it in Visual Studio.
+1. Sign in to the Azure portal, select the **All resources** button on the left-hand menu and open your provisioning service.
-2. Right-click the **dice\_device\_enrollment** project under the **Provision\_Tools** folder, and select **Set as Startup Project**. Run the solution.
+2. From the Device Provisioning Service menu, select **Manage enrollments**. Select **Individual Enrollments** tab and select the **Add individual enrollment** button at the top.
-3. In the output window, enter `i` for individual enrollment when prompted. The output window displays a locally generated X.509 certificate for your simulated device.
-
- ```output
- Copy the first certificate to clipboard. Begin with the first occurrence of:
-
- --BEGIN CERTIFICATE--
-
- End you copying after the first occurrence of:
-
- --END CERTIFICATE--
-
- Make sure to include both of those lines as well.
- ```
-
- ![Dice device enrollment application](./media/python-quick-create-simulated-device-x509/dice-device-enrollment.png)
-
-4. Create a file named **_X509testcertificate.pem_** on your Windows machine, open it in an editor of your choice, and copy the clipboard contents to this file. Save the file.
-
-5. Sign in to the Azure portal, select the **All resources** button on the left-hand menu and open your provisioning service.
-
-6. From the Device Provisioning Service menu, select **Manage enrollments**. Select **Individual Enrollments** tab and select the **Add individual enrollment** button at the top.
-
-7. In the **Add Enrollment** panel, enter the following information:
+3. In the **Add Enrollment** panel, enter the following information:
- Select **X.509** as the identity attestation *Mechanism*.
- - Under the *Primary certificate .pem or .cer file*, choose *Select a file* to select the certificate file **X509testcertificate.pem** created in the previous steps.
+ - Under the *Primary certificate .pem or .cer file*, choose *Select a file* to select the certificate file **python-device.pem** if you are using the test certificate created earlier.
- Optionally, you may provide the following information: - Select an IoT hub linked with your provisioning service.
- - Enter a unique device ID. Make sure to avoid sensitive data while naming your device.
- Update the **Initial device twin state** with the desired initial configuration for the device. - Once complete, press the **Save** button. [![Add individual enrollment for X.509 attestation in the portal](./media/python-quick-create-simulated-device-x509/device-enrollment.png)](./media/python-quick-create-simulated-device-x509/device-enrollment.png#lightbox)
- Upon successful enrollment, your X.509 device appears as **riot-device-cert** under the *Registration ID* column in the *Individual Enrollments* tab.
+ Upon successful enrollment, your X.509 device appears as **Python-device-01** under the *Registration ID* column in the *Individual Enrollments* tab. This registration value comes from the subject name on the device certificate.
## Simulate the device
-1. From the Device Provisioning Service menu, select **Overview**. Note your _ID Scope_ and _Global Service Endpoint_.
+The Python provisioning sample, [provision_x509.py](https://github.com/Azure/azure-iot-sdk-python/blob/master/azure-iot-device/samples/async-hub-scenarios/provision_x509.py) is located in the `azure-iot-sdk-python/azure-iot-device/samples/async-hub-scenarios` directory. This sample uses six environment variables to authenticate and provision an IoT device using DPS. These environment variables are:
- ![Service information](./media/python-quick-create-simulated-device-x509/extract-dps-endpoints.png)
+| Variable name | Description |
+| :- | :- |
+| `PROVISIONING_HOST` | This value is the global endpoint used for connecting to your DPS resource |
+| `PROVISIONING_IDSCOPE` | This value is the ID Scope for your DPS resource |
+| `DPS_X509_REGISTRATION_ID` | This value is the ID for your device. It must also match the subject name on the device certificate |
+| `X509_CERT_FILE` | Your device certificate filename |
+| `X509_KEY_FILE` | The private key filename for your device certificate |
+| `PASS_PHRASE` | The pass phrase you used to encrypt the certificate and private key file (`1234`). |
-2. Download and install [Python 2.x or 3.x](https://www.python.org/downloads/). Make sure to use the 32-bit or 64-bit installation as required by your setup. When prompted during the installation, make sure to add Python to your platform-specific environment variables. If you are using Python 2.x, you may need to [install or upgrade *pip*, the Python package management system](https://pip.pypa.io/en/stable/installing/).
-
- > [!NOTE]
- > If you are using Windows, also install the [Visual C++ Redistributable for Visual Studio 2015](https://support.microsoft.com/help/2977003/the-latest-supported-visual-c-downloads). The pip packages require the redistributable in order to load/execute the C DLLs.
+1. From the Device Provisioning Service menu, select **Overview**. Note your _ID Scope_ and _Global device endpoint_.
-3. Follow [these instructions](https://github.com/Azure/azure-iot-sdk-python/blob/v1-deprecated/doc/python-devbox-setup.md) to build the Python packages.
+ ![Service information](./media/python-quick-create-simulated-device-x509/extract-dps-endpoints.png)
- > [!NOTE]
- > If using `pip` make sure to also install the `azure-iot-provisioning-device-client` package.
+2. In your Git Bash prompt, use the following commands to add the environment variables for the global device endpoint and ID Scope.
-4. Navigate to the samples folder.
+ ```bash
+ $export PROVISIONING_HOST=global.azure-devices-provisioning.net
+ $export PROVISIONING_IDSCOPE=<ID scope for your DPS resource>
+ ```
- ```cmd/sh
- cd azure-iot-sdk-python/provisioning_device_client/samples
+3. The registration ID for the IoT device must match the subject name on its device certificate. If you generated a self-signed test certificate, `Python-device-01` is the subject name and registration ID for the device.
+
+ If you already have a device certificate, you can use `certutil` to verify the subject common name used for your device as shown below for a self-signed test certificate:
+
+ ```bash
+ $ certutil python-device.pem
+ X509 Certificate:
+ Version: 3
+ Serial Number: fa33152fe1140dc8
+ Signature Algorithm:
+ Algorithm ObjectId: 1.2.840.113549.1.1.11 sha256RSA
+ Algorithm Parameters:
+ 05 00
+ Issuer:
+ CN=Python-device-01
+ Name Hash(sha1): 1dd88de40e9501fb64892b698afe12d027011000
+ Name Hash(md5): a62c784820daa931b9d3977739b30d12
+
+ NotBefore: 1/29/2021 7:05 PM
+ NotAfter: 1/29/2022 7:05 PM
+
+ Subject:
+ ===> CN=Python-device-01 <===
+ Name Hash(sha1): 1dd88de40e9501fb64892b698afe12d027011000
+ Name Hash(md5): a62c784820daa931b9d3977739b30d12
```
-5. Using your Python IDE, edit the python script named **provisioning\_device\_client\_sample.py**. Modify the _GLOBAL\_PROV\_URI_ and _ID\_SCOPE_ variables to the values noted previously.
+ In the Git Bash prompt, set the environment variable for the registration ID as follows:
- ```python
- GLOBAL_PROV_URI = "{globalServiceEndpoint}"
- ID_SCOPE = "{idScope}"
- SECURITY_DEVICE_TYPE = ProvisioningSecurityDeviceType.X509
- PROTOCOL = ProvisioningTransportProvider.HTTP
+ ```bash
+ $export DPS_X509_REGISTRATION_ID=Python-device-01
```
-6. Run the sample.
+4. In the Git Bash prompt, set the environment variables for the certificate file, private key file, and pass phrase.
- ```cmd/sh
- python provisioning_device_client_sample.py
+ ```bash
+ $export X509_CERT_FILE=./python-device.pem
+ $export X509_KEY_FILE=./python-device.key.pem
+ $export PASS_PHRASE=1234
```
-7. The application will connect, enroll the device, and display a successful enrollment message.
-
- ![successful enrollment](./media/python-quick-create-simulated-device-x509/enrollment-success.png)
+5. Review the code for [provision_x509.py](https://github.com/Azure/azure-iot-sdk-python/blob/master/azure-iot-device/samples/async-hub-scenarios/provision_x509.py). If you're not using **Python version 3.7** or later, make the [code change mentioned here](https://github.com/Azure/azure-iot-sdk-python/tree/master/azure-iot-device/samples/async-hub-scenarios#advanced-iot-hub-scenario-samples-for-the-azure-iot-hub-device-sdk) to replace `asyncio.run(main())`, and save your change.
+
+6. Run the sample. The sample will connect, provision the device to a hub, and send some test messages to the hub.
+
+ ```bash
+ $ winpty python azure-iot-sdk-python/azure-iot-device/samples/async-hub-scenarios/provision_x509.py
+ RegistrationStage(RequestAndResponseOperation): Op will transition into polling after interval 2. Setting timer.
+ The complete registration result is
+ Python-device-01
+ TestHub12345.azure-devices.net
+ initialAssignment
+ null
+ Will send telemetry from the provisioned device
+ sending message #4
+ sending message #7
+ sending message #2
+ sending message #8
+ sending message #5
+ sending message #9
+ sending message #1
+ sending message #6
+ sending message #10
+ sending message #3
+ done sending message #4
+ done sending message #7
+ done sending message #2
+ done sending message #8
+ done sending message #5
+ done sending message #9
+ done sending message #1
+ done sending message #6
+ done sending message #10
+ done sending message #3
+ ```
-8. In the portal, navigate to the IoT hub linked to your provisioning service and open the **Device Explorer** blade. On successful provisioning of the simulated X.509 device to the hub, its device ID appears on the **Device Explorer** blade, with *STATUS* as **enabled**. You might need to press the **Refresh** button at the top if you already opened the blade prior to running the sample device application.
+7. In the portal, navigate to the IoT hub linked to your provisioning service and open the **IoT devices** blade located under the **Explorers** section in the left menu. On successful provisioning of the simulated X.509 device to the hub, its device ID appears on the **Device Explorer** blade, with *STATUS* as **enabled**. You might need to press the **Refresh** button at the top if you already opened the blade prior to running the sample device application.
![Device is registered with the IoT hub](./media/python-quick-create-simulated-device-x509/registration.png)
@@ -174,7 +218,7 @@ If you plan to continue working on and exploring the device client sample, do no
## Next steps
-In this quickstart, youΓÇÖve created a simulated X.509 device on your Windows machine and provisioned it to your IoT hub using the Azure IoT Hub Device Provisioning Service on the portal. To learn how to enroll your X.509 device programmatically, continue to the quickstart for programmatic enrollment of X.509 devices.
+In this quickstart, you've created a simulated X.509 device on your development machine and provisioned it to your IoT hub using the Azure IoT Hub Device Provisioning Service on the portal. To learn how to enroll your X.509 device programmatically, continue to the quickstart for programmatic enrollment of X.509 devices.
> [!div class="nextstepaction"] > [Azure quickstart - Enroll X.509 devices to Azure IoT Hub Device Provisioning Service](quick-enroll-device-x509-python.md)
key-vault https://docs.microsoft.com/en-us/azure/key-vault/keys/hsm-protected-keys-byok https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/keys/hsm-protected-keys-byok.md
@@ -9,7 +9,7 @@ tags: azure-resource-manager
Previously updated : 05/29/2020 Last updated : 02/01/2021
@@ -62,6 +62,7 @@ The following table lists prerequisites for using BYOK in Azure Key Vault:
|Cryptomathic|ISV (Enterprise Key Management System)|Multiple HSM brands and models including<ul><li>nCipher</li><li>Thales</li><li>Utimaco</li></ul>See [Cryptomathic site for details](https://www.cryptomathic.com/azurebyok)|[Cryptomathic BYOK tool and documentation](https://www.cryptomathic.com/azurebyok)| |Securosys SA|Manufacturer, HSM as a service|Primus HSM family, Securosys Clouds HSM|[Primus BYOK tool and documentation](https://www.securosys.com/primus-azure-byok)| |StorMagic|ISV (Enterprise Key Management System)|Multiple HSM brands and models including<ul><li>Utimaco</li><li>Thales</li><li>nCipher</li></ul>See [StorMagic site for details](https://stormagic.com/doc/svkms/Content/Integrations/Azure_KeyVault_BYOK.htm)|[SvKMS and Azure Key Vault BYOK](https://stormagic.com/doc/svkms/Content/Integrations/Azure_KeyVault_BYOK.htm)|
+|IBM|Manufacturer|IBM 476x, CryptoExpress|[IBM Enterprise Key Management Foundation](https://www.ibm.com/security/key-management/ekmf-bring-your-own-key-azure)|
||||
key-vault https://docs.microsoft.com/en-us/azure/key-vault/keys/hsm-protected-keys https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/keys/hsm-protected-keys.md
@@ -9,7 +9,7 @@ tags: azure-resource-manager
Previously updated : 05/29/2020 Last updated : 02/01/2021
@@ -37,6 +37,7 @@ Transferring HSM-protected keys to Key Vault is supported via two different meth
|Cryptomathic|ISV (Enterprise Key Management System)|Multiple HSM brands and models including<ul><li>nCipher</li><li>Thales</li><li>Utimaco</li></ul>See [Cryptomathic site for details](https://www.cryptomathic.com/azurebyok)|[Use new BYOK method](hsm-protected-keys-byok.md)| |Securosys SA|Manufacturer, HSM as a service|Primus HSM family, Securosys Clouds HSM|[Use new BYOK method](hsm-protected-keys-byok.md)| |StorMagic|ISV (Enterprise Key Management System)|Multiple HSM brands and models including<ul><li>Utimaco</li><li>Thales</li><li>nCipher</li></ul>See [StorMagic site for details](https://stormagic.com/doc/svkms/Content/Integrations/Azure_KeyVault_BYOK.htm)|[Use new BYOK method](hsm-protected-keys-byok.md)|
+|IBM|Manufacturer|IBM 476x, CryptoExpress|[Use new BYOK method](hsm-protected-keys-byok.md)|
||||| ## Next steps
key-vault https://docs.microsoft.com/en-us/azure/key-vault/managed-hsm/hsm-protected-keys-byok https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/managed-hsm/hsm-protected-keys-byok.md
@@ -7,7 +7,7 @@ tags: azure-resource-manager
Previously updated : 09/17/2020 Last updated : 02/01/2021
@@ -65,6 +65,7 @@ For more information on login options via the CLI take a look at [sign in with A
|Cryptomathic|ISV (Enterprise Key Management System)|Multiple HSM brands and models including<ul><li>nCipher</li><li>Thales</li><li>Utimaco</li></ul>See [Cryptomathic site for details](https://www.cryptomathic.com/azurebyok)|[Cryptomathic BYOK tool and documentation](https://www.cryptomathic.com/azurebyok)| |Securosys SA|Manufacturer, HSM as a service|Primus HSM family, Securosys Clouds HSM|[Primus BYOK tool and documentation](https://www.securosys.com/primus-azure-byok)| |StorMagic|ISV (Enterprise Key Management System)|Multiple HSM brands and models including<ul><li>Utimaco</li><li>Thales</li><li>nCipher</li></ul>See [StorMagic site for details](https://stormagic.com/doc/svkms/Content/Integrations/Azure_KeyVault_BYOK.htm)|[SvKMS and Azure Key Vault BYOK](https://stormagic.com/doc/svkms/Content/Integrations/Azure_KeyVault_BYOK.htm)|
+|IBM|Manufacturer|IBM 476x, CryptoExpress|[IBM Enterprise Key Management Foundation](https://www.ibm.com/security/key-management/ekmf-bring-your-own-key-azure)|
||||
logic-apps https://docs.microsoft.com/en-us/azure/logic-apps/logic-apps-gateway-connection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/logic-apps-gateway-connection.md
@@ -1,16 +1,16 @@
Title: Access data sources on premises
-description: Connect to on-premises data sources from Azure Logic Apps by creating an data gateway resource in Azure
+description: Connect to on-premises data sources from Azure Logic Apps by creating a data gateway resource in Azure
ms.suite: integration-+ Previously updated : 08/18/2020 Last updated : 01/20/2021 # Connect to on-premises data sources from Azure Logic Apps
-After you [install the *on-premises data gateway* on a local computer](../logic-apps/logic-apps-gateway-install.md) and before you can access data sources on premises from your logic apps, you need to create a gateway resource in Azure for your gateway installation. You can then select this gateway resource in the triggers and actions that you want to use for the [on-premises connectors](../connectors/apis-list.md#on-premises-connectors) available in Azure Logic Apps. Azure Logic Apps supports read and write operations through the data gateway. However, these operations have [limits on their payload size](/data-integration/gateway/service-gateway-onprem#considerations).
+After you [install the *on-premises data gateway* on a local computer](../logic-apps/logic-apps-gateway-install.md) and before you can access data sources on premises from your logic apps, you have to create a gateway resource in Azure for your gateway installation. You can then select this gateway resource in the triggers and actions that you want to use for the [on-premises connectors](../connectors/apis-list.md#on-premises-connectors) available in Azure Logic Apps. Azure Logic Apps supports read and write operations through the data gateway. However, these operations have [limits on their payload size](/data-integration/gateway/service-gateway-onprem#considerations).
This article shows how to create your Azure gateway resource for a previously [installed gateway on your local computer](../logic-apps/logic-apps-gateway-install.md). For more information about the gateway, see [How the gateway works](../logic-apps/logic-apps-gateway-install.md#gateway-cloud-service).
@@ -44,17 +44,21 @@ In Azure Logic Apps, the on-premises data gateway supports the [on-premises conn
* SQL Server * Teradata
-You can also create [custom connectors](../logic-apps/custom-connector-overview.md) that connect to data sources over HTTP or HTTPS by using REST or SOAP. Although the gateway itself doesn't incur additional costs, the [Logic Apps pricing model](../logic-apps/logic-apps-pricing.md) applies to these connectors and other operations in Azure Logic Apps.
+You can also create [custom connectors](../logic-apps/custom-connector-overview.md) that connect to data sources over HTTP or HTTPS by using REST or SOAP. Although the gateway itself doesn't incur extra costs, the [Logic Apps pricing model](../logic-apps/logic-apps-pricing.md) applies to these connectors and other operations in Azure Logic Apps.
## Prerequisites

* You already [installed the on-premises data gateway on a local computer](../logic-apps/logic-apps-gateway-install.md). This gateway installation must exist before you can create a gateway resource that links to this installation.
-* You have the [same Azure account and subscription](../logic-apps/logic-apps-gateway-install.md#requirements) that you used for your gateway installation. This Azure account must belong only to a single [Azure Active Directory (Azure AD) tenant or directory](../active-directory/fundamentals/active-directory-whatis.md#terminology). You need to use the same Azure account and subscription to create your gateway resource in Azure because only the gateway administrator can create the gateway resource in Azure. Service principals currently aren't supported.
+* You have the [same Azure account and subscription](../logic-apps/logic-apps-gateway-install.md#requirements) that you used for your gateway installation. This Azure account must belong only to a single [Azure Active Directory (Azure AD) tenant or directory](../active-directory/fundamentals/active-directory-whatis.md#terminology). You have to use the same Azure account and subscription to create your gateway resource in Azure because only the gateway administrator can create the gateway resource in Azure. Service principals currently aren't supported.
* When you create a gateway resource in Azure, you select a gateway installation to link with your gateway resource and only that gateway resource. Each gateway resource can link to only one gateway installation. You can't select a gateway installation that's already associated with another gateway resource.
-
- * Your logic app and gateway resource don't have to exist in the same Azure subscription. Provided that you have subscription access, in triggers and actions that can access on-premises data sources, you can select other Azure subscriptions that have gateway resources.
+
+ * Your logic app and gateway resource don't have to exist in the same Azure subscription. In triggers and actions where you can use the gateway resource, you can select a different Azure subscription that has a gateway resource, but only if that subscription exists in the same Azure AD tenant or directory as your logic app. You also have to have administrator permissions on the gateway, which another administrator can set up for you. For more information, see [Data Gateway: Automation using PowerShell - Part 1](https://community.powerbi.com/t5/Community-Blog/Data-Gateway-Automation-using-PowerShell-Part-1/ba-p/1117330) and [PowerShell: Data Gateway - Add-DataGatewayClusterUser](/powershell/module/datagateway/add-datagatewayclusteruser).
+
+ > [!NOTE]
+ > Currently, you can't share a gateway resource or installation across multiple subscriptions.
+ > To submit product feedback, see [Microsoft Azure Feedback Forum](https://feedback.azure.com/forums/34192--general-feedback).
<a name="create-gateway-resource"></a>
@@ -99,10 +103,10 @@ After you create your gateway resource and associate your Azure subscription wit
1. Select **Connect via on-premises data gateway**.
-1. Under **Gateways**, from the **Subscriptions** list, select your Azure subscription that has the gateway resource you want.
-
- Provided that you have subscription access, you can select from different Azure subscriptions that are each associated with a different gateway resource. Your logic app and gateway resource don't have to exist in the same Azure subscription.
+1. Under **Gateway**, from the **Subscription** list, select your Azure subscription that has the gateway resource you want.
+ Your logic app and gateway resource don't have to exist in the same Azure subscription. You can select from other Azure subscriptions that each have a gateway resource, but only if these subscriptions exist in the same Azure AD tenant or directory as your logic app, and you have administrator permissions on the gateway, which another administrator can set up for you. For more information, see [Data Gateway: Automation using PowerShell - Part 1](https://community.powerbi.com/t5/Community-Blog/Data-Gateway-Automation-using-PowerShell-Part-1/ba-p/1117330) and [PowerShell: Data Gateway - Add-DataGatewayClusterUser](/powershell/module/datagateway/add-datagatewayclusteruser).
+
1. From the **Connection Gateway** list, which shows the available gateway resources in your selected subscription, select the gateway resource that you want. Each gateway resource is linked to a single gateway installation. > [!NOTE]
mysql https://docs.microsoft.com/en-us/azure/mysql/concept-performance-best-practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/concept-performance-best-practices.md
@@ -1,16 +1,16 @@
Title: Performance best practices - Azure Database for MySQL
-description: This article describes the best practices to monitor and tune performance for your Azure Database for MySQL.
--
+description: This article describes some recommendations to monitor and tune performance for your Azure Database for MySQL.
++ Previously updated : 11/23/2020 Last updated : 1/28/2021 # Best practices for optimal performance of your Azure Database for MySQL - Single server
-Learn about the best practices for getting the best performance while working with your Azure Database for MySQL - Single server. As we add new capabilities to the platform, we will continue refine the best practices detailed in this section.
+Learn how to get best performance while working with your Azure Database for MySQL - Single server. As we add new capabilities to the platform, we will continue refining our recommendations in this section.
## Physical Proximity
@@ -18,7 +18,7 @@ Learn about the best practices for getting the best performance while working wi
## Accelerated Networking
-Use accelerated networking for the application server if you are using Azure virtual machine , Azure Kubernetes or App Services. Accelerated Networking enables single root I/O virtualization (SR-IOV) to a VM, greatly improving its networking performance. This high-performance path bypasses the host from the datapath, reducing latency, jitter, and CPU utilization, for use with the most demanding network workloads on supported VM types.
+Use accelerated networking for the application server if you are using Azure virtual machine, Azure Kubernetes, or App Services. Accelerated Networking enables single root I/O virtualization (SR-IOV) to a VM, greatly improving its networking performance. This high-performance path bypasses the host from the datapath, reducing latency, jitter, and CPU utilization, for use with the most demanding network workloads on supported VM types.
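As an illustration only (not part of the original article), accelerated networking can be enabled on an existing VM NIC with the Azure CLI; the resource group and NIC names below are placeholders, and the VM size must support accelerated networking.

```bash
# Sketch: enable accelerated networking on an existing NIC (placeholder names).
# Some VM sizes require the VM to be deallocated before this setting can be changed.
az network nic update \
  --resource-group <RESOURCE_GROUP> \
  --name <NIC_NAME> \
  --accelerated-networking true
```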
## Connection Efficiency
@@ -42,8 +42,25 @@ Establishing a new connection is always an expensive and time-consuming task. Wh
An Azure Database for MySQL performance best practice is to allocate enough RAM so that your working set resides almost completely in memory.

- Check if the memory percentage being used is reaching the [limits](./concepts-pricing-tiers.md) using the [metrics for the MySQL server](./concepts-monitoring.md).
-- Set up alerts on such numbers to ensure that as the servers reaches limits you can take prompt actions to fix it. Based on the limits defined, check if scaling up the database SKU ΓÇö either to higher compute size or to better pricing tier which results in a dramatic increase in performance.
+- Set up alerts on such numbers to ensure that as the server reaches its limits, you can take prompt action to fix it. Based on the limits defined, check whether scaling up the database SKU, either to a higher compute size or to a better pricing tier, results in a dramatic increase in performance.
- Scale up until your performance numbers no longer drop dramatically after a scaling operation. For information on monitoring a DB instance's metrics, see [MySQL DB Metrics](./concepts-monitoring.md#metrics).
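As a hedged sketch (not from the original article), a memory alert could be created with the Azure CLI; the names and IDs below are placeholders, and you should verify the metric name (`memory_percent`) and command options against the current CLI reference.

```bash
# Sketch: alert when average memory usage on the MySQL server exceeds 90 percent (placeholder names).
az monitor metrics alert create \
  --name "mysql-memory-above-90" \
  --resource-group <RESOURCE_GROUP> \
  --scopes "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP>/providers/Microsoft.DBforMySQL/servers/<SERVER_NAME>" \
  --condition "avg memory_percent > 90" \
  --description "Memory usage is approaching the limit for the current SKU"
```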
+
+## Use InnoDB Buffer Pool Warmup
+
+After restarting an Azure Database for MySQL server, the data pages residing in storage are loaded as the tables are queried, which leads to increased latency and slower performance for the first execution of the queries. This may not be acceptable for latency-sensitive workloads.
+
+Utilizing InnoDB buffer pool warmup shortens the warmup period by reloading disk pages that were in the buffer pool before the restart rather than waiting for DML or SELECT operations to access corresponding rows.
+
+You can reduce the warmup period after restarting your Azure Database for MySQL server, which represents a performance advantage, by configuring [InnoDB buffer pool server parameters](https://dev.mysql.com/doc/refman/8.0/en/innodb-preload-buffer-pool.html). InnoDB saves a percentage of the most recently used pages for each buffer pool at server shutdown and restores these pages at server startup.
+
+It is also important to note that improved performance comes at the expense of longer start-up time for the server. When this parameter is enabled, server startup and restart time is expected to increase depending on the IOPS provisioned on the server.
+
+We recommend testing and monitoring the restart time to ensure that the start-up/restart performance is acceptable, because the server is unavailable during that time. It is not recommended to use this parameter with fewer than 1000 provisioned IOPS (in other words, when the storage provisioned is less than 335 GB).
+
+To save the state of the buffer pool at server shutdown, set server parameter `innodb_buffer_pool_dump_at_shutdown` to `ON`. Similarly, set server parameter `innodb_buffer_pool_load_at_startup` to `ON` to restore the buffer pool state at server startup. You can control the impact on start-up/restart time by lowering and fine-tuning the value of server parameter `innodb_buffer_pool_dump_pct`. By default, this parameter is set to `25`.
+
+> [!Note]
+> InnoDB buffer pool warmup parameters are only supported in general purpose storage servers with up to 16-TB storage. Learn more about [Azure Database for MySQL storage options here](https://docs.microsoft.com/azure/mysql/concepts-pricing-tiers#storage).
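As a hedged sketch (not part of the original article), the server parameters described above could be set with the Azure CLI; the resource group and server name are placeholders, and parameter availability should be confirmed on your server.

```bash
# Sketch: enable buffer pool dump at shutdown and reload at startup (placeholder names).
az mysql server configuration set --resource-group <RESOURCE_GROUP> --server-name <SERVER_NAME> \
  --name innodb_buffer_pool_dump_at_shutdown --value ON
az mysql server configuration set --resource-group <RESOURCE_GROUP> --server-name <SERVER_NAME> \
  --name innodb_buffer_pool_load_at_startup --value ON
# Optionally lower the percentage of pages dumped (default 25) to shorten startup time.
az mysql server configuration set --resource-group <RESOURCE_GROUP> --server-name <SERVER_NAME> \
  --name innodb_buffer_pool_dump_pct --value 10
```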
## Next steps
mysql https://docs.microsoft.com/en-us/azure/mysql/how-to-major-version-upgrade https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/how-to-major-version-upgrade.md
@@ -1,14 +1,18 @@
Title: Major version upgrade in Azure Database for MySQL - Single Server description: This article describes how you can upgrade major version for Azure Database for MySQL - Single Server --++ Previously updated : 1/13/2021 Last updated : 1/28/2021 # Major version upgrade in Azure Database for MySQL Single Server
+> [!NOTE]
+> This article contains references to the term _slave_, a term that Microsoft no longer uses. When the term is removed from the software, we will remove it from this article.
+>
+ > [!IMPORTANT] > Major version upgrade for Azure database for MySQL Single Server is in public preview.
@@ -17,9 +21,8 @@ This article describes how you can upgrade your MySQL major version in-place in
This feature will enable customers to perform in-place upgrades of their MySQL 5.6 servers to MySQL 5.7 with a click of button without any data movement or the need of any application connection string changes. > [!Note]
-> * Major version upgrade is only available for major version upgrade from MySQL 5.6 to MySQL 5.7<br>
-> * Major version upgrade is not supported on replica server yet.
-> * The server will be unavailable throughout the upgrade operation. It is therefore recommended to perform upgrades during your planned maintenance window.
+> * Major version upgrade is only available for major version upgrade from MySQL 5.6 to MySQL 5.7.
+> * The server will be unavailable throughout the upgrade operation. It is therefore recommended to perform upgrades during your planned maintenance window. You can consider [performing minimal downtime major version upgrade from MySQL 5.6 to MySQL 5.7 using read replica.](#perform-minimal-downtime-major-version-upgrade-from-mysql-56-to-mysql-57-using-read-replicas)
## Perform major version upgrade from MySQL 5.6 to MySQL 5.7 using Azure portal
@@ -51,13 +54,59 @@ Follow these steps to perform major version upgrade for your Azure Database of M
This upgrade requires version 2.16.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed. Run az version to find the version and dependent libraries that are installed. To upgrade to the latest version, run az upgrade. 2. After you sign in, run the [az mysql server upgrade](https://docs.microsoft.com/cli/azure/mysql/server?view=azure-cli-latest#az_mysql_server_upgrade&preserve-view=true) command:
-
+
   ```azurecli
   az mysql server upgrade --name testsvr --resource-group testgroup --subscription MySubscription --target-server-version 5.7
   ```

   The command prompt shows the "-Running" message. After this message is no longer displayed, the version upgrade is complete.
+## Perform major version upgrade from MySQL 5.6 to MySQL 5.7 on read replica using Azure portal
+
+1. In the [Azure portal](https://portal.azure.com/), select your existing Azure Database for MySQL 5.6 read replica server.
+
+2. From the **Overview** page, click the **Upgrade** button in the toolbar.
+
+3. In the **Upgrade** section, select **OK** to upgrade the Azure Database for MySQL 5.6 read replica server to 5.7.
+
+ :::image type="content" source="./media/how-to-major-version-upgrade-portal/upgrade.png" alt-text="Azure Database for MySQL - overview - upgrade":::
+
+4. A notification will confirm that the upgrade is successful.
+
+5. From the **Overview** page, confirm that your Azure Database for MySQL read replica server version is 5.7.
+
+6. Now go to your primary server and [perform a major version upgrade](#perform-major-version-upgrade-from-mysql-56-to-mysql-57-using-azure-portal) on it.
+
+## Perform minimal downtime major version upgrade from MySQL 5.6 to MySQL 5.7 using read replicas
+
+You can perform a minimal-downtime major version upgrade from MySQL 5.6 to MySQL 5.7 by using read replicas. The idea is to first upgrade the read replica of your server to 5.7, and then fail over your application to point to the read replica, making it the new primary.
+
+1. In the [Azure portal](https://portal.azure.com/), select your existing Azure Database for MySQL 5.6 server.
+
+2. Create a [read replica](https://docs.microsoft.com/azure/mysql/concepts-read-replicas#create-a-replica) from your primary server.
+
+3. [Upgrade your read replica](#perform-major-version-upgrade-from-mysql-56-to-mysql-57-on-read-replica-using-azure-portal) to version 5.7.
+
+4. Once you confirm that the replica server is running on version 5.7, stop your application from connecting to your primary server.
+
+5. Check the replication status and make sure the replica has caught up with the primary, so that all data is in sync and no new operations are being performed on the primary.
+
+ Call the [`show slave status`](https://dev.mysql.com/doc/refman/5.7/en/show-slave-status.html) command on the replica server to view the replication status.
+
+ ```sql
+ SHOW SLAVE STATUS\G
+ ```
+
+ If `Slave_IO_Running` and `Slave_SQL_Running` are both "Yes" and `Seconds_Behind_Master` is "0", replication is working well. `Seconds_Behind_Master` indicates how far behind the replica is. If the value isn't "0", the replica is still processing updates. Once you confirm that `Seconds_Behind_Master` is "0", it's safe to stop replication.
+
+6. Promote your read replica to primary by [stopping replication](https://docs.microsoft.com/azure/mysql/howto-read-replicas-portal#stop-replication-to-a-replica-server).
+
+7. Point your application to the new primary (former replica), which is now running MySQL 5.7. Each server has a unique connection string, so update your application to point to the (former) replica instead of the source.
+
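+As a minimal sketch, step 6 can also be done with the Azure CLI instead of the portal; the replica server and resource group names are placeholders:
+
+```azurecli
+# Stop replication to promote the replica to a standalone server (the new primary).
+az mysql server replica stop --name mydemoreplicaserver --resource-group myresourcegroup
+```
+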
+> [!Note]
+> This scenario will have downtime during steps 4, 5 and 6 only.
++ ## Frequently asked questions ### When will this upgrade feature be GA as we have MySQL v5.6 in our production environment that we need to upgrade?
mysql https://docs.microsoft.com/en-us/azure/mysql/howto-configure-privatelink-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/howto-configure-privatelink-portal.md
@@ -218,6 +218,8 @@ After you've created **myVm**, connect to it from the internet as follows:
Name: myServer.privatelink.mysql.database.azure.com Address: 10.1.3.4 ```
+ > [!NOTE]
+ > These ping and telnet tests will succeed regardless of the firewall settings, even if public access is disabled for Azure Database for MySQL - Single Server. The tests only verify network connectivity.
3. Test the private link connection for the MySQL server using any available client. In the example below I have used [MySQL Workbench](https://dev.mysql.com/doc/workbench/en/wb-installing-windows.html) to do the operation.
postgresql https://docs.microsoft.com/en-us/azure/postgresql/concepts-backup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/concepts-backup.md
@@ -5,7 +5,7 @@
Previously updated : 02/25/2020 Last updated : 01/29/2021 # Backup and restore in Azure Database for PostgreSQL - Single Server
@@ -77,6 +77,16 @@ Point-in-time restore is useful in multiple scenarios. For example, when a user
You may need to wait for the next transaction log backup to be taken before you can restore to a point in time within the last five minutes.
+If you want to restore a dropped table:
+1. Restore the source server by using the point-in-time restore method.
+2. Dump the table from the restored server by using `pg_dump`.
+3. Rename the source table on the original server.
+4. Import the table by using the `psql` command line on the original server.
+5. Optionally, delete the restored server.
+
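+For example, steps 2 and 4 might look like the following minimal sketch; the server, database, table, and user names are placeholders, and SSL options may also be required by your server:
+
+```bash
+# Dump only the dropped table from the restored server (all names are placeholders).
+pg_dump --host=restoredserver.postgres.database.azure.com --username=myadmin@restoredserver --dbname=mydb --table=public.mytable --file=mytable.sql
+
+# Import the table into the original server.
+psql --host=originalserver.postgres.database.azure.com --username=myadmin@originalserver --dbname=mydb --file=mytable.sql
+```
+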
+>[!Note]
+> It is recommended not to create multiple restores for the same server at the same time.
+ ### Geo-restore You can restore a server to another Azure region where the service is available if you have configured your server for geo-redundant backups. Servers that support up to 4 TB of storage can be restored to the geo-paired region, or to any region that supports up to 16 TB of storage. For servers that support up to 16 TB of storage, geo-backups can be restored in any region that support 16 TB servers as well. Review [Azure Database for PostgreSQL pricing tiers](concepts-pricing-tiers.md) for the list of supported regions.
@@ -92,7 +102,7 @@ During geo-restore, the server configurations that can be changed include comput
After a restore from either recovery mechanism, you should perform the following tasks to get your users and applications back up and running: -- If the new server is meant to replace the original server, redirect clients and client applications to the new server
+- If the new server is meant to replace the original server, redirect clients and client applications to the new server. Also change the user name to `username@new-restored-server-name`.
- Ensure appropriate server-level firewall and VNet rules are in place for users to connect. These rules are not copied over from the original server. - Ensure appropriate logins and database level permissions are in place - Configure alerts, as appropriate
postgresql https://docs.microsoft.com/en-us/azure/postgresql/concepts-read-replicas https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/concepts-read-replicas.md
@@ -5,7 +5,7 @@
Previously updated : 11/05/2020 Last updated : 01/29/2021 # Read replicas in Azure Database for PostgreSQL - Single Server
@@ -51,8 +51,6 @@ In addition to the universal replica regions, you can create a read replica in t
If you are using cross-region replicas for disaster recovery planning, we recommend you create the replica in the paired region instead of one of the other regions. Paired regions avoid simultaneous updates and prioritize physical isolation and data residency. There are limitations to consider: -
-* Regional availability: Azure Database for PostgreSQL is available in France Central, UAE North, and Germany Central. However, their paired regions are not available.
* Uni-directional pairs: Some Azure regions are paired in one direction only. These regions include West India, Brazil South. This means that a primary server in West India can create a replica in South India. However, a primary server in South India cannot create a replica in West India. This is because West India's secondary region is South India, but South India's secondary region is not West India.
search https://docs.microsoft.com/en-us/azure/search/samples-dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/samples-dotnet.md
@@ -64,4 +64,5 @@ The following samples are also published by the Cognitive Search team, but are n
| [azure-search-power-skills](https://github.com/Azure-Samples/azure-search-power-skills) | Source code for consumable custom skills that you can incorporate in your own solutions. | | [Knowledge Mining Solution Accelerator](/samples/azure-samples/azure-search-knowledge-mining/azure-search-knowledge-mining/) | Includes templates, support files, and analytical reports to help you prototype an end-to-end knowledge mining solution. | | [Covid-19 Search App repository](https://github.com/liamca/covid19search) | Source code repository for the Cognitive Search based [Covid-19 Search App](https://covid19search.azurewebsites.net/) |
-| [JFK](https://github.com/Microsoft/AzureSearch_JFK_Files) | Learn more about the [JFK solution](https://www.microsoft.com/ai/ai-lab-jfk-files). |
\ No newline at end of file
+| [JFK](https://github.com/Microsoft/AzureSearch_JFK_Files) | Learn more about the [JFK solution](https://www.microsoft.com/ai/ai-lab-jfk-files). |
+| [Search + QnA Maker Accelerator](https://github.com/Azure-Samples/search-qna-maker-accelerator) | A [solution](https://techcommunity.microsoft.com/t5/azure-ai/qna-with-azure-cognitive-search/ba-p/2081381) combining the power of Search and QnA Maker. See the live [demo site](https://aka.ms/qnaWithAzureSearchDemo). |
\ No newline at end of file
search https://docs.microsoft.com/en-us/azure/search/search-blob-ai-integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-blob-ai-integration.md
@@ -1,5 +1,5 @@
Title: Use AI to understand Blob storage data
+ Title: Use AI to enrich blob content
description: Learn about the natural language and image analysis capabilities in Azure Cognitive Search, and how those processes apply to content stored in Azure blobs.
@@ -8,12 +8,12 @@
Previously updated : 09/23/2020 Last updated : 02/02/2021
-# Use AI to understand Blob storage data
+# Use AI to process and analyze Blob content in Azure Cognitive Search
-Data in Azure Blob storage is often a variety of unstructured content such as images, long text, PDFs, and Office documents. By using the AI capabilities in Azure Cognitive Search, you can understand and extract valuable information from blobs in a variety of ways. Examples of applying AI to blob content include:
+Content in Azure Blob storage that's composed of images or long undifferentiated text can undergo deep learning analysis to reveal and extract valuable information useful for downstream applications. By using [AI enrichment](cognitive-search-concept-intro.md), you can:
+ Extract text from images using optical character recognition (OCR) + Produce a scene description or tags from a photo
@@ -22,23 +22,23 @@ Data in Azure Blob storage is often a variety of unstructured content such as im
While you might need just one of these AI capabilities, it's common to combine multiple of them into the same pipeline (for example, extracting text from a scanned image and then finding all the dates and places referenced in it). It's also common to include custom AI or machine learning processing in the form of leading-edge external packages or in-house models tailored to your data and your requirements.
-AI enrichment creates new information, captured as text, stored in fields. Post-enrichment, you can access this information from a search index through full text search, or send enriched documents back to Azure storage to power new application experiences that include exploring data for discovery or analytics scenarios.
+Although you can apply AI enrichment to any data source supported by a search indexer, blobs are the most frequently used structures in an enrichment pipeline. Results are pulled into a search index for full text search, or rerouted back to Azure Storage to power new application experiences that include exploring data for discovery or analytics scenarios.
In this article, we view AI enrichment through a wide lens so that you can quickly grasp the entire process, from transforming raw data in blobs, to queryable information in either a search index or a knowledge store. ## What it means to "enrich" blob data with AI
-*AI enrichment* is part of the indexing architecture of Azure Cognitive Search that integrates built-in AI from Microsoft or custom AI that you provide. It helps you implement end-to-end scenarios where you need to process blobs (both existing ones and new ones as they come in or are updated), crack open all file formats to extract images and text, extract the desired information using various AI capabilities, and index them in a search index for fast search, retrieval and exploration.
+*AI enrichment* is part of the indexing architecture of Azure Cognitive Search that integrates machine learning models from Microsoft or custom learning models that you provide. It helps you implement end-to-end scenarios where you need to process blobs (both existing ones and new ones as they come in or are updated), crack open all file formats to extract images and text, extract the desired information using various AI capabilities, and index them in a search index for fast search, retrieval and exploration.
Inputs are your blobs, in a single container, in Azure Blob storage. Blobs can be almost any kind of text or image data. Output is always a search index, used for fast text search, retrieval, and exploration in client applications. Additionally, output can also be a [*knowledge store*](knowledge-store-concept-intro.md) that projects enriched documents into Azure blobs or Azure tables for downstream analysis in tools like Power BI or in data science workloads.
-In between is the pipeline architecture itself. The pipeline is based on the *indexer* feature, to which you can assign a *skillset*, which is composed of one or more *skills* providing the AI. The purpose of the pipeline is to produce *enriched documents* that enter as raw content but pick up additional structure, context, and information while moving through the pipeline. Enriched documents are consumed during indexing to create inverted indexes and other structures used in full text search or exploration and analytics.
+In between is the pipeline architecture itself. The pipeline is based on the [*indexers*](search-indexer-overview.md), to which you can assign a [*skillset*](cognitive-search-working-with-skillsets.md), which is composed of one or more *skills* providing the AI. The purpose of the pipeline is to produce *enriched documents* that enter the pipeline as raw content but pick up additional structure, context, and information while moving through the pipeline. Enriched documents are consumed during indexing to create inverted indexes and other structures used in full text search or exploration and analytics.
## Required resources
-You need Azure Blob storage, Azure Cognitive Search, and a third service or mechanism that provides the AI:
+In addition to Azure Blob storage and Azure Cognitive Search, you need a third service or mechanism that provides the AI:
+ For built-in AI, Cognitive Search integrates with Azure Cognitive Services vision and natural language processing APIs. You can [attach a Cognitive Services resource](cognitive-search-attach-cognitive-services.md) to add Optical Character Recognition (OCR), image analysis, or natural language processing (language detection, text translation, entity recognition, key phrase extraction).
security-center https://docs.microsoft.com/en-us/azure/security-center/kubernetes-workload-protections https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/kubernetes-workload-protections.md
@@ -58,7 +58,7 @@ To configure the recommendations, install the **Azure Policy add-on for Kuberne
> [!TIP] > The recommendation is included in five different security controls and it doesn't matter which one you select in the next step.
- 1. From any of the security controls, select the recommendation to see the resources on which you can install the add on.
+ 1. From any of the security controls, select the recommendation to see the resources on which you can install the add-on.
1. Select the relevant cluster, and **Remediate**. :::image type="content" source="./media/defender-for-kubernetes-usage/recommendation-to-install-policy-add-on-for-kubernetes-details.png" alt-text="Recommendation details page for **Azure Policy add-on for Kubernetes should be installed and enabled on your clusters**":::
@@ -248,5 +248,5 @@ In this article, you learned how to configure Kubernetes workload protection.
For other related material, see the following pages: - [Security Center recommendations for compute](recommendations-reference.md#recs-compute)-- [Alerts for AKS cluster level](alerts-reference.md#alerts-akscluster)-- [Alerts for Container host level](alerts-reference.md#alerts-containerhost)\ No newline at end of file
+- [Alerts for AKS cluster level](alerts-reference.md#alerts-akscluster)
+- [Alerts for Container host level](alerts-reference.md#alerts-containerhost)
\ No newline at end of file
service-bus-messaging https://docs.microsoft.com/en-us/azure/service-bus-messaging/service-bus-messages-payloads https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-messages-payloads.md
@@ -2,7 +2,7 @@
Title: Azure Service Bus messages, payloads, and serialization | Microsoft Docs description: This article provides an overview of Azure Service Bus messages, payloads, message routing, and serialization. Previously updated : 06/23/2020 Last updated : 01/29/2021 # Messages, payloads, and serialization
@@ -66,8 +66,6 @@ When using the legacy SBMP protocol, those objects are then serialized with the
While this hidden serialization magic is convenient, applications should take explicit control of object serialization and turn their object graphs into streams before including them into a message, and do the reverse on the receiver side. This yields interoperable results. It should also be noted that while AMQP has a powerful binary encoding model, it is tied to the AMQP messaging ecosystem and HTTP clients will have trouble decoding such payloads.
-We generally recommend JSON and Apache Avro as payload formats for structured data.
- The .NET Standard and Java API variants only accept byte arrays, which means that the application must handle object serialization control. ## Next steps
service-bus-messaging https://docs.microsoft.com/en-us/azure/service-bus-messaging/service-bus-messaging-exceptions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-messaging-exceptions.md
@@ -11,21 +11,21 @@ This article lists the .NET exceptions generated by .NET Framework APIs.
## Exception categories The messaging APIs generate exceptions that can fall into the following categories, along with the associated action you can take to try to fix them. The meaning and causes of an exception can vary depending on the type of messaging entity:
-1. User coding error ([System.ArgumentException](/dotnet/api/system.argumentexception?view=net-5.0), [System.InvalidOperationException](/dotnet/api/system.invalidoperationexception?view=net-5.0), [System.OperationCanceledException](/dotnet/api/system.operationcanceledexception?view=net-5.0), [System.Runtime.Serialization.SerializationException](/dotnet/api/system.runtime.serialization.serializationexception?view=net-5.0)). General action: try to fix the code before proceeding.
-2. Setup/configuration error ([Microsoft.ServiceBus.Messaging.MessagingEntityNotFoundException](/dotnet/api/microsoft.azure.servicebus.messagingentitynotfoundexception), [System.UnauthorizedAccessException](/dotnet/api/system.unauthorizedaccessexception?view=net-5.0). General action: review your configuration and change if necessary.
+1. User coding error ([System.ArgumentException](/dotnet/api/system.argumentexception), [System.InvalidOperationException](/dotnet/api/system.invalidoperationexception), [System.OperationCanceledException](/dotnet/api/system.operationcanceledexception), [System.Runtime.Serialization.SerializationException](/dotnet/api/system.runtime.serialization.serializationexception)). General action: try to fix the code before proceeding.
+2. Setup/configuration error ([Microsoft.ServiceBus.Messaging.MessagingEntityNotFoundException](/dotnet/api/microsoft.azure.servicebus.messagingentitynotfoundexception), [System.UnauthorizedAccessException](/dotnet/api/system.unauthorizedaccessexception)). General action: review your configuration and change if necessary.
3. Transient exceptions ([Microsoft.ServiceBus.Messaging.MessagingException](/dotnet/api/microsoft.servicebus.messaging.messagingexception), [Microsoft.ServiceBus.Messaging.ServerBusyException](/dotnet/api/microsoft.azure.servicebus.serverbusyexception), [Microsoft.ServiceBus.Messaging.MessagingCommunicationException](/dotnet/api/microsoft.servicebus.messaging.messagingcommunicationexception)). General action: retry the operation or notify users. The `RetryPolicy` class in the client SDK can be configured to handle retries automatically. For more information, see [Retry guidance](/azure/architecture/best-practices/retry-service-specific#service-bus).
-4. Other exceptions ([System.Transactions.TransactionException](/dotnet/api/system.transactions.transactionexception?view=net-5.0), [System.TimeoutException](/dotnet/api/system.timeoutexception?view=net-5.0), [Microsoft.ServiceBus.Messaging.MessageLockLostException](/dotnet/api/microsoft.azure.servicebus.messagelocklostexception), [Microsoft.ServiceBus.Messaging.SessionLockLostException](/dotnet/api/microsoft.azure.servicebus.sessionlocklostexception)). General action: specific to the exception type; refer to the table in the following section:
+4. Other exceptions ([System.Transactions.TransactionException](/dotnet/api/system.transactions.transactionexception), [System.TimeoutException](/dotnet/api/system.timeoutexception), [Microsoft.ServiceBus.Messaging.MessageLockLostException](/dotnet/api/microsoft.azure.servicebus.messagelocklostexception), [Microsoft.ServiceBus.Messaging.SessionLockLostException](/dotnet/api/microsoft.azure.servicebus.sessionlocklostexception)). General action: specific to the exception type; refer to the table in the following section:
## Exception types The following table lists messaging exception types, and their causes, and notes suggested action you can take. | **Exception Type** | **Description/Cause/Examples** | **Suggested Action** | **Note on automatic/immediate retry** | | | | | |
-| [TimeoutException](/dotnet/api/system.timeoutexception?view=net-5.0) |The server didn't respond to the requested operation within the specified time, which is controlled by [OperationTimeout](/dotnet/api/microsoft.servicebus.messaging.messagingfactorysettings). The server may have completed the requested operation. It can happen because of network or other infrastructure delays. |Check the system state for consistency and retry if necessary. See [Timeout exceptions](#timeoutexception). |Retry might help in some cases; add retry logic to code. |
-| [InvalidOperationException](/dotnet/api/system.invalidoperationexception?view=net-5.0) |The requested user operation isn't allowed within the server or service. See the exception message for details. For example, [Complete()](/dotnet/api/microsoft.azure.servicebus.queueclient.completeasync) generates this exception if the message was received in [ReceiveAndDelete](/dotnet/api/microsoft.azure.servicebus.receivemode) mode. |Check the code and the documentation. Make sure the requested operation is valid. |Retry doesn't help. |
-| [OperationCanceledException](/dotnet/api/system.operationcanceledexception?view=net-5.0) |An attempt is made to invoke an operation on an object that has already been closed, aborted, or disposed. In rare cases, the ambient transaction is already disposed. |Check the code and make sure it doesn't invoke operations on a disposed object. |Retry doesn't help. |
-| [UnauthorizedAccessException](/dotnet/api/system.unauthorizedaccessexception?view=net-5.0) |The [TokenProvider](/dotnet/api/microsoft.servicebus.tokenprovider) object couldn't acquire a token, the token is invalid, or the token doesn't contain the claims required to do the operation. |Make sure the token provider is created with the correct values. Check the configuration of the Access Control Service. |Retry might help in some cases; add retry logic to code. |
-| [ArgumentException](/dotnet/api/system.argumentexception?view=net-5.0)<br /> [ArgumentNullException](/dotnet/api/system.argumentnullexception?view=net-5.0)<br />[ArgumentOutOfRangeException](/dotnet/api/system.argumentoutofrangeexception?view=net-5.0) |One or more arguments supplied to the method are invalid.<br /> The URI supplied to [NamespaceManager](/dotnet/api/microsoft.servicebus.namespacemanager) or [Create](/dotnet/api/microsoft.servicebus.messaging.messagingfactory) contains path segment(s).<br /> The URI scheme supplied to [NamespaceManager](/dotnet/api/microsoft.servicebus.namespacemanager) or [Create](/dotnet/api/microsoft.servicebus.messaging.messagingfactory) is invalid. <br />The property value is larger than 32 KB. |Check the calling code and make sure the arguments are correct. |Retry doesn't help. |
+| [TimeoutException](/dotnet/api/system.timeoutexception) |The server didn't respond to the requested operation within the specified time, which is controlled by [OperationTimeout](/dotnet/api/microsoft.servicebus.messaging.messagingfactorysettings). The server may have completed the requested operation. It can happen because of network or other infrastructure delays. |Check the system state for consistency and retry if necessary. See [Timeout exceptions](#timeoutexception). |Retry might help in some cases; add retry logic to code. |
+| [InvalidOperationException](/dotnet/api/system.invalidoperationexception) |The requested user operation isn't allowed within the server or service. See the exception message for details. For example, [Complete()](/dotnet/api/microsoft.azure.servicebus.queueclient.completeasync) generates this exception if the message was received in [ReceiveAndDelete](/dotnet/api/microsoft.azure.servicebus.receivemode) mode. |Check the code and the documentation. Make sure the requested operation is valid. |Retry doesn't help. |
+| [OperationCanceledException](/dotnet/api/system.operationcanceledexception) |An attempt is made to invoke an operation on an object that has already been closed, aborted, or disposed. In rare cases, the ambient transaction is already disposed. |Check the code and make sure it doesn't invoke operations on a disposed object. |Retry doesn't help. |
+| [UnauthorizedAccessException](/dotnet/api/system.unauthorizedaccessexception) |The [TokenProvider](/dotnet/api/microsoft.servicebus.tokenprovider) object couldn't acquire a token, the token is invalid, or the token doesn't contain the claims required to do the operation. |Make sure the token provider is created with the correct values. Check the configuration of the Access Control Service. |Retry might help in some cases; add retry logic to code. |
+| [ArgumentException](/dotnet/api/system.argumentexception)<br /> [ArgumentNullException](/dotnet/api/system.argumentnullexception)<br />[ArgumentOutOfRangeException](/dotnet/api/system.argumentoutofrangeexception) |One or more arguments supplied to the method are invalid.<br /> The URI supplied to [NamespaceManager](/dotnet/api/microsoft.servicebus.namespacemanager) or [Create](/dotnet/api/microsoft.servicebus.messaging.messagingfactory) contains path segment(s).<br /> The URI scheme supplied to [NamespaceManager](/dotnet/api/microsoft.servicebus.namespacemanager) or [Create](/dotnet/api/microsoft.servicebus.messaging.messagingfactory) is invalid. <br />The property value is larger than 32 KB. |Check the calling code and make sure the arguments are correct. |Retry doesn't help. |
| [MessagingEntityNotFoundException](/dotnet/api/microsoft.azure.servicebus.messagingentitynotfoundexception) |Entity associated with the operation doesn't exist or it has been deleted. |Make sure the entity exists. |Retry doesn't help. | | [MessageNotFoundException](/dotnet/api/microsoft.servicebus.messaging.messagenotfoundexception) |Attempt to receive a message with a particular sequence number. This message isn't found. |Make sure the message hasn't been received already. Check the deadletter queue to see if the message has been deadlettered. |Retry doesn't help. | | [MessagingCommunicationException](/dotnet/api/microsoft.servicebus.messaging.messagingcommunicationexception) |Client isn't able to establish a connection to Service Bus. |Make sure the supplied host name is correct and the host is reachable. <p>If your code runs in an environment with a firewall/proxy, ensure that the traffic to the Service Bus domain/IP address and ports isn't blocked.</p>|Retry might help if there are intermittent connectivity issues. |
@@ -79,9 +79,9 @@ There are two common causes for this error: the dead-letter queue, and non-funct
2. **Receiver stopped**. A receiver has stopped receiving messages from a queue or subscription. The way to identify this is to look at the [QueueDescription.MessageCountDetails](/dotnet/api/microsoft.servicebus.messaging.messagecountdetails) property, which shows the full breakdown of the messages. If the [ActiveMessageCount](/dotnet/api/microsoft.servicebus.messaging.messagecountdetails.activemessagecount) property is high or growing, then the messages aren't being read as fast as they are being written. ## TimeoutException
-A [TimeoutException](/dotnet/api/system.timeoutexception?view=net-5.0) indicates that a user-initiated operation is taking longer than the operation timeout.
+A [TimeoutException](/dotnet/api/system.timeoutexception) indicates that a user-initiated operation is taking longer than the operation timeout.
-You should check the value of the [ServicePointManager.DefaultConnectionLimit](/dotnet/api/system.net.servicepointmanager.defaultconnectionlimit?view=net-5.0) property, as hitting this limit can also cause a [TimeoutException](/dotnet/api/system.timeoutexception?view=net-5.0).
+You should check the value of the [ServicePointManager.DefaultConnectionLimit](/dotnet/api/system.net.servicepointmanager.defaultconnectionlimit) property, as hitting this limit can also cause a [TimeoutException](/dotnet/api/system.timeoutexception).
Timeouts are expected to happen during or in-between maintenance operations such as Service Bus service updates (or) OS updates on resources that run the service. During OS updates, entities are moved around and nodes are updated or rebooted, which can cause timeouts. For service level agreement (SLA) details for the Azure Service Bus service, see [SLA for Service Bus](https://azure.microsoft.com/support/legal/sla/service-bus/).
service-bus-messaging https://docs.microsoft.com/en-us/azure/service-bus-messaging/service-bus-messaging-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-messaging-overview.md
@@ -2,7 +2,7 @@
Title: Azure Service Bus messaging overview | Microsoft Docs description: This article provides a high-level overview of Azure Service Bus, a fully managed enterprise integration message broker. Previously updated : 11/20/2020 Last updated : 01/28/2021 # What is Azure Service Bus?
@@ -148,12 +148,12 @@ features with AMQP 1.0 clients directly.
Service Bus fully integrates with many Microsoft and Azure services, for instance:
-* [Event Grid](https://azure.microsoft.com/services/event-grid/)
-* [Logic Apps](https://azure.microsoft.com/services/logic-apps/)
-* [Azure Functions](https://azure.microsoft.com/services/functions/)
-* [Power Platform](https://powerplatform.microsoft.com/)
-* [Dynamics 365](https://dynamics.microsoft.com)
-* [Azure Stream Analytics](https://azure.microsoft.com/services/stream-analytics/)
+* [Event Grid](service-bus-to-event-grid-integration-example.md)
+* [Logic Apps](../connectors/connectors-create-api-servicebus.md)
+* [Azure Functions](../azure-functions/functions-bindings-service-bus.md)
+* [Power Platform](../connectors/connectors-create-api-servicebus.md)
+* [Dynamics 365](/dynamics365/fin-ops-core/dev-itpro/business-events/how-to/how-to-servicebus)
+* [Azure Stream Analytics](../stream-analytics/stream-analytics-define-outputs.md)
## Next steps
storage https://docs.microsoft.com/en-us/azure/storage/common/manage-storage-analytics-logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/manage-storage-analytics-logs.md
@@ -0,0 +1,290 @@
+
+ Title: Enable and manage Azure Storage Analytics logs (classic) | Microsoft Docs
+description: Learn how to monitor a storage account in Azure by using Azure Storage Analytics.
+++ Last updated : 01/29/2021+++++
+# Enable and manage Azure Storage Analytics logs (classic)
+
+[Azure Storage Analytics](storage-analytics.md) provides logs for blobs, queues, and tables. You can use the [Azure portal](https://portal.azure.com) to configure which logs are recorded for your account. This article shows you how to enable and manage logs. To learn how to enable metrics, see [Enable and manage Azure Storage Analytics metrics (classic)](storage-monitor-storage-account.md). There are costs associated with examining and storing monitoring data in the Azure portal. For more information, see [Storage Analytics](storage-analytics.md).
+
+> [!NOTE]
+> We recommend that you use Azure Storage logs in Azure Monitor instead of Storage Analytics logs. Azure Storage logs in Azure Monitor is in public preview and is available for preview testing in all public cloud regions. This preview enables logs for blobs (which includes Azure Data Lake Storage Gen2), files, queues, and tables. To learn more, see any of the following articles:
+>
+> - [Monitoring Azure Blob Storage](../blobs/monitor-blob-storage.md)
+> - [Monitoring Azure Files](../files/storage-files-monitoring.md)
+> - [Monitoring Azure Queue Storage](../queues/monitor-queue-storage.md)
+> - [Monitoring Azure Table storage](../tables/monitor-table-storage.md)
+
+For an in-depth guide on using Storage Analytics and other tools to identify, diagnose, and troubleshoot Azure Storage-related issues, see [Monitor, diagnose, and troubleshoot Microsoft Azure Storage](storage-monitoring-diagnosing-troubleshooting.md).
+
+<a id="configure-logging"></a>
+
+## Enable logs
+
+You can instruct Azure Storage to save diagnostics logs for read, write, and delete requests for the blob, table, and queue services. The data retention policy you set also applies to these logs.
+
+> [!NOTE]
+> Azure Files currently supports Storage Analytics metrics, but does not support Storage Analytics logging.
+
+### [Portal](#tab/azure-portal)
+
+1. In the [Azure portal](https://portal.azure.com), select **Storage accounts**, then the name of the storage account to open the storage account blade.
+
+2. Select **Diagnostic settings (classic)** in the **Monitoring (classic)** section of the menu blade.
+
+ ![Diagnostics menu item under MONITORING in the Azure portal.](./media/manage-storage-analytics-logs/storage-enable-metrics-00.png)
+
+3. Ensure **Status** is set to **On**, and select the **services** for which you'd like to enable logging.
+
+ > [!div class="mx-imgBorder"]
+ > ![Configure logging in the Azure portal.](./media/manage-storage-analytics-logs/enable-diagnostics.png)
++
+4. Ensure that the **Delete data** check box is selected. Then, set the number of days that you would like log data to be retained by moving the slider control beneath the check box, or by directly modifying the value that appears in the text box next to the slider control. The default for new storage accounts is seven days. If you do not want to set a retention policy, enter zero. If there is no retention policy, it is up to you to delete the log data.
+
+ > [!WARNING]
+ > Logs are stored as data in your account. Log data can accumulate in your account over time, which can increase the cost of storage. If you need log data for only a small period of time, you can reduce your costs by modifying the data retention policy. Stale log data (data older than your retention policy) is deleted by the system. We recommend setting a retention policy based on how long you want to retain the log data for your account. See [Billing on storage metrics](storage-analytics-metrics.md#billing-on-storage-metrics) for more information.
+
+4. Click **Save**.
+
+ The diagnostics logs are saved in a blob container named *$logs* in your storage account. You can view the log data using a storage explorer like the [Microsoft Azure Storage Explorer](https://storageexplorer.com), or programmatically using the storage client library or PowerShell.
+
+ For information about accessing the $logs container, see [Storage analytics logging](storage-analytics-logging.md).
+
+### [PowerShell](#tab/azure-powershell)
+
+1. Open a Windows PowerShell command window.
+
+2. Sign in to your Azure subscription with the `Connect-AzAccount` command and follow the on-screen directions.
+
+ ```powershell
+ Connect-AzAccount
+ ```
+
+3. If your identity is associated with more than one subscription, then set your active subscription.
+
+ ```powershell
+ $context = Get-AzSubscription -SubscriptionId <subscription-id>
+ Set-AzContext $context
+ ```
+
+ Replace the `<subscription-id>` placeholder value with the ID of your subscription.
+
+5. Get the storage account context that defines the storage account you want to use.
+
+ ```powershell
+ $storageAccount = Get-AzStorageAccount -ResourceGroupName "<resource-group-name>" -AccountName "<storage-account-name>"
+ $ctx = $storageAccount.Context
+ ```
+
+ * Replace the `<resource-group-name>` placeholder value with the name of your resource group.
+
+ * Replace the `<storage-account-name>` placeholder value with the name of your storage account.
+
+6. Use the **Set-AzStorageServiceLoggingProperty** cmdlet to change the current log settings. The cmdlets that control Storage Logging use a **LoggingOperations** parameter that is a string containing a comma-separated list of request types to log. The three possible request types are **read**, **write**, and **delete**. To switch off logging, use the value **none** for the **LoggingOperations** parameter.
+
+ The following command switches on logging for read, write, and delete requests in the Queue service in your default storage account with retention set to five days:
+
+ ```powershell
+ Set-AzStorageServiceLoggingProperty -ServiceType Queue -LoggingOperations read,write,delete -RetentionDays 5 -Context $ctx
+ ```
+
+ > [!WARNING]
+ > Logs are stored as data in your account. Log data can accumulate in your account over time, which can increase the cost of storage. If you need log data for only a small period of time, you can reduce your costs by modifying the data retention policy. Stale log data (data older than your retention policy) is deleted by the system. We recommend setting a retention policy based on how long you want to retain the log data for your account. See [Billing on storage metrics](storage-analytics-metrics.md#billing-on-storage-metrics) for more information.
+
+ The following command switches off logging for the table service in your default storage account:
+
+ ```powershell
+ Set-AzStorageServiceLoggingProperty -ServiceType Table -LoggingOperations none -Context $ctx
+ ```
+
+ For information about how to configure the Azure PowerShell cmdlets to work with your Azure subscription and how to select the default storage account to use, see: [How to install and configure Azure PowerShell](/powershell/azure/).
+
+### [.NET v12](#tab/dotnet)
+
+:::code language="csharp" source="~/azure-storage-snippets/queues/howto/dotnet/dotnet-v12/Monitoring.cs" id="snippet_EnableDiagnosticLogs":::
+
+### [.NET v11](#tab/dotnet11)
+
+```csharp
+var storageAccount = CloudStorageAccount.Parse(connStr);
+var queueClient = storageAccount.CreateCloudQueueClient();
+var serviceProperties = queueClient.GetServiceProperties();
+
+serviceProperties.Logging.LoggingOperations = LoggingOperations.All;
+serviceProperties.Logging.RetentionDays = 2;
+
+queueClient.SetServiceProperties(serviceProperties);
+```
+++
+<a id="modify-retention-policy"></a>
+
+## Modify log data retention period
+
+Log data can accumulate in your account over time, which can increase the cost of storage. If you need log data for only a small period of time, you can reduce your costs by modifying the log data retention period. For example, if you need logs for only three days, set your log data retention period to `3`. That way, logs are automatically deleted from your account after 3 days. This section shows you how to view your current log data retention period and how to update that period if needed.
+
+> [!NOTE]
+> These steps apply only for accounts that do not have the **Hierarchical namespace** setting enabled on them. If you've enabled that setting on your account, then the setting for retention days is not yet supported. Instead, you'll have to delete logs manually by using any supported tool such as Azure Storage Explorer, REST or an SDK. To find those logs in your storage account, see [How logs are stored](storage-analytics-logging.md#how-logs-are-stored).
+
+### [Portal](#tab/azure-portal)
+
+1. In the [Azure portal](https://portal.azure.com), select **Storage accounts**, then the name of the storage account to open the storage account blade.
+2. Select **Diagnostic settings (classic)** in the **Monitoring (classic)** section of the menu blade.
+
+ ![Diagnostics menu item under MONITORING in the Azure portal](./media/manage-storage-analytics-logs/storage-enable-metrics-00.png)
+
+3. Ensure that the **Delete data** check box is selected. Then, set the number of days that you would like log data to be retained by moving the slider control beneath the check box, or by directly modifying the value that appears in the text box next to the slider control.
+
+ > [!div class="mx-imgBorder"]
+ > ![Modify the retention period in the Azure portal](./media/manage-storage-analytics-logs/modify-retention-period.png)
+
+ The default number of days for new storage accounts is seven days. If you do not want to set a retention policy, enter zero. If there is no retention policy, it is up to you to delete the monitoring data.
+
+4. Click **Save**.
+
+ The diagnostics logs are saved in a blob container named *$logs* in your storage account. You can view the log data using a storage explorer like the [Microsoft Azure Storage Explorer](https://storageexplorer.com), or programmatically using the storage client library or PowerShell.
+
+ For information about accessing the $logs container, see [Storage analytics logging](storage-analytics-logging.md).
+
+### [PowerShell](#tab/azure-powershell)
+
+1. Open a Windows PowerShell command window.
+
+2. Sign in to your Azure subscription with the `Connect-AzAccount` command and follow the on-screen directions.
+
+ ```powershell
+ Connect-AzAccount
+ ```
+
+3. If your identity is associated with more than one subscription, then set your active subscription.
+
+ ```powershell
+ $context = Get-AzSubscription -SubscriptionId <subscription-id>
+ Set-AzContext $context
+ ```
+
+ Replace the `<subscription-id>` placeholder value with the ID of your subscription.
+
+5. Get the storage account context that defines the storage account.
+
+ ```powershell
+ $storageAccount = Get-AzStorageAccount -ResourceGroupName "<resource-group-name>" -AccountName "<storage-account-name>"
+ $ctx = $storageAccount.Context
+ ```
+
+ * Replace the `<resource-group-name>` placeholder value with the name of your resource group.
+
+ * Replace the `<storage-account-name>` placeholder value with the name of your storage account.
+
+6. Use the [Get-AzStorageServiceLoggingProperty](https://docs.microsoft.com/powershell/module/az.storage/get-azstorageserviceloggingproperty) cmdlet to view the current log retention policy. The following example prints to the console the retention period for the blob and queue services.
+
+ ```powershell
+ Get-AzStorageServiceLoggingProperty -ServiceType Blob, Queue -Context $ctx
+ ```
+
+ In the console output, the retention period appears beneath the `RetentionDays` column heading.
+
+ > [!div class="mx-imgBorder"]
+ > ![Retention policy in PowerShell output](./media/manage-storage-analytics-logs/retention-period-powershell.png)
+
+7. Use the [Set-AzStorageServiceLoggingProperty](https://docs.microsoft.com/powershell/module/az.storage/set-azstorageserviceloggingproperty) cmdlet to change the retention period. The following example changes the retention period to 4 days.
+
+ ```powershell
+ Set-AzStorageServiceLoggingProperty -ServiceType Blob, Queue -RetentionDays 4 -Context $ctx
+ ```
+
+ For information about how to configure the Azure PowerShell cmdlets to work with your Azure subscription and how to select the default storage account to use, see: [How to install and configure Azure PowerShell](/powershell/azure/).
+
+### [.NET v12](#tab/dotnet)
+
+The following example prints to the console the retention period for blob and queue storage services.
+
+:::code language="csharp" source="~/azure-storage-snippets/queues/howto/dotnet/dotnet-v12/Monitoring.cs" id="snippet_ViewRetentionPeriod":::
+
+The following example changes the retention period to 4 days.
+
+:::code language="csharp" source="~/azure-storage-snippets/queues/howto/dotnet/dotnet-v12/Monitoring.cs" id="snippet_ModifyRetentionPeriod":::
+
+### [.NET v11](#tab/dotnet11)
+
+The following example prints to the console the retention period for blob and queue storage services.
+
+```csharp
+var storageAccount = CloudStorageAccount.Parse(connectionString);
+
+var blobClient = storageAccount.CreateCloudBlobClient();
+var queueClient = storageAccount.CreateCloudQueueClient();
+
+var blobserviceProperties = blobClient.GetServiceProperties();
+var queueserviceProperties = queueClient.GetServiceProperties();
+
+Console.WriteLine("Retention period for logs from the blob service is: " +
+ blobserviceProperties.Logging.RetentionDays.ToString());
+
+Console.WriteLine("Retention period for logs from the queue service is: " +
+ queueserviceProperties.Logging.RetentionDays.ToString());
+```
+
+The following example changes the retention period for logs for the blob and queue storage services to 4 days.
+
+```csharp
+
+blobserviceProperties.Logging.RetentionDays = 4;
+queueserviceProperties.Logging.RetentionDays = 4;
+
+blobClient.SetServiceProperties(blobserviceProperties);
+queueClient.SetServiceProperties(queueserviceProperties);
+```
+++
+### Verify that log data is being deleted
+
+You can verify that logs are being deleted by viewing the contents of the `$logs` container of your storage account. The following image shows the contents of a folder in the `$logs` container. The folder corresponds to January 2021, and each subfolder contains the logs for one day. If today were January 29, 2021, and your retention policy were set to one day, this folder would contain logs for only one day.
+
+> [!div class="mx-imgBorder"]
+> ![List of log folders in the Azure Portal](./media/manage-storage-analytics-logs/verify-and-delete-logs.png)
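+
+As an additional check, you can list the blobs that remain in the `$logs` container from the command line. This is a minimal sketch with the Azure CLI; the account name and date prefix are placeholders, and it assumes you can retrieve the account key:
+
+```azurecli
+# List blob service logs for January 2021 (account name and prefix are placeholders).
+az storage blob list --account-name mystorageaccount --container-name '$logs' --prefix 'blob/2021/01' --output table --auth-mode key
+```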
+
+<a id="download-storage-logging-log-data"></a>
+
+## View log data
+
+ To view and analyze your log data, download the blobs that contain the log data you are interested in to a local machine. Many storage-browsing tools enable you to download blobs from your storage account; you can also use [AzCopy](storage-use-azcopy-v10.md), the Azure Storage command-line copy tool, to download your log data.
+
+>[!NOTE]
+> The `$logs` container isn't integrated with Event Grid, so you won't receive notifications when log files are written.
+
+ To make sure you download the log data you are interested in, and to avoid downloading the same log data more than once:
+
+- Use the date and time naming convention for blobs that contain log data to track which blobs you have already downloaded for analysis.
+
+- Use the metadata on the blobs that contain log data to identify the specific period each blob covers, so you can identify the exact blobs you need to download.
+
+To get started with AzCopy, see [Get started with AzCopy](storage-use-azcopy-v10.md).
+
+The following example shows how you can download the log data for the queue service for the hours starting at 09 AM, 10 AM, and 11 AM on May 20, 2014.
+
+```
+azcopy copy 'https://mystorageaccount.blob.core.windows.net/$logs/queue' 'C:\Logs\Storage' --include-path '2014/05/20/09;2014/05/20/10;2014/05/20/11' --recursive
+```
+
+To learn more about how to download specific files, see [Download blobs from Azure Blob storage by using AzCopy v10](./storage-use-azcopy-blobs-download.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json).
+
+When you have downloaded your log data, you can view the log entries in the files. These log files use a delimited text format that many log reading tools are able to parse (for more information, see the guide [Monitoring, Diagnosing, and Troubleshooting Microsoft Azure Storage](storage-monitoring-diagnosing-troubleshooting.md)). Different tools have different facilities for formatting, filtering, sorting, and searching the contents of your log files. For more information about the Storage Logging log file format and content, see [Storage Analytics Log Format](/rest/api/storageservices/storage-analytics-log-format) and [Storage Analytics Logged Operations and Status Messages](/rest/api/storageservices/storage-analytics-logged-operations-and-status-messages).
+
+## Next steps
+
+* To learn more about Storage Analytics, see [Storage Analytics](storage-analytics.md).
+* [Configure Storage Analytics metrics](storage-monitor-storage-account.md).
+* For more information about using a .NET language to configure Storage Logging, see [Storage Client Library Reference](/previous-versions/azure/dn261237(v=azure.100)).
+* For general information about configuring Storage Logging using the REST API, see [Enabling and Configuring Storage Analytics](/rest/api/storageservices/Enabling-and-Configuring-Storage-Analytics).
+* Learn more about the format of Storage Analytics logs. See [Storage Analytics Log Format](/rest/api/storageservices/storage-analytics-log-format).
\ No newline at end of file
storage https://docs.microsoft.com/en-us/azure/storage/common/manage-storage-analytics-metrics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/manage-storage-analytics-metrics.md
@@ -0,0 +1,278 @@
+
+ Title: Enable and manage Azure Storage Analytics metrics (classic) | Microsoft Docs
+description: Learn how to enable, edit, and view Azure Storage Analytics metrics.
+++ Last updated : 01/29/2021+++++
+# Enable and manage Azure Storage Analytics metrics (classic)
+
+[Azure Storage Analytics](storage-analytics.md) provides metrics for all storage services (blobs, queues, and tables). You can use the [Azure portal](https://portal.azure.com) to configure which metrics are recorded for your account, and configure charts that provide visual representations of your metrics data. This article shows you how to enable and manage metrics. To learn how to enable logs, see [Enable and manage Azure Storage Analytics logs (classic)](manage-storage-analytics-logs.md).
+
+We recommend that you review [Azure Monitor for Storage](../../azure-monitor/insights/storage-insights-overview.md) (preview). It's a feature of Azure Monitor that offers comprehensive monitoring of your Azure Storage accounts by delivering a unified view of the performance, capacity, and availability of your Azure Storage services. It doesn't require you to enable or configure anything, and you can immediately view these metrics from the pre-defined interactive charts and other visualizations included.
+
+> [!NOTE]
+> There are costs associated with examining monitoring data in the Azure portal. For more information, see [Storage Analytics](storage-analytics.md).
+>
+> Premium performance block blob storage accounts don't support Storage Analytics metrics. If you want to view metrics with premium performance block blob storage accounts, consider using [Azure Storage Metrics in Azure Monitor](../blobs/monitor-blob-storage.md).
+>
+> For an in-depth guide on using Storage Analytics and other tools to identify, diagnose, and troubleshoot Azure Storage-related issues, see [Monitor, diagnose, and troubleshoot Microsoft Azure Storage](storage-monitoring-diagnosing-troubleshooting.md).
+>
+
+<a id="Enable-metrics"></a>
+
+## Enable metrics
+
+### [Portal](#tab/azure-portal)
+
+1. In the [Azure portal](https://portal.azure.com), select **Storage accounts**, then the storage account name to open the account dashboard.
+
+2. Select **Diagnostic settings (classic)** in the **Monitoring (classic)** section of the menu blade.
+
+ ![Screenshot that highlights the Diagnostic settings (classic) option under the Monitoring (Classic) section.](./media/manage-storage-analytics-metrics/storage-enable-metrics-00.png)
+
+3. Select the **type** of metrics data for each **service** you wish to monitor, and the **retention policy** for the data. You can also disable monitoring by setting **Status** to **Off**.
+
+ > [!div class="mx-imgBorder"]
+ > ![Configure logging in the Azure portal.](./media/manage-storage-analytics-logs/enable-diagnostics.png)
+
+ To set the data retention policy, move the **Retention (days)** slider or enter the number of days of data to retain, from 1 to 365. The default for new storage accounts is seven days. If you do not want to set a retention policy, enter zero. If there is no retention policy, it is up to you to delete the monitoring data.
+
+ > [!WARNING]
+ > Metrics are stored as data in your account. Metric data can accumulate in your account over time, which can increase the cost of storage. If you need metric data for only a small period of time, you can reduce your costs by modifying the data retention policy. Stale metrics data (data older than your retention policy) is deleted by the system. We recommend setting a retention policy based on how long you want to retain the metrics data for your account. See [Billing on storage metrics](storage-analytics-metrics.md#billing-on-storage-metrics) for more information.
+ >
+
+4. When you finish the monitoring configuration, select **Save**.
+
+A default set of metrics is displayed in charts on the **Overview** blade, as well as the **Metrics (classic)** blade.
+Once you've enabled metrics for a service, it may take up to an hour for data to appear in its charts. You can select **Edit** on any metric chart to configure which metrics are displayed in the chart.
+
+You can disable metrics collection and logging by setting **Status** to **Off**.
+
+> [!NOTE]
+> Azure Storage uses [table storage](storage-introduction.md#table-storage) to store the metrics for your storage account, in tables within that account. For more information, see [How metrics are stored](storage-analytics-metrics.md#how-metrics-are-stored).
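+
+For example, assuming hourly metrics are enabled for the blob service, you could peek at one of those well-known metrics tables with the Azure CLI. This is a minimal sketch; the account name is a placeholder, and `$MetricsHourPrimaryTransactionsBlob` is one of the metrics table names:
+
+```azurecli
+# Query the first few rows of the hourly blob transaction metrics table (account name is a placeholder).
+az storage entity query --account-name mystorageaccount --table-name '$MetricsHourPrimaryTransactionsBlob' --num-results 10
+```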
+
+### [PowerShell](#tab/azure-powershell)
+
+1. Open a Windows PowerShell command window.
+
+2. Sign in to your Azure subscription with the `Connect-AzAccount` command and follow the on-screen directions.
+
+ ```powershell
+ Connect-AzAccount
+ ```
+
+3. If your identity is associated with more than one subscription, then set your active subscription.
+
+ ```powershell
+ $context = Get-AzSubscription -SubscriptionId <subscription-id>
+ Set-AzContext $context
+ ```
+
+ Replace the `<subscription-id>` placeholder value with the ID of your subscription.
+
+5. Get the storage account context that defines the storage account you want to use.
+
+ ```powershell
+ $storageAccount = Get-AzStorageAccount -ResourceGroupName "<resource-group-name>" -AccountName "<storage-account-name>"
+ $ctx = $storageAccount.Context
+ ```
+
+ * Replace the `<resource-group-name>` placeholder value with the name of your resource group.
+
+ * Replace the `<storage-account-name>` placeholder value with the name of your storage account.
+
+6. You can use PowerShell on your local machine to configure storage metrics in your storage account. Use the Azure PowerShell cmdlet **Set-AzStorageServiceMetricsProperty** to change the current settings.
+
+ The following command switches on minute metrics for the blob service in your storage account with the retention period set to five days.
+
+ ```powershell
+ Set-AzStorageServiceMetricsProperty -MetricsType Minute -ServiceType Blob -MetricsLevel ServiceAndApi -RetentionDays 5 -Context $ctx
+ ```
+
+ This cmdlet uses the following parameters:
+
+ - **ServiceType**: Possible values are **Blob**, **Queue**, **Table**, and **File**.
+ - **MetricsType**: Possible values are **Hour** and **Minute**.
+ - **MetricsLevel**: Possible values are:
+ - **None**: Turns off monitoring.
+ - **Service**: Collects metrics such as ingress and egress, availability, latency, and success percentages, which are aggregated for the blob, queue, table, and file services.
+ - **ServiceAndApi**: In addition to the service metrics, collects the same set of metrics for each storage operation in the Azure Storage service API.
+
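+ For example, a minimal sketch that turns metrics collection back off for the blob service (reusing the `$ctx` context from the earlier step):
+
+ ```powershell
+ # Disable both hourly and minute metrics for the Blob service.
+ Set-AzStorageServiceMetricsProperty -MetricsType Hour -ServiceType Blob -MetricsLevel None -Context $ctx
+ Set-AzStorageServiceMetricsProperty -MetricsType Minute -ServiceType Blob -MetricsLevel None -Context $ctx
+ ```
+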
+ The following command retrieves the current hourly metrics level and retention period (in days) for the blob service in your storage account:
+
+ ```powershell
+ Get-AzStorageServiceMetricsProperty -MetricsType Hour -ServiceType Blob -Context $ctx
+ ```
+
+ For information about how to configure the Azure PowerShell cmdlets to work with your Azure subscription and how to select the default storage account to use, see [Install and configure Azure PowerShell](/powershell/azure/).
+
+### [.NET v12](#tab/dotnet)
+
+:::code language="csharp" source="~/azure-storage-snippets/queues/howto/dotnet/dotnet-v12/Monitoring.cs" id="snippet_EnableDiagnosticLogs":::
+
+For more information about using a .NET language to configure storage metrics, see [Azure Storage client libraries for .NET](/dotnet/api/overview/azure/storage).
+
+For general information about configuring storage metrics by using the REST API, see [Enabling and configuring Storage Analytics](/rest/api/storageservices/Enabling-and-Configuring-Storage-Analytics).
+
+### [.NET v11](#tab/dotnet11)
+
+```csharp
+// Parse the connection string and create a client for the Queue service.
+var storageAccount = CloudStorageAccount.Parse(connStr);
+var queueClient = storageAccount.CreateCloudQueueClient();
+var serviceProperties = queueClient.GetServiceProperties();
+
+// Enable hourly metrics at the service level and retain the data for 10 days.
+serviceProperties.HourMetrics.MetricsLevel = MetricsLevel.Service;
+serviceProperties.HourMetrics.RetentionDays = 10;
+
+queueClient.SetServiceProperties(serviceProperties);
+```
+
+For more information about using a .NET language to configure storage metrics, see [Azure Storage client libraries for .NET](/dotnet/api/overview/azure/storage).
+
+For general information about configuring storage metrics by using the REST API, see [Enabling and configuring Storage Analytics](/rest/api/storageservices/Enabling-and-Configuring-Storage-Analytics).
+++
+<a id="view-metrics"></a>
+
+## View metrics in a chart
+
+After you configure Storage Analytics metrics to monitor your storage account, Storage Analytics records the metrics in a set of well-known tables in your storage account. You can configure charts to view hourly metrics in the [Azure portal](https://portal.azure.com).
+
+Use the following procedure to choose which storage metrics to view in a metrics chart.
+
+1. Start by displaying a storage metric chart in the Azure portal. You can find charts on the **storage account blade** and in the **Metrics (classic)** blade.
+
+ This example uses the following chart, which appears on the **storage account blade**:
+
+ ![Chart selection in Azure portal](./media/manage-storage-analytics-metrics/stg-customize-chart-00.png)
+
+2. Click anywhere within the chart to edit the chart.
+
+3. Next, select the **Time Range** of the metrics to display in the chart, and the **service** (blob, queue, table, file) whose metrics you wish to display. Here, the past week's metrics are selected to display for the blob service:
+
+ ![Time range and service selection in the Edit Chart blade](./media/manage-storage-analytics-metrics/storage-edit-metric-time-range.png)
+
+4. Select the individual **metrics** you'd like displayed in the chart, then click **OK**.
+
+ ![Individual metric selection in Edit Chart blade](./media/manage-storage-analytics-metrics/storage-edit-metric-selections.png)
+
+Your chart settings do not affect the collection, aggregation, or storage of monitoring data in the storage account.
+
+#### Metrics availability in charts
+
+The list of available metrics changes based on which service you've chosen in the drop-down, and the unit type of the chart you're editing. For example, you can select percentage metrics like *PercentNetworkError* and *PercentThrottlingError* only if you're editing a chart that displays units in percentage:
+
+![Request error percentage chart in the Azure portal](./media/manage-storage-analytics-metrics/stg-customize-chart-04.png)
+
+#### Metrics resolution
+
+The metrics you selected in **Diagnostics** determine the resolution of the metrics that are available for your account:
+
+* **Aggregate** monitoring provides metrics such as ingress/egress, availability, latency, and success percentages. These metrics are aggregated from the blob, table, file, and queue services.
+* **Per API** provides finer resolution, with metrics available for individual storage operations, in addition to the service-level aggregates.
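+
+These two options correspond to the **MetricsLevel** values in the PowerShell steps earlier (**Service** and **ServiceAndApi**). For example, a minimal sketch that selects per-API resolution for the queue service (assuming the `$ctx` context from those steps):
+
+```powershell
+# Collect per-operation (per-API) hourly metrics for the Queue service and retain them for 7 days.
+Set-AzStorageServiceMetricsProperty -MetricsType Hour -ServiceType Queue -MetricsLevel ServiceAndApi -RetentionDays 7 -Context $ctx
+```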
+
+## Download metrics to archive or analyze locally
+
+If you want to download the metrics for long-term storage or to analyze them locally, you must use a tool or write some code to read the tables. The tables don't appear if you list all the tables in your storage account, but you can access them directly by name. Many storage-browsing tools are aware of these tables and enable you to view them directly. For a list of available tools, see [Azure Storage client tools](./storage-explorers.md).
+
+|Metrics|Table names|Notes|
+|-|-|-|
+|Hourly metrics|$MetricsHourPrimaryTransactionsBlob<br /><br /> $MetricsHourPrimaryTransactionsTable<br /><br /> $MetricsHourPrimaryTransactionsQueue<br /><br /> $MetricsHourPrimaryTransactionsFile|In versions prior to August 15, 2013, these tables were known as:<br /><br /> $MetricsTransactionsBlob<br /><br /> $MetricsTransactionsTable<br /><br /> $MetricsTransactionsQueue<br /><br /> Metrics for the file service are available beginning with version April 5, 2015.|
+|Minute metrics|$MetricsMinutePrimaryTransactionsBlob<br /><br /> $MetricsMinutePrimaryTransactionsTable<br /><br /> $MetricsMinutePrimaryTransactionsQueue<br /><br /> $MetricsMinutePrimaryTransactionsFile|Can only be enabled by using PowerShell or programmatically.<br /><br /> Metrics for the file service are available beginning with version April 5, 2015.|
+|Capacity|$MetricsCapacityBlob|Blob service only.|
+
+For full details of the schemas for these tables, see [Storage Analytics metrics table schema](/rest/api/storageservices/storage-analytics-metrics-table-schema). The following sample rows show only a subset of the columns available, but they illustrate some important features of the way storage metrics saves these metrics:
+
+|PartitionKey|RowKey|Timestamp|TotalRequests|TotalBillableRequests|TotalIngress|TotalEgress|Availability|AverageE2ELatency|AverageServerLatency|PercentSuccess|
+|-|-|-|-|-|-|-|-|-|-|-|
+|20140522T1100|user;All|2014-05-22T11:01:16.7650250Z|7|7|4003|46801|100|104.4286|6.857143|100|
+|20140522T1100|user;QueryEntities|2014-05-22T11:01:16.7640250Z|5|5|2694|45951|100|143.8|7.8|100|
+|20140522T1100|user;QueryEntity|2014-05-22T11:01:16.7650250Z|1|1|538|633|100|3|3|100|
+|20140522T1100|user;UpdateEntity|2014-05-22T11:01:16.7650250Z|1|1|771|217|100|9|6|100|
+
+In this example of minute metrics data, the partition key uses the time at minute resolution. The row key identifies the type of information that's stored in the row. The information is composed of the access type and the request type:
+
+- The access type is either **user** or **system**, where **user** refers to all user requests to the storage service and **system** refers to requests made by Storage Analytics.
+- The request type is either **all**, in which case it's a summary line, or it identifies the specific API such as **QueryEntity** or **UpdateEntity**.
+
+This sample data shows all the records for a single minute (starting at 11:00 AM), so the number of **QueryEntities** requests plus the number of **QueryEntity** requests plus the number of **UpdateEntity** requests adds up to seven. This total is shown in the **user;All** row. Similarly, you can derive the average end-to-end latency of 104.4286 on the **user;All** row by calculating ((143.8 * 5) + 3 + 9)/7.
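+
+As a quick check of that arithmetic, here's a throwaway PowerShell sketch that recomputes the aggregates from the per-API sample rows above:
+
+```powershell
+# Per-API rows for the minute: QueryEntities (5 requests), QueryEntity (1), UpdateEntity (1).
+$totalRequests = 5 + 1 + 1                      # 7, matching TotalRequests on the user;All row
+$avgE2ELatency = ((143.8 * 5) + 3 + 9) / 7      # 104.4286, matching AverageE2ELatency on the user;All row
+```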
+
+## View metrics data programmatically
+
+The following listing shows sample C# code that accesses the minute metrics for a range of minutes and displays the results in a console window. The code sample uses the Azure Storage client library version 4.x or later, which includes the **CloudAnalyticsClient** class that simplifies accessing the metrics tables in storage.
+
+> [!NOTE]
+> The **CloudAnalyticsClient** class is not included in the Azure Blob storage client library v12 for .NET. On **August 31, 2023**, Storage Analytics metrics, also referred to as *classic metrics*, will be retired. For more information, see the [official announcement](https://azure.microsoft.com/updates/azure-storage-classic-metrics-will-be-retired-on-31-august-2023/). If you use classic metrics, we recommend that you transition to metrics in Azure Monitor prior to that date.
+
+```csharp
+private static void PrintMinuteMetrics(CloudAnalyticsClient analyticsClient, DateTimeOffset startDateTime, DateTimeOffset endDateTime)
+{
+ // Convert the dates to the format used in the PartitionKey.
+ var start = startDateTime.ToUniversalTime().ToString("yyyyMMdd'T'HHmm");
+ var end = endDateTime.ToUniversalTime().ToString("yyyyMMdd'T'HHmm");
+
+ var services = Enum.GetValues(typeof(StorageService));
+ foreach (StorageService service in services)
+ {
+ Console.WriteLine("Minute Metrics for Service {0} from {1} to {2} UTC", service, start, end);
+ var metricsQuery = analyticsClient.CreateMinuteMetricsQuery(service, StorageLocation.Primary);
+ var t = analyticsClient.GetMinuteMetricsTable(service);
+ var opContext = new OperationContext();
+ var query =
+ from entity in metricsQuery
+ // Note, you can't filter using the entity properties Time, AccessType, or TransactionType
+ // because they are calculated fields in the MetricsEntity class.
+ // The PartitionKey identifies the date and time of the metrics.
+ where entity.PartitionKey.CompareTo(start) >= 0 && entity.PartitionKey.CompareTo(end) <= 0
+ select entity;
+
+ // Filter on "user" transactions after fetching the metrics from Azure Table storage.
+ // (StartsWith is not supported using LINQ with Azure Table storage.)
+ var results = query.ToList().Where(m => m.RowKey.StartsWith("user"));
+ var resultString = results.Aggregate(new StringBuilder(), (builder, metrics) => builder.AppendLine(MetricsString(metrics, opContext))).ToString();
+ Console.WriteLine(resultString);
+ }
+}
+
+private static string MetricsString(MetricsEntity entity, OperationContext opContext)
+{
+ var entityProperties = entity.WriteEntity(opContext);
+ var entityString =
+ string.Format("Time: {0}, ", entity.Time) +
+ string.Format("AccessType: {0}, ", entity.AccessType) +
+ string.Format("TransactionType: {0}, ", entity.TransactionType) +
+ string.Join(",", entityProperties.Select(e => new KeyValuePair<string, string>(e.Key.ToString(), e.Value.PropertyAsObject.ToString())));
+ return entityString;
+}
+```
+
+<a id="add-metrics-to-dashboard"></a>
+
+## Add metrics charts to the portal dashboard
+
+You can add Azure Storage metrics charts for any of your storage accounts to your portal dashboard.
+
+1. Select **Edit dashboard** while viewing your dashboard in the [Azure portal](https://portal.azure.com).
+1. In the **Tile Gallery**, select **Find tiles by** > **Type**.
+1. Select **Type** > **Storage accounts**.
+1. In **Resources**, select the storage account whose metrics you wish to add to the dashboard.
+1. Select **Categories** > **Monitoring**.
+1. Drag-and-drop the chart tile onto your dashboard for the metric you'd like displayed. Repeat for all metrics you'd like displayed on the dashboard. In the following image, the "Blobs - Total requests" chart is highlighted as an example, but all the charts are available for placement on your dashboard.
+
+ ![Tile gallery in Azure portal](./media/manage-storage-analytics-metrics/storage-customize-dashboard.png)
+1. Select **Done customizing** near the top of the dashboard when you're done adding charts.
+
+Once you've added charts to your dashboard, you can further customize them as described in [View metrics in a chart](#view-metrics).
+
+## Next steps
+
+* To learn more about Storage Analytics, see [Storage Analytics](storage-analytics.md).
+* [Configure Storage Analytics logs](manage-storage-analytics-logs.md).
+* Learn more about the metrics schema in [Storage Analytics metrics table schema](/rest/api/storageservices/storage-analytics-metrics-table-schema).
\ No newline at end of file
storage https://docs.microsoft.com/en-us/azure/storage/common/storage-analytics-logging https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/storage-analytics-logging.md
@@ -5,7 +5,7 @@
Previously updated : 07/23/2020 Last updated : 01/29/2021
@@ -15,7 +15,17 @@
Storage Analytics logs detailed information about successful and failed requests to a storage service. This information can be used to monitor individual requests and to diagnose issues with a storage service. Requests are logged on a best-effort basis.
- Storage Analytics logging is not enabled by default for your storage account. You can enable it in the [Azure portal](https://portal.azure.com/); for details, see [Monitor a storage account in the Azure portal](./storage-monitor-storage-account.md). You can also enable Storage Analytics programmatically via the REST API or the client library. Use the [Get Blob Service Properties](/rest/api/storageservices/Blob-Service-REST-API), [Get Queue Service Properties](/rest/api/storageservices/Get-Queue-Service-Properties), and [Get Table Service Properties](/rest/api/storageservices/Get-Table-Service-Properties) operations to enable Storage Analytics for each service.
+> [!NOTE]
+> We recommend that you use Azure Storage logs in Azure Monitor instead of Storage Analytics logs. Azure Storage logs in Azure Monitor is in public preview and is available for preview testing in all public cloud regions. This preview enables logs for blobs (which includes Azure Data Lake Storage Gen2), files, queues, and tables. To learn more, see any of the following articles:
+>
+> - [Monitoring Azure Blob Storage](../blobs/monitor-blob-storage.md)
+> - [Monitoring Azure Files](../files/storage-files-monitoring.md)
+> - [Monitoring Azure Queue Storage](../queues/monitor-queue-storage.md)
+> - [Monitoring Azure Table storage](../tables/monitor-table-storage.md)
+
+ Storage Analytics logging is not enabled by default for your storage account. You can enable it in the [Azure portal](https://portal.azure.com/), by using PowerShell, or by using the Azure CLI. For step-by-step guidance, see [Enable and manage Azure Storage Analytics logs (classic)](manage-storage-analytics-logs.md).
+
+You can also enable Storage Analytics logs programmatically via the REST API or the client library. Use the [Get Blob Service Properties](/rest/api/storageservices/Blob-Service-REST-API), [Get Queue Service Properties](/rest/api/storageservices/Get-Queue-Service-Properties), and [Get Table Service Properties](/rest/api/storageservices/Get-Table-Service-Properties) operations to enable Storage Analytics for each service. To see an example that enables Storage Analytics logs by using .NET, see [Enable logs](manage-storage-analytics-logs.md).
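+
+For example, a minimal PowerShell sketch (assuming a storage context in `$ctx`, for example obtained from `Get-AzStorageAccount`):
+
+```powershell
+# Log read, write, and delete requests for the Blob service and keep the logs for 10 days.
+Set-AzStorageServiceLoggingProperty -ServiceType Blob -LoggingOperations read,write,delete -RetentionDays 10 -Context $ctx
+```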
Log entries are created only if there are requests made against the service endpoint. For example, if a storage account has activity in its Blob endpoint but not in its Table or Queue endpoints, only logs pertaining to the Blob service will be created.
@@ -120,91 +130,10 @@ For information about listing blobs programmatically, see [Enumerating Blob Reso
- `EndTime=2011-07-31T18:22:09Z` - `LogVersion=1.0`
-## Enable Storage logging
-
-You can enable Storage logging with Azure portal, PowerShell, and Storage SDKs.
-
-### Enable Storage logging using the Azure portal
-
-In the Azure portal, use the **Diagnostics settings (classic)** blade to control Storage Logging, accessible from the **Monitoring (classic)** section of a storage account's **Menu blade**.
-
-You can specify the storage services that you want to log, and the retention period (in days) for the logged data.
-
-### Enable Storage logging using PowerShell
-
- You can use PowerShell on your local machine to configure Storage Logging in your storage account by using the Azure PowerShell cmdlet **Get-AzStorageServiceLoggingProperty** to retrieve the current settings, and the cmdlet **Set-AzStorageServiceLoggingProperty** to change the current settings.
-
- The cmdlets that control Storage Logging use a **LoggingOperations** parameter that is a string containing a comma-separated list of request types to log. The three possible request types are **read**, **write**, and **delete**. To switch off logging, use the value **none** for the **LoggingOperations** parameter.
-
- The following command switches on logging for read, write, and delete requests in the Queue service in your default storage account with retention set to five days:
-
-```powershell
-Set-AzStorageServiceLoggingProperty -ServiceType Queue -LoggingOperations read,write,delete -RetentionDays 5
-```
-
- The following command switches off logging for the table service in your default storage account:
-
-```powershell
-Set-AzStorageServiceLoggingProperty -ServiceType Table -LoggingOperations none
-```
-
- For information about how to configure the Azure PowerShell cmdlets to work with your Azure subscription and how to select the default storage account to use, see: [How to install and configure Azure PowerShell](/powershell/azure/).
-
-### Enable Storage logging programmatically
-
- In addition to using the Azure portal or the Azure PowerShell cmdlets to control Storage Logging, you can also use one of the Azure Storage APIs. For example, if you are using a .NET language you can use the Storage Client Library.
-
-# [\.NET v12 SDK](#tab/dotnet)
-
-:::code language="csharp" source="~/azure-storage-snippets/queues/howto/dotnet/dotnet-v12/Monitoring.cs" id="snippet_EnableDiagnosticLogs":::
-
-# [\.NET v11 SDK](#tab/dotnet11)
-
-```csharp
-var storageAccount = CloudStorageAccount.Parse(connStr);
-var queueClient = storageAccount.CreateCloudQueueClient();
-var serviceProperties = queueClient.GetServiceProperties();
-
-serviceProperties.Logging.LoggingOperations = LoggingOperations.All;
-serviceProperties.Logging.RetentionDays = 2;
-
-queueClient.SetServiceProperties(serviceProperties);
-```
----
- For more information about using a .NET language to configure Storage Logging, see [Storage Client Library Reference](/previous-versions/azure/dn261237(v=azure.100)).
-
- For general information about configuring Storage Logging using the REST API, see [Enabling and Configuring Storage Analytics](/rest/api/storageservices/Enabling-and-Configuring-Storage-Analytics).
-
-## Download Storage logging log data
-
- To view and analyze your log data, you should download the blobs that contain the log data you are interested in to a local machine. Many storage-browsing tools enable you to download blobs from your storage account; you can also use the Azure Storage team provided command-line Azure Copy Tool [AzCopy](storage-use-azcopy-v10.md) to download your log data.
-
->[!NOTE]
-> The `$logs` container isn't integrated with Event Grid, so you won't receive notifications when log files are written.
-
- To make sure you download the log data you are interested in and to avoid downloading the same log data more than once:
--- Use the date and time naming convention for blobs containing log data to track which blobs you have already downloaded for analysis to avoid re-downloading the same data more than once. --- Use the metadata on the blobs containing log data to identify the specific period for which the blob holds log data to identify the exact blob you need to download. -
-To get started with AzCopy, see [Get started with AzCopy](storage-use-azcopy-v10.md)
-
-The following example shows how you can download the log data for the queue service for the hours starting at 09 AM, 10 AM, and 11 AM on 20th May, 2014.
-
-```
-azcopy copy 'https://mystorageaccount.blob.core.windows.net/$logs/queue' 'C:\Logs\Storage' --include-path '2014/05/20/09;2014/05/20/10;2014/05/20/11' --recursive
-```
-
-To learn more about how to download specific files, see [Download specific files](./storage-use-azcopy-v10.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json#transfer-data).
-
-When you have downloaded your log data, you can view the log entries in the files. These log files use a delimited text format that many log reading tools are able to parse (for more information, see the guide [Monitoring, Diagnosing, and Troubleshooting Microsoft Azure Storage](storage-monitoring-diagnosing-troubleshooting.md)). Different tools have different facilities for formatting, filtering, sorting, ad searching the contents of your log files. For more information about the Storage Logging log file format and content, see [Storage Analytics Log Format](/rest/api/storageservices/storage-analytics-log-format) and [Storage Analytics Logged Operations and Status Messages](/rest/api/storageservices/storage-analytics-logged-operations-and-status-messages).
## Next steps
+* [Enable and manage Azure Storage Analytics logs (classic)](manage-storage-analytics-logs.md)
* [Storage Analytics Log Format](/rest/api/storageservices/storage-analytics-log-format) * [Storage Analytics Logged Operations and Status Messages](/rest/api/storageservices/storage-analytics-logged-operations-and-status-messages) * [Storage Analytics Metrics (classic)](storage-analytics-metrics.md)
storage https://docs.microsoft.com/en-us/azure/storage/common/storage-analytics-metrics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/storage-analytics-metrics.md
@@ -4,20 +4,23 @@ description: Learn how to use Storage Analytics metrics in Azure Storage. Learn
Previously updated : 03/11/2019 Last updated : 01/29/2021 + # Azure Storage Analytics metrics (classic)
+On **August 31, 2023**, Storage Analytics metrics, also referred to as *classic metrics*, will be retired. For more information, see the [official announcement](https://azure.microsoft.com/updates/azure-storage-classic-metrics-will-be-retired-on-31-august-2023/). If you use classic metrics, make sure to transition to metrics in Azure Monitor prior to that date. This article helps you make the transition.
+ Azure Storage uses the Storage Analytics solution to store metrics that include aggregated transaction statistics and capacity data about requests to a storage service. Transactions are reported at the API operation level and at the storage service level. Capacity is reported at the storage service level. Metrics data can be used to: - Analyze storage service usage. - Diagnose issues with requests made against the storage service. - Improve the performance of applications that use a service.
- Storage Analytics metrics are enabled by default for new storage accounts. You can configure metrics in the [Azure portal](https://portal.azure.com/). For more information, see [Monitor a storage account in the Azure portal](./storage-monitor-storage-account.md). You can also enable Storage Analytics programmatically via the REST API or the client library. Use the Set Service Properties operations to enable Storage Analytics for each service.
+ Storage Analytics metrics are enabled by default for new storage accounts. You can configure metrics in the [Azure portal](https://portal.azure.com/), by using PowerShell, or by using the Azure CLI. For step-by-step guidance, see [Enable and manage Azure Storage Analytics metrics (classic)](./storage-monitor-storage-account.md). You can also enable Storage Analytics programmatically via the REST API or the client library. Use the Set Service Properties operations to enable Storage Analytics for each service.
> [!NOTE] > Storage Analytics metrics are available for Azure Blob storage, Azure Queue storage, Azure Table storage, and Azure Files.
@@ -60,163 +63,15 @@ Azure Storage uses the Storage Analytics solution to store metrics that include
These tables are automatically created when Storage Analytics is enabled for a storage service endpoint. They're accessed via the namespace of the storage account, for example, `https://<accountname>.table.core.windows.net/Tables("$MetricsTransactionsBlob")`. The metrics tables don't appear in a listing operation and must be accessed directly via the table name.
-## Enable metrics by using the Azure portal
-Follow these steps to enable metrics in the [Azure portal](https://portal.azure.com):
-
-1. Go to your storage account.
-1. Select **Diagnostics settings (classic)** in the menu pane.
-1. Ensure that **Status** is set to **On**.
-1. Select the metrics for the services you want to monitor.
-1. Specify a retention policy to indicate how long to retain metrics and log data.
-1. Select **Save**.
-
-The [Azure portal](https://portal.azure.com) doesn't currently enable you to configure minute metrics in your storage account. You must enable minute metrics by using PowerShell or programmatically.
-
-## Enable storage metrics by using PowerShell
-You can use PowerShell on your local machine to configure storage metrics in your storage account by using the Azure PowerShell cmdlet **Get-AzStorageServiceMetricsProperty** to retrieve the current settings. Use the cmdlet **Set-AzStorageServiceMetricsProperty** to change the current settings.
-
-The cmdlets that control storage metrics use the following parameters:
-
-* **ServiceType**: Possible values are **Blob**, **Queue**, **Table**, and **File**.
-* **MetricsType**: Possible values are **Hour** and **Minute**.
-* **MetricsLevel**: Possible values are:
- * **None**: Turns off monitoring.
- * **Service**: Collects metrics such as ingress and egress, availability, latency, and success percentages, which are aggregated for the blob, queue, table, and file services.
- * **ServiceAndApi**: In addition to the service metrics, collects the same set of metrics for each storage operation in the Azure Storage service API.
-
-For example, the following command switches on minute metrics for the blob service in your storage account with the retention period set to five days:
-
-> [!NOTE]
-> This command assumes that you've signed in to your Azure subscription by using the `Connect-AzAccount` command.
-
-```powershell
-$storageAccount = Get-AzStorageAccount -ResourceGroupName "<resource-group-name>" -AccountName "<storage-account-name>"
-
-Set-AzStorageServiceMetricsProperty -MetricsType Minute -ServiceType Blob -MetricsLevel ServiceAndApi -RetentionDays 5 -Context $storageAccount.Context
-```
-
-* Replace the `<resource-group-name>` placeholder value with the name of your resource group.
-* Replace the `<storage-account-name>` placeholder value with the name of your storage account.
---
-The following command retrieves the current hourly metrics level and retention days for the blob service in your default storage account:
-
-```powershell
-Get-AzStorageServiceMetricsProperty -MetricsType Hour -ServiceType Blob -Context $storagecontext.Context
-```
-
-For information about how to configure the Azure PowerShell cmdlets to work with your Azure subscription and how to select the default storage account to use, see [Install and configure Azure PowerShell](/powershell/azure/).
-
-## Enable storage metrics programmatically
-In addition to using the Azure portal or the Azure PowerShell cmdlets to control storage metrics, you can also use one of the Azure Storage APIs. For example, if you use a .NET language you can use the Azure Storage client library.
-
-The classes **CloudBlobClient**, **CloudQueueClient**, **CloudTableClient**, and **CloudFileClient** all have methods such as **SetServiceProperties** and **SetServicePropertiesAsync** that take a **ServiceProperties** object as a parameter. You can use the **ServiceProperties** object to configure storage metrics. For example, the following C# snippet shows how to change the metrics level and retention days for the hourly queue metrics:
-
-```csharp
-var storageAccount = CloudStorageAccount.Parse(connStr);
-var queueClient = storageAccount.CreateCloudQueueClient();
-var serviceProperties = queueClient.GetServiceProperties();
-
-serviceProperties.HourMetrics.MetricsLevel = MetricsLevel.Service;
-serviceProperties.HourMetrics.RetentionDays = 10;
-
-queueClient.SetServiceProperties(serviceProperties);
-```
-
-For more information about using a .NET language to configure storage metrics, see [Azure Storage client libraries for .NET](/dotnet/api/overview/azure/storage).
-
-For general information about configuring storage metrics by using the REST API, see [Enabling and configuring Storage Analytics](/rest/api/storageservices/Enabling-and-Configuring-Storage-Analytics).
-
-## View storage metrics
-After you configure Storage Analytics metrics to monitor your storage account, Storage Analytics records the metrics in a set of well-known tables in your storage account. You can configure charts to view hourly metrics in the [Azure portal](https://portal.azure.com):
-
-1. Go to your storage account in the [Azure portal](https://portal.azure.com).
-1. Select **Metrics (classic)** in the menu pane for the service whose metrics you want to view.
-1. Select the chart you want to configure.
-1. On the **Edit Chart** pane, select the **Time range**, the **Chart type**, and the metrics you want displayed in the chart.
-
-In the **Monitoring (classic)** section of your storage account's menu pane in the Azure portal, you can configure [Alert rules](#metrics-alerts). For example, you can send email alerts to notify you when a specific metric reaches a certain value.
-
-If you want to download the metrics for long-term storage or to analyze them locally, you must use a tool or write some code to read the tables. You must download the minute metrics for analysis. The tables don't appear if you list all the tables in your storage account, but you can access them directly by name. Many storage-browsing tools are aware of these tables and enable you to view them directly. For a list of available tools, see [Azure Storage client tools](./storage-explorers.md).
-
-|Metrics|Table names|Notes|
-|-|-|-|
-|Hourly metrics|$MetricsHourPrimaryTransactionsBlob<br /><br /> $MetricsHourPrimaryTransactionsTable<br /><br /> $MetricsHourPrimaryTransactionsQueue<br /><br /> $MetricsHourPrimaryTransactionsFile|In versions prior to August 15, 2013, these tables were known as:<br /><br /> $MetricsTransactionsBlob<br /><br /> $MetricsTransactionsTable<br /><br /> $MetricsTransactionsQueue<br /><br /> Metrics for the file service are available beginning with version April 5, 2015.|
-|Minute metrics|$MetricsMinutePrimaryTransactionsBlob<br /><br /> $MetricsMinutePrimaryTransactionsTable<br /><br /> $MetricsMinutePrimaryTransactionsQueue<br /><br /> $MetricsMinutePrimaryTransactionsFile|Can only be enabled by using PowerShell or programmatically.<br /><br /> Metrics for the file service are available beginning with version April 5, 2015.|
-|Capacity|$MetricsCapacityBlob|Blob service only.|
-
-For full details of the schemas for these tables, see [Storage Analytics metrics table schema](/rest/api/storageservices/storage-analytics-metrics-table-schema). The following sample rows show only a subset of the columns available, but they illustrate some important features of the way storage metrics saves these metrics:
-
-|PartitionKey|RowKey|Timestamp|TotalRequests|TotalBillableRequests|TotalIngress|TotalEgress|Availability|AverageE2ELatency|AverageServerLatency|PercentSuccess|
-|-|-|-|-|-|-|-|-|-|-|-|
-|20140522T1100|user;All|2014-05-22T11:01:16.7650250Z|7|7|4003|46801|100|104.4286|6.857143|100|
-|20140522T1100|user;QueryEntities|2014-05-22T11:01:16.7640250Z|5|5|2694|45951|100|143.8|7.8|100|
-|20140522T1100|user;QueryEntity|2014-05-22T11:01:16.7650250Z|1|1|538|633|100|3|3|100|
-|20140522T1100|user;UpdateEntity|2014-05-22T11:01:16.7650250Z|1|1|771|217|100|9|6|100|
-
-In this example of minute metrics data, the partition key uses the time at minute resolution. The row key identifies the type of information that's stored in the row. The information is composed of the access type and the request type:
--- The access type is either **user** or **system**, where **user** refers to all user requests to the storage service and **system** refers to requests made by Storage Analytics. -- The request type is either **all**, in which case it's a summary line, or it identifies the specific API such as **QueryEntity** or **UpdateEntity**. -
-This sample data shows all the records for a single minute (starting at 11:00AM), so the number of **QueryEntities** requests plus the number of **QueryEntity** requests plus the number of **UpdateEntity** requests adds up to seven. This total is shown in the **user:All** row. Similarly, you can derive the average end-to-end latency 104.4286 on the **user:All** row by calculating ((143.8 * 5) + 3 + 9)/7.
- ## Metrics alerts
-Consider setting up alerts in the [Azure portal](https://portal.azure.com) so you'll be automatically notified of important changes in the behavior of your storage services. If you use a Storage Explorer tool to download this metrics data in a delimited format, you can use Microsoft Excel to analyze the data. For a list of available Storage Explorer tools, see [Azure Storage client tools](./storage-explorers.md). You can configure alerts in the **Alert (classic)** pane, which is accessible under **Monitoring (classic)** in the storage account menu pane.
+Consider setting up alerts in the [Azure portal](https://portal.azure.com) so you'll be automatically notified of important changes in the behavior of your storage services. For step-by-step guidance, see [Create metrics alerts](storage-monitor-storage-account.md#create-metric-alerts).
+
+If you use a Storage Explorer tool to download this metrics data in a delimited format, you can use Microsoft Excel to analyze the data. For a list of available Storage Explorer tools, see [Azure Storage client tools](./storage-explorers.md).
> [!IMPORTANT] > There might be a delay between a storage event and when the corresponding hourly or minute metrics data is recorded. In the case of minute metrics, several minutes of data might be written at once. This issue can lead to transactions from earlier minutes being aggregated into the transaction for the current minute. When this issue happens, the alert service might not have all available metrics data for the configured alert interval, which might lead to alerts firing unexpectedly. >
-## Access metrics data programmatically
-The following listing shows sample C# code that accesses the minute metrics for a range of minutes and displays the results in a console window. The code sample uses the Azure Storage client library version 4.x or later, which includes the **CloudAnalyticsClient** class that simplifies accessing the metrics tables in storage.
-
-> [!NOTE]
-> The **CloudAnalyticsClient** class is not included in the Azure Blob storage client library v12 for .NET. On **August 31, 2023** Storage Analytics metrics, also referred to as *classic metrics* will be retired. For more information, see the [official announcement](https://azure.microsoft.com/updates/azure-storage-classic-metrics-will-be-retired-on-31-august-2023/). If you use classic metrics, we recommend that you transition to metrics in Azure Monitor prior to that date.
-
-```csharp
-private static void PrintMinuteMetrics(CloudAnalyticsClient analyticsClient, DateTimeOffset startDateTime, DateTimeOffset endDateTime)
-{
- // Convert the dates to the format used in the PartitionKey.
- var start = startDateTime.ToUniversalTime().ToString("yyyyMMdd'T'HHmm");
- var end = endDateTime.ToUniversalTime().ToString("yyyyMMdd'T'HHmm");
-
- var services = Enum.GetValues(typeof(StorageService));
- foreach (StorageService service in services)
- {
- Console.WriteLine("Minute Metrics for Service {0} from {1} to {2} UTC", service, start, end);
- var metricsQuery = analyticsClient.CreateMinuteMetricsQuery(service, StorageLocation.Primary);
- var t = analyticsClient.GetMinuteMetricsTable(service);
- var opContext = new OperationContext();
- var query =
- from entity in metricsQuery
- // Note, you can't filter using the entity properties Time, AccessType, or TransactionType
- // because they are calculated fields in the MetricsEntity class.
- // The PartitionKey identifies the DataTime of the metrics.
- where entity.PartitionKey.CompareTo(start) >= 0 && entity.PartitionKey.CompareTo(end) <= 0
- select entity;
-
- // Filter on "user" transactions after fetching the metrics from Azure Table storage.
- // (StartsWith is not supported using LINQ with Azure Table storage.)
- var results = query.ToList().Where(m => m.RowKey.StartsWith("user"));
- var resultString = results.Aggregate(new StringBuilder(), (builder, metrics) => builder.AppendLine(MetricsString(metrics, opContext))).ToString();
- Console.WriteLine(resultString);
- }
-}
-
-private static string MetricsString(MetricsEntity entity, OperationContext opContext)
-{
- var entityProperties = entity.WriteEntity(opContext);
- var entityString =
- string.Format("Time: {0}, ", entity.Time) +
- string.Format("AccessType: {0}, ", entity.AccessType) +
- string.Format("TransactionType: {0}, ", entity.TransactionType) +
- string.Join(",", entityProperties.Select(e => new KeyValuePair<string, string>(e.Key.ToString(), e.Value.PropertyAsObject.ToString())));
- return entityString;
-}
-```
- ## Billing on storage metrics Write requests to create table entities for metrics are charged at the standard rates applicable to all Azure Storage operations.
storage https://docs.microsoft.com/en-us/azure/storage/common/storage-monitor-storage-account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/storage-monitor-storage-account.md
@@ -1,146 +0,0 @@
- Title: How to monitor an Azure Storage account in the Azure portal | Microsoft Docs
-description: Learn how to monitor a storage account in Azure by using the Azure portal and Azure Storage Analytics.
--- Previously updated : 01/09/2020-----
-# Monitor a storage account in the Azure portal
-
-[Azure Storage Analytics](storage-analytics.md) provides metrics for all storage services, and logs for blobs, queues, and tables. You can use the [Azure portal](https://portal.azure.com) to configure which metrics and logs are recorded for your account, and configure charts that provide visual representations of your metrics data.
-
-We recommend you review [Azure Monitor for Storage](../../azure-monitor/insights/storage-insights-overview.md) (preview). It is a feature of Azure Monitor that offers comprehensive monitoring of your Azure Storage accounts by delivering a unified view of your Azure Storage services performance, capacity, and availability. It does not require you to enable or configure anything, and you can immediately view these metrics from the pre-defined interactive charts and other visualizations included.
-
-> [!NOTE]
-> There are costs associated with examining monitoring data in the Azure portal. For more information, see [Storage Analytics](storage-analytics.md).
->
-> Azure Files currently supports Storage Analytics metrics, but does not yet support logging.
->
-> Premium performance block blob storage accounts don't support Storage Analytic metrics but they do support logging. You can enable logging programmatically via the REST API or the client library. If you want to view metrics with premium performance blob blob storage accounts, consider using [Azure Storage Metrics in Azure Monitor](../blobs/monitor-blob-storage.md).
->
-> For an in-depth guide on using Storage Analytics and other tools to identify, diagnose, and troubleshoot Azure Storage-related issues, see [Monitor, diagnose, and troubleshoot Microsoft Azure Storage](storage-monitoring-diagnosing-troubleshooting.md).
->
-
-<a id="modify-retention-policy"></a>
-
-## Configure monitoring for a storage account
-
-1. In the [Azure portal](https://portal.azure.com), select **Storage accounts**, then the storage account name to open the account dashboard.
-1. Select **Diagnostics** in the **MONITORING** section of the menu blade.
-
- ![Screenshot that highlights the Diagnostic settings (classic) option under the Monitoring (Classic) section.](./media/storage-monitor-storage-account/storage-enable-metrics-00.png)
-
-1. Select the **type** of metrics data for each **service** you wish to monitor, and the **retention policy** for the data. You can also disable monitoring by setting **Status** to **Off**.
-
- ![MonitoringOptions](./media/storage-monitor-storage-account/storage-enable-metrics-01.png)
-
- To set the data retention policy, move the **Retention (days)** slider or enter the number of days of data to retain, from 1 to 365. The default for new storage accounts is seven days. If you do not want to set a retention policy, enter zero. If there is no retention policy, it is up to you to delete the monitoring data.
-
- > [!WARNING]
- > You are charged when you manually delete metrics data. Stale analytics data (data older than your retention policy) is deleted by the system at no cost. We recommend setting a retention policy based on how long you want to retain storage analytics data for your account. See [Billing on storage metrics](storage-analytics-metrics.md#billing-on-storage-metrics) for more information.
- >
-
-1. When you finish the monitoring configuration, select **Save**.
-
-A default set of metrics is displayed in charts on the storage account blade, as well as the individual service blades (blob, queue, table, and file). Once you've enabled metrics for a service, it may take up to an hour for data to appear in its charts. You can select **Edit** on any metric chart to configure which metrics are displayed in the chart.
-
-You can disable metrics collection and logging by setting **Status** to **Off**.
-
-> [!NOTE]
-> Azure Storage uses [table storage](storage-introduction.md#table-storage) to store the metrics for your storage account, and stores the metrics in tables in your account. For more information, see. [How metrics are stored](storage-analytics-metrics.md#how-metrics-are-stored).
->
-
-## Customize metrics charts
-
-Use the following procedure to choose which storage metrics to view in a metrics chart.
-
-1. Start by displaying a storage metric chart in the Azure portal. You can find charts on the **storage account blade** and in the **Metrics** blade for an individual service (blob, queue, table, file).
-
- In this example, uses the following chart that appears on the **storage account blade**:
-
- ![Chart selection in Azure portal](./media/storage-monitor-storage-account/stg-customize-chart-00.png)
-
-1. Click anywhere within the chart to edit the chart.
-
-1. Next, select the **Time Range** of the metrics to display in the chart, and the **service** (blob, queue, table, file) whose metrics you wish to display. Here, the past week's metrics are selected to display for the blob service:
-
- ![Time range and service selection in the Edit Chart blade](./media/storage-monitor-storage-account/storage-edit-metric-time-range.png)
-
-1. Select the individual **metrics** you'd like displayed in the chart, then click **OK**.
-
- ![Individual metric selection in Edit Chart blade](./media/storage-monitor-storage-account/storage-edit-metric-selections.png)
-
-Your chart settings do not affect the collection, aggregation, or storage of monitoring data in the storage account.
-
-### Metrics availability in charts
-
-The list of available metrics changes based on which service you've chosen in the drop-down, and the unit type of the chart you're editing. For example, you can select percentage metrics like *PercentNetworkError* and *PercentThrottlingError* only if you're editing a chart that displays units in percentage:
-
-![Request error percentage chart in the Azure portal](./media/storage-monitor-storage-account/stg-customize-chart-04.png)
-
-### Metrics resolution
-
-The metrics you selected in **Diagnostics** determines the resolution of the metrics that are available for your account:
-
-* **Aggregate** monitoring provides metrics such as ingress/egress, availability, latency, and success percentages. These metrics are aggregated from the blob, table, file, and queue services.
-* **Per API** provides finer resolution, with metrics available for individual storage operations, in addition to the service-level aggregates.
-
-## Configure metrics alerts
-
-You can create alerts to notify you when thresholds have been reached for storage resource metrics.
-
-1. To open the **Alert rules blade**, scroll down to the **MONITORING** section of the **Menu blade** and select **Alerts (classic)**.
-2. Select **Add metric alert (classic)** to open the **Add an alert rule** blade
-3. Enter a **Name** and **Description** for your new alert rule.
-4. Select the **Metric** for which you'd like to add an alert, an alert **Condition**, and a **Threshold**. The threshold unit type changes depending on the metric you've chosen. For example, "count" is the unit type for *ContainerCount*, while the unit for the *PercentNetworkError* metric is a percentage.
-5. Select the **Period**. Metrics that reach or exceed the Threshold within the period trigger an alert.
-6. (Optional) Configure **Email** and **Webhook** notifications. For more information on webhooks, see [Configure a webhook on an Azure metric alert](../../azure-monitor/platform/alerts-webhooks.md). If you do not configure email or webhook notifications, alerts will appear only in the Azure portal.
-
-!['Add an alert rule' blade in the Azure portal](./media/storage-monitor-storage-account/add-alert-rule.png)
-
-## Add metrics charts to the portal dashboard
-
-You can add Azure Storage metrics charts for any of your storage accounts to your portal dashboard.
-
-1. Select click **Edit dashboard** while viewing your dashboard in the [Azure portal](https://portal.azure.com).
-1. In the **Tile Gallery**, select **Find tiles by** > **Type**.
-1. Select **Type** > **Storage accounts**.
-1. In **Resources**, select the storage account whose metrics you wish to add to the dashboard.
-1. Select **Categories** > **Monitoring**.
-1. Drag-and-drop the chart tile onto your dashboard for the metric you'd like displayed. Repeat for all metrics you'd like displayed on the dashboard. In the following image, the "Blobs - Total requests" chart is highlighted as an example, but all the charts are available for placement on your dashboard.
-
- ![Tile gallery in Azure portal](./media/storage-monitor-storage-account/storage-customize-dashboard.png)
-1. Select **Done customizing** near the top of the dashboard when you're done adding charts.
-
-Once you've added charts to your dashboard, you can further customize them as described in Customize metrics charts.
-
-## Configure logging
-
-You can instruct Azure Storage to save diagnostics logs for read, write, and delete requests for the blob, table, and queue services. The data retention policy you set also applies to these logs.
-
-> [!NOTE]
-> Azure Files currently supports Storage Analytics metrics, but does not yet support logging.
->
-
-1. In the [Azure portal](https://portal.azure.com), select **Storage accounts**, then the name of the storage account to open the storage account blade.
-1. Select **Diagnostics settings (classic)** in the **Monitoring (classic)** section of the menu blade.
-
- ![Diagnostics menu item under MONITORING in the Azure portal.](./media/storage-monitor-storage-account/storage-enable-metrics-00.png)
-
-1. Ensure **Status** is set to **On**, and select the **services** for which you'd like to enable logging.
-
- ![Configure logging in the Azure portal.](./media/storage-monitor-storage-account/enable-diagnostics.png)
-1. Click **Save**.
-
-The diagnostics logs are saved in a blob container named *$logs* in your storage account. You can view the log data using a storage explorer like the [Microsoft Azure Storage Explorer](https://storageexplorer.com), or programmatically using the storage client library or PowerShell.
-
-For information about accessing the $logs container, see [Storage analytics logging](storage-analytics-logging.md).
-
-## Next steps
-
-* Find more details about [metrics, logging, and billing](storage-analytics.md) for Storage Analytics.
\ No newline at end of file
storage https://docs.microsoft.com/en-us/azure/storage/common/storage-network-security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/storage-network-security.md
@@ -533,11 +533,11 @@ az storage account network-rule list \
<a id="exceptions"></a> <a id="trusted-microsoft-services"></a>
-## Grant access to Azure services
+## Grant access to trusted Azure services
-Some Azure services operate from networks that can't be included in your network rules. You can grant a subset of such trusted Azure services access to the storage account, while maintaining network rules for other apps. These trusted services will then use strong authentication to securely connect to your storage account.
+Some Azure services operate from networks that can't be included in your network rules. You can grant a subset of such trusted Azure services access to the storage account, while maintaining network rules for other apps. These trusted services will then use strong authentication to securely connect to your storage account.
-You can grant access to trusted Azure services by creating a network rule exception. For step-by-step guidance, see the [Manage exceptions](#manage-exceptions) section of this article.
+You can grant access to trusted Azure services by creating a network rule exception. For step-by-step guidance, see the [Manage exceptions](#manage-exceptions) section of this article.
When you grant access to trusted Azure services, you grant the following types of access:
@@ -578,17 +578,23 @@ The following table lists services that can have access to your storage account
| :-- | :- | :-- | | Azure API Management | Microsoft.ApiManagement/service | Enables Api Management service access to storage accounts behind firewall using policies. [Learn more](../../api-management/api-management-authentication-policies.md#use-managed-identity-in-send-request-policy). | | Azure Cognitive Search | Microsoft.Search/searchServices | Enables Cognitive Search services to access storage accounts for indexing, processing and querying. |
-| Azure Cognitive Services | Microsoft.CognitiveService | Enables Cognitive Services to access storage accounts. |
+| Azure Cognitive Services | Microsoft.CognitiveService/accounts | Enables Cognitive Services to access storage accounts. |
| Azure Container Registry Tasks | Microsoft.ContainerRegistry/registries | ACR Tasks can access storage accounts when building container images. | | Azure Data Factory | Microsoft.DataFactory/factories | Allows access to storage accounts through the ADF runtime. | | Azure Data Share | Microsoft.DataShare/accounts | Allows access to storage accounts through Data Share. |
+| Azure DevTest Labs | Microsoft.DevTestLab/labs | Allows access to storage accounts through DevTest Labs. |
| Azure IoT Hub | Microsoft.Devices/IotHubs | Allows data from an IoT hub to be written to Blob storage. [Learn more](../../iot-hub/virtual-network-support.md#egress-connectivity-to-storage-account-endpoints-for-routing) | | Azure Logic Apps | Microsoft.Logic/workflows | Enables logic apps to access storage accounts. [Learn more](../../logic-apps/create-managed-service-identity.md#authenticate-access-with-managed-identity). |
-| Azure Machine Learning Service | Microsoft.MachineLearningServices | Authorized Azure Machine Learning workspaces write experiment output, models, and logs to Blob storage and read the data. [Learn more](../../machine-learning/how-to-network-security-overview.md#secure-the-workspace-and-associated-resources). |
-| Azure Synapse Analytics | Microsoft.Sql | Allows import and export of data from specific SQL databases using the COPY statement or PolyBase (in dedicated pool), or the `openrowset` function and external tables in serverless pool. [Learn more](../../azure-sql/database/vnet-service-endpoint-rule-overview.md). |
-| Azure SQL Database | Microsoft.Sql | Allows [writing](../../azure-sql/database/audit-write-storage-account-behind-vnet-firewall.md) audit data to storage accounts behind firewall. |
-| Azure Stream Analytics | Microsoft.StreamAnalytics | Allows data from a streaming job to be written to Blob storage. [Learn more](../../stream-analytics/blob-output-managed-identity.md). |
-| Azure Synapse Analytics | Microsoft.Synapse/workspaces | Enables access to data in Azure Storage from Azure Synapse Analytics. |
+| Azure Machine Learning Service | Microsoft.MachineLearningServices | Authorized Azure Machine Learning workspaces write experiment output, models, and logs to Blob storage and read the data. [Learn more](../../machine-learning/how-to-network-security-overview.md#secure-the-workspace-and-associated-resources). |
+| Azure Media Services | Microsoft.Media/mediaservices | Allows access to storage accounts through Media Services. |
+| Azure Migrate | Microsoft.Migrate/migrateprojects | Allows access to storage accounts through Azure Migrate. |
+| Azure Purview | Microsoft.Purview/accounts | Allows Purview to access storage accounts. |
+| Azure Remote Rendering | Microsoft.MixedReality/remoteRenderingAccounts | Allows access to storage accounts through Remote Rendering. |
+| Azure Site Recovery | Microsoft.RecoveryServices/vaults | Allows access to storage accounts through Site Recovery. |
+| Azure SQL Database | Microsoft.Sql | Allows [writing](../../azure-sql/database/audit-write-storage-account-behind-vnet-firewall.md) audit data to storage accounts behind firewall. |
+| Azure Synapse Analytics | Microsoft.Sql | Allows import and export of data from specific SQL databases using the COPY statement or PolyBase (in dedicated pool), or the `openrowset` function and external tables in serverless pool. [Learn more](../../azure-sql/database/vnet-service-endpoint-rule-overview.md). |
+| Azure Stream Analytics | Microsoft.StreamAnalytics | Allows data from a streaming job to be written to Blob storage. [Learn more](../../stream-analytics/blob-output-managed-identity.md). |
+| Azure Synapse Analytics | Microsoft.Synapse/workspaces | Enables access to data in Azure Storage from Azure Synapse Analytics. |
## Grant access to storage analytics
storage https://docs.microsoft.com/en-us/azure/storage/common/troubleshoot-latency-storage-analytics-logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/troubleshoot-latency-storage-analytics-logs.md
@@ -22,7 +22,7 @@ The following steps demonstrate how to identify and troubleshoot latency issues
## Recommended steps
-1. Download the [Storage Analytics logs](./storage-analytics-logging.md#download-storage-logging-log-data).
+1. Download the [Storage Analytics logs](./manage-storage-analytics-logs.md#download-storage-logging-log-data).
2. Use the following PowerShell script to convert the raw format logs into tabular format:
storage https://docs.microsoft.com/en-us/azure/storage/files/storage-sync-files-planning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/files/storage-sync-files-planning.md
@@ -4,7 +4,7 @@ description: Plan for a deployment with Azure File Sync, a service that allows y
Previously updated : 01/15/2020 Last updated : 01/29/2021
@@ -297,48 +297,16 @@ For more information about encryption in transit, see [requiring secure transfer
[!INCLUDE [storage-files-tiers-large-file-share-availability](../../../includes/storage-files-tiers-large-file-share-availability.md)] ## Azure file sync region availability
-Azure File Sync is available in the following regions:
-
-| Azure cloud | Geographic region | Azure region | Region code |
-|-|-|--|-|
-| Public | Asia | East Asia | `eastasia` |
-| Public | Asia | Southeast Asia | `southeastasia` |
-| Public | Australia | Australia East | `australiaeast` |
-| Public | Australia | Australia Southeast | `australiasoutheast` |
-| Public | Brazil | Brazil South | `brazilsouth` |
-| Public | Canada | Canada Central | `canadacentral` |
-| Public | Canada | Canada East | `canadaeast` |
-| Public | Europe | North Europe | `northeurope` |
-| Public | Europe | West Europe | `westeurope` |
-| Public | France | France Central | `francecentral` |
-| Public | France | France South* | `francesouth` |
-| Public | India | Central India | `centralindia` |
-| Public | India | South India | `southindia` |
-| Public | Japan | Japan East | `japaneast` |
-| Public | Japan | Japan West | `japanwest` |
-| Public | Korea | Korea Central | `koreacentral` |
-| Public | Korea | Korea South | `koreasouth` |
-| Public | South Africa | South Africa North | `southafricanorth` |
-| Public | South Africa | South Africa West* | `southafricawest` |
-| Public | UAE | UAE Central* | `uaecentral` |
-| Public | UAE | UAE North | `uaenorth` |
-| Public | UK | UK South | `uksouth` |
-| Public | UK | UK West | `ukwest` |
-| Public | US | Central US | `centralus` |
-| Public | US | East US | `eastus` |
-| Public | US | East US 2 | `eastus2` |
-| Public | US | North Central US | `northcentralus` |
-| Public | US | South Central US | `southcentralus` |
-| Public | US | West Central US | `westcentralus` |
-| Public | US | West US | `westus` |
-| Public | US | West US 2 | `westus2` |
-| US Gov | US | US Gov Arizona | `usgovarizona` |
-| US Gov | US | US Gov Texas | `usgovtexas` |
-| US Gov | US | US Gov Virginia | `usgovvirginia` |
-
-Azure File Sync supports syncing only with an Azure file share that's in the same region as the Storage Sync Service.
-
-For the regions marked with asterisks, you must contact Azure Support to request access to Azure Storage in those regions. The process is outlined in [this document](https://azure.microsoft.com/global-infrastructure/geographies/).
+
+For regional availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=storage).
+
+The following regions require you to request access to Azure Storage before you can use Azure File Sync with them:
+
+- France South
+- South Africa West
+- UAE Central
+
+To request access for these regions, follow the process in [this document](https://azure.microsoft.com/global-infrastructure/geographies/).
## Redundancy [!INCLUDE [storage-files-redundancy-overview](../../../includes/storage-files-redundancy-overview.md)]
virtual-desktop https://docs.microsoft.com/en-us/azure/virtual-desktop/key-distribution-center-proxy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/key-distribution-center-proxy.md
@@ -3,13 +3,18 @@ Title: Set up Kerberos Key Distribution Center proxy Windows Virtual Desktop - A
description: How to set up a Windows Virtual Desktop host pool to use a Kerberos Key Distribution Center proxy. Previously updated : 01/26/2021 Last updated : 01/30/2021
-# Configure a Kerberos Key Distribution Center proxy
+# Configure a Kerberos Key Distribution Center proxy (preview)
-This article will show you how to configure a Kerberos Key Distribiution Center (KDC) proxy for your host pool. This proxy lets organizations authenticate with Kerberos outside of their enterprise boundaries. For example, you can use the KDC proxy to enable Smartcard authentication for external clients.
+> [!IMPORTANT]
+> This feature is currently in public preview.
+> This preview version is provided without a service level agreement, and we don't recommend using it for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+This article will show you how to configure a Kerberos Key Distribution Center (KDC) proxy (preview) for your host pool. This proxy lets organizations authenticate with Kerberos outside of their enterprise boundaries. For example, you can use the KDC proxy to enable Smartcard authentication for external clients.
## How to configure the KDC proxy
virtual-desktop https://docs.microsoft.com/en-us/azure/virtual-desktop/network-connectivity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/network-connectivity.md
@@ -45,7 +45,7 @@ Client connection sequence described below:
## Connection security
-TLS 1.2 is used for all connections initiated from the clients and session hosts to the Windows Virtual Desktop infrastructure components.
+TLS 1.2 is used for all connections initiated from the clients and session hosts to the Windows Virtual Desktop infrastructure components. Windows Virtual Desktop uses the same TLS 1.2 ciphers as [Azure Front Door](../frontdoor/front-door-faq.md#what-are-the-current-cipher-suites-supported-by-azure-front-door). It's important to make sure both client computers and session hosts can use these ciphers.
For reverse connect transport, both client and session host connect to the Windows Virtual Desktop gateway. After establishing the TCP connection, the client or session host validates the Windows Virtual Desktop gateway's certificate. After establishing the base transport, RDP establishes a nested TLS connection between client and session host using the session host's certificates. By default, the certificate used for RDP encryption is self-generated by the OS during the deployment. If desired, customers may deploy centrally managed certificates issued by the enterprise certification authority. For more information about configuring certificates, see [Windows Server documentation](/troubleshoot/windows-server/remote/remote-desktop-listener-certificate-configurations).
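As a quick, hedged sanity check on a Linux-based client or test machine, you can list the TLS 1.2 suites the local OpenSSL stack offers and compare them against the Front Door list linked above; Windows endpoints expose the equivalent information through the Get-TlsCipherSuite PowerShell cmdlet. The grep pattern below is illustrative, not the authoritative cipher list:

```bash
# List locally available TLS 1.2 cipher suites and highlight the ECDHE AES-GCM
# suites commonly negotiated by Azure Front Door; see the linked FAQ for the
# definitive set.
openssl ciphers -v | awk '$2 == "TLSv1.2"' | grep -E 'ECDHE-(RSA|ECDSA)-AES(128|256)-GCM-SHA(256|384)'
```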
virtual-desktop https://docs.microsoft.com/en-us/azure/virtual-desktop/rd-gateway-role https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/rd-gateway-role.md
@@ -3,14 +3,19 @@ Title: Deploy RD Gateway role Windows Virtual Desktop - Azure
description: How to deploy the RD Gateway role in Windows Virtual Desktop. Previously updated : 01/26/2021 Last updated : 01/30/2021
-# Deploy the RD Gateway role in Windows Virtual Desktop
+# Deploy the RD Gateway role in Windows Virtual Desktop (preview)
-This article will tell you how to deploy the Remote Desktop Gateway servers in your environment. You can install the server roles on physical machines or virtual machines, depending on whether you are creating an on-premises, cloud-based, or hybrid environment.
+> [!IMPORTANT]
+> This feature is currently in public preview.
+> This preview version is provided without a service level agreement, and we don't recommend using it for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+This article will tell you how to use the RD Gateway role (preview) to deploy Remote Desktop Gateway servers in your environment. You can install the server roles on physical machines or virtual machines depending on whether you are creating an on-premises, cloud-based, or hybrid environment.
## Install the RD Gateway role
virtual-machines https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/mainframe-rehosting/ibm/demo https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/mainframe-rehosting/ibm/demo.md
@@ -52,7 +52,7 @@ Access to the ADCD media is required. The steps below assume you are an IBM cust
4. Enter the part description or part number, and click **Finder**.
-5. Optionally, click the alphabetical order list to display and view theproduct by name.
+5. Optionally, click the alphabetical order list to display and view the product by name.
6. Select **All Operating Systems** in the **Operating system field**, and **All Languages** in the **Languages field**. Then, click **Go**.
@@ -185,7 +185,7 @@ Congratulations! You are now running an IBM mainframe environment on Azure.
## Learn more - [Mainframe migration: myths and facts](/azure/architecture/cloud-adoption/infrastructure/mainframe-migration/myths-and-facts)-- [IBM DB2 pureScale on Azure](../../../linux/ibm-db2-purescale-azure.md)
+- [IBM DB2 pureScale on Azure](ibm-db2-purescale-azure.md)
- [Troubleshooting](../../../troubleshooting/index.yml) - [Demystifying mainframe to Azure migration](https://azure.microsoft.com/resources/demystifying-mainframe-to-azure-migration/)
virtual-machines https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/mainframe-rehosting/ibm/deploy-ibm-db2-purescale-azure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/mainframe-rehosting/ibm/deploy-ibm-db2-purescale-azure.md
@@ -0,0 +1,142 @@
+
+ Title: Deploy IBM DB2 pureScale on Azure
+description: Learn how to deploy an example architecture used recently to migrate an enterprise from its IBM DB2 environment running on z/OS to IBM DB2 pureScale on Azure.
+++ Last updated : 11/09/2018++++
+# Deploy IBM DB2 pureScale on Azure
+
+This article describes how to deploy an [example architecture](ibm-db2-purescale-azure.md) that an enterprise customer recently used to migrate from its IBM DB2 environment running on z/OS to IBM DB2 pureScale on Azure.
+
+To follow the steps used for the migration, see the installation scripts in the [DB2onAzure](https://aka.ms/db2onazure) repository on GitHub. These scripts are based on the architecture for a typical, medium-sized online transaction processing (OLTP) workload.
+
+## Get started
+
+To deploy this architecture, download and run the deploy.sh script found in the [DB2onAzure](https://aka.ms/db2onazure) repository on GitHub.
+
+The repository also has scripts for setting up a Grafana dashboard. You can use the dashboard to query Prometheus, the open-source monitoring and alerting system included with DB2.
+
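Assuming Prometheus is listening on its default port (an assumption, not something the scripts guarantee), you can confirm the endpoint responds before pointing the Grafana dashboard at it:

```bash
# Query the Prometheus HTTP API for the 'up' metric; a JSON response with
# status "success" means the data source is reachable from this host.
curl -s 'http://localhost:9090/api/v1/query?query=up'
```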
+> [!NOTE]
+> The deploy.sh script on the client creates private SSH keys and passes them to the deployment template over HTTPS. For greater security, we recommend using [Azure Key Vault](../../../../key-vault/general/overview.md) to store secrets, keys, and passwords.
+
+## How the deployment script works
+
+The deploy.sh script creates and configures the Azure resources for this architecture. The script prompts you for the Azure subscription and virtual machines used in the target environment, and then performs the following operations (an illustrative Azure CLI sketch of the first few steps follows this list):
+
+- Sets up the resource group, virtual network, and subnets on Azure for the installation.
+
+- Sets up the network security groups and SSH for the environment.
+
+- Sets up multiple NICs on both the shared storage and the DB2 pureScale virtual machines.
+
+- Creates the shared storage virtual machines. If you use Storage Spaces Direct or another storage solution, see [Storage Spaces Direct overview](/windows-server/storage/storage-spaces/storage-spaces-direct-overview).
+
+- Creates the jumpbox virtual machine.
+
+- Creates the DB2 pureScale virtual machines.
+
+- Creates the witness virtual machine that DB2 pureScale pings. Skip this part of the deployment if your version of Db2 pureScale does not require a witness.
+
+- Creates a Windows virtual machine to use for testing but doesn't install anything on it.
+
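The first few operations above map roughly to the following Azure CLI calls. This is an illustrative sketch only, not an excerpt from deploy.sh; the resource names, location, and address ranges are hypothetical.

```bash
# Resource group, virtual network with a "main" subnet, and a network security
# group, approximating the script's initial setup steps.
az group create --name db2purescale-rg --location eastus
az network vnet create --resource-group db2purescale-rg --name db2-vnet \
  --address-prefixes 10.0.0.0/16
az network vnet subnet create --resource-group db2purescale-rg --vnet-name db2-vnet \
  --name main --address-prefixes 10.0.1.0/24
az network nsg create --resource-group db2purescale-rg --name db2-nsg
```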
+Next, the deployment scripts set up an iSCSI virtual storage area network (vSAN) for shared storage on Azure. In this example, iSCSI connects to the shared storage cluster. In the original customer solution, GlusterFS was used. However, IBM no longer supports this approach. To maintain your support from IBM, you need to use a supported iSCSI-compatible file system. Microsoft offers Storage Spaces Direct (S2D) as an option.
+
+This solution also gives you the option to install the iSCSI targets as a single Windows node. iSCSI provides a shared block storage interface over TCP/IP that allows the DB2 pureScale setup procedure to use a device interface to connect to shared storage.
+
+The deployment scripts run these general steps:
+
+1. Set up a shared storage cluster on Azure. This step involves at least two Linux nodes.
+
+2. Set up an iSCSI Direct interface on target Linux servers for the shared storage cluster.
+
+3. Set up the iSCSI initiator on the Linux virtual machines. The initiator will access the shared storage cluster by using an iSCSI target. For setup details, see [How To Configure An iSCSI Target And Initiator In Linux](https://www.rootusers.com/how-to-configure-an-iscsi-target-and-initiator-in-linux/) in the RootUsers documentation. A minimal sketch of the initiator-side commands appears after this list.
+
+4. Install the shared storage layer for the iSCSI interface.
+
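For step 3, the initiator-side commands on a DB2 member VM typically look like the following. The deployment scripts automate this; the target portal address is a placeholder.

```bash
# Install the iSCSI initiator tooling (RHEL package name), discover the targets
# exposed by the shared storage cluster, and log in to them.
sudo yum install -y iscsi-initiator-utils
sudo iscsiadm -m discovery -t sendtargets -p 10.0.2.10
sudo iscsiadm -m node --login
lsblk   # the shared LUNs show up as new block devices
```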
+After the scripts create the iSCSI device, the final step is to install DB2 pureScale. As part of the DB2 pureScale setup, [IBM Spectrum Scale](https://www.ibm.com/support/knowledgecenter/SSEPGG_11.1.0/com.ibm.db2.luw.qb.server.doc/doc/t0057167.html) (formerly known as GPFS) is compiled and installed on the GlusterFS cluster. This clustered file system enables DB2 pureScale to share data among the virtual machines that run the DB2 pureScale engine. For more information, see the [IBM Spectrum Scale](https://www.ibm.com/support/knowledgecenter/en/STXKQY_4.2.0/ibmspectrumscale42_welcome.html) documentation on the IBM website.
+
+## DB2 pureScale response file
+
+The GitHub repository includes DB2server.rsp, a response (.rsp) file that enables you to generate an automated script for the DB2 pureScale installation. The following table lists the DB2 pureScale options that the response file uses for setup. You can customize the response file as needed for your environment. An example of invoking the installer with the response file appears after the table.
+
+> [!NOTE]
+> A sample response file, DB2server.rsp, is included in the [DB2onAzure](https://aka.ms/db2onazure) repository on GitHub. If you use this file, you must edit it before it can work in your environment.
+
+| Screen name | Field | Value |
+||-|-|
+| Welcome | | New Install |
+| Choose a Product | | DB2 Version 11.1.3.3. Server Editions with DB2 pureScale |
+| Configuration | Directory | /data1/opt/ibm/db2/V11.1 |
+| | Select the installation type | Typical |
+| | I agree to the IBM terms | Checked |
+| Instance Owner | Existing User For Instance, User name | DB2sdin1 |
+| Fenced User | Existing User, User name | DB2sdfe1 |
+| Cluster File System | Shared disk partition device path | /dev/dm-2 |
+| | Mount point | /DB2sd\_1804a |
+| | Shared disk for data | /dev/dm-1 |
+| | Mount point (Data) | /DB2fs/datafs1 |
+| | Shared disk for log | /dev/dm-0 |
+| | Mount point (Log) | /DB2fs/logfs1 |
+| | DB2 Cluster Services Tiebreaker. Device path | /dev/dm-3 |
+| Host List | d1 [eth1], d2 [eth1], cf1 [eth1], cf2 [eth1] | |
+| | Preferred primary CF | cf1 |
+| | Preferred secondary CF | cf2 |
+| Response File and Summary | first option | Install DB2 Server Edition with the IBM DB2 pureScale feature and save my settings in a response file |
+| | Response file name | /root/DB2server.rsp |
+
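After editing DB2server.rsp for your environment, a typical unattended invocation looks like the following sketch; the path to the extracted DB2 installation image is an assumption.

```bash
# Run the DB2 pureScale installer silently with the response file and write a
# setup log for troubleshooting. Paths are placeholders.
cd /path/to/db2-install-image
sudo ./db2setup -r /root/DB2server.rsp -l /tmp/db2setup.log
```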
+### Notes about this deployment
+
+- The values for /dev/dm-0, /dev/dm-1, /dev/dm-2, and /dev/dm-3 can change after a restart on the virtual machine where the setup takes place (d0 in the automated script). To find the right values, you can issue the following command before completing the response file on the server where the setup will run:
+
+ ```
+ [root@d0 rhel]# ls -als /dev/mapper
+ total 0
+ 0 drwxr-xr-x 2 root root 140 May 30 11:07 .
+ 0 drwxr-xr-x 19 root root 4060 May 30 11:31 ..
+ 0 crw------- 1 root root 10, 236 May 30 11:04 control
+ 0 lrwxrwxrwx 1 root root 7 May 30 11:07 db2data1 -> ../dm-1
+ 0 lrwxrwxrwx 1 root root 7 May 30 11:07 db2log1 -> ../dm-0
+ 0 lrwxrwxrwx 1 root root 7 May 30 11:26 db2shared -> ../dm-2
+ 0 lrwxrwxrwx 1 root root 7 May 30 11:08 db2tieb -> ../dm-3
+ ```
+
+- The setup scripts use aliases for the iSCSI disks so that the actual names can be found easily.
+
+- When the setup script is run on d0, the **/dev/dm-\*** values might be different on d1, cf0, and cf1. The difference in values doesn't affect the DB2 pureScale setup.
+
+## Troubleshooting and known issues
+
+The GitHub repo includes a knowledge base that the authors maintain. It lists potential problems you might have and resolutions you can try. For example, known problems can happen when:
+
+- You're trying to reach the gateway IP address.
+
+- You're compiling General Public License (GPL).
+
+- The security handshake between hosts fails.
+
+- The DB2 installer detects an existing file system.
+
+- You're manually installing IBM Spectrum Scale.
+
+- You're installing DB2 pureScale when IBM Spectrum Scale is already created.
+
+- You're removing DB2 pureScale and IBM Spectrum Scale.
+
+For more information about these and other known problems, see the kb.md file in the [DB2onAzure](https://aka.ms/DB2onAzure) repo.
+
+## Next steps
+
+- [Creating required users for a DB2 pureScale Feature installation](https://www.ibm.com/support/knowledgecenter/en/SSEPGG_11.1.0/com.ibm.db2.luw.qb.server.doc/doc/t0055374.html?pos=2)
+
+- [DB2icrt - Create instance command](https://www.ibm.com/support/knowledgecenter/en/SSEPGG_11.1.0/com.ibm.db2.luw.admin.cmd.doc/doc/r0002057.html)
+
+- [DB2 pureScale Clusters Data Solution](https://www.ibmbigdatahub.com/blog/db2-purescale-clustered-database-solution-part-1)
+
+- [IBM Data Studio](https://www.ibm.com/developerworks/downloads/im/data/index.html)
+
+- [Azure Virtual Data Center Lift and Shift Guide](https://azure.microsoft.com/resources/azure-virtual-datacenter-lift-and-shift-guide/)
virtual-machines https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/mainframe-rehosting/ibm/get-started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/mainframe-rehosting/ibm/get-started.md
@@ -18,7 +18,7 @@ keywords:
Many IBM mainframe workloads based on z/OS can be replicated in Azure with no loss of functionality and without users even noticing changes in their underlying systems. Rehosting applications on Azure gives you the mainframe-like features you need plus the elasticity, availability, and potential cost savings of the cloud.
-Azure supports integration with existing IBM mainframe environments, enabling you to migrate the applicates that make sense, run hybrid solutions where needed, and migrate over time. Although you can completely rewrite existing mainframe-based programs for Azure, itΓÇÖs more common to rehost them. Rewriting adds cost, complexity, and time to migration projects. With rehosting, you can:
+Azure supports integration with existing IBM mainframe environments, enabling you to migrate the applications that make sense, run hybrid solutions where needed, and migrate over time. Although you can completely rewrite existing mainframe-based programs for Azure, itΓÇÖs more common to rehost them. Rewriting adds cost, complexity, and time to migration projects. With rehosting, you can:
- Move applications to a cloud-based emulator.
@@ -35,4 +35,4 @@ An extensive partner ecosystem is available to help you migrate IBM mainframe sy
- [Mainframe migration: myths and facts](/azure/architecture/cloud-adoption/infrastructure/mainframe-migration/myths-and-facts) - [Install IBM zD&T dev/test environment on Azure](./install-ibm-z-environment.md) - [Set up an Application Developers Controlled Distribution (ADCD) in IBM zD&T v1](./demo.md)-- [IBM DB2 pureScale on Azure](../../../linux/ibm-db2-purescale-azure.md)
+- [IBM DB2 pureScale on Azure](ibm-db2-purescale-azure.md)
virtual-machines https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/mainframe-rehosting/ibm/ibm-db2-purescale-azure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/mainframe-rehosting/ibm/ibm-db2-purescale-azure.md
@@ -0,0 +1,106 @@
+
+ Title: IBM DB2 pureScale on Azure
+description: In this article, we show an architecture for running an IBM DB2 pureScale environment on Azure.
++
+editor: edprice
++++ Last updated : 11/09/2018++++
+# IBM DB2 pureScale on Azure
+
+The IBM DB2 pureScale environment provides a database cluster for Azure with high availability and scalability on Linux operating systems. This article shows an architecture for running DB2 pureScale on Azure.
+
+## Overview
+
+Enterprises have long used traditional relational database management system (RDBMS) platforms to cater to their online transaction processing (OLTP) needs. These days, many are migrating their mainframe-based database environments to Azure as a way to expand capacity, reduce costs, and maintain a steady operational cost structure. Migration is often the first step in modernizing a legacy platform.
+
+Recently, an enterprise customer rehosted its IBM DB2 environment running on z/OS to IBM DB2 pureScale on Azure. The Db2 pureScale database cluster solution provides high availability and scalability on Linux operating systems. The customer ran Db2 successfully as a standalone, scale-up instance on a single virtual machine (VM) in a large scale-up system on Azure prior to installing Db2 pureScale.
+
+Though not identical to the original environment, IBM DB2 pureScale on Linux delivers similar high-availability and scalability features as IBM DB2 for z/OS running in a Parallel Sysplex configuration on the mainframe. In this scenario, the cluster is connected via iSCSI to a shared storage cluster. We used the GlusterFS file system, a free, scalable, open source distributed file system specifically optimized for cloud storage. However, IBM no longer supports this solution. To maintain your support from IBM, you need to use a supported iSCSI-compatible file system. Microsoft offers Storage Spaces Direct (S2D) as an option.
+
+This article describes the architecture used for this Azure migration. The customer used Red Hat Linux 7.4 to test the configuration. This version is available from the Azure Marketplace. Before you choose a Linux distribution, make sure to verify the currently supported versions. For details, see the documentation for [IBM DB2 pureScale](https://www.ibm.com/support/knowledgecenter/SSEPGG) and [GlusterFS](https://docs.gluster.org/en/latest/).
+
+This article is a starting point for your DB2 implementation plan. Your business requirements will differ, but the same basic pattern applies. You can also use this architectural pattern for online analytical processing (OLAP) applications on Azure.
+
+This article doesn't cover differences and possible migration tasks for moving an IBM DB2 for z/OS database to IBM DB2 pureScale running on Linux. And it doesn't provide sizing estimations and workload analyses for moving from DB2 z/OS to DB2 pureScale.
+
+To help you decide on the best DB2 pureScale architecture for your environment, we recommend that you fully estimate sizing and make a hypothesis. On the source system, make sure to consider DB2 z/OS Parallel Sysplex with data-sharing architecture, Coupling Facility configuration, and distributed data facility (DDF) usage statistics.
+
+> [!NOTE]
+> This article describes one approach to DB2 migration, but there are others. For example, DB2 pureScale can also run in virtualized on-premises environments. IBM supports DB2 on Microsoft Hyper-V in various configurations. For more information, see [DB2 pureScale virtualization architecture](https://www.ibm.com/support/knowledgecenter/en/SSEPGG_11.1.0/com.ibm.db2.luw.qb.server.doc/doc/r0061462.html) in the IBM Knowledge Center.
+
+## Architecture
+
+To support high availability and scalability on Azure, you can use a scale-out, shared data architecture for DB2 pureScale. The customer migration used the following example architecture.
+
+![DB2 pureScale on Azure virtual machines showing storage and networking](media/pureScaleArchitecture.png "DB2 pureScale on Azure virtual machines showing storage and networking")
++
+The diagram shows the logical layers needed for a DB2 pureScale cluster. These include virtual machines for a client, for management, for caching, for the database engine, and for shared storage.
+
+In addition to the database engine nodes, the diagram includes two nodes used for cluster caching facilities (CFs). A minimum of two nodes are used for the database engine itself. A DB2 server that belongs to a pureScale cluster is called a member.
+
+The cluster is connected via iSCSI to a three-node shared storage cluster to provide scale-out storage and high availability. DB2 pureScale is installed on Azure virtual machines running Linux.
+
+This approach is a template that you can modify for the size and scale of your organization. It's based on the following:
+
+- Two or more database members are combined with at least two CF nodes. The nodes manage a global buffer pool (GBP) for shared memory and global lock manager (GLM) services to control shared access and lock contention from active members. One CF node acts as the primary and the other as the secondary, failover CF node. To avoid a single point of failure in the environment, a DB2 pureScale cluster requires at least four nodes.
+
+- High-performance shared storage (shown in P30 size in the diagram). Each node uses this storage.
+
+- High-performance networking for the data members and shared storage.
+
+### Compute considerations
+
+This architecture runs the application, storage, and data tiers on Azure virtual machines. The [deployment setup scripts](https://aka.ms/db2onazure) create the following:
+
+- A DB2 pureScale cluster. The type of compute resources you need on Azure depends on your setup. In general, you can use two approaches:
+
+ - Use a multi-node, high-performance computing (HPC)-style network where small to medium-sized instances access shared storage. For this HPC type of configuration, Azure memory-optimized E-series or storage-optimized L-series [virtual machines](../../../sizes.md) provide the needed compute power.
+
+ - Use fewer large virtual machine instances for the data engines. For large instances, the largest memory-optimized [M-series](https://azure.microsoft.com/pricing/details/virtual-machines/series/) virtual machines are ideal for heavy in-memory workloads. You might need a dedicated instance, depending on the size of the logical partition (LPAR) that's used to run DB2.
+
+- The DB2 CF uses memory-optimized virtual machines, such as E-series or L-series.
+
+- A shared storage cluster that uses Standard\_DS4\_v2 virtual machines running Linux.
+
+- The management jumpbox is a Standard\_DS2\_v2 virtual machine running Linux. An alternative is Azure Bastion, a service that provides a secure RDP/SSH experience for all the VMs in your virtual network.
+
+- The client is a Standard\_DS3\_v2 virtual machine running Windows (used for testing).
+
+- *Optional*. A witness server. This is needed only with certain earlier versions of Db2 pureScale. This example uses a Standard\_DS3\_v2 virtual machine running Linux (used for DB2 pureScale).
+
+> [!NOTE]
+> A DB2 pureScale cluster requires at least two DB2 instances. It also requires a cache instance and a lock manager instance.
+
+### Storage considerations
+
+Like Oracle RAC, DB2 pureScale is a high-performance block I/O, scale-out database. We recommend using the largest [Azure premium SSD](../../../disks-types.md) option that suits your needs. Smaller storage options might be suitable for development and test environments, while production environments often need more storage capacity. The example architecture uses [P30](https://azure.microsoft.com/pricing/details/managed-disks/) because of its ratio of IOPS to size and price. Regardless of size, use Premium Storage for best performance.
+
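For example, a 1-TiB premium SSD falls in the P30 tier. The following is a hedged Azure CLI sketch (names are placeholders, continuing the earlier examples) for creating such a disk and attaching it to one of the shared-storage nodes:

```bash
# Create a 1-TiB Premium SSD (P30 tier) and attach it to a storage-cluster VM.
az disk create --resource-group db2purescale-rg --name storage-node1-data1 \
  --size-gb 1024 --sku Premium_LRS
az vm disk attach --resource-group db2purescale-rg --vm-name storage-node1 \
  --name storage-node1-data1
```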
+DB2 pureScale uses a shared-everything architecture, where all data is accessible from all cluster nodes. Premium storage must be shared across multiple instances, whether on demand or on dedicated instances.
+
+A large DB2 pureScale cluster can require 200 terabytes (TB) or more of premium shared storage, with IOPS of 100,000. DB2 pureScale supports an iSCSI block interface that you can use on Azure. The iSCSI interface requires a shared storage cluster that you can implement with S2D or another tool. This type of solution creates a virtual storage area network (vSAN) device in Azure. DB2 pureScale uses the vSAN to install the clustered file system that's used to share data among virtual machines.
+
+### Networking considerations
+
+IBM recommends InfiniBand networking for all members in a DB2 pureScale cluster. DB2 pureScale also uses remote direct memory access (RDMA), where available, for the CFs.
+
+During setup, you create an Azure [resource group](https://docs.microsoft.com/azure/azure-resource-manager/management/overview) to contain all the virtual machines. In general, you group resources based on their lifetime and who will manage them. The virtual machines in this architecture require [accelerated networking](https://azure.microsoft.com/blog/maximize-your-vm-s-performance-with-accelerated-networking-now-generally-available-for-both-windows-and-linux/). It's an Azure feature that provides consistent, ultra-low network latency via single-root I/O virtualization (SR-IOV) to a virtual machine.
+
+Every Azure virtual machine is deployed into a virtual network that has subnets: main, Gluster FS front end (gfsfe), Gluster FS back end (bfsbe), DB2 pureScale (db2be), and DB2 pureScale front end (db2fe). The installation script also creates the primary [NICs](https://docs.microsoft.com/azure/virtual-machines/windows/multiple-nics) on the virtual machines in the main subnet.
+
+Use [network security groups](../../../../virtual-network/virtual-network-vnet-plan-design-arm.md) to restrict network traffic within the virtual network and to isolate the subnets.
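
A hedged sketch of both points, using the hypothetical names and address ranges from the earlier examples: a NIC with accelerated networking in the main subnet, plus a network security group rule that limits SSH to the jumpbox subnet.

```bash
# NIC with accelerated networking in the "main" subnet, plus an NSG rule that
# allows SSH only from the jumpbox subnet. Names and ranges are placeholders.
az network nic create --resource-group db2purescale-rg --name d1-eth0 \
  --vnet-name db2-vnet --subnet main --accelerated-networking true \
  --network-security-group db2-nsg
az network nsg rule create --resource-group db2purescale-rg --nsg-name db2-nsg \
  --name allow-ssh-from-jumpbox --priority 100 --direction Inbound --access Allow \
  --protocol Tcp --destination-port-ranges 22 --source-address-prefixes 10.0.1.0/24
```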
+
+On Azure, DB2 pureScale needs to use TCP/IP as the network connection for storage.
+
+## Next steps
+
+- [Deploy this architecture on Azure](deploy-ibm-db2-purescale-azure.md)
\ No newline at end of file
virtual-machines https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/mainframe-rehosting/overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/mainframe-rehosting/overview.md
@@ -65,7 +65,7 @@ To get started:
The IBM DB2 pureScale environment provides a database cluster for Azure. It's not identical to the original environment, but it delivers similar availability and scale as IBM DB2 for z/OS running in a Parallel Sysplex setup.
-To get started, see [IBM DB2 pureScale on Azure](../../linux/ibm-db2-purescale-azure.md).
+To get started, see [IBM DB2 pureScale on Azure](.//ibm/ibm-db2-purescale-azure.md).
## Considerations